NFL Survivor Picks Historical Performance Review
September 26, 2018 – by David Hess
In 2017, our NFL Survivor Pool Picks customers reported winning more than four times as much prize money as one would expect, based on their pool size and their own number of entries.
This post reviews in detail how we measure the success of our subscribers in NFL survivor pools, and breaks down how our picks did compared to expectations across a number of different performance angles.
Why This Post Focuses On 2017 Survivor Pick Results
First, a quick history. We started posting survivor pool pick advice on TeamRankings.com in 2010, starting with Week 1 of the 2010 NFL season. At first we would just make a single “official pick” in our survivor blog post each week, assuming a modestly-sized, standard-rules survivor pool.
(That kind of one-size-fits-all pick strategy is far from optimal for many types of pools, but as skill and luck would have it, our official survivor picks finished the 2011 season 17-0. Still, we didn’t have a great idea of how many people were using those picks, how much money they won as a result, and so on.)
After three seasons of blogging about survivor picks, we made the resource investment to build a full-fledged “product” featuring a range of survivor pool data and features. We released the first version of our premium survivor picks product for the 2013 NFL season.
However, before 2017, we didn’t have a precise way of knowing what types of pools our subscribers were playing in, how they were applying our advice to real-world survivor pools, or how they ended up doing compared to expectations for an “average” player. Early iterations of our survivor picks product simply focused on providing a ranked list of pick options by pool, along with some fairly generic written advice on how to split up multiple survivor entries. At first we weren’t even saving each user’s past weekly picks in our product, let alone taking those past picks into account to make pick recommendations for the current week.
Most recently, our NFL Survivor Picks product underwent a huge redesign and upgrade before the 2017 season. The level of data we collect and pick customization we apply is now leagues beyond what we offered in previous years. As a result, 2017 is the first year for which we can provide highly detailed breakdowns of our survivor pick performance.
NFL Survivor Picks Product Overview
In case you aren’t familiar with our NFL Survivor Pool Picks product, here’s a quick overview:
- We’ve built proprietary analytics to map out your best picking strategy for winning NFL survivor pools.
- Our data-driven approach is based on thousands of computer simulations of pools similar to yours, and factors in details like the size of your pool, your pool rules (like strikes or multiple picks), and which teams you’ve already used.
- We support “portfolios” of multiple entries across multiple pools (up to 30 total picks per week), and provide a pick suggestion for each specific entry.
As far as we know, we are the only site that has a system to optimize survivor pool picks for such a broad range of survivor pool rules, and secondarily, for multiple-entry survivor pick portfolios.
The Implications Of Customized Picks
Because we take so many strategy factors into account, the weekly survivor picks we suggest can differ by pool, by entry, and by customer.
First, the rules of your survivor pool can have a massive effect on what the best pick is each week. For example, in a pool that requires multiple picks per week late in the season, saving teams with cushy late season matchups is more important. The size of your pool also matters, with far-in-the-future matchups being less important in smaller pools, which are more likely to end earlier in the season.
Secondly, every Survivor entry is different, because teams can only be picked once. Even if the Patriots are the best pick in a given week for a given pool, some of a customer’s entries may have already used them earlier in the season, so the best available pick for each entry could be different.
Finally, even for entries with the same teams available, every survivor pick portfolio is different. A player with only a single entry is generally going to want to pick the best available team for that entry. But if a player has a 10-entry portfolio, and the Rams look like the best available pick for all 10 entries, it generally makes sense to pick a team other than the Rams with some of those entries, in order to avoid putting all of their eggs in one basket.
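To make the eggs-in-one-basket logic concrete, here is a minimal sketch in Python. The team names and win probabilities are made-up assumptions for illustration, not outputs of our model:

```python
# Toy illustration of diversifying a multi-entry survivor portfolio.
# The win probabilities below are made-up assumptions, not TeamRankings data.
p_rams = 0.85    # hypothetical win probability of the best available team
p_saints = 0.80  # hypothetical win probability of the next-best team

# Strategy A: all 10 entries pick the Rams.
# If the Rams lose, the entire portfolio is eliminated at once.
p_portfolio_survives_a = p_rams

# Strategy B: 5 entries on the Rams, 5 on the Saints (independent games).
# The whole portfolio is wiped out only if BOTH teams lose.
p_portfolio_survives_b = 1 - (1 - p_rams) * (1 - p_saints)

print(f"All 10 on Rams: P(portfolio survives week) = {p_portfolio_survives_a:.2f}")  # 0.85
print(f"Split 5/5:      P(portfolio survives week) = {p_portfolio_survives_b:.2f}")  # 0.97
```

The split portfolio gives up a little expected survival (8.25 surviving entries on average versus 8.5) in exchange for a much smaller chance of losing every entry in the same week.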
The combined effect of having all of these strategy factors accounted for in pick advice leads to a very high level of complexity in terms of calculations. It can also lead to a fairly wide variety of recommended picks depending on a specific subscriber’s situation. That’s especially true late in the season, when the available teams on each entry dwindle. In Week 17 of the 2017 season, for example, we suggested 17 different teams as a pick to at least one customer. (Of course, most of those were extreme corner cases. Only 8 different teams were suggested to more than 1% of users. And only 3 different teams were common pick suggestions in standard pools.)
How We Measure Survivor Pick Success
The high level of customization that we now apply to survivor picks means that there is no “official TeamRankings survivor pick” in a given week. Instead, we have a distribution of picks that represents the sum of all the various picks we recommended across our entire subscriber base, based on their individual pool rules, past weekly picks, etc.
Consequently, there is no single set of picks we can track to tell us whether our suggestions did well. In addition, all we really care about is whether our pick recommendations give our subscribers an edge in their survivor pools. It’s impossible to determine that if all we know is that the top pick we advised to a given person survived 5 weeks, or 7, or 15.
So there’s really only one good way to measure the effectiveness of our NFL Survivor Pool Picks advice. We ask subscribers directly how our pick recommendations did for them, via a survey we email out at the end of the season.
In order to get custom pick advice from our NFL Survivor Picks product, customers have to set up their pool(s) on the site. That involves telling us their pool rules, the overall pool size, and how many entries they are personally submitting to the pool.
The end of season survey asks customers how they did in each specific pool they set up in our system. This allows us to not only get an idea of the overall performance of our pick suggestions, but also to look at how they fared based on various splits of the data (by pool rules, by pool size, etc).
Calculating Survivor Pool Win Expectations
Knowing how many customers won their pool is nice. But to get a real sense of whether our picks are providing an edge, we need to know what the baseline expectation should be. Is winning a pool 5% of the time good? 10%? 20%?
To define our baseline expectations, we assume every player in a given survivor pool is equally skilled. Then we calculate what percent of the prize pool our subscriber would expect to win, based on the number of entries they submitted and the overall pool size. That math is simply the number of customer entries divided by the total number of pool entries.
For example:
- 1 entry in a 10-entry pool … 1/10 … 10% expected prize share
- 1 entry in a 100-entry pool … 1/100 … 1% expected prize share
- 5 entries in a 100-entry pool … 5/100 … 5% expected prize share
- 10 entries in a 5,000-entry pool … 10/5,000 … 0.2% expected prize share
This gives us the expected prize share for every customer in every pool. It tells us how much our customers would expect to win if our pick advice was not providing any edge in the pool.
To calculate the actual prize share, we ask customers (1) if they won their survivor pool(s), and (2) how many other entries they had to split the pot with.
If they won, then their actual prize share is simply 100% divided by the total number of entries splitting the pot. If they won the whole pot, their prize share is 100%. If they split the pot with 1 other entry, their prize share is 50%. If they split the pot with 2 other entries, their prize share is 33.3%. And so on.
Dividing the actual prize share by the expected prize share gives us a “Winnings Multiplier.” A Multiplier of 2 or 3 tells us that our customers won 2x or 3x as much prize money as you’d expect an average player in the pool to win.
If our Multiplier is greater than 1, that means our pick advice has been delivering an edge, on average.
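As a concrete illustration, here is a minimal sketch of that math in Python. The single-pool example numbers are hypothetical; the 12.0% actual and 2.8% expected figures come from our overall 2017 results below:

```python
# Minimal sketch of the Winnings Multiplier math described above.

def expected_prize_share(user_entries: int, pool_entries: int) -> float:
    """Share of the pot an equally skilled player would expect to win."""
    return user_entries / pool_entries

def actual_prize_share(won_pool: bool, entries_splitting_pot: int) -> float:
    """Share of the pot actually won (zero if the pool was lost)."""
    return 1.0 / entries_splitting_pot if won_pool else 0.0

def winnings_multiplier(actual: float, expected: float) -> float:
    """How many times the baseline expectation was actually won."""
    return actual / expected

# Hypothetical single pool: 1 entry in a 100-entry pool,
# won and split the pot with 1 other entry (2 entries total).
expected = expected_prize_share(user_entries=1, pool_entries=100)    # 0.01
actual = actual_prize_share(won_pool=True, entries_splitting_pot=2)  # 0.50
print(f"{winnings_multiplier(actual, expected):.1f}")  # 50.0 for this one pool

# Across all 2017 customers, the averages were 12.0% actual vs. 2.8% expected:
print(f"{winnings_multiplier(0.120, 0.028):.1f}")  # 4.3
```

A single winning pool can show a huge multiplier; the figures we report average across every surveyed pool, losses included.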
2017 NFL Survivor Picks Performance Results
Now that we’ve explained our methodology for measuring success, let’s examine how our NFL Survivor Picks customers did in 2017, compared to an “average” pool player:
| Year | % Won Pool | Avg % of Pot Won | Actual Prize Share | Expected Prize Share | Multiplier |
|---|---|---|---|---|---|
| 2017 | 24.3% | 49.4% | 12.0% | 2.8% | 4.3 |
Our customers won a prize in 24% of their pools in 2017. Their average “% of Pot Won” was 49%, which indicates that on average the winning customers split the pot with one other person.
That gave our customers an average Prize Share of 12% (a 24.3% win rate times a 49.4% average pot share). Based on their number of entries and the overall size of their pools, we’d expect them to earn only a 2.8% prize share if our advice provided no edge over the rest of the pool. What we actually saw, though, was that our customers won over 4 times as much as expected.
Survivor Pick Performance Splits
The numbers above show overall performance. However, we provide picks for a wide variety of pool rules and sizes. It’s worth looking at performance by pool type or by other factors, to see if only certain types of pools perform well, or if the edge holds across various types and sizes.
By Type Of Survivor Pool
First, here is customer performance by type of pool. This table is sorted from the most common pool type to the least. Also, note that we support combinations of these types, but if we break it down any further, the sample size gets too small to be meaningful:
| Pool Features | % Won Pool | Avg % of Pot Won | Actual Prize Share | Expected Prize Share | Multiplier |
|---|---|---|---|---|---|
| Standard Rules | 20.1% | 52.1% | 10.4% | 2.3% | 4.6 |
| Multiple Picks | 22.1% | 29.0% | 6.4% | 1.4% | 4.5 |
| Starts Midseason | 22.1% | 52.5% | 11.6% | 4.2% | 2.8 |
| Strikes | 31.3% | 59.0% | 18.4% | 4.0% | 4.6 |
| Buybacks | 30.7% | 43.6% | 13.4% | 3.1% | 4.4 |
| Season Wins Tiebreaker | 21.3% | 51.5% | 11.0% | 1.9% | 5.9 |
| Continues Into Playoffs | 31.6% | 59.7% | 18.9% | 6.7% | 2.8 |
| Byes | 6.3% | 3.0% | 0.2% | 3.8% | 0.0 |
As you can see, in 2017 our picks delivered an edge in all types of supported pools, except for pools featuring Byes. It’s worth noting that:
- Bye pools are our smallest sample, so this could just be noise.
- Performance across pool types is bound to vary by season, so this could just be noise for that reason as well.
- We made major improvements to the Bye pool logic midway through last season, so that the relative value of a Bye pick versus other picks now changes dynamically each week, rather than being fixed at a constant value. This should improve Bye pool performance, but it may have been implemented too late last season to make a difference.
By Survivor Pool Size
Now, here is performance by pool size:
| Pool Size | % Won Pool | Avg % of Pot Won | Actual Prize Share | Expected Prize Share | Multiplier |
|---|---|---|---|---|---|
| 0-24 | 32.3% | 69.6% | 22.5% | 10.6% | 2.1 |
| 25-49 | 29.0% | 65.5% | 19.0% | 4.5% | 4.2 |
| 50-99 | 27.0% | 62.5% | 16.9% | 2.6% | 6.5 |
| 100-249 | 21.3% | 50.1% | 10.7% | 2.0% | 5.3 |
| 250-499 | 30.3% | 39.2% | 11.9% | 1.1% | 11.0 |
| 500-999 | 24.1% | 31.9% | 7.7% | 0.7% | 11.4 |
| 1000-9999 | 19.6% | 18.9% | 3.7% | 0.2% | 16.6 |
| 10000+ | 0.0% | n/a | 0.0% | 0.0% | 0.0 |
This is a pattern we’ve seen before in our office pool product performance. As pool size goes up, the absolute win rate goes down, but the edge delivered by our picks goes up. This makes some sense. If you start out with, say, 20% win odds in a small pool, realistically there’s an upper bound on how much we can improve those odds. We also suspect there is more “dead weight” in huge pools: players who make poor picks because they either don’t know any better or don’t put in the effort to make good ones.
One note on the 10,000+ pool size bin, which shows a 0% win rate. The sample size in that bin is small enough (less than 100 pools) that even if we delivered a 10x multiplier, we wouldn’t expect to see any wins. A 10x multiplier would move your win odds from 1 in 10,000 to 1 in 1,000. So this sample size is simply too small to tell us anything very meaningful about our edge in giant pools.
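As a rough sanity check, here is a minimal sketch that assumes exactly 100 surveyed pools in the bin (the true count is only stated as less than 100):

```python
# Rough sanity check on the 10,000+ pool size bin.
# Assumes 100 surveyed pools, an upper bound on the actual sample.
n_pools = 100
base_odds = 1 / 10_000         # equal-skill win odds in a 10,000-entry pool
boosted_odds = 10 * base_odds  # odds if our picks delivered a 10x multiplier

expected_wins = n_pools * boosted_odds
p_at_least_one_win = 1 - (1 - boosted_odds) ** n_pools

print(f"Expected wins across the bin: {expected_wins:.2f}")      # 0.10
print(f"P(at least one win):          {p_at_least_one_win:.1%}") # 9.5%
```

Even with a genuine 10x edge, zero wins is the most likely outcome for a sample that small.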
By Number Of User Survivor Pool Entries
Finally, here is performance by number of user entries in a pool:
| Number of User Entries | % Won Pool | Avg % of Pot Won | Actual Prize Share | Expected Prize Share | Multiplier |
|---|---|---|---|---|---|
| 1 | 20.2% | 55.9% | 11.3% | 2.9% | 3.9 |
| 2 | 24.6% | 47.9% | 11.8% | 2.7% | 4.4 |
| 3 | 31.2% | 52.6% | 16.4% | 3.2% | 5.1 |
| 4 | 30.6% | 46.5% | 14.2% | 2.2% | 6.5 |
| 5 | 31.3% | 37.1% | 11.6% | 2.3% | 5.0 |
| 6-10 | 32.6% | 45.5% | 14.8% | 1.6% | 9.0 |
| 11-30 | 20.0% | 18.0% | 3.6% | 2.9% | 1.2 |
| 31-65 | 33.3% | 66.7% | 22.2% | 12.6% | 1.8 |
We delivered an edge for our customers no matter how many entries they had in a pool. The sweet spot seems to be around 6 to 10 entries.
Smaller edges at even higher entry counts make some logical sense: if there is one ideal entry, then every successive entry you place in a pool has a lower expected return on investment than the previous one. The sample sizes (not shown) in some of these bins are fairly low, though, so we’re not totally sure how much of this trend is real and how much is random.
Year 1 Survivor Pick Results: So Far, So Good
Our first year of highly customized, automatically-updating survivor portfolio picks covering a huge variety of pool types is in the books.
Based on these subscriber survey results, moving from generic weekly write-ups (which by their nature can’t cover every little rules wrinkle, and can’t update as input data changes) to a customized, automated system was almost certainly a strongly profitable refinement for our customers. That was, of course, the motivation for making some massive improvements to our NFL Survivor Picks product during the summer of 2017, so it was great to see an immediate impact.
Even in great years for our picks overall, not every customer is going to win their pool (not even close). But our customer base winning over four times as much as expected, on average, is a clear demonstration of the edge our product delivers. If that edge holds for long-term customers, the investment in TeamRankings survivor picks should pay off extremely well.
If you liked this post, please share it. Thank you!