Category Archives: Research

A Quick Look at the Odds of Three-Setters

In the comments to my match-fixing post earlier this week, Elihu Feustel commented:

There are almost no situations where a best of 3 match is a favorite to go to three sets. If the market priced a player as greater than 50% to win in exactly 3 sets, that alone is compelling evidence of match fixing.

In Monday’s questionable Challenger match, not only did the betting markets consider a three-setter likely, they picked a specific winner in three sets.

It takes only a bit of arithmetic to see why Elihu’s point is correct. Let’s say two players, A and B, are exactly evenly matched. Each one has a 50% chance of winning the match and a 50% chance of winning each set. Thus, the odds that A wins the match in straight sets are 25% (50% for the first set multiplied by 50% for the second). The odds that B wins in straights are the same. The probability that the match finishes in straight sets, then, is 50% (25% for an A win + 25% for B), meaning that the odds of a three-set match are also 50%.

As soon as one player has an edge, the probability of a three-set match goes down. Consider the scenario in which A has a 70% chance of winning each set. The odds that player A wins in straight sets are 49% (70% times 70%) and the odds that B wins in straight sets are 9% (30% times 30%). Thus, there’s a 58% chance of a straight-set victory, leaving a 42% chance of a three-setter.
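
For readers who prefer code to arithmetic, here is a minimal sketch of that calculation, assuming each set is an independent coin flip that player A wins with probability p_set:

```python
def three_set_probability(p_set):
    """Chance a best-of-three match goes the distance, assuming player A
    wins each set independently with probability p_set."""
    a_in_straights = p_set * p_set              # A wins both sets
    b_in_straights = (1 - p_set) * (1 - p_set)  # B wins both sets
    return 1 - a_in_straights - b_in_straights

print(f"{three_set_probability(0.5):.0%}")  # evenly matched: 50%
print(f"{three_set_probability(0.7):.0%}")  # 70/30 per set:  42%
```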

This simple approach makes one major assumption: each player’s chances don’t change from one set to another. That probably isn’t true. It seems most likely to me that the player who wins the first set gets stronger relative to his opponent, perhaps because he gains confidence, or because his opponent loses confidence, or because the opponent decides he doesn’t have much chance of winning anyway. (I’m sure this isn’t true in all matches, but I suspect it applies often enough.)

If the outcome of the first set makes its winner even slightly more likely to take the second, the likelihood of a three-setter decreases even further.

Probability in practice

As expected, far fewer than half of tour-level matches go three sets. (I’m considering only best-of-three matches.) So far this year, 36% of ATP best-of-threes have gone the distance, while only 32% of Challenger-level matches have done so.

In fact, men’s tennis has even fewer three-set matches than expected. For every match, I used a simple rankings-based model to estimate each player’s chances of winning a set and, as shown above, the odds that the match would go three sets. For 2014 tour-level matches, the model–which assumes that set probabilities are independent–predicts that 44% of matches would go three sets. That’s over 20% more third sets than we see in practice.

There are two factors that could account for the difference between theory and practice. I think both play a part:

  1. Sets aren’t independent. If winning the first set makes a player more likely to win the second, there would be fewer three-setters than predicted.
  2. There’s usually a bigger gap between players than aggregate numbers suggest. On paper, one player might have a 60% chance of winning the match, but on the day, he might be tired, under the weather, unhappy with his racquets, uncomfortable with the court … or playing his best tennis, in a honeymoon period with a new coach, enjoying friendly calls from home line judges. The list of possible factors is endless. The point is that for any matchup, there are plenty of effectively random, impossible-to-predict variables that affect each player’s performance. I suspect that those variables are more likely to expand the gap between players–and thus lower the likelihood of a three-setter–than shrink it.

A note on outliers

Despite the odds against three-setters, some players are more likely to go three than others. Among the 227 players who have contested 100 or more ATP best-of-threes since 1998, 20 have gone the distance in 40% or more of their matches. John Isner, tennis’s most reliable outlier, tops the list at 47.4%.

Big servers don’t dominate the list, but Isner’s presence at the top isn’t entirely by chance. After John, Richard Fromberg is a close second at 46.7%, while Goran Ivanisevic is not far behind at 43.0%. Mark Philippoussis and Sam Querrey also show up in the top ten.

It’s no surprise to see these names come up. One-dimensional servers are more likely to play tiebreaks, and tiebreaks are as close to random as a set of tennis can get. Someone who plays tiebreaks as often as Isner does will find himself losing first sets to inferior opponents and winning first sets against players who should beat him.

That randomness not only makes it more likely the match will go three sets, it’s also something the players are aware of. If Isner drops a first-set tiebreak, he realizes that he still has a solid chance to win the match–losing the breaker doesn’t mean he’s getting outplayed. If there is a mental component that partially explains the likelihood of the first-set winner taking the second set, it doesn’t apply to players like him.

Still, even Big John finishes matches in straight sets more than half the time. Every other tour regular does so as well, so it would take a very unusual set of circumstances for a betting market–or common sense–to favor a three-set outcome.

Filed under Research

Jo-Wilfried Tsonga and the (Extremely Specific) Post-Masters Blues

Two days after winning a Masters title in Toronto, Jo-Wilfried Tsonga played his first match in Cincinnati. Betting odds had the Frenchman as a heavy favorite over the unseeded Mikhail Youzhny, but a sluggish Tsonga dropped the match in straight sets.

An explanatory narrative springs to mind immediately: After playing six matches in a week, it’s tough to keep a winning streak going. Losing the match, even against a lesser opponent, is predictable. (Of course, it’s more predictable in hindsight.)

As usual with such “obvious” storylines, it’s not quite so straightforward. On average, ATP title winners who enter a tournament the following week perform exactly as well as expected in their first match of the next event. The same results hold for finalists, who typically played as many matches the previous week, and must also bounce back from a high-stakes loss.

To start with, I looked at the 1,660 times since the 2000 season that an ATP finalist took the court again within three weeks. Those players won, on average, 1.93 matches in their post-title event, losing their first matches 29% of the time. Their 71% next-match winning percentage is virtually identical to what a simple ranking-based model would predict. In other words, there’s no evidence here of a post-final letdown.

More relevant to Tsonga’s situation is the set of 1,055 finalists who played the following week. Those men won 1.7 matches in their next event, losing 31% of their first matches at the follow-up event. That’s about 1% worse than expected–not nearly enough to allow us to draw any conclusions. Narrowing the set even further to the 531 tournament winners who played the next week, we find that they won 2.0 matches in their next event, losing 26.3% of their first matches, just a tick better than the simple model would predict.
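
The bookkeeping behind these comparisons is simple enough to sketch. The snippet below is only an illustration, with hypothetical records standing in for the real dataset: each follow-up appearance carries the matches won at the next event, whether the first match there was won, and a ranking-based model's estimate for that first match.

```python
# Hypothetical follow-up appearances; the real dataset has 1,660 of these.
followups = [
    {"wins": 2, "won_first": True,  "expected_first": 0.74},
    {"wins": 0, "won_first": False, "expected_first": 0.61},
    {"wins": 3, "won_first": True,  "expected_first": 0.70},
]

n = len(followups)
avg_wins = sum(f["wins"] for f in followups) / n
actual_first = sum(f["won_first"] for f in followups) / n
expected_first = sum(f["expected_first"] for f in followups) / n

print(f"average match wins at the next event: {avg_wins:.2f}")
print(f"first-match win rate: {actual_first:.1%} (model expected {expected_first:.1%})")
```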

Some of these numbers–1.7 match wins in a tournament; a 70% winning percentage–don’t sound particularly impressive. But we need to keep in mind that the majority of ATP tournaments don’t feature Masters-level fields, and plenty of finalists are well outside the top ten. Plus, the players who play an event the week after winning a title tend to be lower ranked. Masters events occupy back-to-back weeks on the schedule only a couple of times each season.

If we limit our scope to the more uncommon back-to-back tourneys for Masters winners, a bit of a trend emerges. The week after winning a Masters tournament, players win an average of 2.9 matches, losing their first match only 20.4% of the time. That sounds pretty good, except that, in the last 15 years, the group of Masters winners has been extremely good. That 80% first-match winning percentage is 5% below what a simple model would’ve predicted for them.

If we limit the sample even further, to Masters winners ranked outside the top five–a group that includes Tsonga–we finally find more support for the “obvious” narrative. Since 2000, 17 non-top-fivers have shown up for a tournament the week after winning a Masters event. They’ve won only 1.8 matches in their next events, losing their first match more than 40% of the time. That’s 20% worse than expected.

This small set of non-elite Masters winners is the only group I could find that fit the narrative of a post-title or post-final blues. (I tested a lot more than I’ve mentioned here, and nearly all showed players performing within a couple percent of expectations.)

Tsonga cited low energy in his post-match press conference, but we shouldn’t forget that there are plenty of other reasons the Frenchman might lose a first-round match. He’s split his six career matches against Youzhny, and 7 of his 19 losses in the past year have come to players ranked lower than the Russian. Losses don’t always need precedents, and in this case, the precedents aren’t particularly predictive.

Filed under Cincinnati, Jo-Wilfried Tsonga, Research

Erratic Results and the Fate of Jerzy Janowicz

When Jerzy Janowicz defeated Victor Estrella in the first round at Roland Garros on Sunday, it was the Pole’s first win since February, breaking a string of nine consecutive losses. Janowicz’s results have been rather pedestrian ever since his semifinal run at Wimbledon last year, yet the 720 points he earned for that single performance have kept his ranking in the top 25 and given him a seed at the Grand Slams.

As we’ve discussed many times on this site, occasional greatness trumps consistent mediocrity, at least as far as ranking points are concerned. The system rewards players who bunch wins together–Janowicz currently holds about 1500 points, barely double what he earned from that single event last year.

In the short term, bunching wins is a good thing, as Janowicz has learned. But from an analytical perspective, how should we view players with recent histories like his? Does the Wimbledon semifinal bode well for the future? Does the mediocre rest of his record outweigh a single excellent result? Does it all come out in the wash?

It’s a question that doesn’t pertain only to Janowicz. While 48% of Jerzy’s points come from Wimbledon, 49% of Andy Murray‘s current ranking point total comes from winning Wimbledon. Another reigning Slam champion, Stanislas Wawrinka, owes 34% of his point total to a single event.  By contrast, for the average player in the top 50, that figure is only 21%. Rafael Nadal and Novak Djokovic are among the most consistent on tour, at 16% and 10%, respectively.

Since 2010, there has only been one top-40 player who earned more than half of his year-end ranking points from a single event: Ivan Ljubicic, whose 1,000 points for winning Indian Wells dominated his 1,965 point total. His top-16 ranking at the end of that year didn’t last. He didn’t defend his Indian Wells points or make up the difference elsewhere, falling out of the top 30 for most of 2011. Of course, he was in his 30s at that point, so we shouldn’t draw any conclusions from this extreme anecdote.

When we crunch the numbers, it emerges that there has been no relationship between “bunched” ranking points and success the following year. I collected several data points for every top-40 player from the 2010, 2011, and 2012 seasons: their year-end ranking, the percentage of ranking points from their top one, two, and three events, and the year-end ranking the following year.  If bunching were a signal of an inflated ranking–that is, if you suspect Janowicz’s abilities don’t jibe with his current ranking–we would see following-year rankings drop for players who fit that profile.

Take Jerzy’s 2012, for example. He earned 46% of his points from his top event (the Paris Masters final), 53% from his top two, and 57% from his top three.  (Corresponding top-40 averages are 21%, 34%, and 44%.)  He ended the year with 1,299 ranking points. At the end of 2013, his ranking no longer reflected his 600 points from Paris. But unlike Ljubicic in 2010, Janowicz boosted his ranking, improving 24% to 1,615 points.

The overall picture is just as cloudy as the juxtaposition of Ljubicic and Janowicz. There is no correlation between the percentage of points represented by a player’s top event (or top two, or top three) and his ranking point change the following year.
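
To make the “bunching” measure concrete, here is a small sketch of how the top-one/two/three shares can be computed from a season of event results. The point totals are invented, chosen only to reproduce the 46%/53%/57% split quoted above; they are not Janowicz's actual event-by-event results.

```python
def bunching_shares(event_points):
    """Fraction of year-end points from a player's top one, two, and three
    events, given the points earned at each event that season."""
    pts = sorted(event_points, reverse=True)
    total = sum(pts)
    return tuple(sum(pts[:k]) / total for k in (1, 2, 3))

# Invented season shaped like Janowicz's 2012: one 600-point final plus a
# pile of small results, 1,299 points in all.
season = [600, 90, 50, 48, 45, 45, 45, 40, 40, 40, 36, 35, 35, 30, 30, 30, 30, 30]
top1, top2, top3 = bunching_shares(season)
print(f"top event: {top1:.0%}, top two: {top2:.0%}, top three: {top3:.0%}")
```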

For the most extreme players–the ten most “bunched” ranking point totals in this dataset–there’s a small indication that the following season might disappoint. Only three of the ten (including Janowicz in 2012-13) improved their ranking, while three others saw their point total decrease by more than 40%. On average, the following-year decrease of the ten most extreme player-seasons was approximately 20%. But that’s a small, noisy subset, and we should take the overall results as a stronger indication of what to expect from players who fit this profile.

There’s still a case to be made that Jerzy is heading for a fall. He hasn’t racked up many victories so far this year that would offset the upcoming loss of his Wimbledon points. And his Wimbledon success was particularly lucky, as he faced unseeded players in both the fourth round and the quarterfinals. Even if he is particularly effective on grass, it’s unlikely the draw will so heavily favor him again.

But however a player earns his disproportionately large point total, the points themselves are no harbinger of doom. On that score, anyway, Janowicz fans can expect another year in the top 25.

Filed under Jerzy Janowicz, Rankings, Research

The Limited Value of Head-to-Head Records

Yesterday at the Australian Open, Ana Ivanovic defeated Serena Williams, despite having failed to take a set in four previous meetings. Later in the day, Tomas Berdych beat Kevin Anderson for the tenth straight time.

Commentators and bettors love head-to-head records. You’ll often hear people say, “tennis is a game of matchups,” which, I suppose, is hardly disprovable.

But how much do head-to-head records really mean?  If Player A has a better record than Player B but Player B has won the majority of their career meetings, who do you pick? To what extent does head-to-head record trump everything (or anything) else?

It’s important to remember that, most of the time, head-to-head records don’t clash with any other measurement of relative skill. On the ATP tour, head-to-head record agrees with relative ranking 69% of the time–that is, the player who leads the H2H is also the higher-ranked player. When a pair of players have faced each other five or more times, H2H agrees with relative ranking 75% of the time.

Usually, then, the head-to-head record is right. It’s less clear whether it adds anything to our understanding. Sure, Rafael Nadal owns Stanislas Wawrinka, but would we expect anything much different from the matchup of a dominant number one and a steady-but-unspectacular number eight?

H2H against the rankings

If head-to-head records have much value, we’d expect them–at least for some subset of matches–to outperform the ATP rankings. That’s a pretty low bar–the official rankings are riddled with limitations that keep them from being very predictive.

To see if H2Hs met that standard, I looked at ATP tour-level matches since 1996. For each match, I recorded whether the winner was ranked higher than his opponent and what his head-to-head record was against that opponent. (I didn’t consider matches outside of the ATP tour in calculating head-to-heads.)

Thus, for each head-to-head record (for instance, five wins in eight career meetings), we can determine how many the H2H-favored player won, how many the higher-ranked player won, and so on.

For instance, I found 1,040 matches in which one of the players had beaten his opponent in exactly four of their previous five meetings.  65.0% of those matches went the way of the player favored by the head-to-head record, while 68.8% went to the higher-ranked player. (54.5% of the matches fell in both categories.)

Things get more interesting in the 258 matches in which the two metrics did not agree.  When the player with the 4-1 record was lower in the rankings, he won only 109 (42.2%) of those matchups. In other words, at least in this group of matches, you’d be better off going with ATP rankings than with head-to-head results.
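
The tallying itself is straightforward. Here is a sketch with made-up match records in place of the real data: each match carries two flags, whether the higher-ranked player won and whether the head-to-head leader won.

```python
# Made-up match records; each one notes which of the two indicators
# picked the eventual winner.
matches = [
    {"higher_ranked_won": True,  "h2h_leader_won": True},
    {"higher_ranked_won": True,  "h2h_leader_won": False},
    {"higher_ranked_won": False, "h2h_leader_won": True},
]

rank_wins = sum(m["higher_ranked_won"] for m in matches)
h2h_wins = sum(m["h2h_leader_won"] for m in matches)

# The interesting subset: matches where the two indicators disagree.
disputed = [m for m in matches if m["higher_ranked_won"] != m["h2h_leader_won"]]
rank_right = sum(m["higher_ranked_won"] for m in disputed)

print(f"higher-ranked player won {rank_wins} of {len(matches)}")
print(f"head-to-head leader won {h2h_wins} of {len(matches)}")
print(f"when they disagreed, ranking was right in {rank_right} of {len(disputed)}")
```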

Broader view, similar conclusions

For almost every head-to-head record, the findings are the same. There were 26 head-to-head records–everything from 1-0 to 7-3–for which we have at least 100 matches worth of results, and in 20 of them, the player with the higher ranking did better than the player with the better head-to-head.  In 19 of the 26 groups, when the ranking disagreed with the head-to-head, ranking was a more accurate predictor of the outcome.

If we tally the results for head-to-heads with at least five meetings, we get an overall picture of how these two approaches perform. 68.5% of the time, the player with the higher ranking wins, while 66.0% of the time, the match goes to the man who leads in the head-to-head. When the head-to-head and the relative ranking don’t match, ranking proves to be the better indicator 56.5% of the time.

The most extreme head-to-heads–that is, undefeated pairings such as 7-0, 8-0, and so on–are the only groups in which H2H consistently tells us more than ATP ranking does.  80% of the time, these matches go to the higher-ranked player, while 81.9% of the time, the undefeated man prevails. In the 78 matches for which H2H and ranking don’t agree, H2H is a better predictor exactly two-thirds of the time.

Explanations against intuition

When you weigh a head-to-head record more heavily than a pair of ATP rankings, you’re relying on a very small sample instead of a very big one. Yes, that small sample may be much better targeted, but it is still only a handful of matches.

Not only is the sample small, often it is not as applicable as you might think. When Roger Federer defeated Lleyton Hewitt in the fourth round of the 2004 Australian Open, he had beaten the Aussie only twice in nine career meetings. Yet at that point in their careers, the 22-year-old, #2-ranked Fed was clearly in the ascendancy while Hewitt was having difficulty keeping up. Even though most of their prior meetings had been on the same surface and Hewitt had won the three most recent encounters, that small subset of Roger’s performances did not account for his steady improvement.

The most recent Fed-Hewitt meeting is another good illustration. Entering the Brisbane final, Roger had won 15 of their previous 16 matches, but while Hewitt has maintained a middle-of-the-pack level for the last several years, Federer has declined. And though the two had met 26 times before the Brisbane final, none of those contests had come in the previous two years.

Whether it’s surface, recency, injury, weather conditions, or any one of dozens of other variables, head-to-heads are riddled with external factors. That’s the problem with any small sample–the noise is much more likely to overwhelm the signal. If noise can win out in the extensive Fed-Hewitt head-to-head, most one-on-one records don’t stand a chance.

Any set of rankings, whether the ATP’s points system or my somewhat more sophisticated (and more predictive) jrank algorithm, takes into account every match both players have been involved in for a fairly long stretch of time. In most cases, having all that perspective on both players’ current levels is much more valuable than a noise-ridden handful of matches. If head-to-heads can’t beat ATP rankings, they would look even worse against a better algorithm.

Some players surely do have an edge on particular opponents or types of opponents, whether it’s Andy Murray with lefties or David Ferrer with Nicolas Almagro. But most of the time, those edges are reflected in the rankings–even if the rankings don’t explicitly set out to incorporate such things.

The next time Kevin Anderson draws Berdych, he should take heart. His odds of beating the Czech aren’t that much different from those of any other man ranked around #20 against someone in the bottom half of the top ten. Even accounting for the slight effect I’ve observed in undefeated head-to-heads, a lopsided one-on-one record isn’t fate.

Filed under Forecasting, Head-to-Heads, Research

Novak Djokovic and a First-Serve Key to the Match

Landing lots of first serves is a good thing, right? Actually, how much it matters–even whether it matters–depends on who you’re talking about.

When I criticized IBM’s Keys To the Match after last year’s US Open, I identified first-serve percentage as one of three “generic keys” (along with first-serve points won and second-serve points won) that, when combined, did a better job of predicting the outcome of matches than IBM’s allegedly more sophisticated markers.  First-serve percentage is the weakest of the three generic keys–after all, the other two count points won which, short of counting sets, is as relevant as you can get.

First-serve percentage is a particularly appealing key because it is entirely dependent on one player. While a server may change his strategy based on the returning skills of his opponent, the returner has nothing to do with whether or not first serves go in the box.  Unlike the other two generic targets and the vast majority of IBM’s keys, a first-serve percentage goal is truly actionable: it is entirely within one player’s control to achieve.

In general, first-serve percentage correlates very strongly with winning percentage.  On the ATP tour from 2010 to 2013, when a player made exactly half of his first serves, he won 42.8% of the time. At 60% first serves in, he won 47.0% of the time. At 70%, the winning percentage is 57.4%.

This graph shows the rates at which players win matches when their first-serve percentages are between 50% and 72%:

[Graph: match winning percentage by first-serve percentage, 50% to 72%]

As the first-serve percentage increases on the horizontal axis, winning percentage steadily rises as well.  With real-world tennis data, you’ll rarely see a relationship much clearer than this one.
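
The curve in that graph is just a bucketed winning percentage. Here is a minimal sketch, with invented (first-serve percentage, match won) pairs standing in for the real player-match data:

```python
from collections import defaultdict

# Invented (first-serve percentage, match won) pairs, one per player-match.
performances = [(0.58, True), (0.61, False), (0.70, True), (0.66, True),
                (0.52, False), (0.63, True), (0.71, False), (0.55, False)]

buckets = defaultdict(lambda: [0, 0])   # first-serve pct -> [wins, matches]
for pct, won in performances:
    key = round(pct * 100)              # group by whole percentage point
    buckets[key][0] += won
    buckets[key][1] += 1

for key in sorted(buckets):
    wins, total = buckets[key]
    print(f"{key}% first serves in: won {wins} of {total} ({wins / total:.0%})")
```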

Different players, different keys

When we use the same approach to look at specific players, the message starts to get muddled.  Here’s the same data for Novak Djokovic, 2009-13:

[Graph: Djokovic's match winning percentage by first-serve percentage, 2009-13]

While we shouldn’t read too much into any particular jag in this graph, it’s clear that the overall trend is very different from the first graph. Calculate the correlation coefficient, and we find that Djokovic’s winning percentage has a negative relationship with his first-serve percentage. All else equal, he’s slightly more likely to win matches when he makes fewer first serves.
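
The player-level version is simply a correlation between a player's match-by-match first-serve percentage and a win/loss indicator. The sketch below uses invented numbers shaped like a Djokovic-style profile, not his actual match log:

```python
import numpy as np

def first_serve_correlation(performances):
    """Correlation between a player's first-serve percentage and whether he
    won the match, given (first_serve_pct, won_match) pairs."""
    pcts = np.array([p for p, _ in performances], dtype=float)
    wins = np.array([float(w) for _, w in performances])
    return np.corrcoef(pcts, wins)[0, 1]

# Invented matches with a Djokovic-like shape: the losses tend to come on
# days with more first serves in, so the correlation comes out negative.
sample = [(0.71, False), (0.68, False), (0.62, True), (0.59, True), (0.65, True)]
print(f"r = {first_serve_correlation(sample):+.2f}")
```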

Djokovic isn’t alone in displaying this sort of negative relationship, either. The three tour regulars with even more extreme profiles over the last five years are Marin Cilic, Gilles Simon, and the always-unique John Isner.

Isner regularly posts first-serve percentages well above those of other players, including 39 career matches in which he topped 75%. That sort of number would be a near guarantee of victory for most players–for instance, Andy Murray is 32-3 in matches when he hits at least 70% of first serves in–but Isner has only won 62% of his 75%+ performances.  He is nearly as good (57%) when landing 65% or fewer of his first serves.

Djokovic, Isner, and this handful of others reveal a topic on which the tennis conventional wisdom can tie itself in knots. You need to make your first serve, but your first serve also needs to be a weapon, so you can’t take too much off of it.

The specific implied relationship–that every player has a “sweet spot” between giving up too much power and missing too many first serves–doesn’t show up in the numbers. But it does seem that different players face different risks.  The typical pro could stand to make more first serves. But a few guys find that their results improve when they make fewer–presumably because they’re taking more risks in an attempt to hit better ones.

Demonstrating the key

Of the players who made the cut for this study–at least 10 matches each at 10 different first-serve-percentage levels in the last five years–9 of 21 display relationships between first-serve percentage and winning percentage at least as positive as Isner’s is negative.  The most traditional player in that regard is Philipp Kohlschreiber. His graph looks a bit like a horse:

[Graph: Kohlschreiber's match winning percentage by first-serve percentage]

More than any other player, Kohli’s results have a fairly clear-cut inflection point. While it’s obscured a bit by the noisy dip at 64%, the German wins far more matches when he reaches 65% than when he doesn’t.

Kohlschreiber is joined by a group almost as motley as the one that sits at the other extreme. The other players with the strongest positive relationships between first serve percentage and winning percentage are Richard Gasquet, Murray, Roger Federer, Jeremy Chardy, and Juan Martin del Potro.

These player-specific findings tell us that in some matchups, we’ll have to be a little more subtle in what we look for from each guy. When Murray plays Djokovic, we should keep an eye on the first-serve percentages of both competitors–the one to see that he’s making enough, and the other to check that he isn’t making too many.

Filed under Keys to the match, Novak Djokovic, Research

Analytics That Aren’t: Why I’m Not Excited about SAP in Tennis

It’s not analytics, it’s marketing.

The Grand Slams (with IBM) and now the WTA (with SAP) are claiming to deliver powerful analytics to tennis fans.  And it’s certainly true that IBM and SAP collect way more data than the tours would without them.  But what happens to that data?  What analytics do fans actually get?

Based on our experience after several years of IBM working with the Slams and Hawkeye operating at top tournaments, the answers aren’t very promising.  IBM tracks lots of interesting stats, makes some shiny graphs available during matches, and the end result of all this is … Keys to the Match?

Once matches are over and the performance of the Keys to the Match are (blessedly) forgotten, all that data goes into a black hole.

Here’s the message: IBM collects the data. IBM analyzes the data. IBM owns the data. IBM plasters its logo and its “Big Data” slogans all over anything that contains any part of the data. The tournaments and tours are complicit in this: IBM signs a big contract, makes its analytics part of its marketing, and the tournaments and tours consider it a big step forward for tennis analysis.

Sometimes, marketing-driven analytics can be fun.  It gives some fans what they want–counts of forehand winners, or average first-serve speeds. But let’s not fool ourselves. What IBM offers isn’t advancing our knowledge of tennis. In fact, it may be strengthening the same false beliefs that analytical work should be correcting.

SAP: Same Story (So Far)

Early evidence suggests that SAP, in its partnership with the WTA, will follow exactly the same model:

SAP will provide the media with insightful and easily consumable post-match notes which offer point-by-point analysis via a simple point tracker, highlight key events in the match, and compare previous head-to-head and 2013 season performance statistics.

“Easily consumable” is code for “we decide what the narratives are, and we come up with numbers to amplify those narratives.”

Narrative-driven analytics are just as bad as–and perhaps more insidious than–marketing-driven analytics, which are simply useless.  The amount of raw data generated in a tennis match is enormous, which is why TV broadcasts give us the same small tidbits of Hawkeye data: distance run during a point, average rally hit point, and so on.  So, under the weight of all those possibilities, why not just find the numbers that support the prevailing narrative? The media will cite those numbers, the fans will feel edified, and SAP will get its name dropped all over the place.

What we’re missing here is context.  Take this SAP-generated stat from a writeup on the WTA site:

The first promising sign for Sharapova against Kanepi was her rally hit point. Sharapova made contact with the ball 76% of the time behind the baseline compared to 89% for her opponent. It doesn’t matter so much what the percentage is – only that it is better than the person standing on the other side of the net.

Is that actually true? I don’t think anyone has ever published any research on whether rally hit point correlates with winning, though it seems sensible enough. In any case, these numbers are crying out for more context.  Is 76% good for Maria? How about keeping her opponent behind the baseline 89% of the time? Is the gap between 76% and 89% particularly large on the WTA? Does Maria’s rally hit point in one match tell us anything about her likely rally hit point in her next match?  After all, the article purports to offer “keys to match” for Maria against her next opponent, Serena Williams.

Here’s another one:

There is a lot to be said for winning the first point of your own service game and that rung true for Sharapova in her quarterfinal. When she won the opening point in 11 of her service games she went on to win nine of those games.

Is there any evidence that winning your first point is more valuable than, say, winning your second point?  Does Sharapova typically have a tough time winning her opening service point?  Is Kanepi a notably difficult returner on the deuce side, or early in games?  “There is a lot to be said” means, roughly, that “we hear this claim a lot, and SAP generated this stat.”

In any type of analytical work, context is everything.  Narrative-driven analytics strip out all context.

The alternative

IBM, SAP, and Hawkeye are tracking a huge amount of tennis data.  For the most part, the raw data is inaccessible to researchers.  The outsiders who are most likely to provide the context that tennis stats so desperately need just don’t have the tools to evaluate these narrative-driven offerings.

Other sporting organizations–notably Major League Baseball–make huge amounts of raw data available.  All this data makes fans more engaged, not less. It’s simply another way for the tours to get fans excited about the game. Statheads–and the lovely people who read their blogs–buy tickets too.

So, SAP, how about it?  Make your branded graphics for TV broadcasts. Provide your easily consumable stats for the media.  But while you’re at it, make your raw data available for independent researchers. That’s something we should all be able to get excited about.

Filed under Data, Keys to the match, Research

Is There an Advantage To Serving First?

There’s no structural bias toward the player who serves first.  If tennis players were robots, it wouldn’t matter who toed the line before the other.

But the conventional wisdom persists.  Last year, I looked at the first-server advantage in very close matches, and found that depending on the scenario, the player who serves first in the final set may win more than 50% of matches–as high as 55%–but the evidence is cloudy.  And that’s based on serving first at the tail end of the match.  Winning the coin toss doesn’t guarantee you that position for the third or fifth set.

Logically, then, it’s hard to see how serving the first game of the match–and holding that possible slight advantage in the first set–would have much impact on the outcome of the match.  There’s simply too much time, and too many events, between the first game and the pressure-packed crucial moments that decide the match.

Yet, the evidence points to a substantial first-server advantage.

In ATP main-draw matches this year, the player who served first won 52% of the time.  That edge is confirmed when we adjust for individual players.

39 players tallied at least 10 matches in which they served first and 10 in which they served second.  Of those 39, 21 were more successful when serving first, against 17 who won more often when serving second.  (Marcos Baghdatis won at the same rate either way.)  Weigh their results by their number of matches, and the average tour-level regular was 11% more likely to win when serving first than when serving second.  Converted to the same terms as the general finding, that’s 52.6% of matches in favor of the first server.
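
For anyone who wants to check that conversion: if the typical player is some fraction more likely to win when serving first, and every match has exactly one first server and one second server, the two win rates must sum to 100%. A few lines of code make the arithmetic explicit (the same conversion applies to the Challenger and WTA figures below):

```python
def first_server_win_share(relative_edge):
    """Share of matches won by the first server, if the typical player is
    `relative_edge` more likely to win when serving first than second."""
    # p_first = (1 + relative_edge) * p_second and p_first + p_second = 1
    return (1 + relative_edge) / (2 + relative_edge)

print(f"{first_server_win_share(0.11):.1%}")  # 11% edge -> 52.6% of matches
```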

That’s not an airtight conclusion, but it is a suggestive one.  One possible problem would arise if lesser players–the guys who play some ATP matches against that top 39, but not enough to show up in the 39 themselves–are more likely to choose returning first.  Then, our top 39 would be winning 52.6% of matches against a lesser pool of opponents.

That doesn’t seem to be the case.  I looked at the next 60 or so players, ranked by how many ATP matches they’ve played this year.  That secondary group served first 51% of the time, indicating that the guys on the fringe of the tour don’t have any kind of consistent tendency when winning the coin toss.

For further confirmation, I ran the same algorithm for ATP Challenger matches this year.  That returned another decent-sized set of players with at least 10 matches serving first and 10 matches serving second–38, in this case.  The end result is almost identical.  The Challenger regulars were 9% more likely to win when serving first, which translates to the first server winning 52.2% of the time.

This is a particularly interesting finding, because in the aggregate, these 38 Challenger regulars prefer to serve second.  Of their 1110 matches so far this year, these guys served first only 503 times–about 45%.  Despite such a strong preference, the match results tell the story.  They are more likely to win when serving first.

When we turn our attention to the WTA tour, the results are so strong as to be head-scratching.  Applying the same test to 2013 WTA matches (though lowering the minimum number of matches to eight each, to ensure a similar number of players), the 35 most active players on the WTA tour are 28% more likely to win when serving first than when serving second.  In other words, when a top player is on the court, the first server wins about 56.3% of the time.  24 of the 35 players in this sample have better winning percentages when serving first than when serving second.

For something that cannot be attributed to a structural bias, a factor that can only be described as mental, I’m reluctant to put too much faith in these WTA results without further research.  However, the simple fact that ATP, Challenger, and WTA results agreed in direction is encouraging.  The first-server advantage may not be overwhelming, but it appears to be real.

Filed under Research

Simpler, Better Keys to the Match

If you watched the US Open or visited its website at any point in the last two weeks, you surely noticed the involvement of IBM.  Logos and banner ads were everywhere, and even usually-reliable news sites made a point of telling us about the company’s cutting-edge analytics.

Particularly difficult to miss were the IBM “Keys to the Match,” three indicators per player per match.  The name and nature of the “keys” strongly imply some kind of predictive power: IBM refers to its tennis offerings as “predictive analytics” and endlessly trumpets its database of 41 million data points.

Yet, as Carl Bialik wrote for the Wall Street Journal, these analytics aren’t so predictive.

It’s common to find that the losing player met more “keys” than the winner did, as was the case in the Djokovic-Wawrinka semifinal.  Even when the winner captured more keys, some of these indicators sound particularly irrelevant, such as “average less than 6.5 points per game serving,” the one key that Rafael Nadal failed to meet in yesterday’s victory.

According to one IBM rep, their team is looking for “unusual” statistics, and in that they succeeded.  But tennis is a simple game, and unless you drill down to components and do insightful work that no one has ever done in tennis analytics, there are only a few stats that matter.  In their quest for the unusual, IBM’s team missed out on the predictive.

IBM vs generic

IBM offered keys for 86 of the 127 men’s matches at the US Open this year.  In 20 of those matches, the loser met as many or more of the keys as the winner did.  On average, the winner of each match met 1.13 more IBM keys than the loser did.

This is IBM’s best performance of the year so far.  At Wimbledon, winners averaged 1.02 more keys than losers, and in 24 matches, the loser met as many or more keys as the winner.  At Roland Garros, the numbers were 0.98 and 21, and at the Australian Open, the numbers were 1.08 and 21.

Without some kind of reference point, it’s tough to know how good or bad these numbers are.  As Carl noted: “Maybe tennis is so difficult to analyze that these keys do better than anyone else could without IBM’s reams of data and complex computer models.”

It’s not that difficult.  In fact, IBM’s millions of data points and scores of “unusual” statistics are complicating what could be very simple.

I tested some basic stats to discover whether there were more straightforward indicators that might outperform IBM’s. (Carl calls them “Sackmann Keys;” I’m going to call them “generic keys.”)  It is remarkable just how easy it was to create a set of generic keys that matched, or even slightly outperformed, IBM’s numbers.

Unsurprisingly, two of the most effective stats are winning percentage on first serves, and winning percentage on second serves.  As I’ll discuss in future posts, these stats–and others–show surprising discontinuities.  That is to say, there is a clear level at which another percentage point or two makes a huge difference in a player’s chances of winning a match.  These measurements are tailor-made for keys.

For a third key, I tried first-serve percentage.  It doesn’t have nearly the same predictive power as the other two statistics, but it has the benefit of no clear correlation with them.  You can have a high first-serve percentage but a low rate of first-serve or second-serve points won, and vice versa.  And contrary to some received wisdom, there does not seem to be a high level of first-serve percentage beyond which making more first serves becomes a bad thing.  It’s not linear, but the more first serves you put in the box, the better your odds of winning.

Put it all together, and we have three generic keys:

  • Winning percentage on first-serve points better than 74%
  • Winning percentage on second-serve points better than 52%
  • First-serve percentage better than 62%

These numbers are based on the last few years of ATP results on every surface except for clay.  For simplicity’s sake, I grouped together grass, hard, and indoor hard, even though separating those surfaces might yield slightly more predictive indicators.
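
To show just how simple these keys are to apply, the snippet below scores a single, hypothetical hard-court stat line against the three thresholds. It is only an illustration of the idea, not the code behind the comparison that follows.

```python
GENERIC_KEYS = [
    ("first-serve points won > 74%", lambda s: s["first_won"] > 0.74),
    ("second-serve points won > 52%", lambda s: s["second_won"] > 0.52),
    ("first serves in > 62%", lambda s: s["first_in"] > 0.62),
]

def keys_met(stats):
    """Return the names of the generic keys a stat line satisfies."""
    return [name for name, test in GENERIC_KEYS if test(stats)]

# A hypothetical hard-court performance:
stat_line = {"first_in": 0.65, "first_won": 0.77, "second_won": 0.50}
met = keys_met(stat_line)
print(f"{len(met)}/3 keys met: {met}")
```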

For those 86 men’s matches at the Open this year with IBM keys, the generic keys did a little bit better.  Using my indicators–the same three for every player–the loser met as many or more keys 16 times (compared to IBM’s 20) and the winner averaged 1.15 more keys (compared to IBM’s 1.13) than the loser.  Results for other slams (with slightly different thresholds for the different surface at Roland Garros) netted similar numbers.

A smarter planet

It’s no accident that the simplest, most generic possible approach to keys provided better results than IBM’s focus on the complex and unusual.  It also helps that the generic keys are grounded in domain-specific knowledge (however rudimentary), while many of the IBM keys, such as average first serve speeds below a given number of miles per hour, or set lengths measured in minutes, reek of domain ignorance.

Indeed, comments from IBM’s reps suggest that marketing is more important than accuracy.  In Carl’s post, a rep was quoted as saying, “It’s not predictive,” despite the large and brightly-colored announcements to the contrary plastered all over the IBM-powered US Open site.  “Engagement” keeps coming up, even though engaging (and unusual) numbers may have nothing to do with match outcomes, and much of the fan engagement I’ve seen is negative.

Then again, maybe the old saw is correct: It’s all good publicity as long as they spell your name right.  And it’s not hard to spell “IBM.”

Better keys, more insight

Amid such a marketing effort, it’s easy to lose sight of the fact that the idea of match keys is a good one.  Commentators often talk about hitting certain targets, like 70% of first serves in.  Yet to my knowledge, no one had done the research.

With my generic keys as a first step, this path could get a lot more interesting.  While these single numbers are good guides to performance on hard courts, several extensions spring to mind.

Mainly, these numbers could be improved by making player-specific adjustments.  74% of first-serve points is adequate for an average returner, but what about a poor returner like John Isner?  His average first-serve winning percentage this year is nearly 79%, suggesting that he needs to come closer to that number to beat most players.  For other players, perhaps a higher rate of first serves in is crucial for victory.  Or their thresholds vary particularly dramatically based on surface.

In future posts, I’ll delve into more detail regarding these generic keys and  investigate ways in which they might be improved.  Outperforming IBM is gratifying, but if our goal is really a “smarter planet,” there is a lot more research to pursue.

Filed under Keys to the match, Research, U.S. Open

Avoiding Double Faults When It Matters

The more gut-wrenching the moment, the more likely it is to stick in memory.  We easily recall our favorite player double-faulting away an important game; we quickly forget the double fault at 30-0 in the middle of the previous set.  Which one is more common? The mega-choke or the irrelevancy?

There are three main factors that contribute to double faults:

  1. Aggressiveness on second serve. Go for too much, you’ll hit more double faults.  Go for too little, your opponent will hit better returns.
  2. Weakness under pressure. If you miss this one, you lose the point. The bigger the point, the more pressure to deliver.
  3. Chance. No server is perfect, and every once in a while, a second serve will go wrong for no good reason.  (Also, wind shifts, distractions, broken strings, and so on.)

In this post, I’ll introduce a method to help us measure how much each of those factors influences double faults on the ATP tour. We’ll soon have some answers.

In-game volatility

At 30-40, there’s more at stake than at 0-0 or 30-0.  If you believe double faults are largely a function of server weakness under pressure, you would expect more double faults at 30-40 than at lower-pressure moments.  To properly address the question, we need to attach some numbers to the concepts of “high pressure” and “low pressure.”

That’s where volatility comes in.  It quantifies how much a point matters by comparing the server’s chances of holding after winning the point with his chances after losing it.  An average server on the ATP tour starts a game with an 81.2% chance of holding serve.  If he wins the first point, his chances of winning the game increase to 89.4%. If he loses, the odds fall to 66.7%.  The volatility of that first point is defined as the difference between those two outcomes: 89.4% – 66.7% = 22.7%.

(Of course, any number of things can tweak the odds. A big server, a fast surface, or a crappy returner will increase the hold percentages. These are all averages.)

The least volatile point is 40-0, when the volatility is 3.1%. If the server wins, he wins the game (after which, his probability of winning the game is, well, 100%). If he loses, he falls to 40-15, where the heavy server bias of the men’s game means he still has a 96.9% chance of holding serve.

The most volatile point is 30-40 (or ad-out, which is logically equivalent), when the volatility is 76.0%.  If the server wins, he gets back to deuce, which is strongly in his favor. If he loses, he’s been broken.
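
For the curious, those volatility figures can be reproduced with a few lines of recursion, assuming the server wins every point independently with the same probability. A value near 0.64 lands close to the 81.2%, 22.7%, 3.1%, and 76.0% figures quoted above; treat it as a sketch of the calculation rather than a definitive model.

```python
from functools import lru_cache

P = 0.64  # assumed probability that the server wins any given point

@lru_cache(maxsize=None)
def hold_prob(server_pts, returner_pts):
    """Chance the server holds from a point score (0=love, 1=15, 2=30, 3=40)."""
    if server_pts >= 4 and server_pts - returner_pts >= 2:
        return 1.0
    if returner_pts >= 4 and returner_pts - server_pts >= 2:
        return 0.0
    if server_pts == 3 and returner_pts == 3:   # deuce
        return P * P / (P * P + (1 - P) * (1 - P))
    return P * hold_prob(server_pts + 1, returner_pts) \
        + (1 - P) * hold_prob(server_pts, returner_pts + 1)

def volatility(server_pts, returner_pts):
    """Gap between the hold probability after winning the next point and
    after losing it -- how much the point matters."""
    return hold_prob(server_pts + 1, returner_pts) \
        - hold_prob(server_pts, returner_pts + 1)

print(f"hold from 0-0:       {hold_prob(0, 0):.1%}")   # ~81%
print(f"volatility at 0-0:   {volatility(0, 0):.1%}")  # ~22.7%
print(f"volatility at 40-0:  {volatility(3, 0):.1%}")  # ~3.1%
print(f"volatility at 30-40: {volatility(2, 3):.1%}")  # ~76.0%
```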

Mixing in double faults

Using point-by-point data from 2012 Grand Slam tournaments, we can group double faults by game score.  At 40-0, the server double faulted 3.0% of points; at 30-0, 4.2%; at ad-out, 2.8%.

At any of the nine least volatile scores, servers double faulted 3.0% of points. At the nine most volatile scores, the rate was only 2.7%.

(At the end of this post, you can find more complete results.)

To be a little more sophisticated about it, we can measure the correlation between double-fault rate and volatility.  The relationship is obviously negative, with an r-squared of .367.  Given the relative rarity of double faults and the possibility that a player will simply lose concentration for a moment at any time, that’s a reasonably meaningful relationship.

And in fact, we can do better.  Scores like 30-0 and 40-0 are dominated by better servers, while weaker servers are more likely to end up at 30-40. To control for those slightly different populations, we can use “adjusted double faults,” estimating how many DFs we’d expect at each score given which servers actually played those points.  For instance, we find that at 30-0, servers double fault 26.7% more than their season average, while at 30-40, they double fault 28.6% less than average.
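
Concretely, the adjustment looks something like this. The rows below are invented, with rates shaped to echo the figures quoted in this post: each one records how many service points were played at a score, how many ended in double faults, and how many double faults the servers' season-long rates would have predicted.

```python
# Hypothetical per-score totals: points played, double faults hit, and the
# double faults predicted by the servers' season-long rates.
score_totals = [
    {"score": "40-0",  "points": 1500, "dfs": 45, "expected_dfs": 41.0},
    {"score": "30-0",  "points": 1800, "dfs": 76, "expected_dfs": 60.0},
    {"score": "30-40", "points": 1100, "dfs": 31, "expected_dfs": 43.4},
]

for row in score_totals:
    raw_rate = row["dfs"] / row["points"]
    adjusted = row["dfs"] / row["expected_dfs"] - 1   # vs. season averages
    print(f'{row["score"]}: {raw_rate:.1%} raw, {adjusted:+.1%} vs. expectation')
```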

Running the numbers with adjusted double fault rate instead of actual double faults, we get an r-squared of .444.  To a moderate extent, servers limit their double faults as the pressure builds against them.

More pressure on pressure

At any pivotal moment, one where a single point could decide the game, set, or match, servers double fault less than their seasonal average.  On break point, 19.1% less than average. With set point on their racket, 22.2% less. Facing set point, a whopping 45.2% less.

The numbers are equally dramatic on match point, though the limited sample means we can only read so much into them.  On match point, servers double faulted only 4 times in 296 opportunities (1.4%), while facing match point, they double faulted only 4 times in 191 chances (2.2%).

Better concentration or just backing off?

By now, it’s clear that double faults are less frequent on important points.  Idle psychologizing might lead us to conclude that players lose concentration on unimportant points, leading to double faults at 40-0. Or that they buckle down and focus on the big points.

While there is surely some truth in the psychologizing–after all, Ernests Gulbis is in our sample–it is more likely that players manage their double fault rates by changing their second-serve approach.  With a better than 9-in-10 chance of winning a game, why carefully spin it in when you can hit a flashy topspin gem into the corner?  At break point, there’s no thought of gems, just fighting on to play another point.

And here, the numbers back us up, at least a little bit.  If players are avoiding double faults by hitting more conservative second serves on important points, we would expect them to lose a few more second serve points when the serve lands in play.

It’s a weak relationship, but at least it points in the expected direction.  The correlation between in-game volatility and percentage of second-serve points won is negative (r = -0.282, r-squared = 0.08).  The results may be complicated by the returner’s similarly conservative approach on such points, when his initial goal is simply to keep the ball in play.

Clearly, chance plays a substantial role in double faults, as we expected from the beginning.  It’s also clear that there’s more to it.  Some players do succumb to the pressure and double fault some of the time, but those moments represent the minority.  Servers demonstrate the ability to limit double faults, and do so as the importance of the point increases.

Filed under Research, Serve statistics

The Unlikeliness of Inducing Double Faults

Some players are much better returners than others.  Many players are such good returners that everyone knows it, agrees upon it, and changes their game accordingly.  This much, I suspect, we can all agree on.

How far does that go? When players are altering their service tactics and changing their risk calculations based on the man on the other side of the net, does the effect show up in the numbers? Do players double fault more or less depending on their opponent?

Put it another way: Do some players consistently induce more double faults than others?

The conventional wisdom, to the extent the issue is raised, is yes.  When a server faces a strong returner, like Andy Murray or Gilles Simon, it’s not unusual to hear a commentator explain that the server is under more pressure, and when a second serve misses the box, the returner often gets the credit.

Credit where credit isn’t due

In the last 52 weeks, Jeremy Chardy‘s opponents have hit double faults on 4.3% of their service points, the highest rate of anyone in the top 50.  At the other extreme, Simon’s opponents doubled only 2.8% of the time, with Novak Djokovic and Rafael Nadal just ahead of him at 2.9% and 3.0%, respectively.

The conventional wisdom isn’t off to a good start.

But the simple numbers are misleading–as the simple numbers so often are.  Djokovic and Nadal, going deep into tournaments almost every week, play tougher opponents.  Djokovic’s median opponent over the last year was ranked 21st, while Chardy’s was outside the top 50.  While it isn’t always true that higher-ranked opponents hit fewer double faults, it’s certainly something worth taking into consideration.  So even though Chardy has certainly benefited from some poorly aimed second serves, it may not be accurate to say he has benefited the most–he might have simply faced a schedule full of would-be Fernando Verdascos.

Looking now at the most recent full season, 2012, it turns out that Djokovic did face the players least likely to double fault.  Based on their season-long rates, his opponents would have been expected to double fault on 2.9% of points, compared to 3.9% for Filippo Volandri’s opponents.  These are minor differences when compared to all points played, but they are enormous when attempting to measure the returner’s impact on DF rate.  While Djokovic “induced” double faults on 3.0% of points and Volandri did so on 3.9% of points, you can see the importance of considering their opponents: despite the difference in rates, neither player had much effect on the men serving to them, at least as far as double faulting is concerned.

This approach lets us express each player’s opponents’ DF rate in more useful terms: relative to an “expected” DF rate.  Volandri benefited from 1% more doubles than expected, Chardy enjoyed a whopping 39% more than expected, and–to illustrate the other extreme–Simon received 31% fewer doubles than his opponents would have been expected to hit.
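
The adjustment is easy to sketch: for one returner, tally the double faults his opponents actually hit against him and compare that with what those same opponents' rates against the rest of the tour would have predicted. The opponents and numbers below are hypothetical.

```python
# One (hypothetical) row per opponent faced: service points played against
# the returner, double faults hit, and that opponent's DF rate vs. the tour.
opponents = [
    {"points": 80, "dfs": 4, "tour_df_rate": 0.033},
    {"points": 65, "dfs": 1, "tour_df_rate": 0.025},
    {"points": 90, "dfs": 3, "tour_df_rate": 0.038},
]

points = sum(o["points"] for o in opponents)
actual_dfs = sum(o["dfs"] for o in opponents)
expected_dfs = sum(o["points"] * o["tour_df_rate"] for o in opponents)

print(f"actual DF rate:   {actual_dfs / points:.1%}")
print(f"expected DF rate: {expected_dfs / points:.1%}")
print(f"relative to expected: {actual_dfs / expected_dfs - 1:+.0%}")
```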

You can’t always get what you want

One thing is clear by now. Regardless of your method and its sophistication, some players got a lot more free return points in 2012 than others.  But is it a skill?

If it is a skill, we would expect the same players to top the leaderboard from one year to the next.  Or, at least, the same players would “induce” more double faults than expected from one year to the next.

They don’t.  I found 1405 consecutive pairs of “player-years” since 1991 with at least 30 matches against tour-level regulars in each season. Then I compared their adjusted opponents’ double fault rate in year one with the rate in year two.  The correlation is positive, but very weak: r = 0.13.

Nadal, one player who we would expect to have an effect on his opponents, makes for a good illustration.  In the last nine years, he has had six seasons in which he received fewer doubles than expected, three with more.  In 2011, it was 15% fewer than expected; last year, it was 9% more. Murray has fluctuated between -18% and +25%. Lots of noise, very little signal.

There may be a very small number of players who affect the rate of double faults (positively or negatively) consistently over the course of their career, but a much greater amount of the variation between players is attributable to luck.  Let’s hope Chardy hasn’t built a new game plan around his ability to induce double faults.

The value of negative results

Regular readers of the blog shouldn’t be surprised to plow through 600 words just to reach a conclusion of “nothing to see here.”  Sorry about that. Positive findings are always more fun. Plus, they give you more interesting things to talk about at cocktail parties.

Despite the lack of excitement, there are two reasons to persist in publishing (and, on your end, understanding) negative findings.

First, negative results indicate when journalists and commentators are selling us a bill of goods. We all like stories, and commentators make their living “explaining” causal connections.  Sometimes they’re just making things up as they go along. “That’s bad luck” is a common explanation when a would-be winner clips the net cord, but rarely otherwise.  However, there’s a lot more luck in sport than these obvious instances.  We’re smarter, more rational fans when we understand this.

(Though I don’t know if being smarter or rational helps us enjoy the sport more.  Sorry about that, too.)

Second, negative results can have predictive value. If a player has benefited or suffered from an extreme opponents’ double-fault rate (or tiebreak percentage) and we also know that there is little year-to-year correlation, we can expect that the stat will go back to normal next year. In Chardy’s case, we can predict he won’t get as many free return points, thus he won’t continue to win quite as many return points, thus his overall results might suffer.  Admittedly, in the case of this statistic, regression to the mean would have a tiny effect on something like winning percentage or ATP rank.

So at Heavy Topspin, negative results are here to stay. More importantly, we can all stop trying to figure out how Jeremy Chardy is inducing all those double faults.

Filed under Research, Serve statistics