The April 6, 2009 issue of Sports Illustrated contains an article by Albert Chen titled "Baseball's Next Top Models"; Chen describes how baseball teams are using advanced statistics to ascertain which players are the best fielders at each position. The Tampa Bay Rays won the 2008 American League championship largely because they tremendously improved their defense by using advanced statistics as the basis for various personnel moves and for deciding how to deploy the players on their roster most effectively to maximize their defensive skills (for instance, they moved Akinori Iwamura from third base to second base not only because his defensive statistics were better at the latter position but also to make room for Evan Longoria to be called up as the new third baseman). Baseball statisticians have access to data that pinpoints where every single batted ball went and whether or not the fielder converted that play into an out. Although there are at least 10 players on a baseball field at any given time (one pitcher, eight fielders, one batter--assuming that there are no men on base), virtually everything that happens when the ball is in play can be broken down into a series of discrete, one-on-one actions: the pitcher throws the ball, the batter swings and, if he makes contact, a fielder attempts to catch the ball. Therefore, if one gathers a large enough sample of data, it is possible to create reliable models regarding pitchers, batters and fielders.
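To make the baseball side concrete, here is a minimal sketch (using entirely hypothetical batted-ball records and zone labels, not any real play-by-play feed) of the kind of fielding model that this type of data supports: count how often a fielder converts balls hit to a given zone into outs and compare that rate against another fielder or a league baseline.

```python
# A minimal sketch of the kind of fielding model that batted-ball data supports.
# The records and zone labels below are hypothetical placeholders, not a real feed.
from collections import defaultdict

# Each record: (fielder, zone the ball was hit to, whether the play became an out)
batted_balls = [
    ("Fielder A", "shallow left", True),
    ("Fielder A", "shallow left", True),
    ("Fielder A", "up the middle", False),
    ("Fielder B", "shallow left", True),
    ("Fielder B", "shallow left", False),
    ("Fielder B", "up the middle", False),
]

def out_conversion_rates(records):
    """Fraction of balls hit to each zone that a given fielder turned into outs."""
    chances = defaultdict(int)
    outs = defaultdict(int)
    for fielder, zone, converted in records:
        chances[(fielder, zone)] += 1
        outs[(fielder, zone)] += int(converted)
    return {key: outs[key] / chances[key] for key in chances}

for (fielder, zone), rate in sorted(out_conversion_rates(batted_balls).items()):
    print(f"{fielder:<10} | {zone:<13} | out rate {rate:.2f}")
```

Because each batted ball is a discrete event with a known location and outcome, a large enough pile of these records really can isolate one fielder's contribution from everyone else's.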
Obviously, basketball is a much more fluid and complex sport than baseball, at least in terms of constructing meaningful statistical models: even during an "isolation" play ostensibly involving only one ballhandler and one defender, the other eight players on the court all have the potential to affect what will happen--the other four defenders may end up trapping and rotating, while the other four offensive players (depending on their size and skill sets) may be called upon to set a screen, cut to the hoop, spot up for an open jump shot or grab an offensive rebound. The play may result in an offensive rebound tip-dunk or a made three-pointer that never would have happened if the original ballhandler had not been talented enough to attract extra defensive attention, but in the box score that original ballhandler may either receive credit for nothing (if he passes the ball and the recipient then swings it to a player who ultimately makes a three-pointer) or he may even record a negative statistic (a missed field goal attempt), despite the fact that his actions directly led to the opening that created the putback opportunity.
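To see how little of that sequence a box score captures, here is a small illustrative sketch (the players and the play are invented for this example) of how the standard counting stats would credit such a possession:

```python
# A small illustration of how a standard box score credits one hypothetical possession.
# The names and the sequence of events are invented for this example.
from collections import Counter

box = Counter()

# The ballhandler beats his man, draws the help defense and misses a contested layup...
box[("Ballhandler", "FGA")] += 1   # he is charged with a missed field goal attempt
# ...but the scrambled defense leaves a teammate free to tip in the rebound.
box[("Teammate", "OREB")] += 1
box[("Teammate", "FGA")] += 1
box[("Teammate", "FGM")] += 1
box[("Teammate", "PTS")] += 2

for (player, stat), value in sorted(box.items()):
    print(f"{player:<11} {stat:<4} {value}")
# The ballhandler's line shows nothing but a missed shot, even though his
# penetration created the putback opportunity.
```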
Clearly, it is difficult for basketball statistics to fully capture what happens offensively; progress has been made in this regard but it is far from an exact science--and it is even more challenging to accurately measure basketball defense, particularly on an individual level. A perfect example of why individual basketball defense is tough to quantify took place in the first quarter of Boston's 106-104 game five overtime victory over Chicago: Kendrick Perkins caught the ball on the left block against Tyrus Thomas, spun baseline and scored a layup. TNT's Doug Collins noted that Thomas had positioned himself by Perkins' left shoulder (i.e., overplaying Perkins to force him toward the baseline) because Perkins' best move from that spot is to go to the middle and shoot a jump hook; Thomas was supposed to receive help on the baseline--on an earlier play, help defender Derrick Rose stole the ball so easily from Perkins that it looked like Rose was receiving a football handoff--but this time the help never arrived. How would a basketball "stat guru" evaluate that play in terms of Thomas' individual defense? Thomas' defensive rating would indicate that he allowed Perkins to score against him. Plus/minus data would award Perkins a +2 and Thomas a -2, and would also "indict" the other Bulls defenders who were on the court at that time without revealing who was really at fault. A knowledgeable basketball observer would understand--as Collins immediately explained to the viewers--that Thomas did what he was supposed to do but that the help defender never arrived. Multiply this type of scenario over thousands of plays during the course of a season and it is easy to see why someone who watches basketball with understanding may come to a completely different conclusion about a player's value/skill set than someone who relies on nothing but numbers.
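Here is a minimal sketch of why plus/minus cannot assign blame on that play (the lineups and the single scoring event below are stand-ins for illustration, not the actual game log): every defender on the floor is charged the same -2, with nothing to indicate that Thomas executed his assignment while the help never came.

```python
# A minimal sketch of how raw plus/minus is tallied from a play-by-play log.
# The lineups and the single scoring event below are stand-ins for illustration.
def plus_minus(events):
    """events: list of (scoring_team, points, {team: players on the court})."""
    totals = {}
    for scoring_team, points, on_court in events:
        for team, players in on_court.items():
            sign = 1 if team == scoring_team else -1
            for player in players:
                totals[player] = totals.get(player, 0) + sign * points
    return totals

on_court = {
    "Celtics": ["Perkins", "Rondo", "R. Allen", "Pierce", "Davis"],
    "Bulls": ["Thomas", "Rose", "Gordon", "Salmons", "Noah"],
}

# Perkins' baseline spin for a layup: Boston scores two points.
print(plus_minus([("Celtics", 2, on_court)]))
# Every Bulls player on the floor comes out at -2; the number cannot say that
# Thomas forced Perkins baseline exactly as the scheme asked him to.
```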
What about the success that Houston's General Manager Daryl Morey has had using advanced basketball statistics, as detailed in a New York Times article that I discussed here? If basketball statistical analysis is truly science and not pseudoscience, then it has to be based on the principles of the scientific method:
- Ask a Question
- Do Background Research
- Construct a Hypothesis
- Test Your Hypothesis by Doing an Experiment
- Analyze Your Data and Draw a Conclusion
- Communicate Your Results
Don't think that I am picking on Morey or Houston; as I wrote in my PBN article cited above, I appreciate that Morey is fully aware of the current limitations of basketball statistical analysis:
It cannot be emphasized strongly enough that Morey is not merely looking at spreadsheets and randomly assigning arcane values to certain combinations of numbers; statistics give him an indication of what to look for when he watches game film but he still has to watch game film to determine why players are putting up the numbers they do and to figure out what exactly those numbers mean.
In other words, Morey appears to understand the limits of a purely mathematical approach to the game and thus uses numbers to confirm what his eyes tell him -- and vice versa. This is a completely different approach from the one taken by far too many stat gurus who are so enamored with their formulas that they dismiss the importance of actually watching games -- perhaps because they are in fact not truly capable of watching basketball games with any real understanding of what is happening on the court.
It is a laudable goal for basketball statisticians to strive to analyze the sport as effectively as baseball statisticians evaluate baseball, but when "stat gurus" and their buddies in the writing business act as if basketball has already been "solved" from an analytical/statistical standpoint they are actually hurting their cause more than helping it, because intelligent observers can plainly see that such claims are false. As Cleveland General Manager Danny Ferry recently told me about basketball statistical analysis, "to just make decisions off of statistics would be a mistake but it can be an important part of the equation in basketball." It would be foolish for an NBA GM not to look at statistical data but it would be even more foolish for him to rely solely or even primarily on such data at this juncture; in the Perkins/Thomas example, it is much more useful for a GM or coach to know that Thomas did what he was assigned to do--and to find out which player missed the help assignment--than to get a spreadsheet filled with numbers detailing how many times Perkins scored in the post with Thomas as the primary defender, because without the proper context that data could be dangerously misleading if it influenced the GM or coach to make a negative evaluation of Thomas' defense.