After writing about instances where advanced statistics agreed and disagreed about players’ value, I think it’s a fitting moment to explore a mostly theoretical/philosophical topic that has been on my mind for a while now. I simply want to pose the questions below, with a vague hope that someone knows the answers and will share them with me. They are mostly very fundamental:
Which factors determine the popularity or usefulness of one advanced metric over another?
Is it even possible to create a statistic of NBA players’ value that would be universally accepted and impossible for others to ignore?
Even if we focus only on the minority of people in the “numbers can tell us enough to judge players without seeing them” camp, is it possible to create a metric that almost all nerds will use and whose merits they will agree with?
Within this small group, could there ever be agreement between boxscore and non-boxscore metrics, or will both sides simply follow their own paths forever?
As a community, do we even want to create something like that, or are we destined for a growing number of camps? And would that be a good thing?
I’m a little worried that so many years have been spent throwing the other side of the debate under the bus, which really could have turned off interested bystanders… but on the other hand, I think competition creates a great environment for improvement… but am I wrong, or have advanced statistics basically stood still, with only marginal changes along the way?
Whatever the answers…
What kind of ingredients should (or would) such a new metric have?
Accuracy in Evaluation
It’s an obvious reason and it would pretty much end this topic… if not for one huge problem:
almost every author is convinced that his metric is the best one! What makes this situation worse are the incentives: if someone has created his own advanced statistic… why would he use any other?
That’s why I wonder… aren’t metrics simply the best way to describe their author’s point of view?
That would imply that a metric’s popularity or usefulness is nothing but an estimate of the number of people who share the same point of view and therefore agree with all its assumptions.
Could it be that simple?
Availability of Data and Explanation
I’m pretty sure there are many interesting evaluation tools around the NBA… but they are for internal use only, so we just don’t know about them and probably never will.
What’s more, such priorities create a situation where talented people can be signed by an NBA team before they finish work started in a public forum.
But generally speaking, a metric’s availability gives other people a clear incentive to discuss it and spread the word about it. I don’t think a metric can become popular without that, and by the way, that’s why I’m totally baffled by Basketball Prospectus’ policy regarding WARP.
So far I haven’t even tried to create my own evaluation tool for NBA players, even though it’s probably typical for a nerd to wonder about it [to be clear, I spent a lot of time designing my own metric for fantasy basketball, so with that covered, I wonder about the next step to explore in this area]… but I’m not smarter or better equipped than the people who’ve already tried, so is there a point to this exercise? Is it possible to improve on the existing ones? What is the best-case scenario? If you’ve tried, would you recommend it?
In short, what’s the point of creating your own advanced metric right now?