The Next Phase for CBB Analytics (HV Weekly: 5/29/2020)

Dreaming about tracking data.

Welcome back to the Hoop Vision Weekly!

In today’s edition:

  • New defensive accounting tutorial for HV+ subscribers

  • Dreaming about what’s next for college basketball analytics — different potential applications and insights from tracking data

  • The final scouting breakdowns from the HV Tourney Bible

  • Links from around the college basketball world

New at HV+: Defensive Accounting Tutorial

On Wednesday, we sent out a defensive accounting tutorial to Hoop Vision PLUS subscribers.

A standard box score contains steals and blocks, but it doesn't assign credit (or blame) to individual defenders beyond those very basic counting stats. The defensive accounting report was something we used to help attack that problem.

To take a deeper look at the report, I used the first half of the February 29th game between Virginia and Duke. The video is a screen-recorded tutorial of the process.

Video Outline and Timestamps

  • 0:00 Intro

  • 2:30 General thoughts on measuring defense

  • 4:42 Explaining the defensive accounting report and what each column means

  • 18:03 Overview of the software used to make the report and the video workflow

  • 26:44 Tutorial charting the actual Virginia defensive possessions

Link: Defensive Accounting Tutorial (HV+)


For access to the tutorial and the full archive of HV+ research, become a Hoop Vision Plus subscriber for $10/month or $100/year.


The next phase for (college) basketball analytics

Interest in analytics at the college basketball level is at an all-time high. Statistical websites like kenpom have been successfully integrated into the coaching world, advancing the understanding of tempo-free stats, whether coaches realize it or not.

In 2013, Pete Thamel's Sports Illustrated article about Butler's secret weapon sparked added interest in analytics. Now, seven years later, we have examples like:

Do college coaches unanimously agree upon the value of analytics? Of course not, but there are few universally-accepted practices in the sport.

All of this increased interest, however, has not exactly been backed up with a financial commitment. In 2013, the NBA committed to installing tracking cameras in every arena. Not only did the cameras themselves cost $100,000, but the new source of data required hiring analytics staffers with the technical skill sets to extract insight from the data. By contrast, Drew Cannon — Butler’s “secret weapon” that same year — was a grad student paid $1,000 a month.

Instead of tracking data, college basketball analysts are generally relegated to using standard play-by-play and Synergy data — or manual charting. In many cases, these data sources simply aren’t granular enough to answer the more nuanced coaching questions.

That’s not to say tracking data isn’t coming down the pipeline. Like any new technology, the costs eventually drop over time. There is even work being done to generate the data without the expensive cameras at all.

From the NBA's tracking data, the one statistic that has consistently been shared with the public is "quantified shot quality" (QSQ). During last year's playoffs, for example, Quin Snyder said:

“We’re getting better shots than anyone in the playoffs based on QSQ ratings. We’re getting open looks. We can’t stop taking them.”

With the tracking data, QSQ evaluates the context of a shot (location, distance of the closest defender, number of dribbles, etc.) to determine quality. Over the course of a game or series, this is valuable information for a coach. Are good shots simply not falling? Or are large-scale changes needed to create better shots?
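The NBA's actual QSQ model is proprietary, but the basic idea can be sketched with a simple logistic model over shot-context features. Everything here — the feature set, the coefficients, the `Shot` structure — is a hypothetical illustration, not the real model:

```python
# Minimal sketch of a "shot quality" model in the spirit of QSQ.
# The real NBA models are proprietary; features and coefficients here
# are invented purely to illustrate the idea.
from dataclasses import dataclass
import math

@dataclass
class Shot:
    distance_ft: float          # shot distance from the rim
    closest_defender_ft: float  # distance to the nearest defender at release
    dribbles: int               # dribbles taken before the shot
    is_three: bool

def expected_fg_pct(shot: Shot) -> float:
    """Logistic model of make probability from shot context.
    Coefficients are made up, not fit to real data."""
    z = (1.2
         - 0.06 * shot.distance_ft
         + 0.10 * min(shot.closest_defender_ft, 10)   # openness caps out
         - 0.03 * shot.dribbles)
    return 1 / (1 + math.exp(-z))

def shot_quality(shots: list[Shot]) -> float:
    """Average expected points per shot: 'are we generating good looks?'"""
    return sum(expected_fg_pct(s) * (3 if s.is_three else 2)
               for s in shots) / len(shots)
```

The point of a model like this is that an open catch-and-shoot three grades out higher than a contested five-dribble pull-up from the same spot, regardless of whether either shot actually went in — which is exactly the "are good shots just not falling?" question.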

Unfortunately, we only get bits and pieces from the NBA. Most of the tracking data analysis is concealed behind layers of secrecy. Given that, I decided to let my mind wander. As a college coach with (theoretical) access to tracking data and machine learning capabilities, what actionable insights would I prioritize?

[1] Measuring offensive spacing and execution

Watch almost any Hoop Vision video over the last year and you are likely to notice the animated graphics highlighting the movement or action of the play. Consider this Kansas set play:

The video animates the zipper cut with "Zip", the dribble handoff with "DHO", and the other player movement with "Clear". We display these annotations to give the audience a deeper understanding of play design.

But in the process of creating the animations, I started noticing that it was actually a useful exercise for evaluating offensive execution and timing. Some players are very easy to animate — their movements are on time and on target. Other players struggle with execution.

With machine learning, a computer could be trained to understand the expected timing and spacing of a set play. Each time a team runs that play, the computer could then evaluate the spatial performance of each player.

Imagine a world where this analysis is automated and tied to a team’s video workflow. At halftime, a staffer could quickly pull up the plays with blown execution. In player meetings, a coach could sort plays by the execution level of that specific individual.
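The core of that spatial evaluation could be sketched quite simply: compare each player's tracked position to a coach-defined template at a few keyframes of the play. The play names, roles, and coordinates below are hypothetical; a real system would learn the template from film rather than hard-code it:

```python
# Sketch of scoring set-play execution from tracking data, assuming
# we have player (x, y) coordinates at a few key moments of the play
# (e.g. the zipper cut and the dribble handoff). The template values
# are illustrative, on a 94x50-ft court.
import math

# Where each role *should* be at each keyframe (coach-defined).
TEMPLATE = {
    "zip_cut": {"cutter": (19, 25), "handler": (30, 8)},
    "dho":     {"cutter": (24, 12), "handler": (24, 14)},
}

def execution_error(tracked: dict[str, dict[str, tuple[float, float]]]) -> dict[str, float]:
    """Per-player average distance (ft) from the template spot across
    keyframes. Lower means tighter, more on-time execution."""
    totals: dict[str, list[float]] = {}
    for frame, spots in TEMPLATE.items():
        for role, (tx, ty) in spots.items():
            px, py = tracked[frame][role]
            totals.setdefault(role, []).append(math.hypot(px - tx, py - ty))
    return {role: sum(dists) / len(dists) for role, dists in totals.items()}
```

A per-player number like this is exactly what would let a staffer sort clips by "worst execution" at halftime, or let a coach pull up one individual's blown assignments in a player meeting.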

The more reads or randomness in an offense, the harder it becomes to uniformly evaluate execution. But that doesn't mean the approach would be useless for a team with a more conceptually based offense.

When consulting for motion-heavy programs, for example, we work with them on evaluating offensive spacing during ball screen initiation. Tracking data could be used to identify the right balance between motion/randomness and spacing.

[2] Measuring defensive positioning and execution

It doesn’t matter what your source of information is — tracking data, box score data, or the eye-test — defense is inherently the more difficult side of the ball to measure.

(Take it from the guy who published a 12-minute video last week called Should You Help One Pass Away On Defense? and ultimately landed on… “Maybe?”)

So while it definitely wouldn’t be easy, tracking data is our best bet at answering some of the most important defensive questions with any sort of precision — especially when it comes to three-point defense.

Much like the offensive spacing metrics, I'm also picturing player-specific defensive metrics. How often does Player X get to the mid-line when he is two passes away? How often does Player Y deny when he is one pass away? How often does Player Z prevent the ball handler from using the screen during ice coverage?
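Take the first of those questions. With frame-level tracking, it reduces to a counting problem: of the frames where a defender's man is two passes away, what share does he spend at the mid-line? The frame format and the "at the mid-line" tolerance below are assumptions for illustration:

```python
# Sketch of a player-specific help-defense metric: how often is a
# defender at the mid-line when his man is two passes away?
# Frame fields and the tolerance are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    passes_away: int   # passes separating this defender's man from the ball
    defender_x: float  # defender position across the court (ft, 0-50)

MIDLINE_X = 25.0       # center of a 50-ft-wide court
TOLERANCE = 3.0        # within 3 ft counts as "at the mid-line"

def midline_rate(frames: list[Frame]) -> float:
    """Share of two-passes-away frames spent at the mid-line."""
    eligible = [f for f in frames if f.passes_away == 2]
    if not eligible:
        return 0.0
    at_line = [f for f in eligible if abs(f.defender_x - MIDLINE_X) <= TOLERANCE]
    return len(at_line) / len(eligible)
```

The deny and ice-coverage questions would follow the same pattern — filter frames to the relevant situation, then count how often the positioning rule is satisfied.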

[3] Measuring shot/transition reactions

My first year at New Mexico State, we had very specific rules for each offensive player after a shot went up. Players were designated prior to entering the game to be either a safety (get back on defense), a crasher (crash the glass), or a spy (find the opponent’s point guard and deny).

In our video and analytics workflow, we manually charted the execution of each player. The results were summary statistics like “missed crashes per 40 minutes”. This type of analysis seems tailor-made for robust tracking data.

The next step, of course, would be to evaluate the effects of the overall strategy. Does decreasing the number of players crashing the glass lead to better transition defense? If so, is it worth sacrificing offensive rebound percentage?
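The summary statistic we charted by hand is straightforward to compute once the events are tagged (whether by a staffer or, eventually, by tracking data). The event structure below is a hypothetical stand-in for however the charting is actually stored:

```python
# Sketch of the charting summary described above: a rate stat like
# "missed crashes per 40 minutes" from tagged shot events.
# The ShotEvent fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ShotEvent:
    player: str
    role: str        # "safety", "crasher", or "spy" on this possession
    executed: bool   # did the player carry out the assignment?

def missed_crashes_per_40(events: list[ShotEvent], player: str,
                          minutes_played: float) -> float:
    """Missed crash assignments, normalized to a 40-minute game."""
    missed = sum(1 for e in events
                 if e.player == player and e.role == "crasher" and not e.executed)
    return missed * 40 / minutes_played
```

The same event log would also feed the strategy-level question: aggregate by role rather than by player, and compare transition-defense and offensive-rebounding outcomes across different safety/crasher mixes.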

[4] Measuring player fatigue

This last one is related to biometrics (a different type of tracking data), and the ethics of wearables certainly need to be considered carefully. But using in-game data to drive substitution decisions, especially in college basketball where the stars play nearly the whole game, would be extremely useful.

Another New Mexico State anecdote: we played UNM twice in non-conference play in 2018, when they were experimenting with an extreme style of pressure defense and tempo. Prior to the game, we decided to script our first-half substitutions ahead of time.

In other words, we had a piece of paper with the exact times players were going to enter and exit the game. The idea, due to the unique pace and chaos of UNM, was to keep our players fresh no matter what — while optimizing lineups ahead of time.

We won the game, but the substitution idea did prove tricky to execute. In segments of the game where there was a long period of time without a dead ball, we were forced to go off script. Adjusting on the fly — while still maintaining lineup optimization — was harder than anticipated.

Regardless, measuring fatigue and taking some of the emotion out of substitution decisions generally seems like a beneficial idea for both lineup optimization and injury risk.
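At its simplest, "taking the emotion out" of the decision could look like a threshold rule: flag a player for the next dead ball once his workload or stint length crosses a limit. The workload measure, thresholds, and data format below are all invented for illustration; a real system would need a properly validated fatigue model:

```python
# Sketch of a fatigue-driven substitution flag, assuming a wearable
# provides a per-player accumulated workload number. The limits and
# the data format are hypothetical, not a validated fatigue model.
def needs_rest(load: float, minutes_since_rest: float,
               load_limit: float = 300.0, stint_limit: float = 8.0) -> bool:
    """Flag a player for the next dead ball if either his accumulated
    workload or his current stint length crosses a limit."""
    return load >= load_limit or minutes_since_rest >= stint_limit

def flag_substitutions(players: dict[str, tuple[float, float]]) -> list[str]:
    """Names to consider subbing out at the next dead ball, given
    {player: (load, minutes_since_rest)}."""
    return [name for name, (load, stint) in players.items()
            if needs_rest(load, stint)]
```

A rule like this would have helped with exactly the UNM problem: when a long stretch without a dead ball blows up the script, the flags recompute from live data instead of the pre-game sheet.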


The last batch of scouting breakdowns

This week we also published the last four scouting breakdowns from our (cancelled) NCAA Tournament Bible.

All 44 scouting breakdowns are available for Hoop Vision Plus subscribers.

Teams 1-10: Kansas, Gonzaga, Baylor, Dayton, Duke, San Diego State, Michigan State, Ohio State, Louisville and West Virginia

Teams 11-20: Maryland, Creighton, BYU, Houston, Florida State, Michigan, Oregon, Villanova, Arizona, Seton Hall

Teams 21-30: Texas Tech, Wisconsin, Iowa, Butler, Penn State, Rutgers, Kentucky, Illinois, Marquette, Florida

Teams 31-40: Virginia, Saint Mary’s, Auburn, LSU, Utah State, Belmont, Winthrop, Bradley, ETSU, Oklahoma


ICYMI from earlier this week


This newsletter is entirely supported by its readers. Thank you for your support of Hoop Vision and allowing us to continue to grow.