Sprint Variability Profiling: New Insights From Speed Testing Data - Complementary Training

Guest Article by Robin Healy & Eamonn Flanagan

The old adage in team sports is that “speed kills”, but what if faulty interpretation of speed testing data is killing our ability to meaningfully assess and assist our athletes’ sprint performance?

The typical approach for speed assessment of team sport athletes is to test athletes over a 40 m distance with their “output” tracked at 10 m intervals in between, giving the coach 10 m, 20 m, 30 m and/or 40 m sprint times. Often in the team sport environment we look at 10 m time as a global indicator of acceleration, with our total 40 m times highlighting top speed characteristics. We typically use rankings to benchmark players against each other or against “normative” data we have accrued over time. Ranking lists are shared and promoted to praise the best accelerators or our top speed kings.

But rankings are deceptive and offer very little practical insight into our athletes’ true speed qualities. They do not tell us the real difference between players. Only a hundredth of a second may separate the athletes ranked 1st and 2nd on a ranking list, well within the smallest meaningful difference, yet a tenth of a second might separate 2nd and 3rd. Rankings are ordinal variables and tell us nothing of the magnitude of difference. However, on a ranking list hung in the gym or the locker room, 1st versus 2nd is enough for bragging rights regardless of the true difference.

While outcome is important (“who is the fastest over 40 m?”) as coaches we should be more interested in what contributes to that outcome and where opportunities for performance enhancement lie. For us, it is not just “what you do” but “how you do it”. If we understand the “how”, then we can be one step closer to crafting performance enhancement strategies.

The holy grail for coaches and sport scientists is to derive additional insight and competitive advantage from existing protocols and datasets – without any additional data collection. We aim to offer such a solution for speed testing data in this article.

A common mistake made by S&C coaches with respect to speed testing data is to assume that a 40 m time is representative of top speed ability. However, a total 40 m time is an all-encompassing “outcome” variable, and contained within it is a 0-10 m segment that is much more representative of accelerative ability. So, for example, an athlete with exceptional short-acceleration ability over 10 m may be distinctly average from 30-40 m, but he has already put enough distance between himself and the opposition to finish with a top-ranking 40 m time. So we assume “good top speed” but the reality is “top accelerator, average-to-poor top speed”. From our anecdotal experience, the reverse is even more common: some athletes have poor acceleration but relatively higher maximum velocity abilities, a combination which yields a merely “average” 40 m time. Martin Buchheit, head of performance at Paris Saint-Germain football club, has empirically demonstrated examples of these “profile-types” in his 2014 paper “Mechanical determinants of acceleration and maximal sprinting speed in highly trained young soccer players”.

Splits, on the other hand, isolate a particular phase of sprint performance. A split time from 30-40 m is much more specifically representative of what is happening in the top speed phase. By examining outcomes (10 m, 20 m, 30 m, 40 m times) and splits (0-10 m, 10-20 m, 20-30 m, 30-40 m) together we get a better picture of “what we do” and “how we do it”. While track athletes have combined splits and outcomes in their training practice for generations, the practice is much less common with field sport athletes, where outcome scores like the 10 m and 40 m total times tend to be lionised. In field sports, benchmarking data is not generally used for split times. Coaches have a strong sense of what constitutes good or bad 10 m and 40 m times; outcome is generally at the forefront of a coach’s mind. However, we don’t typically have as robust an understanding of how good or bad specific split times are.

What we really want is a ranking-based system which also takes into account the magnitude of differences between athletes AND allows us to derive insight into the different phases of early acceleration, late acceleration and top-speed running. We need an analysis tool which considers both outcomes and splits and allows us to identify gaps in performance.
Sprint variability profiling, using statistical z-scores, can be this tool.

The z-score is a standardised score which tells us how many standard deviations you are better, or worse, than the average of your group. In a z-scoring system, the mean data is represented as a 0 value. If an athlete’s 10 m time is the exact same as the group average, then that athlete’s score presents as a 0. An athlete with a z-score of 1 would be 1 standard deviation above the mean data point for a particular variable. A z-score of -1 would be 1 standard deviation below average.
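As a minimal sketch of the calculation just described (the function name and the group mean and SD values below are hypothetical, purely illustrative):

```python
# Minimal sketch of a standard z-score, using hypothetical group values.
def z_score(value, group_mean, group_sd):
    """How many standard deviations `value` sits from the group mean."""
    return (value - group_mean) / group_sd

# Hypothetical group: mean 10 m time 1.80 s, SD 0.05 s
print(z_score(1.80, 1.80, 0.05))            # athlete exactly at the average -> 0.0
print(round(z_score(1.85, 1.80, 0.05), 2))  # one SD slower than the average -> 1.0
```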

A sprint variability profiling system allows us the opportunity to combine outcomes and splits in our analyses and highlights athletes’ performance strengths and weaknesses relative to the variability within their group of peers. It tells us how good each athlete is overall but also in each section of sprint performance relative to the group. It’s a visual, intuitive tool that allows us to see the “what” and the “how”.

We start the process by pooling together all our group data, finding a group mean and assessing the group variance by calculating a standard deviation for each outcome (10 m, 20 m, 30 m & 40 m time) and for each split (0-10 m, 10-20 m, 20-30 m, 30-40 m split times). We use athletes’ fastest sprint from speed testing so that each split is “connected” and the outcomes are the direct result of cumulative splits.
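The pooling step above can be sketched as follows; the squad data and names are hypothetical, purely to illustrate deriving splits from cumulative outcome times and computing a mean and standard deviation per variable:

```python
# Sketch of the group-statistics step, using hypothetical fastest-trial data.
from statistics import mean, stdev

# Cumulative outcome times (s) at 10, 20, 30 and 40 m for each athlete
squad = {
    "Athlete A": [1.78, 3.05, 4.22, 5.35],
    "Athlete B": [1.72, 3.01, 4.25, 5.45],
    "Athlete C": [1.85, 3.15, 4.34, 5.48],
    "Athlete D": [1.80, 3.08, 4.24, 5.36],
}

def splits(outcomes):
    """0-10, 10-20, 20-30 and 30-40 m splits from cumulative times."""
    return [b - a for a, b in zip([0.0] + outcomes, outcomes)]

# Group mean and SD for every outcome and every split
outcome_stats = [(mean(col), stdev(col)) for col in zip(*squad.values())]
split_stats = [(mean(col), stdev(col))
               for col in zip(*(splits(t) for t in squad.values()))]
```

Because the splits are derived from the fastest trial’s cumulative times, each athlete’s splits sum exactly back to their outcome times, mirroring the “connected” property described above.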

Figure 1 shows the z-scores for a field sport athlete for his “outcome” measures only. Each outcome time (10, 20, 30, 40 m) is converted to a z-score relative to this player’s “group” using the equation:

z-score = (group mean − athlete’s time) / group standard deviation

The representative group could be the player’s squad or a large group of players of similar position. The coach must be aware that the choice of group will affect our insight into what the “rate-limiters” are for an individual athlete. The technique lives or dies on the quality of the normative dataset. Ideally, groups will be as specific as possible for maximum validity. For example, in a sport like rugby union, players should be split into backs and forwards groups at the very least, but further refinement to create position-specific norms would offer even more relevant insight. A good rule of thumb is to match athletes based on the general speed qualities that are desirable for their position. Coaches must also remember that normative data needs a critical mass of sample size to be valid. If possible, we recommend normative reference groups of approximately 15-20 athletes of similar positional demands.

A quick technical note: in the context of sprint times we “flip” the traditional z-score formula. A better sprint time is faster, and thus lower than the average, which the traditional formula would represent as a negative z-score. However, we prefer to flip the calculation so that graphs and charts are more intuitive: the best performers sit highest on the chart and above-average speed qualities are represented by positive z-scores. See the example chart below.

[Figure 1: Outcome z-scores (10 m, 20 m, 30 m and 40 m times) for a field sport athlete]

The outcome data for the athlete in Figure 1 show a clear “positive profile”: a poor 10 m time one standard deviation below average, but improving performance across the duration of the sprint means he finishes with an “average” 40 m outcome. A limited interpretation here would suggest the athlete has poor acceleration skills but rescues performance with better top speed running. When we overlay our split data, again converted to z-scores relative to the group, we get more insightful detail.
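The flipped calculation from the technical note above can be sketched as follows (the function name and the group mean and SD values are hypothetical):

```python
# "Flipped" z-score for sprint times: subtract the athlete's time from the
# group mean so faster-than-average performances come out positive.
def flipped_z(time, group_mean, group_sd):
    return (group_mean - time) / group_sd

# Hypothetical group: mean 40 m time 5.40 s, SD 0.10 s
print(round(flipped_z(5.30, 5.40, 0.10), 2))  # faster athlete -> 1.0
print(round(flipped_z(5.50, 5.40, 0.10), 2))  # slower athlete -> -1.0
```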



This topic contains 9 replies, has 4 voices, and was last updated by Luca Schuster 1 year, 9 months ago.

    18/06/2017 at 09:10 #19888

    The analysis is based on the group: hence the “group” affects our insights on what the “rate limiters” are for a particular individual, and might bias our approach in setting up the interventions.

    18/06/2017 at 09:15 #19889

This is a fair point and is a potential “limitation” of the technique. Robin developed this technique and, as he describes it, the group used to generate the z-scores should “make sense”. If you have a back row in a group with outside backs then it wouldn’t be very surprising if the profile highlights weaknesses in the 20-30 and 30-40 m splits. We have included a “good rule of thumb” in the text: match athletes/players based on the general speed qualities that are desirable for their position. Hopefully the article shows a step-by-step process for interpreting the plots, mixing the qualitative (where is there a deficit?) and the quantitative (what is the absolute difference? Is it meaningful?).

    It can’t be stressed enough though – the normative group is key. This technique is not going to be meaningful if you have small sample size in your normative group or the members of your normative group have fundamentally different training goals and performance demands.

    18/06/2017 at 09:25 #19890

    How do we take into account variability of the test itself?

    18/06/2017 at 09:43 #19891

A good point. The solution depends on what type of data is used to profile. This article and technique were born out of a collaborative project Robin completed at the Institute of Sport, where I oversee the S&C department. Initial data collection was done on female field sport athletes (international level), and Robin used the fastest sprint so that each of the splits is actually connected in reality: the outcomes are the direct result of the cumulative sum of the splits. One solution is to use the range, i.e. show the max and min values for each data point.

If data are averaged (from 3 trials, let’s say) then the SD of the z-score is appropriate.

We haven’t included any error bars in reporting charts at this point, however; we think they might make the charts “too busy” and overcomplicate things. Ultimately the data has to be accessible to the coach, so there is sometimes a trade-off between true validity and data presentation. But the question is a good one. It is key that coaches understand the variability of the test or the technology, and we should be mindful of this when trying to assess meaningful change.

    A key point that must be explicitly stated is that the inferences are only as reliable as the raw data! One needs to be very careful not to use “bad” trials i.e. where there is an issue with the start or first step or if the athlete doesn’t run maximally for the entire sprint – this skews things significantly.

    18/06/2017 at 09:55 #19892

How does this compare to FV testing in sprinting?

    18/06/2017 at 11:18 #19893

This method can be combined with other popular sprint profiling methods, e.g. FV profiling. In fact, the distance-time data from timing gates etc. can be used to perform FV profiling as well. Robin has done some specific validation of this technique:

    A novel method to measure short sprint performance

An important difference is that FV profiling requires a “true” first split time (where timing is initiated by the first movement of the athlete), whereas this is not a requirement of variability profiling (as long as the test set-up is consistent, it’s fine). FV profiling calculates the macroscopic external mechanical capabilities of an athlete’s neuromuscular system throughout the acceleration phase of sprinting. In summary, the methods are highly compatible as they yield distinct information.

Sprint variability profiling using timing gates can also easily be combined with other data. For example, in a collaborative project at the Irish Institute of Sport, Robin combined the timing gate data with stride length, stride frequency and contact time data from an Optojump system (this article was born out of that collaborative project). This extra layer of data can be analysed in a very similar z-score manner, helping to identify which athletes might be stride length dominant, which might be stride frequency dominant, or which athletes might need to work on longer or shorter contact times.
More data can blur the coach’s vision and make everything more difficult to interpret. But I think the z-score analysis presented in the article can help add clarity.

    19/06/2017 at 00:22 #19895

    I like that. This will make communication with the coaches easier (common language).
    The other plus, Mladen has already included Z-Scores into the Annual Planner.

    Best wishes.

    20/06/2017 at 13:02 #19899

Thanks for the feedback. I think this point is key: how can we communicate testing results more accurately and more impactfully?

    23/06/2022 at 00:57 #35830

How did you manage to create the Excel diagram for the splits and outcomes? It’s very tricky to add the 10-20 m split in between the 10 m and 20 m outcomes. Did you put the splits on a secondary axis? And how is it that the splits line up with the corresponding axis labels, e.g. 10-20 m, leaving one category out before adding the point above the 20-30 m label?
Thank you for the response.

    Best Regards
    Luca

