Load-Exertion Tables And Their Use For Planning – Part 3
Individual Differences
Before I changed my Ph.D. topic to sprint profiling, I wanted to wrestle with velocity-based training (VBT). Although I am still interested in VBT, due to practical reasons I have changed my topic to something more doable and probably more impactful. But anyway, for my VBT Ph.D. we collected some interesting data (these will be the topic of another article or even a scientific paper, although one paper is already published; see Jovanovic and Jukic 2). Some of those data involved performing max reps at 90 and 80% 1RM using the hex bar deadlift exercise. We witnessed athletes who could perform up to 20 reps at 80% 1RM and athletes who could do only 5 reps. “Yeah Mladen, but maybe you have used the wrong 1RM to base those %1RM percentages off?” Not so – we tested and witnessed the 1RM attempts as well.
Long story short – there are huge individual differences in the max reps to %1RM relationship (i.e., the Reps-Max table/relationship). These vary across individuals, but also within individuals depending on experience, strength level, exercise, and so forth. For example, you might do 10 reps with 80% 1RM in the squat, but only 5 in the bench press. Even if you observe a single exercise across time, the number of reps at a fixed %1RM might vary due to changes in strength level (i.e., 1RM) or the type of training being done. For example, when you lifted 100kg in the bench press you could do 5 reps at 85%, but now that you lift 140kg you can do 3 reps at 85%. This makes generic max reps equations rubbish, right?
Hold your horses – although they are imprecise, generic equations (like the one we have explored in this article series) are still useful. Imagine you have 30 athletes and you implement 10 exercises. That is 300 athlete-exercise combinations! There is no chance we are going to create a specific rep-max table for each. Thus, we use heuristics, in this case generic equations and tables, but we should also allow for some wiggle room when prescribing, taking into account individual variations across days, as well as the model error 5. We just need to keep in mind that these equations are not precise, and we need to balance out the costs of over- and under-shooting 5.
There are cases when having individual rep-max tables and profiles is useful and practically doable. This is especially true for strength specialists and for a limited number of exercises (e.g., bench press, deadlift, and squat for a powerlifter), mostly during push-the-ceiling types of programs.
In this article, I will teach you how to create individualized progression tables (I am also developing a web tool called strengthPRO that allows you to do everything covered in this article series with a few mouse clicks), but before we jump straight into that, I will walk you through the process of getting that Epley 0.0333 coefficient in the first place. Once you understand this, as well as the other models we could utilize, it will be very easy to apply to single individuals (especially using the strengthPRO app).
How did we get that 0.0333?
How would we approach the problem of estimating the relationship between max reps and %1RM for a single exercise, let’s say the bench press? The most common way would be to recruit multiple athletes willing to suffer for a few training sessions. The first thing that needs to be done is to estimate the individuals’ 1RMs (later in the article, I will show you how you can avoid this by using a novel approach). We do need to make sure that these 1RMs are stable across testing sessions and do not vary much due to motivation, fatigue, or adaptation. These are our theoretical assumptions, of course; in reality things do suffer from measurement error, like it or not.
The second thing to do, once we know the individuals’ 1RMs, is to perform a few sets to failure, for example using 90, 80, and 70% 1RM. Should these be done on the same day? In that case, the fatigue from the previous set will affect performance in the following sets. If we do them across multiple days, will the 1RM be stable enough? Should we randomize the set order? These are all practical questions that impose different assumptions and introduce different measurement errors. For the sake of this article, I will not delve into these, but will assume there is no measurement error or interference.
Table 1 contains one such data set: 12 athletes performing three sets to failure using 90, 80, and 70% 1RM. From Table 1 we can see that the range is 2-6 reps at 90%, 4-13 reps at 80%, and 6-21 reps at 70% 1RM. This is of course simulated data, but not too far from what you can observe in the wild.
Athlete | 1RM (kg) | 90% 1RM | 80% 1RM | 70% 1RM |
---|---|---|---|---|
Athlete A | 100.0 | 6 | 12 | 21 |
Athlete B | 95.0 | 4 | 8 | 13 |
Athlete C | 120.0 | 2 | 4 | 6 |
Athlete D | 105.0 | 4 | 9 | 17 |
Athlete E | 110.0 | 2 | 6 | 11 |
Athlete F | 90.0 | 5 | 9 | 18 |
Athlete G | 102.5 | 3 | 7 | 11 |
Athlete H | 130.0 | 4 | 8 | 15 |
Athlete I | 107.5 | 4 | 13 | 20 |
Athlete J | 92.5 | 3 | 5 | 9 |
Athlete K | 102.5 | 2 | 4 | 7 |
Athlete L | 140.0 | 3 | 6 | 11 |
Table 1: Athletes with different 1RMs performing three sets to failure using 90, 80, and 70% 1RM. Numbers in the table represent the maximum number of reps performed
Figure 1 represents a visual depiction of Table 1. “But why do the %1RMs vary across athletes? Shouldn’t they be exactly the same?” you might ask. Here is how it works in the wild (i.e., real life). Let’s say the athlete’s 1RM is 110 kg, and we plan to do a set at 80% 1RM to failure. To estimate the weight that needs to be used, we multiply 110 kg by 80%, or \(110 \times 0.8\), which is equal to 88 kg. Unfortunately, we only have 1.25 kg plates, so our weight can only be a multiple of 2.5 kg (i.e., 1.25 kg on each side gives us 2.5 kg increments). So we decided to go with 85 kg for the set to failure. Then we need to re-calculate the %1RM, which is equal to \(85 / 110\), or 77.3%. Please note that we have a similar issue with the 1RM estimation itself, and this also contributes to the measurement error.
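Since the whole article series leans on R, here is a minimal sketch of that rounding and re-calculation logic (the numbers are the hypothetical ones from the example above):

```r
# Sketch of the weight rounding and %1RM re-calculation described above
one_rm  <- 110                 # athlete's 1RM in kg
planned <- one_rm * 0.8        # 88 kg at a nominal 80% 1RM
used    <- 85                  # weight actually loaded (a multiple of 2.5 kg)

real_perc <- used / one_rm * 100
real_perc                      # ~77.3 %1RM, not the nominal 80%
```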
Figure 1: Visual representation of Table 1. The %1RMs used are not exactly the same for every athlete due to rounding of the weight (please check the main text for more info)
What we need to do next is to decide which model we want to fit to the data. Although we could fit a simple regression line, which has intercept and slope parameters/coefficients (we are going to introduce and use a modified linear model in the next part of this article series), let’s go with Epley’s Equation 1 to showcase how \(k = 0.0333\) emerged.
\[\begin{equation}
nRM = \frac{1 - \%1RM}{k \times \%1RM}
\end{equation}\]
Equation 1
From Equation 1, you can see that we need to estimate the \(k\) parameter that minimizes the differences between predicted and observed nRM. Since we have only one parameter to estimate (i.e., \(k\)), we would need only two sets to failure (e.g., at 90 and 80% 1RM), although we have done three sets to failure in this example. As a side note, if we used linear regression, we would need to estimate both the slope and the intercept, which would demand three sets to failure; linear regression also has other issues that I will cover in the next part of this article series, where I will also provide an alternative.
Estimation of the \(k\) parameter in Equation 1 is a problem of non-linear regression or optimization. As alluded to previously, we need to find the \(k\) value that minimizes the differences between the observed nRMs and the equation-predicted nRMs (i.e., the residuals). These differences are often summarized or aggregated using the mean-squared-error (MSE), which is equal to \(MSE = \frac{\sum(nRM_{obs} - nRM_{pred})^2}{n}\), where \(nRM_{pred} = \frac{1 - \%1RM}{k \times \%1RM}\). We thus want to find the \(k\) that minimizes the MSE.
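To make this concrete, here is a minimal sketch in base R (not the STM package itself) that puts the Table 1 data in long format, defines the MSE as a function of \(k\), and minimizes it with a simple one-dimensional search. The nominal %1RM values are used, ignoring the plate-rounding issue described earlier:

```r
# Table 1 data in long format; perc is %1RM expressed as a proportion
df <- data.frame(
  athlete = rep(LETTERS[1:12], each = 3),
  perc    = rep(c(0.90, 0.80, 0.70), times = 12),
  nRM     = c(6, 12, 21,  4, 8, 13,  2, 4, 6,   4, 9, 17,
              2, 6, 11,   5, 9, 18,  3, 7, 11,  4, 8, 15,
              4, 13, 20,  3, 5, 9,   2, 4, 7,   3, 6, 11)
)

# MSE between observed reps and Epley's Equation 1 predictions
mse <- function(k, perc, nRM) {
  nRM_pred <- (1 - perc) / (k * perc)
  mean((nRM - nRM_pred)^2)
}

# One-dimensional search for the k that minimizes the MSE
optimize(mse, interval = c(0.001, 0.2),
         perc = df$perc, nRM = df$nRM)$minimum
```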
One way to do this is to use the Solver (optimization) add-in in Microsoft Excel. This is cumbersome, especially if we need to repeat it for multiple athletes. A much easier, more powerful, completely free, and reproducible way is to do this in the R language 7 using the stats::nls function. I have developed a whole R package, STM, short for Strength Training Manual, to help me perform all these estimations and build set and rep schemes 4. This whole article series was written in R Markdown 8 9 with the help of the STM package. Without going any deeper into estimation and optimization, I recommend checking my bmbstats book 6 and R package 3 for more info about these and other topics.
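With stats::nls, the same estimate takes a couple of lines; this is a sketch, not the STM implementation (nls minimizes the residual sum of squares, which is equivalent to minimizing the MSE):

```r
# Fit Epley's Equation 1 with non-linear least squares;
# `df` comes from the previous snippet
model <- nls(
  nRM ~ (1 - perc) / (k * perc),
  data  = df,
  start = list(k = 0.0333)  # Epley's generic value as a starting point
)
coef(model)  # pooled k estimate (~0.032 for this simulated data set)
```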
One thing I need to address before we move away from these brainiac topics: please note that Equation 1 has \(nRM\) as the target variable and \(\%1RM\) as the predictor. Although we could reverse these variables (see Equation 2), using nRM as the target variable (Equation 1) is the proper statistical procedure, since %1RM is the independent or given value, while the number of reps performed is the dependent variable. Note that the \(k\) parameter estimated from the reversed model (Equation 2) will not be exactly the same as the \(k\) parameter estimated using nRM as the target variable and %1RM as the predictor.
\[\begin{equation}
\%1RM = \frac{1}{k \times nRM + 1}
\end{equation}\]
Equation 2
Flipping the dependent and independent variables is a common mistake (which I have made myself numerous times). Although we will use Equation 2 to estimate %1RM from target reps, the statistically correct model definition for estimating the \(k\) parameter is Equation 1, where nRM is the target variable.
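To illustrate the point, here is a sketch that fits the reversed model (Equation 2) to the same data; its \(k\) estimate will differ slightly from the Equation 1 fit:

```r
# Reversed model: %1RM as target, nRM as predictor (Equation 2)
model_reversed <- nls(
  perc ~ 1 / (k * nRM + 1),
  data  = df,
  start = list(k = 0.0333)
)
coef(model_reversed)  # not identical to the Equation 1 estimate
```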
Using the data from Table 1 and Figure 1, the estimated \(k\) parameter value is 0.032. It is easier to understand this visually. Figure 2 depicts the model predictions as a black line. What you need to notice is how much more variable the residuals are at lower %1RM. This is why 1RM prediction equations, like Epley’s \(1RM = (Reps \times Weight \times 0.0333) + Weight\), lose precision the more reps are being done (i.e., the lower the %1RM being used).
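As a quick worked example of that 1RM prediction formula, with hypothetical numbers:

```r
# Epley's 1RM prediction for a hypothetical set of 8 reps with 100 kg
reps    <- 8
weight  <- 100
est_1RM <- (reps * weight * 0.0333) + weight
est_1RM  # 126.64 kg
```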
Another thing to notice in Figure 2 is a characteristic of Epley’s model (Equation 1): 0RM is achieved at 100% 1RM. This is a mathematical property of Equation 1 and there is nothing we can do about it using this equation format (we will fix it with a modified Epley’s equation in the next part of this article series). I am not sure why this equation, where the line originates at 0RM and 100% 1RM, was chosen instead of one with 1RM at 100% 1RM. If someone has some info, please let me know.
Figure 2: Generic or pooled model fit. This approach uses all the athletes to estimate the “average” profile
To quantify model performance, we need to analyze the residuals. I am going to use five estimators: (1) mean-absolute-error (MAE), (2) root-mean-squared-error (RMSE), (3) maximal error (maxErr), (4) error interquartile range (IQR), and (5) variance explained (\(R^2\)) 6. You can find these in Table 2.
MAE (Reps) | RMSE (Reps) | maxErr (Reps) | IQR (Reps) | R2 |
---|---|---|---|---|
2.38 | 3.11 | -7.68 | 3.16 | 0.636 |
Table 2: Generic/Pooled model performance on the training data set. MAE = mean-absolute error; RMSE = root-mean-squared-error; maxErr = maximal error; IQR = error interquartile range; R2 = variance explained
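Here is a sketch of how these estimators can be computed from the residuals, assuming `model` is the nls fit from above; maxErr is here taken as the residual with the largest absolute value (sign preserved, matching the negative entries in the tables), and \(R^2\) uses the common 1 minus SSR/SST definition:

```r
res <- residuals(model)  # observed minus predicted reps

MAE    <- mean(abs(res))
RMSE   <- sqrt(mean(res^2))
maxErr <- res[which.max(abs(res))]  # signed largest error
errIQR <- IQR(res)
R2     <- 1 - sum(res^2) / sum((df$nRM - mean(df$nRM))^2)
```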
Please note that the model performance (Table 2) is evaluated on the training data set, i.e., the very same data that we used to fit the model. Ideally, we want to evaluate model performance on hold-out or unseen data, often referred to as the testing data set. This can be done by removing a few athletes from the training data, then estimating how well the model predicts for them. This loop can be repeated multiple times using cross-validation, or the prediction can be performed for every athlete using leave-one-out cross-validation (LOOCV) 6. Testing data set performance is often worse than training data set performance. Estimating model performance on unseen data is unfortunately still not used in sport science research to a satisfactory degree. But let’s move on from this topic, since it is not really important for the discussion of creating individualized progression tables. Just keep in mind that we want to evaluate model performance on unseen data: since we are going to use this generic model on new athletes (i.e., athletes unseen by the model), we are interested in how it performs on new athletes/data, not on the athletes/data we used to build the model in the first place.
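For the curious, here is a minimal leave-one-athlete-out sketch, reusing `df` from above:

```r
# Refit the pooled model without one athlete, then compute that
# athlete's prediction errors using the refitted model
loo_errors <- lapply(unique(df$athlete), function(a) {
  train <- df[df$athlete != a, ]
  test  <- df[df$athlete == a, ]
  fit   <- nls(nRM ~ (1 - perc) / (k * perc), data = train,
               start = list(k = 0.0333))
  test$nRM - predict(fit, newdata = test)
})
```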
What can we conclude about this model performance (Table 2)? In short – it is shit. MAE, or mean absolute error, is equal to 2.38 reps, while the biggest error is equal to -7.68 reps. Table 3 contains the model performance estimators for each athlete separately, and they are even shittier. As can be seen, the generic model can be useful for Athlete B only (please check his/her model performance in Table 3).
Athlete | MAE (Reps) | RMSE (Reps) | maxErr (Reps) | IQR (Reps) |
---|---|---|---|---|
Athlete A | 4.82 | 5.27 | -7.68 | 2.57 |
Athlete B | 0.32 | 0.32 | -0.34 | 0.32 |
Athlete C | 4.20 | 4.70 | 6.80 | 2.59 |
Athlete D | 1.83 | 2.06 | -3.07 | 1.17 |
Athlete E | 1.71 | 1.76 | 2.04 | 0.46 |
Athlete F | 2.31 | 2.72 | -4.32 | 1.60 |
Athlete G | 0.92 | 1.14 | 1.86 | 0.75 |
Athlete H | 0.83 | 0.86 | -1.18 | 0.29 |
Athlete I | 4.04 | 4.69 | -6.53 | 2.86 |
Athlete J | 2.39 | 2.76 | 4.15 | 1.69 |
Athlete K | 3.59 | 4.03 | 5.86 | 2.25 |
Athlete L | 1.63 | 1.79 | 2.55 | 0.91 |
Table 3: Generic/Pooled model performance estimated for each individual athlete. MAE = mean-absolute error; RMSE = root-mean-squared-error; maxErr = maximal error; IQR = error interquartile range
Although shit for predicting individual nRMs and 1RMs, generic equations such as this one are still useful (as alluded to previously), since they give us hints about the general pattern, which we can use in training prescription. Table 4 contains the Rep-Max table generated using the estimated \(k\) parameter and Equation 3, which we can use to gain some insights for prescription.
\[\begin{equation}
\%1RM = \frac{1}{0.032 \times nRM + 1}
\end{equation}\]
Equation 3
Reps | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
%1RM | 96.9 | 94.0 | 91.2 | 88.6 | 86.1 | 83.8 | 81.6 | 79.5 | 77.5 | 75.7 | 73.9 | 72.1 | 70.5 | 68.9 | 67.5 |
Table 4: Generic/Pooled Rep-Max Table
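Table 4 can be reproduced with a few lines of R, applying Equation 3 with the pooled \(k\) estimate:

```r
# %1RM across target reps using Equation 3 and the pooled k estimate
k    <- 0.032
reps <- 1:15
perc_1RM <- round(100 / (k * reps + 1), 1)
data.frame(reps, perc_1RM)
```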
This is, in short, the story of how we got that 0.0333 \(k\) parameter; although we didn’t get exactly the same number, you get the idea. Although far from great, these generic models are still useful as heuristics when dealing with unknown exercises, unknown athletes, or a large number of both, when we do not need to be very precise, or when we supplement our numbers with coaching wisdom or individual feedback.
Instead of pooling everyone together to get the generic (or “average”) profile, we can estimate individual profiles, or in this case individual \(k\) parameter values. Let’s do that in the next section of the article.
Individual Models
Figure 3 depicts the individual profiles, estimated using Epley’s model definition (Equation 1). Figure 3 also depicts the generic (or pooled) profile as a grey line. As you can see, only Athlete B is very similar to the generic profile. Also, note that all profiles originate from 0RM at 100% 1RM. The individually estimated \(k\) parameter values are also depicted in Figure 3, together with the model performance for every athlete.
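Estimating one \(k\) per athlete is a straightforward extension of the pooled fit; here is a sketch using `df` from above:

```r
# Fit Epley's Equation 1 separately for each athlete
individual_k <- sapply(split(df, df$athlete), function(d) {
  coef(nls(nRM ~ (1 - perc) / (k * perc), data = d,
           start = list(k = 0.0333)))
})
round(individual_k, 4)
```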