
“The Results of this Study Show That…” – Understanding Science to Inform Practice

Guest Article by Daniel Kadlec

Due to our often limited face-to-face time with athletes, we simply can’t afford to waste our resources if we want to maximize our athletes’ potential. One go-to resource to inform our decision-making process is scientific evidence. However, without a genuine and in-depth understanding of how to interpret single studies, it is nothing but foolish to blindly apply science in our very specific context.

To start from the end: evidence from scientific papers can only reduce uncertainty and provide us with a range of possibilities of what one can expect when training athletes. Yet our confidence in predicting adaptation from often highly reductionistic information is increasing at a worrying rate, which subsequently translates into questionable interventions. This article aims to highlight some, not all, of the potential flaws we as practitioners face when wanting to run an “evidence-based” physical preparation program.

Understanding Science

Applying science outside of science is unscientific. The efficacy of distinct interventions aimed at improving targeted physical capacities, and their impact on sport performance, is still unclear and at best speculative, as in situ evidence is missing. Without insights from in situ research, our body of knowledge relies entirely on experimental lab work or, at best, on field-based research. Philosophically, this approach is limited to information derived from experiments with distinct and clearly defined conditions and does not account for all possible conditions entailed in sport and reality.

For argument’s sake, let’s assume we do have data from a meaningful project, which we now need to analyze and interpret. Statistical analyses are continuously getting more sophisticated, so it is harder for the average S&C coach with a postgrad-level understanding of p-values and effect sizes (ES) to decipher the more in-depth results…at least as long as one reads outside of JSCR (where the good stuff is). This, however, inherently limits the amount of new and potentially meaningful information that is translatable from research labs to gyms. Selected podcasts, blogs, workshops, and social-media accounts are at the forefront of translating such evidence to make it easy to digest and eventually applicable in practice. This service, however, should never be free and should be adequately remunerated by the consumer.
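To make that baseline concrete, here is a minimal, hypothetical sketch (in Python, with made-up numbers, not from any actual study) of the kind of p-value and effect-size reading most of us are comfortable with; everything beyond this level is where the translation from lab to gym starts to get hard.

```python
# A minimal sketch, with made-up numbers, of the "postgrad-level" stats most
# coaches are comfortable with: a paired pre/post comparison, its p-value,
# and a standardized effect size (ES).
import numpy as np
from scipy import stats

pre  = np.array([140, 152, 135, 148, 160, 142, 155, 150])  # hypothetical 1RM (kg) before the block
post = np.array([146, 158, 137, 155, 166, 149, 158, 152])  # hypothetical 1RM (kg) after the block

diff = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)       # paired t-test
cohens_dz = diff.mean() / diff.std(ddof=1)          # ES of the paired differences (d_z)

print(f"mean change: {diff.mean():.1f} kg, p = {p_value:.3f}, d_z = {cohens_dz:.2f}")
```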

Biases, biases everywhere

We all want the objective truth when it comes to information, but “All things are subject to interpretation; whichever interpretation prevails at a given time is a function of power and not the truth” (Friedrich Nietzsche) (Figure 1). Every new piece of information we acquire is put to the test first. It is easier and more convenient for our minds to agree with new information if it is congruent with what we already believe, while it is easy to strongly refute any contrasting information. That’s confirmation bias 101. Within brief moments, your subconscious mind comes to a conclusion based on the fundamentally limited amount of relevant information you have. That’s our (we would like to say evidence-based) opinion about something. However, it is incredibly tough to consciously challenge and question what we know based on new and initially opposing information, and to eventually change one’s views in the pursuit of better decisions. It is even harder when someone else is telling you (think papers, tweets, or your least favorite influencer) that your current ideas are wrong and outdated. Instead of contemplating and reflecting upon this new piece of information, most of us seem to get easily offended and highly emotional when ideas (not even evidence) that conflict with our beliefs are shared.

Figure 1. The alpha GOAT Nietzsche

If we want to get clinically meaningful answers, we must first learn to ask relevant questions.

However, due to the substitution bias, we subconsciously take a very complex question (how to improve selected parts of sports performance) and immediately replace it with simpler questions (how to improve MAS scores and 1RM strength). These feel like the same questions but are entirely different. Similarly, how many studies use selected force-time variables from a perfectly standardized CMJ as their main criterion for the efficacy of distinct interventions? The last time I watched any sport, it was way too complex to confidently extrapolate research findings based on the change in bilateral-hands-on-hips-vertical-jumps-with-standardized-instructions-in-a-non-fatigued-state-during-a-new-moon in untrained PE students (Figure 2). With the rise of AI and machine learning (and Skynet), more in-depth questions about sports performance will hopefully be answered in the future.

Figure 2. Don’t get fooled by a significant finding of the interlimb-eccentric-RFD-ratio-of-the-M.-biceps-femoris-at-60°/s-during-isokinetic-tests.

Another big rock in understanding and evaluating published results is to always take into account all the non-published work. Wait, what? From a financial and prestige point of view, it is way more profitable for the author and the journal alike to publish positive results (i.e., intervention X elicited adaptation Y) than to publish about methods that do fuck all. For us practitioners, however, it is at least as important to know which methods work and which don’t in order to optimize our program. Yet only about 15% (!) of all publications across the sciences report a lack of change with their intervention. We can only speculate how many papers ended up in a drawer because the results were just not worth writing up, only for the paper to eventually get rejected. Therefore, how can meta-analyses and reviews be considered the pinnacle of evidence when we don’t know what we are missing? Publication bias dictates our perception of evidence-based knowledge.
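To illustrate the mechanism (not the actual literature), here is a hedged toy simulation in Python with invented numbers: a modest true effect, a pile of small studies, and a journal that only prints the significant positive ones. The “published” effect ends up looking much bigger than the truth, and any meta-analysis of those papers inherits that inflation.

```python
# Toy simulation of publication bias, all numbers invented: many small studies
# test an intervention with a modest true effect, but only significant positive
# results get "published". The published literature then overstates the effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, sd, n_per_group, n_studies = 0.2, 1.0, 15, 2000

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, sd, n_per_group)
    treated = rng.normal(true_effect, sd, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treated.mean() - control.mean()) / pooled_sd
    if p < 0.05 and d > 0:   # the file drawer swallows everything else
        published_effects.append(d)

print(f"true effect: d = {true_effect}")
print(f"published: {len(published_effects)} of {n_studies} studies")
print(f"mean published effect: d = {np.mean(published_effects):.2f}")   # inflated
```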

The Inflation of Reductionisms & Scientism

In this inherently complex, non-linear, and uncertain world, the amount we confidently know is scarily limited while the amount we don’t know is constantly increasing. Philosophically, we don’t know what we don’t know, which is hard to accept, yet we assume more data will improve quality of life or, in our case, sports performance. Hence, the pursuit of acquiring more and more data began, with an exponential increase in highly specialized fields of interest and a reductionistic mindset. Due to the inability of academics and practitioners alike to understand complex systems, and a gullible faith in scientific evidence, wrong causal links are translated into misinformed practice. The more closely you look at one thing, the lower its relevance and impact at a holistic and meaningful level, and “the more data we have, the more likely we are to drown in it” (Nassim Taleb), which increases the practitioner’s inability to decipher what information is truly relevant on a phenomenological level. As most academics in our field lack skin in the game (or gym) and live in their ivory tower while their paycheck depends on the quantity of publications they produce, practical relevance is at best assumed by academics when coming up with research questions. Suffering from a lack of skin in the game and being reluctant to apply reason and common sense can lead to highly irrelevant research projects (Figure 3). Science evolved into Scientism.

Figure 3. If those IYIs would just go and bench one single time…they would soon find the answer to this highly redundant question.

So, it’s not just that the combination of the idea that more data will help us and the desperate need for publications skews the applicability of the acquired information in the real world; the information produced also contradicts itself more often than not. The unspoken rule is that at least 50% of the studies published even in top-tier academic journals with outstanding impact factors (Science, Nature, Cell, PNAS, …) cannot be repeated with the same conclusion by an industrial lab. Additionally, when we consider the fundamental inter-individual biological variability and the immense differences in our adaptive capacities, it becomes apparent why research results, particularly from intervention studies, simply cannot give us definite answers. Especially when the results are solely displayed as averages ± SD.
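A quick, hypothetical illustration (Python, invented numbers) of why averages ± SD hide what we actually care about: the group can improve “on average” while a handful of athletes don’t respond at all, or even get worse.

```python
# Invented numbers: a group that "improves on average" can still contain
# athletes who do not respond at all. Mean ± SD alone won't tell you that.
import numpy as np

rng = np.random.default_rng(7)
individual_change = np.concatenate([
    rng.normal(5.0, 1.5, 12),   # responders
    rng.normal(0.0, 1.0, 6),    # non-responders
    rng.normal(-2.0, 1.0, 2),   # negative responders
])

mean, sd = individual_change.mean(), individual_change.std(ddof=1)
non_responders = (individual_change <= 0).sum()

print(f"group result: {mean:.1f} ± {sd:.1f} (looks like it 'worked')")
print(f"athletes with no or negative response: {non_responders} of {individual_change.size}")
```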

What do we do now?

Now that we know some of the potential drawbacks of academia and of our own ability to confront biases, we need to find a way to keep ourselves updated with the latest information in order to inform our decision-making processes, but also to call BS on clinically meaningless scientism. If we are in pursuit of making more right than wrong decisions with our athletes, we basically have only two options. One is the hard way and the other is the easy (yet hard) way.

The hard way is to invest your time and energy into upskilling yourself in the boring yet hugely important realms (e.g., statistics, methods, critical thinking, biases, complex systems) and to start understanding and interpreting sports science, or at least to easily call out BS. This, however, is a tedious and never-ending process, and the effort needs to adequately reflect the importance of your stakeholders (i.e., why would you do it if you’re voluntarily coaching your daughter’s U10 soccer team?).

The easy (yet hard) way is to entirely ignore everything you see/hear/read from various recent and modern sources and predominantly* rely on time-tested, aka Lindy-proof, methods. Although time debunks all lies, we need to make decisions today. We know that physical preparation methods that were used decades ago and are still used nowadays work sufficiently well for our needs. This, in turn, requires that we are confident enough to reject any new #gamechanger in our industry, despite its hype, potential theoretical justification, or some low-level scientism (Figure 4). Unfortunately, and especially in our industry, we will always be surrounded by snake-oil products and methods, simply because “the amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it” (Alberto Brandolini), which we must be aware of.

Figure 4. Aka your favourite Topless-Instagram-Influencer with a discount code in her/his bio.

*Even the most thorough needs analysis of your sport and the most sophisticated athlete screening/testing/performance diagnostics can’t give us all the information we need to write the perfect program, as we always face complexity, uncertainty, and non-linearity. Therefore, when designing programs, I like to spread the investment of my resources as follows:

  • 70% time-tested methods (i.e. sprints, jumps, barbell lifts, conditioning)
  • 20% what we think will work, based on empiric knowledge, common sense, individual needs and insights from other domains (i.e. motor learning and skill acquisition principles translated to S&C)
  • 10% I don’t wanna say random (as seen with pierres elite performance or seedman’s utter BS), but fun and enjoyable, outside-of-the-box-ish exercises with low-risk/high-reward potential (worst case: no adaptation; best case: dopamine overflow)

