Manifesto: Against Pseudoscience-Infected Training Theory and Methodology
- Establishing the need for a different approach to Sports Training
- Bringing inter- and intra-individual variation to centre stage
- The teachings of outliers
- The need for testing the tests
- Abiding by the burden of proof
Sports training is infected with pseudoscience disguised as proper science. From simple misconceptions to practices based on evidence of extremely poor quality; from inferences based on studies with extremely small samples to simply ignoring the dispersion of data in favor of an excessive focus on central values (e.g., group averages and group effects, usually neglecting individual variation in responses); from abusive conclusions to full-out superstitious beliefs (usually aggravated by marketing from the companies that want to sell you the product), Training Theory and Methodology seems to have lost its way. Tellingly, “superstition” derives from the Latin superstitio, literally “to stand above”, which implies a claim of superiority, a godlike way of observing and interpreting observation. This reminds us that, not rarely, Sports Training applies a more religious mode of thinking than would be desirable.
Admittedly, some fields of Sports Sciences may be more rigorous and cautious in their approaches, but when it comes to Training and Performance Analysis, perhaps we are delivering the wrong message. Conceivably, Sports Periodization is the most widely believed and used example of pseudoscience disguised as science (for a systematic review of the problems involving research in this area, please consult Afonso, Nikolaidis, Sousa, & Mesquita, 2017; for a conceptual critique, please consult Kiely, 2012). This is worrisome, as biased or wrongly conducted research will influence ill-informed practices, with consequences for thousands upon thousands of practitioners. And, since sport is so popular, entire trends are established on crumbs, quite often promoted to considerable status by financial interests. In what follows, we might be somewhat unfair, somewhat over-the-top, even sarcastic; yet, this is a conscious choice, one intended to provoke discomfort and to unsettle installed myths and dogmas.
Myths and dogmas should not be part of science, as they are not based on evidence. Instead, they are based on uncritical, simplistic observation. Take, for example, the movement of planet Earth around the Sun. If one glances out the window during daylight, all one sees is the Sun moving around us; in fact, it was this observation, together with the political interests of some “religious parties”, that led humanity to think we were the center of our solar system for so many years. Observation, from the limited perspective our position affords, produced a huge dose of myth and dogma. Science proved that, in fact, the contrary is true: the Earth moves around the Sun. How? By rational and critical inquiry into observation (as opposed to simply describing it), taking mathematics and statistical analysis as valuable tools to challenge hypotheses, and never ceasing to inquire, again and again, forever.
Critical thinking is key to a change of mindset. It comprises the healthy skepticism that shields us from biased research, misleading marketing and personal opinion. A lack of critical thinking leads to poor scrutiny, and scrutiny is paramount for detecting flaws in any field seeking credibility. Nowadays, Training Methodology tends to be overly structured around simplistic observation. This is then put into practice without considering the possibility that it is all a fallacy, and thus myths and dogmas go unquestioned for years. But of course, some still believe in a Flat Earth and anti-vaxxers are on the rise, so maybe the problems with Sports Sciences aren’t so bad after all!
2 – Inter- and intra-individual variation and the responsiveness continuum
One of the first things taught by any Training Theory and Methodology manual is the importance of variation in response to training (indeed, this is valid when applied to life itself!). Inter- and intra-individual variation are asserted as core principles of training (Bompa, 1999; Davids, 2008; Kiely, 2012; Rowland, 2011), a “mantra” that is supposed to prevent abusive or misguided generalizations where they are not applicable. We say “mantra” exactly as in its Hindu and Buddhist definition: an instrument used to practice mindfulness meditation or to produce some special effect on the “soul” and “body”. Likewise, sports trainers, in general, use the expression individual variation (inter and intra) to achieve some reaction in the listener; usually they never explain what it really is, nor do they make any remarks concerning the “hows” implicit in such expressions. Thus, we will try to make the expected remarks and explain what it is.
Basically, each person may react quite differently to a given stimulus or situation, and even the same person is likely to respond differently at distinct moments in time (Bowes & Jones, 2006; Kenney, Wilmore, & Costill, 2012; Kiely, 2012; Mujika, 2007; Rowland, 2011). A range of variation is expected, from people who are close to average values (whatever the validity and representativeness of average values might be…) to people who largely deviate and are found at the extremes of any given continuum. Still, this usually amounts to nothing but empty rhetoric. In theory, scientists and practitioners recognize the importance of this principle. In reality, however, such professionals frequently retreat into an average-guided practice.
The “mantra” goes that every person reacts differently, but then hypertrophy is somehow achieved using loads that allow between 8 and 12 repetitions (e.g., the usual recommendations of the ACSM’s Guidelines and Position Stands). Why 8 to 12? And what is the impact of using 8 vs 10 vs 12? And is 8 to 12 valid for all muscles, for all body movements, for all types of exercise, for all movement speeds and inertial accelerations, and so on? Where did the values 8 to 12 come from? Do they represent an average interval that suits most of the population? What is the magnitude of the standard deviation? If a person stands one or two standard deviations above or below average, how many repetitions should that person perform? And how does this repetition range play out in the whole of a training session? Is it equally effective for any number of sets? For any number of exercises? Is training in the 8-12 repetition range the same if the day before was a resting day versus a loading day? Furthermore, how will circadian rhythms affect this? And nutrition? And sleep hours and quality? And medication or supplementation? And the psychological state of the performer? What is the influence of the performer’s history? Maybe someone will respond better to repetition ranges they are used to; or maybe not; maybe novelty was just what the body required at that moment!
Overall, there is a responsiveness continuum (Ackerman, 2014; Bompa & Carrera, 2003; Lames, 2003). For each stimulus, for each training protocol, the results will range from absolute non-responsiveness to extreme responsiveness, varying drastically from person to person and, for a given person, across time. Some athletes might present large day-to-day variations in performance, while others might be more stable. We’ve questioned the validity of repetition ranges, but we can add further topics: movement speeds and accelerations, movement angles, number of sets, number of repetitions and/or sets per muscle group/movement, training duration, pauses between repetitions and sets, effects of accumulated load (daily, weekly, monthly, and so on); even movement technique, as each person has his or her own anatomy and physiology. Moreover, exercise prescription has a psychological impact, carrying a strong emotional load; whereas some athletes may crave a more stable prescription (with only minor variations, sometimes subtly applied), others may prefer a more diversified approach. We should not expect these parameters to impact distinct persons similarly (otherwise, they wouldn’t be different at all…).
The phenomenon of the responsiveness continuum is not exclusive to sports training. Indeed, this continuum has been well documented in nutrition and in medicine; in life itself, and in every aspect that relates to it. Non-responders have been identified in resistance training (Fisher, Bickel, & Hunter, 2014; Jones et al., 2016), altitude training (Hamlin et al., 2011), pulmonary rehabilitation (Stoilkova-Hartmann, Janssen, Franssen, & Wouters, 2015), and cardiac resynchronization therapy (Auricchio & Prinzen, 2011), just to mention a few examples. Neglecting the responsiveness continuum represents a waste of time and, quite often, the promotion of iatrogenic effects (Taleb, 2012), meaning that what we are doing might actually be producing mal-adaptations, even injuries and hazards, instead of promoting greater health or performance.
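The point about dispersion can be made concrete with a toy simulation. The numbers below are invented for illustration only; no real protocol or dataset is implied. Two groups with the exact same average gain can hide entirely different individual realities:

```python
import statistics

# Hypothetical strength gains (%) after the same training protocol.
# Both groups share the same mean, but group B hides non-responders,
# a negative responder and extreme responders that the average alone
# would never reveal.
group_a = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]     # homogeneous responders
group_b = [0.0, -1.0, 0.5, 11.0, 9.5, 10.0]  # non-, negative and high responders

for name, gains in (("A", group_a), ("B", group_b)):
    mean = statistics.mean(gains)
    sd = statistics.stdev(gains)
    non_responders = sum(1 for g in gains if g <= 0)
    print(f"Group {name}: mean = {mean:.1f}%, SD = {sd:.1f}%, "
          f"non-responders = {non_responders}")
```

Both groups report a mean gain of 5.0%, yet a third of the (hypothetical) group B gained nothing or got worse. A guideline built on the pooled average would fit neither the non-responders nor the extreme responders, which is precisely the argument above.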
Lest readers think we are merely questioning haphazardly, this is a good time to bring to light a critical analysis of the above issue (the repetition range and its associated adaptations). The Journal of Exercise Physiology published a critical analysis of the ACSM’s Position Stand on resistance training (Carpinelli, Otto, & Winett, 2004). In this extended article, the authors inspect every citation made in the guidelines and check it against the original sources. The conclusions were appalling: “…Many of the recommendations are without any scientific foundation. The Position Stand fails to meet the standards for a scientifically based, methodologically sound, consensus statement”. Of all the studies cited in the ACSM’s Position Stand, only 8 actually supported the primary claims or recommendations, 59 failed to support them, and 56 studies that actually repudiated the primary claims or recommendations were not cited in the Position Stand at all. The authors state that “there is very little evidence to suggest that a specific range of repetitions (e.g., 3-5 versus 8-10) or time-under-load (e.g., 30s versus 90s) significantly impacts the increase in muscular strength, hypertrophy, power, or endurance”. Thirteen years after this publication, the “mantra” of the 8-12 repetition range can still be heard, echoing in this dogmatic, pseudoscience-dominated market.
With so many questions abounding, and many others lurking around the corner, can we truly trust average guidelines? And can we truly be sure they represent population averages? Are standard deviations small, meaning we can trust the average? Most likely, they are not. Guidelines may provide us with a false and dangerous sense of security; they may lead us to think we know something instead of admitting our ignorance. Ultimately, guidelines may be the cause of errors in our practices, the motors of abusive exercise prescriptions that don’t actually fit anyone at all. And when training goes one step further and encapsulates such guidelines in a periodized program, it simply ventures into a guessing game that imprisons what should be an organic, highly flexible and adaptable process, with even the goals changing frequently. In essence, a training program designed for a given group of individuals is no longer suited to any of them individually. Guidelines might represent a set of a priori beliefs, an over-trust in group averages (thus disregarding group variation), or an over-buying of standardized work routines. Pseudoscience and marketing hit us with “sound bites” (e.g., static stretching avoids injury), and too many practitioners spread them like “mantras”.
3 – What outliers can teach us
If exercise guidelines are infested with problems and fall prey to the dictatorship of the average, in sports training devoted to forming high-level practitioners a second problem is added: the adoption of the practice regimes of outstanding athletes. This represents a very difficult, even paradoxical conundrum. On one hand, these processes base large portions of their practice on ‘average’ guidelines. On the other hand, they add a dose of copycat practice based on what top-level athletes do. This is misleading, as the best performers in the world are likely to be outliers, meaning they are so far removed from average population values that they constitute true exceptions (Gladwell, 2008). If, by definition, they are outliers, why should we even bother adopting their practices when training wannabe athletes? Population averages are misleading (Taleb, 2012), but a far greater percentage of practitioners will be close to average values than to 5-SD values!
Of course, we would be blind to ignore what outliers are doing. They expand our conceptions concerning the range of possibilities. Outliers often challenge our sense of reality, our beliefs regarding the limits and range of human performance. They show new ways, establish novel techniques and training regimes; they invite us to rethink dogmas in our sports and in our training methods. Outliers summon us, in a very striking manner, to question the guidelines and to think outside the box. What we should not do, however, is expect that what outliers are doing is good for others. It might not be! Notwithstanding, outliers reinforce the idea that research should focus more deeply and thoughtfully on the dispersion of data than on its central values. In doing so, we go back to the root principle of training: inter- and intra-individual variation in response.
4 – The problems with testing
Reality is very complex, so testing is always a delicate exercise. Yet, dozens of tests are used almost uncritically in Sports Training. From isokinetic evaluations to FMS®, not forgetting the infamous sit-and-reach, Sports Training applies many tests that haven’t properly proved their worth. Most notably, most tests applied in our field haven’t been examined for their validity, reliability and, especially, for how they perform in confusion matrices (e.g., rates of false positives and false negatives). And testing the tests is a big deal! For example, we simply do not know the specificity and sensitivity of most of our tests. As such, these tests cannot be deemed trustworthy. Even if the rationale behind a test is sound, and even if the test presents good reliability values (not the case of skinfold measurement, for example, especially when conducted by inexperienced ‘skinfolders’), its actual usefulness must still be put to examination. If a given test presents a high rate of false negatives and/or false positives, that constitutes a serious problem, as it undermines its validity and how we interpret its results. Calculating such indices is common practice in medicine, but somehow Sports Training tends to accept and implement tests simply because a more-or-less solid logic supports them (and sometimes, not even that!).
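What “testing the test” means in practice can be sketched with the standard confusion-matrix indices, computed from four counts. The counts below are entirely invented for illustration; no actual sports test is implied:

```python
# Hypothetical screening test for injury risk: 1000 athletes tested,
# then followed for a season. All counts are invented for illustration.
tp = 30   # test positive, athlete got injured (true positives)
fn = 20   # test negative, athlete got injured (false negatives)
fp = 190  # test positive, athlete stayed healthy (false positives)
tn = 760  # test negative, athlete stayed healthy (true negatives)

sensitivity = tp / (tp + fn)  # share of injured athletes the test flagged
specificity = tn / (tn + fp)  # share of healthy athletes the test cleared
ppv = tp / (tp + fp)          # share of flagged athletes who got injured
npv = tn / (tn + fn)          # share of cleared athletes who stayed healthy

print(f"sensitivity = {sensitivity:.2f}")  # 0.60
print(f"specificity = {specificity:.2f}")  # 0.80
print(f"PPV = {ppv:.2f}")                  # 0.14
print(f"NPV = {npv:.2f}")                  # 0.97
```

Note the sting in these hypothetical numbers: even with respectable sensitivity and specificity, the positive predictive value collapses because the injury is rare, so roughly six out of seven “at-risk” flags are false alarms. This is exactly the kind of index that is routinely reported in medicine and routinely missing for tests in Sports Training.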
Isokinetic testing is one such example. Can this test truly relate to injury prevention? Perhaps, depending on which injury we are talking about. Commonly, it is stated that isokinetic knee testing reveals the risk of injury to the anterior cruciate ligament. But is this accurate? In many sports, ACL injuries occur when the body rotates but the foot remains planted, provoking an extreme rotation of the knee. The ACL is particularly vulnerable when this occurs with the knee in full or near-full extension. Yet, isokinetic testing does not provoke such torques in the knee. The planes of movement are completely different, and their anatomical constraints largely unrelated. Furthermore, it is known that agonist-to-antagonist force production ratios are specific to each movement speed and acceleration. Can isokinetic testing at 180-600º/sec truly reveal imbalances for movements that occur at thousands of degrees per second? It would be like evaluating a sprinter’s ability at a super-slow-motion pace. Additionally, many sports movements have the foot grounded, while in the isokinetic machine the foot does not receive reaction forces from the ground.
More to the point, however: we are still waiting for isokinetic testers to deliver the specificity and sensitivity of such tests with regard to actual injury prevention and performance. Let’s not forget that some interventions may relate positively to injury reduction but negatively to performance, and vice-versa. Where is the inflection point? How do we balance performance and injury risk when they don’t go hand-in-hand? Also, there are testing methodologies that might never gather validity or reproducibility from research, such as manual muscle testing. A test might be influenced by a wide array of factors, such as the performer’s operational status. Would a negative test indicate a weak muscle, or could previous fatigue influence it? Or could it be a calcium deficit? Or a recruitment issue? Would a bad night’s sleep influence skeletal muscle contraction? Could differences in joint architecture influence the test? Ultimately, if we are to conduct research or establish a test, testing the test should be the first step.
The two main factors that have been widely neglected when using a testing protocol are fatigue and Post-Activation Potentiation (PAP). These two physiological phenomena co-exist in skeletal muscle and should not be ignored. The response of skeletal muscle to a conditioning stimulus (submaximal or maximal) can include an acute potentiated state, which results in enhanced force production (Hodgson, Docherty, & Robbins, 2005; Sale, 2002; Baudry & Duchateau, 2007). This is due to the balance between the factors that enhance force production (e.g., myosin light chain phosphorylation; Hoffmann reflex enhancement; a potential reduction of the fibers’ pennation angle) and those that diminish it (i.e., those involved in central and peripheral fatigue) (Mahlfeld, Franke, & Awiszus, 2004; Stull, Kamm, & Vandenboom, 2011). Research has shown that after a conditioning stimulus ceases, fatigue subsides at a higher rate than the mechanisms that cause potentiation (Tillin & Bishop, 2009). This must be considered, as it can skew the results of many testing protocols used in Sports Training and Fitness.
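The interplay described above can be sketched with a toy two-exponential model. To be explicit: the amplitudes and time constants below are arbitrary assumptions chosen only to illustrate the logic, not physiological measurements. If fatigue decays faster than potentiation, the net effect on force is negative right after the conditioning stimulus and turns positive only after a recovery window:

```python
import math

def net_effect(t, pap=0.20, fatigue=0.35, tau_pap=8.0, tau_fat=3.0):
    """Toy model: net change in force capacity (as a fraction) t minutes
    after a conditioning stimulus. Both PAP and fatigue decay
    exponentially; fatigue starts larger but decays faster (tau_fat <
    tau_pap). All parameter values are illustrative assumptions."""
    return pap * math.exp(-t / tau_pap) - fatigue * math.exp(-t / tau_fat)

# Immediately after the stimulus the net effect is negative (fatigue
# dominates); minutes later it turns positive (potentiation outlasts
# fatigue).
for t in (0, 2, 5, 10):
    print(f"t = {t:2d} min: net effect = {net_effect(t):+.3f}")
```

The practical implication is the one the text warns about: in this toy model, testing an athlete two minutes after the last conditioning set would show impaired force, while testing ten minutes after would show enhanced force, so the same athlete on the same day could be classified in opposite ways depending on protocol timing alone.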
5 – Burden of proof
The most offensive mistake in our field is, of course, neglecting the burden of proof. When someone proposes a new training protocol or a new test, the burden of proof lies in showing the merits of that protocol or test. To be clear: it is not the responsibility of others to disprove the proposal; the proposer has to deliver proof of its merits. For example, if we were to state that playing volleyball is the best possible way to promote playing quality in water polo, it would be up to us to: (i) establish a rationale that made that claim seem minimally acceptable or plausible; and (ii) in time, deliver evidence for our claim. It does not work the other way around, i.e., it is not up to other people to prove us wrong! Indeed, an infinite number of ideas and methods can be proposed, and it would be impossible to disprove them all (especially because some might not be testable, rendering them unscientific). So, the burden of proof lies with the holder of the proposition.
Of course, when a new method emerges, especially if grounded in a reasonable rationale, we could grant it the benefit of the doubt. We could apply the method, even if with caution, and start to understand it better, from the conceptual point of view to the pragmatic aspects of its application. With time, research should be conducted in order to test the new idea in a more rigorous, controlled fashion. To be clear: trying new things out is perfectly acceptable; disguising them as scientifically grounded is not! When any given method has existed for 50+ years and has failed to deliver evidence supporting it, one of two things might have happened: (i) there is actually no merit to the method; or (ii) the method has failed to become a mainstream trend in research, and there simply is no research. In the former case, the method should be abandoned. In the latter, it should become the focus of research, so efforts should be made to increase investigation.
When a method has existed for a very long time, and research is abundant and of quality, yet fails to support the method’s claims, then that method should be abandoned or re-conceptualized. At the very least, it should not be promoted as a scientific approach. Case in point: static stretching and injury prevention. After so many years and so much research, one thing is clear: static stretching does not prevent injuries, on average. So, using it as a wide-scale method is useless and constitutes a waste of time. Selling it as a tool for injury prevention, or as having positive effects on strength, is a fraud. Still… Still, we should go back to inter- and intra-individual variation. In some persons, static stretching may actually reduce injury risk; in others, it might increase that risk. Science should devote its efforts to understanding why this is so, developing a framework that helps us comprehend in which cases static stretching should or could be used, and in which cases it should be avoided. Regardless, it remains true that static stretching should be viewed as a clinical application, and not as a large-scale, by-default method to be used with the general population or with athletes (as a method for injury prevention).
Some ideas or methods haven’t had sufficient research; don’t throw them away just yet, but also don’t sell them as magical tools that have been ‘scientifically proven’ (an expression that, in and of itself, makes no sense, but that is for another day…). In such cases, just develop more research and see what happens. And in the cases where an idea or method has been widely studied, be sure to analyze potential methodological problems. Was the research a randomized controlled trial? Were the effects not only statistically significant, but also of high magnitude? Was the sample size adequate? Was the sample representative of the population? What confounding factors were controlled for? Was it a blind or even double-blind study? Are the effects superior to a placebo? What was the dispersion in the data, and what does it say about the responsiveness continuum for a given protocol? Were the relationships between the scientists involved in the research and the holders of the rights to a given instrument or technique investigated? These and other questions should be at the core of any interpretation of a training method.
Furthermore, it should be strongly emphasized that, even when something does work, that does not imply its rationale is correct! Indeed, in many instances a procedure or protocol might work for reasons widely different from those originally proposed! In sports, perhaps the most debated case is that of so-called ‘myofascial’ techniques, as they are likely much more ‘myoneural’ than ‘myofascial’. Additionally, while most of their proclaimed beneficial effects still require more research, some negative effects have come to light (e.g., Freiwald, Baumgart, Kühnemann, & Hoppe, 2017a, 2017b), so we should be more cautious and less definitive when speaking about myofascial methods.
Then there is Stretching Global Actif: a compound method combining chain-stretching and respiratory work. Are its proposed effects due to the chain-stretching, to the respiratory work (especially its effects upon the Autonomic Nervous System), or both? Might one even impair the other? In this case, there is actually near-zero research with a minimum of quality, so no conclusions should be attempted at this stage. The same goes for the much-touted “release of the fascia”: the little you will find in the scientific literature amounts to a small sample at best.
There is yet another source of concern. The growing pressure and fierce competition amongst scientists to publish their research in renowned journals has led to a rise in retractions in the last decade. In an article by Van Noorden (2011), the causes of this phenomenon were analyzed, and almost half were found to be related to misconduct, both data falsification and plagiarism. This alarming reality draws attention to the fact that we must remain vigilant when it comes to Science itself. In a study by Markowitz & Hancock (2015), in which 253 papers retracted for fraudulent data were compared with 63 retracted for other reasons (e.g., ethics violations), it was concluded that “fraudulent papers were written with significantly higher levels of linguistic obfuscation, including lower readability and higher rates of jargon than unretracted and nonfraudulent papers”; they also found that “fraudulent authors obfuscate their reports to mask their deception by making them more costly to analyze and evaluate”. Thus, thorough scrutiny must be employed when analyzing the scientific literature.
If there is a lesson to be learned about critical thinking, it is that of Carl Sagan. In the book “The Demon-Haunted World: Science as a Candle in the Dark” (1995), Sagan reflects on the many types of deception that plague all aspects of society, and especially science. His kit for baloney detection is a set of tools for scientists to fortify the mind and shield themselves from the perpetration of falsehoods: (a) facts must have independent confirmation wherever possible; (b) evidence should be subject to debate by proponents with different points of view; (c) authorities have made mistakes in the past and will continue to do so in the future, so their arguments must be taken with scepticism; (d) more than one hypothesis should be raised, in order to think of all the different ways in which a result could be explained; (e) reflection upon personal bias is an important step towards personal detachment from the hypothesis; (f) quantification in numerical form is much better able to discriminate among competing vague and qualitative hypotheses; (g) if there is a chain of argument, every link in the chain must work (including the premise), not just most of them; (h) Occam’s Razor is a rule-of-thumb that helps us choose the simpler of two hypotheses that explain the data equally well; (i) ascertain whether the hypothesis can be falsified.
Pennycook et al. (2015) endorse Sagan’s critical thinking. In their research paper “On the reception and detection of pseudo-profound bullshit”, the authors suggest that people who are less receptive to bullshit are more analytic, reflective, logical and somewhat skeptical, whereas those more susceptible to bullshit “are less reflective, lower in cognitive ability (i.e., verbal and fluid intelligence, numeracy), are more prone to ontological confusions and conspiratorial ideation, are more likely to hold religious and paranormal beliefs, and are more likely to endorse complementary and alternative medicine”. To sum up, reflective thinking, a good dose of healthy skepticism and the will to critically analyze the evidence presented are certainly important steps for scientists, coaches and trainers wishing to raise the scrutiny of baloney in Sports Training.
6 – Concluding remarks
This is not a scientific article, and it is not intended to be. It is a manifesto, an expression of our concerns with some practices related to Sports Training. Some outrage and sarcasm might be evident throughout the text; we do not shy away from it, as we think emotions play an important role and may help get the message across. We do not, however, wish to diminish the efforts of many serious and devoted coaches, trainers, and researchers. But we can’t help feeling that quality evidence is still lacking for many methodological approaches, while in others people persist in traditional practices despite evidence suggesting they are not effective at all. This text represents an outcry, an attempt to touch on difficult issues in a passionate manner. Hopefully, we will inspire others to adopt a more critical mindset and to build upon this manifesto to change the way research is conducted and, especially, interpreted. In time, perhaps this will generate better practices.
- Ackerman, P. (2014). Nonsense, common sense, and science of expert performance: Talent and individual differences. Intelligence, 45(1), 6-17.
- Afonso, J., Nikolaidis, P. T., Sousa, P., & Mesquita, I. (2017). Is empirical research on periodization trustworthy? A comprehensive review of conceptual and methodological issues. Journal of Sports Science and Medicine, 16(1), 27-34.
- Auricchio, A., & Prinzen, F. (2011). Non-responders to cardiac resynchronization therapy. The magnitude of the problem and the issues. Circulation Journal, 75, 521-527.
- Baudry, S., & Duchateau, J. (2007). Postactivation potentiation in a human muscle: effect on the rate of torque development of tetanic and voluntary isometric contractions. Journal of Applied Physiology, 102(4), 1394-1401.
- Bompa, T. (1999). Periodization. Theory and methodology of training (4th Ed.). Champaign, Illinois: Human Kinetics.
- Bompa, T., & Carrera, M. (2003). Peak conditioning for volleyball. In J. Reeser & R. Bahr (Eds.), Handbook of Sports Medicine and Science – Volleyball (pp. 29-44). Oxford: Blackwell Science.
- Bowes, I., & Jones, R. (2006). Working at the edge of chaos: understanding coaching as a complex, interpersonal system. The Sport Psychologist, 20, 235-245.
- Carpinelli, R. N., Otto, R. M., & Winett, R. A. (2004). A critical analysis of the ACSM Position Stand on resistance training: insufficient evidence to support recommended training protocols. Journal of Exercise Physiology Online, 7(3), 1-60.
- Davids, K. (2008). The athlete-environment relation as a complex system: implications for sport pedagogy. Paper presented at the 2nd International Congress of Complex Systems in Sport and 10th European Workshop of Ecological Psychology, Funchal.
- Fisher, G., Bickel, C., & Hunter, G. (2014). Elevated circulating TNF-α in fat-free mass non-responders compared to responders following exercise training in older women. Biology, 3, 551-559.
- Freiwald, J., Baumgart, C., Kühnemann, M., & Hoppe, M. W. (2017a). Foam-rolling in sport and therapy – Potential benefits and risks. Part 1 – Definitions, anatomy, physiology, and biomechanics. Sports Orthopaedics and Traumatology, 32(3), 258-266.
- Freiwald, J., Baumgart, C., Kühnemann, M., & Hoppe, M. W. (2017b). Foam-rolling in sport and therapy – Potential benefits and risks. Part 2 – Positive and adverse effects on athletic performance. Sports Orthopaedics and Traumatology, 32(3), 267-275.
- Gladwell, M. (2008). Outliers. A história do sucesso. Alfragide: Publicações Dom Quixote.
- Hamlin, M., Manimmanakorn, A., Sandercock, G., Ross, J., Creasy, R., & Hellemans, J. (2011). Heart rate variability in responders and non-responders to live-moderate, train-low altitude training. World Academy of Science, Engineering and Technology, 77, 936-940.
- Hodgson, M., Docherty, D., & Robbins, D. (2005). Post-activation potentiation: underlying physiology and implications for motor performance. Sports Medicine, 35(7), 585-595.
- Jones, N., Kiely, J., Suraci, B., Collins, D. J., De Lorenzo, D., Pickering, C., & Grimaldi, K. A. (2016). A genetic-based algorithm for personalized resistance training. Biology of Sport, 33(2), 117-126.
- Kenney, W. L., Wilmore, J. H., & Costill, D. L. (2012). Physiology of Sport and Exercise (5th Ed.). Champaign, Illinois: Human Kinetics.
- Kiely, J. (2012). Periodization paradigms in the 21st century: evidence-led or tradition-driven? International Journal of Sports Physiology and Performance, 7(3), 242-250.
- Lames, M. (2003). Computer science for top-level team sports. International Journal of Computer Science in Sport, 2(1), 57-72.
- Mahlfeld, K., Franke, J., & Awiszus, F. (2004). Postcontraction changes of muscle architecture in human quadriceps muscle. Muscle & Nerve, 29(4), 597-600.
- Markowitz, D. M., & Hancock, J. T. (2015). Linguistic Obfuscation in Fraudulent Science. Journal of Language and Social Psychology, 35(4), 435-445.
- Mujika, I. (2007). Challenges of team-sport research. International Journal of Sports Physiology and Performance, 2(3), 221-222.
- Van Noorden, R. (2011). The trouble with retractions. A surge in withdrawn papers is highlighting weaknesses in the system for handling them. Nature, 478, 26-28.
- Pennycook, G., Cheyne J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the Reception and Detection of Pseudo-Profound Bullshit. Judgment and Decision Making, 10 (6), 549–563.
- Rowland, T. (2011). The athlete’s clock. How biology and time affect sport performance. Champaign: Human Kinetics.
- Sagan, C. (1995). The Demon-Haunted World: Science as a Candle in the Dark. New York: Random House.
- Sale, D. (2002). Postactivation potentiation: role in human performance. Exercise and Sport Sciences Reviews, 30(3), 138-143.
- Stoilkova-Hartmann, A., Janssen, D., Franssen, F., & Wouters, E. (2015). Differences in change in coping styles between good responders, moderate responders and non-responders to pulmonary rehabilitation. Respiratory Medicine, 109, 1540-1545.
- Stull, J., Kamm, K., & Vandenboom, R. (2011). Myosin light chain kinase and the role of myosin light chain phosphorylation in skeletal muscle. Archives of Biochemistry and Biophysics, 510(2), 120-128.
- Taleb, N. N. (2012). Antifragile: Things that gain from disorder. New York: Random House.
- Tillin, N., & Bishop, D. (2009). Factors modulating post-activation potentiation and its effect on performance of subsequent explosive activities. Sports Medicine, 39(2), 147-166.