
Predicting non-contact hamstring injuries by using training load data and machine learning models


This topic contains 7 replies, has 2 voices, and was last updated by Mladen Jovanovic 3 years, 6 months ago.

    Mladen Jovanovic
    17/12/2018 at 23:30 #24376

    Research has shown that there is an association between training load and the likelihood of suffering non-contact injuries. But can we predict the injury? In this paper I have tried to predict non-contact hamstring injuries by using two seasons of day-to-day training load data.

    [See the full post at: Predicting non-contact hamstring injuries by using training load data and machine learning models]
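
    For readers who want a concrete starting point, below is a minimal sketch of the kind of day-to-day load feature engineering described in the full post. The column names (player_id, date, load, injury), the rolling windows, and the forward-looking injury label are illustrative assumptions, not the exact features engineered in the paper.

    ```python
    # Minimal sketch, NOT the exact features from the paper: rolling
    # acute/chronic load summaries per player plus an "injury within the
    # next `horizon` days" label. Column names and window lengths are
    # illustrative assumptions.
    import numpy as np
    import pandas as pd

    def engineer_load_features(df: pd.DataFrame, horizon: int = 7) -> pd.DataFrame:
        """Add rolling load features and a forward-looking injury label."""
        df = df.sort_values(["player_id", "date"]).copy()
        load = df.groupby("player_id")["load"]

        # Acute (7-day) and chronic (28-day) rolling mean loads per player
        df["acute_load"] = load.transform(lambda s: s.rolling(7, min_periods=1).mean())
        df["chronic_load"] = load.transform(lambda s: s.rolling(28, min_periods=1).mean())

        # Acute:chronic workload ratio (guard against division by zero)
        df["acwr"] = df["acute_load"] / df["chronic_load"].replace(0, np.nan)

        # Label: any injury recorded within the next `horizon` days
        def label_future_injury(s: pd.Series) -> pd.Series:
            cum = s.cumsum()
            future_cum = cum.shift(-horizon).fillna(cum.iloc[-1])
            return (future_cum - cum > 0).astype(int)

        df["injury_next"] = df.groupby("player_id")["injury"].transform(label_future_injury)
        return df
    ```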

    11/09/2019 at 17:50 #26168

    Writing a sentence like “it can be concluded that the non-contact hamstring injuries could not be predicted by using training load data with features engineered, as described previously” is misleading and essentially wrong.

    It’s obvious a model trained on 20 injuries will not predict anything.

    Writing this before describing the limitations of the data is even more misleading.

    It doesn’t make sense to design a study in a way that is bound to fail and then publish it just to say it indeed failed.

    Honestly, better to retract the whole thing.

    Mladen Jovanovic
    11/09/2019 at 20:48 #26170

    I appreciate your comment, but I don’t agree with your statement. This “study” represents a realistic data set that a single club can collect (and I followed up with how that can be improved by collecting at the league level) and thus has “ecological” validity (plus I always refrained from making generalizable claims by qualifying them with “given this data and model”). Publishing only studies that have a positive outcome is the central problem behind publication bias, so your recommendation is not only ignorant but also harmful. Besides, this study is completely transparent, with both the code and data available, to help other professionals by providing a potential way to analyze this type of data set (hopefully a larger one).

    The aim of this post was to openly and transparently share the methodology and reproducible code, rather than to ‘conclude’ anything or make gross generalizations. In the injury-prediction domain there are plenty of studies that didn’t evaluate predictive performance yet made bold predictive claims (later falsified). Thus I see this transparent sharing of methodology, and the promotion of a ‘pluralism’ of models, as something positive in our industry.

    Thank you for being part of the problem we are dealing with in transparent sports science, particularly within the predictive domain. I really look forward to reading your study that collected data over a 10-year span across 20 clubs, without shared data or code, that cannot be reproduced, but that found something (p < 0.05).

    11/09/2019 at 23:14 #26171

    I never said anything about publishing only studies with a positive outcome, so saying my comment is ignorant and harmful is out of place.
    I said that the size of the dataset made a positive outcome impossible in the first place; therefore I don’t see a reason to report that there was indeed no positive outcome.

    Your practical applications section makes a direct and decisive statement about the methodology not being useful, not about the data. The data limitations are discussed afterwards and aren’t put front and center as the main cause of the low predictive power. In my opinion, that statement and the way the section is structured are wrong and convey a message that creates distrust in using advanced data science in sport.

    Mladen Jovanovic
    12/09/2019 at 15:06 #26173

    I do not know how my reporting of the method, data, limitations, and assumptions could be more transparent. Thus either you have issues reading or you have malevolent tendencies. There are many other studies with a similar sample size that claim ‘prediction’ yet never test predictive performance on unseen data nor cross-validate. One of the purposes of this report is to showcase that predictive performance must be evaluated on hold-out data. My conclusion is that, GIVEN this data set and the models used, predicting who will get injured within a 7-15 day span is not possible. I do not see any issue with such transparent reporting and a CONDITIONAL statement. I provided a potential method for fellow sports scientists to use as a tool on their own data sets.
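
    To make that point concrete, here is a minimal sketch of what evaluating predictive performance on unseen, time-ordered data could look like with standard scikit-learn tools. The model, its parameters, and the metrics are illustrative assumptions rather than the exact setup from the report.

    ```python
    # Minimal sketch of the hold-out evaluation argued for above: train on
    # earlier data, test on later unseen data, and use metrics that respect
    # the heavy class imbalance. Model choice and parameters are illustrative
    # assumptions, not those from the report.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import average_precision_score, roc_auc_score
    from sklearn.model_selection import TimeSeriesSplit

    def evaluate_time_ordered(X: np.ndarray, y: np.ndarray, n_splits: int = 5):
        """Report mean ROC AUC and average precision across time-ordered folds."""
        aucs, aps = [], []
        for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
            # Skip folds where the train or test window contains only one class
            if len(np.unique(y[train_idx])) < 2 or y[test_idx].sum() == 0:
                continue
            model = RandomForestClassifier(
                n_estimators=500, class_weight="balanced", random_state=42
            )
            model.fit(X[train_idx], y[train_idx])
            prob = model.predict_proba(X[test_idx])[:, 1]
            aucs.append(roc_auc_score(y[test_idx], prob))
            aps.append(average_precision_score(y[test_idx], prob))
        return float(np.mean(aucs)), float(np.mean(aps))
    ```

    With only around 20 injury events in a realistic single-club data set, these fold-level estimates will be extremely noisy, which is exactly why the conditional conclusion above is stated the way it is.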

    On the flip side, even if we could predict who is going to get injured from observational data, that would not give us a CAUSAL interpretation, which is discussed at the end of this report and in my other paper published in Aspetar. So yes, I lean toward Judea Pearl’s stance that much of this ‘advanced data science in sport’ is simple ‘curve fitting’ and needs to be taken with a grain of salt. It also serves as a critique of the ‘intellectuals yet idiots’ (see the works of Nassim Taleb) who work at, and are paid by, high-level sports organizations and provide neat graphs and dashboards, but jack-shit in terms of actionable interventions.

    Having said that, I highly welcome a study from you showing that we can predict injuries and can INTERVENE on those predictions to avoid them. I’ll wait…

    Mladen Jovanovic
    12/09/2019 at 15:29 #26174

    And I do not know what more you want to hear than this:

    “Having said this, the results of the current paper should be viewed highly skeptically and with high level of concern. The purpose of the current paper is thus educational and speculative, with special emphasis on presenting a potential approach in predictive modelling of day-to-day training load data, with the aim of predicting non-contact hamstring injuries.”

    And it is CLEARLY stated at the beginning. It is not the reporting that is bothering you, but the fact that the results and critiques in this paper go against one of your sources of income, which is selling predictive models/services to clubs. Next time I suggest you be more transparent yourself and disclose conflicts of interest.

    12/09/2019 at 15:53 #26176

    You chose to attack me personally. Not very professional.
    I don’t hide that I believe in working toward a solution to this problem using AI-based methods. You found that out pretty easily.
    It makes sense for me to defend methods that I believe in (even if I haven’t published them yet).
    There’s no shame in working to solve this problem outside the academic framework.
    Anyway, if you can’t have a discussion without being offensive, I’m done here.

    Mladen Jovanovic
    12/09/2019 at 16:52 #26177

    Pointing to a conflict of interest is not a personal attack. Accusing me of withholding limitations and portraying wrong conclusions, against everything written in the report and my wholehearted effort to make it as transparent as possible, is a personal attack, or else you didn’t actually bother to read it.

