Why evidence based practice requires a closer look
If you had to jump out of a plane, would you wear a parachute?
The answer seems pretty obvious, doesn't it?
Consider for a moment the review paper by Smith and Pell, published in the British Medical Journal in 2003, which aimed to determine whether parachutes are effective in preventing major trauma related to gravitational challenge.
They were unable to identify any randomised controlled trials of parachute intervention, and so they concluded:
“As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.”
Ask yourself the original question again. What is your answer now? Have you changed your mind based on the evidence or lack thereof?
A slightly silly example, I grant you (although the BMJ did publish it!), but with a Master's degree fast becoming a prerequisite for an entry-level internship in Strength & Conditioning, it is increasingly common for coaches to demand evidence before even considering a training modality. Universities have conditioned them to accept only what is peer reviewed, double blind, randomised, etc. etc.… you get the idea.
There are some problems that arise from this growing culture. Not only is it a limiting and potentially negative attitude, as pointed out by Smith & Pell, but much of the research that people will accept is also of poor quality. It is de rigueur these days to link to studies via Twitter, as if to say "See, I read research", without bothering to comment on the quality or practical implications (if there are any) of the research in question.
Ben Goldacre points out, far more eloquently than I ever could, many of the problems with scientific research in general in his book and website Bad Science. The TED talk he gave is also well worth a watch, and a must for anyone who has ever read a health article in a newspaper and said "See, I told you wine/chocolate/paint stripper was good/bad for you".
As regards Strength & Conditioning research in particular, Mark Rippetoe makes some salient points in the Conventional Wisdom chapter of his book Strong Enough. He notes a higher than usual level of what he calls "cronyism" in the Journal of Strength & Conditioning Research: in Volume 20, number 4, five of the 42 papers listed the editor-in-chief as a co-author, and 17 listed associate editors as authors.
The methodologies are often flawed. Rippetoe cites a peer-reviewed study on the effects of different pacing strategies in the 5 km running event that used a treadmill instead of a track or road, the problem with which is self-evident to even the most casual reader.
Many of the papers lack basic information, such as a description of the exercise used, which can obviously have a massive impact on results. A squat, for example, can be high bar, low bar, above parallel, below parallel and so on. When I attended my UK Strength & Conditioning Association weightlifting course I was astonished at some of the candidates' interpretations of what a barbell squat should look like, let alone their ability to perform one. Needless to say, they all had at least a BSc, and three had completed, or were shortly to complete, a Master's. Might this affect the research they conduct?
An abundance of small sample sizes run for short durations seems to suit authors' desire for output over quality of findings. The vast majority of the research commonly referred to comes from universities (for a number of reasons, including funding and accessibility), which has its own inherent problems, given that more often than not the quality of the test population is poor. It is very common to see the phrase "10 untrained college males" in research papers, since students are offered course credits for taking part in studies. This might, for example, manifest itself in researchers showing that cycling increases one-rep-max bench press, because the subjects are so untrained that virtually anything will increase their one rep max! The most likely reason for this is that the actual athletes in universities are potentially worth big money, and therefore too busy or valuable to take part in research showing the effect of intra-abdominal pressure on spine stabilisation conducted on a seated leg extension machine.
A desire to produce some sort of result in spite of the evidence can lead to some less than stellar conclusions. Every time you see the phrase "X may be useful…", go back and insert "or may not" so that it reads "X may or may not be useful…", and see how convincing the message is then. A good example of this can be seen here.
One particular recommendation I have seen recently, by Dame Finch, argues that open access is the future of academic publishing. I agree with many of the points she raises, as more and more of the research that people point to is hidden behind the paywalls of an increasing number of journals that cost a fortune to access for anyone outside academia without a subscription through an institution. Open access is another valid and important way of keeping research honest.
There can also be problems with good quality research if it is misinterpreted. Last year I spoke with a coach who had read Dan Baker's article on Recent Trends in Maximum Aerobic Speed and promptly went out that evening with his squad of sub-elite athletes to run a session at 120% MAS. It descended into farce, as the athletes were unable to sustain the required running speeds after just three reps. He was perplexed as to how this could happen, given that the article had come from a reputable journal and carried various references to peer-reviewed research.
To be clear, I think Dan Baker has consistently produced good quality research, but when I read the same article I was acutely aware that his studies were based on his squad of elite rugby league players. I was interested in the protocol, but common sense dictated that the levels he used with his elite squad could be inappropriate for the squad of sub-elite players I was working with. Some experimentation on myself and some colleagues allowed me to adapt the protocol successfully for the squad, avoiding what would at best have been a wasted session and at worst a potentially harmful situation in which my athletes were pushed beyond their capabilities.
All this being said, I strongly believe that Sport Science, when implemented properly, is not only valuable but fundamental to the development of athletes and coaches in the modern age. The "when implemented properly" qualifier may seem obvious, but as my interactions with too many people have shown, it needs to be said.
The filter needed is plain old-fashioned common sense, as David James pointed out in his article here after Kolo Toure had a run-in with diet pills. (My family will be wetting themselves laughing at the irony of seeing me recommend common sense!)
In this respect I have been especially impressed with the work I have seen undertaken by, and the interactions I have had with, coaches from the English Institute of Sport. They have been producing more and more quality research based on the successful work they are carrying out in the real world with elite athletes, and they are more than happy to discuss the problems they have faced and how they have attempted to solve them, so that best practice filters out.
As the Soviet biomechanist Zatsiorsky points out in this video, the Soviet system was so successful (even before the arrival of performance-enhancing drugs) because it had a healthy mixture of coaches, physiologists, biomechanists and high-level athletes all under one roof, much as can be seen at the EIS. I think this not only allows for cross-fertilisation, as he says, but also keeps the various disciplines honest.
So next time you have to jump out of a plane, use your common sense. Unless you are a pedantic scientific researcher, in which case…
Research I personally look out for (guidelines, not absolute rules):
1. Done by practitioners rather than pure researchers
2. Accountable for the results of their athletes
3. Working with athletes of a relevant standard, or, if not, noting what regressions/progressions need to be made
4. Solid methodology and descriptions
Here are a few people I have seen present their research (peer reviewed or not) with good quality conclusions that have practical applications for coaches:
Two of the most knowledgeable people I know in sport today, who also happen not to have science degrees:
Simon Nainby is a UKSCA Accredited Strength & Conditioning Coach, Sports Massage Therapist and RFU Level II coach. He has worked for a number of professional and semi-professional teams, and he currently acts as a coaching consultant through Underground Athletics to a wide range of athletes, from rugby players to Olympic lifters.