*“EBM 1.0 is riddled with bugs, does not function and is no longer supported.*

*Continued use of EBM requires an update to EBM 2.0.”*

EBM 2.0 is a new collaborative Evidence Based Medicine (EBM) project aimed at *fundamentally* changing the way we practice and interpret EBM.

Recent revelations have demonstrated that our modern EBM approach of the last few decades (i.e. “EBM 1.0”) was fatally flawed, resulting in gross misrepresentation of the true value of published clinical findings. Put simply, we thought that most published findings were true, but it has turned out they were mostly false, and unfortunately many aspects of our medical practice are now based on these false findings.

The flaws in EBM 1.0 are due to several statistical misunderstandings, including an underestimation of bias and its effect on the validity of statistical tests, as well as a *fundamentally incorrect* interpretation of the p value.

I have discussed how and why this occurred, and suggested how we need to move forward as a medical community, in this talk:

*“Evidence Based Fraud & the End of Statistical Significance”*

The revelations in this talk are game-changing for the practice of medicine. As such, the talk and/or the source articles are recommended essential viewing for:

- All doctors …
- Across all specialties …
- In all countries (though note the talk is currently only available in English)

### 2 Minute Pitch – Why you need to watch this talk

### The Talk: Short Summary Version

*“This talk will forever change the way you interpret Evidence Based Medicine”*

Here is a 19-minute short summary version of the talk “Evidence Based Fraud & The End of Statistical Significance”.

Note that any references in the talk to “you” or “we” are intended to be inclusive *of me* and almost every clinician and every person involved in EBM in the world. We’ve **all** been misled and it is not our fault – the talk explains why and provides a way forward – EBM 2.0.

### The Talk: Full Version

*“This talk will forever change the way you interpret Evidence Based Medicine”*

The full version of the talk is provided below. While watching the whole talk is recommended, this section guide will allow viewers to accelerate to the parts they believe will be of most value to them. Note that any references in the talk to “you” or “we” were intended to be inclusive *of me* and almost every clinician and every person involved in EBM in the world. We’ve all been misled and it is not our fault – the talk explains why and provides a way forward – EBM 2.0.

**Section 1 – Introduction (0:00 – 08:40)**

- Introduction to the problem – that most published research findings are false

**Section 2 – Bias, the underestimated factor (08:40 – 26:46)**

- Discusses the role of bias in creating false research findings
- Provides several examples of both detectable and the more insidious undetectable forms of bias, and argues that some bias needs to be “priced in” to all EBM – consequently our statistical tests and conclusions need to be discounted to account for this.

**Section 3 – Chance, and how we screwed up the statistics so, so badly (26:46 – 45:20)**

- Discusses the key reasons why statistics such as the p value and statistical significance did not mean what we thought they meant, and shows how we consequently grossly overestimated the likelihood of our findings being real.
- Discusses the key statements from the American Statistical Association regarding the p value.
- Demonstrates how p values are really likelihood ratios that convert a pre-test probability of a hypothesis being true (aka prior probability) into a post-test probability (aka posterior probability), where the study is the “test” and the p value represents the accuracy of the “test”.
- This is further explained in this supplementary post.
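To make the “study as a test” idea concrete, here is a minimal Python sketch of the pre-test to post-test conversion. It assumes the Sellke–Bayarri–Berger Bayes Factor Bound, BFB = 1/(−e·p·ln p) for p < 1/e, as the *best-case* likelihood ratio a p value can supply; the talk’s exact method may differ, so treat this as an illustration rather than the talk’s formula:

```python
import math

def max_posterior_probability(prior_prob, p_value):
    """Convert a pre-test (prior) probability that a hypothesis is true
    into the *maximum* post-test (posterior) probability, treating the
    study as a diagnostic "test" whose best-case likelihood ratio is the
    Bayes Factor Bound: BFB = 1 / (-e * p * ln(p)), valid for p < 1/e."""
    if not 0 < p_value < 1 / math.e:
        raise ValueError("Bound requires 0 < p < 1/e (~0.368)")
    bfb = 1 / (-math.e * p_value * math.log(p_value))  # best-case likelihood ratio
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bfb          # posterior odds = prior odds x LR
    return post_odds / (1 + post_odds)    # convert odds back to a probability

# A 50:50 prior with p = 0.05 yields at most ~71% probability the finding
# is real - far from the ~95% that "p < 0.05" is commonly taken to imply.
print(round(max_posterior_probability(0.5, 0.05), 2))  # ~0.71
```

Note that this is a best case: with a less plausible hypothesis (say a 10% prior), the same p = 0.05 supports at most roughly a one-in-five chance of the finding being real.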

**Section 4 – The ATOM Principle (45:20 – 1:07:22)**

- Discusses the ATOM principle recommended by the American Statistical Association to guide a path forward for EBM, with several examples of applying this principle provided. ATOM stands for:
  - **A**ccept uncertainty, and be **T**houghtful, **O**pen and **M**odest
**Section 5 – EBM 2.0 (1:07:22 – 1:20:46)**

- Describes the EBM 2.0 project and movement aiming to change the way we practice EBM forever.
- Summarises the key principles of EBM 2.0.
- EBM 2.0 as an update to EBM 1.0 incorporates its useful parts (e.g. diligently examining studies for bias) and replaces the non-functioning parts (e.g. the use of “statistical significance”) with a new approach.

**“Evidence Based Fraud & The End of Statistical Significance”: Full Version**

Recently this talk was presented at:

- ACEM Western Australian Scientific Meeting 2019
- ACEM Annual Scientific Meeting 2019 – heavily abridged version in the “Festival of Dangerous Ideas”
- EMS Conference Rusutsu Japan, 2020
- WA Rural Health Conference, 2020 – virtual conference

### Key Pictures from the Talk

To see how to convert Pre-test probabilities into Post-test probabilities using p values (like in the above infographic), see this page.


**A proposed “Certainty before Change” Model**

### Key References

- The problem with EBM
- Ioannidis, J. P. A. (2005). “Why Most Published Research Findings Are False.” PLoS Medicine 2(8): e124.
- Prasad, V., et al. (2013). “A Decade of Reversal: An Analysis of 146 Contradicted Medical Practices.” Mayo Clin Proc 88(8): 790-798.
- Herrera-Perez, D., et al. (2019). “A comprehensive review of randomized clinical trials in three medical journals reveals 396 medical reversals.” eLife 8: e45183.

- Bias
- Pannucci, C. J. and E. G. Wilkins (2010). “Identifying and avoiding bias in research.” Plast Reconstr Surg 126(2): 619-625.
- Jones, C. W., et al. (2013). “Non-publication of large randomized clinical trials: cross sectional analysis.” BMJ 347: f6104.
- Ioannidis, J. P. A. (2019). “What Have We (Not) Learnt from Millions of Scientific Papers with P Values?” The American Statistician 73(sup1): 20-25.

- Chance: p values, statistical significance and Bayesian Analysis
- Nuzzo, R. (2014). “Scientific Method: Statistical Errors.” Nature 506(7487): 150-152.
- Greenland, S., et al. (2016). “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.” European Journal of Epidemiology 31(4): 337-350.
- Wasserstein, R. L. and N. A. Lazar (2016). “The ASA’s Statement on p-Values: Context, Process, and Purpose.” The American Statistician 70(2): 129-133.
- Wasserstein, R. L., et al. (2019). “Moving to a World Beyond “p < 0.05”.” The American Statistician 73(sup1): 1-19.
- Amrhein, V., et al. (2019). “Scientists rise up against statistical significance.” Nature 567(7748): 305.
- Benjamin, D. J. and J. O. Berger (2019). “Three Recommendations for Improving the Use of p-Values.” The American Statistician 73(sup1): 186-191.

### EBM 2.0 Interpretation Principles

- Be Savagely Skeptical
  - Assume *all* positive study findings are *false* until *rigorously* proven otherwise through repeated, high quality independent replications.
- Seek the truth and ask the right question
  - What is the overall probability that these findings are true, given *everything* we already know *and* what is likely to be hidden from us?
  - Note this is *not* 1 minus the p value (a common and serious misconception).
- Accept & Manage Uncertainty
  - **Accept uncertainty** over the seductive lure of “positive” or “negative”.
  - View evidence as merely incrementally changing levels of uncertainty.
  - Use real world cost-benefits of therapy to determine the acceptable level of certainty required before practice change.
    - e.g. therapies with low net benefit and high costs need the *highest* levels of certainty to consider utilising – this can only be derived from supportive evidence that is *repeatedly reproducible in high quality independent trials*.
- Bias
  - As always, closely review studies for any sources of bias *and* **assume some unavoidable bias**, discounting findings appropriately – i.e. we must “price in” reasonably foreseeable/predictable hidden bias.
  - Realise that bias can *invalidate* the p value calculation – the meaning of the p value is not useable for our purposes in studies with any significant bias.
  - Either reject studies with bias entirely or, if seeking to utilise them, *heavily discount* findings generated from p values, such as the post-test probability that a hypothesis is true.
- Chance: *abandon* EBM 1.0 statistical measures
  - End the use of the dichotomous concept of *“statistical significance”*, with its arbitrary, meaningless thresholds for p values and confidence intervals.
  - **Use p values as likelihood ratios** based on pre-test probabilities (aka prior probabilities) of a finding being real to calculate post-test probabilities (aka posterior probabilities), where the study is the “test” and the p value is effectively the “accuracy” of the test.
  - As post-test probabilities assume zero bias, **discount probabilities** for assumed unidentified and actual identified biases.
  - Represent data using alternative statistics, such as the **minimum False Positive Risk**, to make some attempt to gauge the chance that the findings are false.
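The last two principles can be sketched numerically. This illustration again assumes the Bayes Factor Bound BFB = 1/(−e·p·ln p) as the study’s best-case likelihood ratio; the `bias_discount` factor is purely illustrative (the post does not prescribe a formula for pricing in bias), shrinking the likelihood ratio to reflect assumed hidden bias:

```python
import math

def min_false_positive_risk(prior_prob, p_value, bias_discount=1.0):
    """Minimum probability that a 'positive' finding is false, using the
    Bayes Factor Bound BFB = 1/(-e * p * ln(p)) (valid for p < 1/e) as the
    study's best-case likelihood ratio. `bias_discount` in (0, 1] is an
    *illustrative* factor shrinking the likelihood ratio to "price in"
    assumed unidentified bias - smaller values mean more assumed bias."""
    if not 0 < p_value < 1 / math.e:
        raise ValueError("Bound requires 0 < p < 1/e (~0.368)")
    bfb = 1 / (-math.e * p_value * math.log(p_value))
    lr = bfb * bias_discount              # discounted likelihood ratio
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * lr
    return 1 - post_odds / (1 + post_odds)

# p = 0.05 with a 50:50 prior: at least a ~29% chance the finding is
# false, even before any bias is considered - nothing like 5%.
print(round(min_false_positive_risk(0.5, 0.05), 2))  # ~0.29
# Pricing in bias (here, halving the likelihood ratio) raises that risk further.
print(round(min_false_positive_risk(0.5, 0.05, bias_discount=0.5), 2))
```

The design point is that bias and chance compound: discounting the likelihood ratio for bias always pushes the false positive risk higher than the zero-bias minimum.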

### EBM 2.0 Key Evaluation Criteria

- Estimate the pre-test probability (pre-trial)
  - i.e. how biologically plausible was the finding, and what prior research exists?
- Consider how low the risk of bias was
  - If not minimal, heavily discount or outright reject the study findings.
  - Always assume and “price in” some bias when interpreting findings.
- Was the finding truly “clinically significant”?
  - If not, it is an EBM “double whammy” –> ignore.
  - Small findings that are clinically insignificant are also unlikely to be true findings.
- What measure of chance variability was used, and how does it relate to the chance of this finding being true?
  - Was a p value used, and was the Bayes Factor Bound provided, or has it been converted to a probability such as the *minimum* false positive (false discovery) rate?
  - If a p value was used and the risk of bias is low, see here to estimate the *maximum* post-test probability of the finding being true.
- Was “statistical hocus pocus” used to “treat” the data?
  - If so, ignore or heavily discount the findings and require replication without statistical adjustment.
- Has the finding been repeatedly independently replicated?
- Are these findings externally valid?
- Consider the practical real world benefits versus costs of changing practice at the new *post-trial level of uncertainty*.

*EBM 2.0 needs to be applied to all new evidence AND we must re-evaluate suspect old evidence*

### EBM 2.0 Big Picture Goals

#### Apply pressure to journals and government

We must *insist* on the following to minimise bias (including undetectable biases such as citation/reporting bias):

**No Pre-Registration = No Publication**

- Papers not “pre-registered” are barred from formal publication.
- No change in the pre-registered primary outcome is allowed.
- Any new/changed outcomes must be labelled “exploratory outcomes”.

**Pre-registration = Must Publish**

- *All* pre-registered *trials* must be published (e.g. including via some public open mechanism).
- *All* pre-registered *outcomes* must be displayed in published articles.

**Governments must**

- Compel research institutions to publish *all* human trials.
- Fund more independent high quality research to:
  - Reduce conflicts of interest
  - Focus on clinical questions of greatest benefit to the population, as opposed to the highest drivers of profitability

**Disclaimer & Warning:**

For members of the general public:

- Please note this page, video and any related resources are intended for *medical professionals only*. The content could be *misinterpreted* by non-medical people, untrained in evidence based medicine, as suggesting that most of medicine is inaccurate or unproven, which is *not* the case. For example, vaccinations are proven life-saving therapies where extremely small risks are easily outweighed by profound benefits. This talk relates to more recently published literature on some newer treatments. *In general*, you can still trust your doctor and the medical profession 🙂

For medical professionals:

- At a time when the general public is questioning authorities and traditional bastions of truth like never before, it is critical that we **“put our house in order”**. If we don’t, we will lose our remaining credibility, allowing truly pseudoscientific movements to reign supreme to the detriment of our patients.

**Spread the Word**

Please share this EBM 2.0 project widely so we can start conversations with our colleagues and effect real change in the way we conduct and interpret EBM.

Share the website EBM2point0.com and tweet about this with the hashtag #EBM2. Advocate your support of #EBM2 (e.g. in your Twitter profile).

Additionally, on request, I’m happy to present the above talk *virtually* (or, *if practical*, in person) to assist in spreading this information via medical conferences. I can deliver “abridged” versions to suit shorter time constraints.

This is the start, not the end, of a project that may need progressive modification. Comments and feedback are welcome below. **Collaboration keenly encouraged** via our Contact Page.

Warm Regards,

Dr Anand Senthi

MBBS, MAppFin, GradCertPubHlth, FRACGP, FACEM

Specialist Emergency Physician & EBM 2.0 “Enthusiast”

@drsenthi