This blog post will consist of two parts. The first part covers the scientific reproducibility crisis and the challenges we found when trying to reproduce a clinical trial; in the second part we will explain in more detail how we performed the statistical analysis in an Aridhia DRE workspace.
Scientific reproducibility is an essential element of the scientific method: the result of any scientific experiment should only be accepted by the community once it has been reproduced by others. For this reason, the results of a survey published in Nature in 2016 are very troubling. Of the roughly 1,500 scientists surveyed, 90% think there is a reproducibility crisis in the scientific community and, even more concerning, 70% of those researchers have failed when trying to reproduce an experiment. The goal of this project was straightforward: to reproduce the statistical analysis of a randomised controlled trial (RCT) in an Aridhia DRE workspace.
The first task was the most challenging one: finding publicly available data to work with. Data sharing in clinical trials can be a point of contention; while some think all the data from RCTs should be available for independent reanalysis and meta-analysis, others argue that this compromises patient confidentiality. Nevertheless, many initiatives to increase the transparency of RCT results have been developed over the years; platforms such as Clinical Study Data Request (CSDR) and Yale University Open Data Access (YODA) allow the sharing of patient-level data for the purpose of innovation and improvements in patient care. However, to access the data on these platforms you must submit a research proposal, meaning you can only use the data to publish results; this was not our situation, as we only wanted to reproduce the analysis in a secure Aridhia DRE workspace. In response to the lack of public data, Aridhia launched FAIR Data Services, which aims to improve research quality, impact and efficiency at all stages of the research lifecycle by following guiding principles that make data findable, accessible, interoperable and reusable.
Luckily, after several searches, we found that the clinical trials unit at the University of Edinburgh had anonymised and shared all the data from the Randomised Controlled Trial of Mercaptopurine Versus Placebo to Prevent Recurrence of Crohn’s Disease Following Surgical Resection (TOPPIC). The TOPPIC study is a textbook example of an RCT because it follows the three golden rules of a clinical study: randomisation, blinding and placebo control. Accordingly, the patient sample was randomised into two trial arms (treatment and placebo) and, at the end of the trial, the recurrence of Crohn’s disease in the two groups was compared using survival analysis.
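To illustrate the kind of comparison this involves, below is a minimal survival-analysis sketch in Python using the lifelines library. The data and column names (`time_to_recurrence`, `recurred`, `treatment`) are hypothetical and are not taken from the TOPPIC dataset, and the original analysis was not necessarily performed this way.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data: time to recurrence (days), recurrence
# indicator (1 = recurred, 0 = censored) and trial arm (1 = treatment,
# 0 = placebo). These values are made up for illustration only.
df = pd.DataFrame({
    "time_to_recurrence": [420, 911, 1030, 233, 760, 1095],
    "recurred":           [1,   0,   0,    1,   1,   0],
    "treatment":          [1,   1,   0,    0,   1,   0],
})

# Fit a Cox proportional hazards model; the exponentiated coefficient
# of `treatment` is the hazard ratio (HR) between the two arms.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_recurrence", event_col="recurred")
cph.print_summary()
```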
Once we had obtained the data, the “only” thing left to do was to reproduce the statistical analysis in a Workspace. This should have been easy but, due to the anonymisation of the trial data, the incomplete explanation of the methods and the unavailability of the original code, we encountered several problems.
Although a data dictionary providing information about each variable was publicly available, it had some gaps that were almost impossible to fill. For example, when we were trying to determine the censoring times for the subjects who did not experience recurrence during their participation in the trial, there were two different variables in two different tables that, in theory, contained the same information. Nevertheless, for some subjects the recorded time differed between those variables, with no further explanation as to why. Since the code used for the analysis was not available, and the issue was not explained in the paper or the study protocol, there was no way for us to know how this time variable should have been derived.
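To make the problem concrete, a check along these lines, a sketch with hypothetical file and column names rather than the real TOPPIC table layout, is how one can surface subjects whose two candidate censoring times disagree:

```python
import pandas as pd

# Hypothetical tables; in the anonymised release the two candidate
# censoring times live in two different tables.
followup = pd.read_csv("followup.csv")      # columns: subject_id, last_seen_day
withdrawal = pd.read_csv("withdrawal.csv")  # columns: subject_id, exit_day

merged = followup.merge(withdrawal, on="subject_id", how="inner")

# Flag subjects whose two candidate censoring times disagree; without
# the original code or protocol detail there is no ground truth to
# decide which variable is correct.
conflicts = merged[merged["last_seen_day"] != merged["exit_day"]]
print(f"{len(conflicts)} subjects with conflicting censoring times")
```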
The TOPPIC study data is publicly available: you do not have to identify yourself or submit an application, so anyone can access it. For this reason, the data has to be thoroughly anonymised so that it is impossible to identify any subject in the study, and some information is always lost in that process. For example, all dates were converted to the number of days relative to the day the subject was randomised, and if the original date was incomplete (day, month, year) it appears in the anonymised dataset as a missing value. Although this causes some trouble when reproducing the analysis, in my opinion it is not a big impediment, and it is certainly better than not being able to access the data at all.
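The date transformation described above can be sketched roughly as follows; the field names are hypothetical, and incomplete dates become missing values just as in the released dataset:

```python
import pandas as pd

df = pd.DataFrame({
    "randomisation_date": ["2009-03-02", "2009-05-14"],
    "event_date":         ["2010-01-20", "2009-11"],  # second date is incomplete
})

# Parse the dates; anything that is not a complete day/month/year
# fails to parse and becomes NaT, i.e. a missing value.
rand = pd.to_datetime(df["randomisation_date"], format="%Y-%m-%d", errors="coerce")
event = pd.to_datetime(df["event_date"], format="%Y-%m-%d", errors="coerce")

# Replace absolute dates with days since randomisation and drop the
# identifying dates, as in the anonymised release.
df["event_day"] = (event - rand).dt.days
df = df.drop(columns=["randomisation_date", "event_date"])
print(df)  # event_day: 324.0 for the first subject, NaN for the second
```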
Despite all this, the results we obtained are very similar to those reported in the original study paper. Even though they are not identical, and we cannot be sure that we followed the same methods as the original group, the availability of real clinical data is valuable beyond confirming original results or enabling meta-analysis (situations that would require a more faithful reproduction). Students such as myself can really learn from working with real clinical data and get a sense of what “real science” looks like, including the information gaps and other problems we will have to face as future scientists.
| | Adjusted HR | Unadjusted HR |
|---|---|---|
| Original Study | 0.54 (0.27 – 1.06), p = 0.07 | 0.53 (0.28 – 0.99), p = 0.046 |
| Replication | 0.54 (0.27 – 1.10), p = 0.09 | 0.53 (0.28 – 1.00), p = 0.05 |
In the second part of this blog post we will explain in more detail how the analysis reproduction was done within the Aridhia DRE Workspace.
July 17, 2020
Data Science intern at Aridhia and a first-year student on a Precision Medicine MSc.