Clinical trials data are clean and concise, but the real world is messy. How do we bridge the gap?
Big data and analytics approaches could fill some of the holes caused by a lack of representation in cancer trials, but this doesn’t mean we can throw away randomised trials.
Clinical trials excite cancer clinicians and researchers alike, especially when shiny new data shows a novel treatment is safe and effective. But many of these trials exclude significant proportions of the population – such as people from culturally and linguistically diverse backgrounds – during their development.
A couple of years ago, the recommended treatment for stage III colon cancer – surgery followed by six months of adjuvant 5-fluorouracil/leucovorin plus oxaliplatin or capecitabine plus oxaliplatin – was challenged. The IDEA collaboration, a prospective pooled analysis of six phase III trials involving just under 13,000 patients, found three months of treatment was non-inferior to six months for long-term survival.
The less-is-more findings were somewhat controversial. Associate Professor Darren Brenner, an epidemiologist from the Arnie Charbonneau Cancer Institute in Canada, told delegates at the Clinical Oncology Society of Australia Annual Scientific Meeting in Melbourne earlier this month that he and his clinical colleagues were keen to see if they could be replicated in a local setting.
Up to 60% of stage II or III colorectal cancer patients seen in medical oncology clinics would not have been eligible for the trials that underpin the treatment they were about to receive, said Associate Professor Brenner.
“This is particularly the case for populations defined by unique subgroups, where both younger and older populations are typically underrepresented in clinical trials,” he said.
He and his team set out to see how using real-world data and causal inference approaches would affect the results. They performed a target trial emulation using similar eligibility criteria to the IDEA collaboration. Linking real-world hospital and administrative records for 500 Albertan cancer patients returned findings similar to IDEA's: the shorter chemotherapy duration was non-inferior for long-term survival – even when the IDEA eligibility criteria were relaxed to include a broader range of patients.
“This is an example of when using the proper data approach can [allow you to] answer important questions that can help reduce real world uncertainty,” Associate Professor Brenner explained, before emphasising the importance of picking the right analytical approach in these situations.
“If you [analysed the same data] in a naïve observational way, which are crude and rudimentary analyses, you would come up with a quantitatively different conclusion that six months is better than three months,” he said.
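The pitfall he describes – confounding by indication making the shorter course look worse in a crude comparison – can be sketched with a toy simulation. All numbers below are hypothetical illustrations, not the Alberta data; the stratified adjustment is a crude stand-in for the formal causal methods a target trial emulation would use:

```python
import random

random.seed(0)

# Hypothetical setup: frailty drives both treatment choice and survival,
# while the true effect of treatment duration on survival is zero.
N = 100_000
frail = [random.random() < 0.4 for _ in range(N)]

# Frailer patients are more often given the shorter three-month course.
three_months = [random.random() < (0.7 if f else 0.3) for f in frail]

# Death risk depends only on frailty, not on duration.
died = [random.random() < (0.35 if f else 0.10) for f in frail]

def death_rate(in_group):
    """Crude death rate within a treatment group."""
    sub = [d for d, g in zip(died, in_group) if g]
    return sum(sub) / len(sub)

# Naive observational comparison: three months looks worse,
# purely because of confounding by frailty.
naive_3 = death_rate(three_months)
naive_6 = death_rate([not t for t in three_months])

def standardised_rate(three):
    """Death rate standardised to the overall frailty distribution."""
    total = 0.0
    for f_level in (True, False):
        sub = [d for d, t, f in zip(died, three_months, frail)
               if t == three and f == f_level]
        weight = sum(1 for f in frail if f == f_level) / N
        total += (sum(sub) / len(sub)) * weight
    return total

# After adjusting for the confounder, the two durations look equivalent.
adj_3 = standardised_rate(True)
adj_6 = standardised_rate(False)
```

The crude rates show a clear gap favouring six months, while the standardised rates are essentially identical – a simplified version of how a naive analysis and a properly adjusted one can reach “quantitatively different conclusions” from the same records.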
But while clinical trials and observational studies should be compared, there was limited value in directly comparing their results, said Rolf Groenwold, a professor of clinical epidemiology at Leiden University in the Netherlands, commenting on the target trial emulation results.
“Using (real-world) data from daily practice for studies of comparative effectiveness can introduce many sources of bias, such as confounding, missing data, and misclassification. Observational studies based on real-world data are not a test of the applicability of the results of RCTs and, vice versa, RCTs are not a litmus test of the validity of observational studies,” he wrote.
“However, a thorough breakdown of possible explanations (methodological and clinical) for observed differences in results could provide insight into the applicability of the results of RCTs and the possible sources of bias in observational studies.”