We are all familiar with alarming statistics that suggest one in six people will experience depression during their lives, with some groups facing even higher rates.
However, Canadian and Dutch researchers are concerned that the popular method of measuring depression prevalence is exaggerating the picture.
Writing in the Canadian Medical Association Journal, Dr Brett Thombs of McGill University in Montreal and his colleagues argue that the self-report questionnaires so frequently used by researchers give a flawed reflection of depression rates in the community.
“These studies misrepresent the actual rate of depression, sometimes dramatically, which makes it very difficult to direct the right resources to problems faced by patients,” Dr Thombs said.
“Self-report questionnaires are meant to be used as an initial assessment to cast a wide net and identify people who may be struggling with mental-health issues,” he said.
“However, we need to conduct a more thorough evaluation in order to determine an appropriate diagnosis and whether there may be other issues to address.”
This meant the proportion of people who exceeded the threshold in a screening test and actually had the condition was often “very low”, the authors said.
“In many medical settings, fewer than three of 10 patients with a positive screen have major depression.”
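The “fewer than three of 10” figure reflects the positive predictive value of a screen, which falls as prevalence falls. A minimal sketch of that arithmetic, using hypothetical sensitivity, specificity and prevalence values chosen for illustration (not figures from the study):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Proportion of screen-positives who truly have the condition (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical values: a reasonably good screen in a 10%-prevalence setting.
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.80, prevalence=0.10)
print(round(ppv, 2))  # about 0.33: roughly 1 in 3 positive screens is a true case
```

Even with a screen that catches 90 per cent of true cases, most positive results in this scenario come from the much larger pool of people without the condition.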
While questionnaires assess similar symptoms to those explored in a diagnostic interview, they are not as rigorous.
Self-report questionnaires failed to assess functional impairment or to investigate other factors in a person’s life that might contribute to the symptoms, leaving researchers to judge participants as “likely” or “unlikely” to have depression based solely on whether they met a threshold score, Dr Thombs and colleagues said.
But because formal diagnosis was time-consuming and expensive, researchers often leant on self-report questionnaires to establish the prevalence of mental-health problems, they said.
However, screening tests for mental health were not designed to make diagnostic classifications, nor were they calibrated to estimate prevalence rates, said Dr Thombs.
He argued that using them for prevalence studies blurred the distinction between high- and low-prevalence populations, “often substantially”, and inflated estimates in the low-prevalence group.
Unfortunately, other perverse incentives in the research world may have compounded the problem.
“Studies with dramatic results tend to be accepted by higher impact journals and attract more attention from the public than studies with more modest findings,” he said. “This may also encourage some researchers to report results from questionnaires rather than conducting appropriate diagnostic interviews.”
The authors analysed primary studies on the prevalence of depression or depressive disorders and found that 17 out of 19 were based on screening studies, with research suggesting similar overreliance among meta-analyses.
Because screening tests were designed to cast a wide net, well-cited research had shown that the percentage of patients who met the threshold in those tests “typically exceeds true prevalence substantially”, they wrote.
For example, one meta-analysis of bariatric surgery patients found that one in five had depression when a screening questionnaire was used, but this figure dropped to around one in 13 if a validated diagnostic interview was used.
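The direction of that discrepancy follows from how the screen-positive proportion relates to true prevalence: every false positive adds to the apparent rate. As a hedged illustration (the sensitivity and specificity below are hypothetical, not taken from the meta-analysis), a screen can report roughly one in five positive even when true prevalence is about one in 13:

```python
def apparent_prevalence(sensitivity, specificity, true_prevalence):
    """Proportion of patients who screen positive: true cases caught plus false alarms."""
    screen_pos_cases = sensitivity * true_prevalence
    screen_pos_noncases = (1 - specificity) * (1 - true_prevalence)
    return screen_pos_cases + screen_pos_noncases

# Hypothetical screen characteristics; true prevalence set at 1 in 13.
apparent = apparent_prevalence(sensitivity=0.85, specificity=0.85, true_prevalence=1 / 13)
print(round(apparent, 2))  # roughly 0.2, i.e. about one in five
```

The gap widens as true prevalence falls, which is why the inflation is worst in low-prevalence populations.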
They pointed to other studies showing similar discrepancies.
Not only does this problem affect policymakers, but it may add to the problem of overdiagnosis.
“Practitioners may use the same methods to diagnose cases in clinical practice, and they may assume that they should be finding similar rates of disorders,” the authors wrote.
“Overdiagnosis can lead to inappropriate labelling and nocebo effects, as well as the unnecessary consumption of health care resources and potentially harmful treatment for patients who will not benefit.”
CMAJ 2018; online 15 January