Study: Patient Reports Can Substitute for Objective Measures in RA Evaluation
A model incorporating rheumatoid arthritis (RA) patients’ self-reports matched conventional physician-led evaluations of disease activity to an acceptable degree, researchers said.
“Machine learning” applied to data collected in an earlier RA drug trial, focusing on participants’ ratings of pain and physical and social function, yielded an overall measure of disease activity with a positive predictive value (PPV) approaching 90% when compared with standard Clinical Disease Activity Index (CDAI) scores compiled by physicians, according to Jeffrey Curtis, MD, MPH, of the University of Alabama at Birmingham, and colleagues.
“This approach has promise for real-world evidence generation in the common circumstance when physician-derived disease activity data are not available” but patient reports are, the group wrote in ACR Open Rheumatology. It could be especially useful for assessing treatment responses when patients start a new biologic medication, they added.
Patients seen via telemedicine are one such circumstance, Curtis and colleagues noted. The CDAI and its close cousin, the 28-joint Disease Activity Score, are the standard instruments for judging disease activity. Both require physicians to perform hands-on examinations and thus require patients to come into the clinic in person. Management could be more efficient if patients could simply report their own assessments remotely.
To test their hypothesis, Curtis and colleagues took data from the AWARE study, in which 1,270 RA patients were followed for 2 years while taking one of two biologic agents. This study collected participants' self-reports on a variety of outcomes. In addition to social participation, physical function, and pain intensity and interference with daily living, these also covered fatigue, sleep, anxiety, depressive symptoms, and overall status. CDAI evaluations were also performed.
The researchers sought a model that, based on the patient-reported data, would accurately classify whether patients achieved CDAI scores of 10 or less (the accepted level reflecting “low disease activity”) when seen between treatment months 3 and 12. Of the overall 1,270 patients, 494 had clinic visits after the first 3 months and provided their own assessments both at baseline and at later follow-up. Data from a random 80% of this group were used to train the model and the other 20% were used for testing.
The best performance was obtained with a so-called random forest analysis focusing on pain aspects and social and physical functioning.
As is typical in this type of predictive modeling, PPV varied inversely with sensitivity. When tuned to achieve 100% sensitivity, the model's PPV stood at about 79%; at 45% sensitivity, the PPV rose to 89%. Overall model accuracy was in the range of 80%, the researchers said.
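That inverse relationship comes from where the classifier's decision threshold is set: a lower threshold flags more patients as having low disease activity, catching more true cases (higher sensitivity) but also admitting more false positives (lower PPV). A minimal sketch of the tradeoff, using invented synthetic scores rather than anything from the study:

```python
import random

random.seed(0)

# Hypothetical model scores: each value stands in for a predicted
# probability that a patient has low disease activity (CDAI <= 10).
# True low-activity patients tend to score higher, but the two
# distributions overlap, so no threshold separates them perfectly.
positives = [random.betavariate(5, 2) for _ in range(300)]  # truly CDAI <= 10
negatives = [random.betavariate(2, 5) for _ in range(200)]  # truly CDAI > 10

def metrics_at(threshold):
    """Sensitivity and PPV when scores >= threshold are called positive."""
    tp = sum(s >= threshold for s in positives)   # true positives
    fp = sum(s >= threshold for s in negatives)   # false positives
    sensitivity = tp / len(positives)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sensitivity, ppv

# Raising the threshold trades sensitivity for PPV.
for t in (0.2, 0.5, 0.8):
    sens, ppv = metrics_at(t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  PPV={ppv:.2f}")
```

The same sweep underlies the figures reported above: the 100%-sensitivity operating point corresponds to a permissive threshold with more false positives, while the 89%-PPV point corresponds to a stricter one.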
This particular model isn’t ready for clinical application, Curtis and colleagues emphasized. Their pilot study had many limitations, including lack of sufficient data from more than half of AWARE participants and the fact that AWARE didn’t collect all the potentially relevant information on participants (such as details of comorbidities).
“Further validation with similar data sets derived from routine care settings, perhaps combined with [electronic records] data, can further extend the utility and support the validity of this approach and its practical implementation,” the group wrote.
In addition, they observed, “patients might be effectively trained to perform their own self-assessed joint count, thus improving classification accuracy” when combined with their subjective evaluations.
Disclosures
The study had no specific funding. Authors declared they had no relevant financial interests.