Editorial

Health scorecards and electronic patient reported outcome measures (e-PROMs): the sum of us?

Clarence Baxter^

School of Public Health & Social Work, Queensland University of Technology, Brisbane, Australia

^ORCID: 0000-0001-8258-4836.

Correspondence to: Clarence Baxter, PhD, MPH. School of Public Health & Social Work, Queensland University of Technology, Victoria Park Road, Kelvin Grove, QLD 4059, Australia. Email: c.baxter@connect.qut.edu.au.

Comment on: de Moraes VY, Silva RP, Kawagoe CK, et al. Monitoring health as an opportunity to categorize preventative and early-treatment actions in a self-care journey: our experience with a Healthcare Magenta Scorecard. mHealth 2023;9:25.


Keywords: Digital health scorecard; electronic patient reported outcome measure (e-PROM); health and wellbeing


Received: 10 July 2023; Accepted: 07 August 2023; Published online: 23 August 2023.

doi: 10.21037/mhealth-23-38


According to the World Health Organization, the term “scorecard” pertains to the reporting of a “status” (1). Health scorecards inhabit a longstanding space where medicine and mathematics meet: complex health-related data sets can be distilled into simple, robust, comprehensible numeric summaries to support diagnostic, evaluative or prognostic decision-making (2,3). Scorecard-based reporting has been embraced in healthcare contexts as diverse as comparative performance evaluation of healthcare systems, monitoring public health promotion initiatives, managing health conditions and summarising the overall health and wellbeing of individuals (3). It is in this latter realm, realising individual health improvements by applying scorecards and their constituent score summaries to guide patient self-care journeys, that the present study by de Moraes et al. (4), considered in this issue, is situated.

It is not uncommon for an initial presentation for care by a patient or health consumer to be precipitated by self-initiated engagement with a quiz in a magazine, newspaper or (more recently) social media. Such quizzes constitute rudimentary scorecards (of varying provenance and curation) in their own right, identifying a health status that may prompt a decision to seek medical advice if, for example, a summary score exceeds a preset threshold (say, more than 10 positive responses out of 20 screening questions) or falls within a defined scoring range. On presentation to a healthcare facility, more focused (and often evidence-based) screening questions may then be posed by clinicians to elicit relevant patient history for triage purposes, to inform diagnosis and treatment choices, or to track health status across repeated presentations over time.
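
To make the arithmetic of such rudimentary scorecards concrete, a minimal sketch follows (in Python); the 10-out-of-20 cut-off simply mirrors the illustrative figure above and is not a validated screening rule.

```python
# Minimal sketch of a "magazine quiz" style scorecard: tally positive
# responses and compare against a preset threshold. The threshold and the
# responses below are illustrative only, not a validated screening rule.
responses = [True, False, True, True, False, True, True, False, True, True,
             False, True, True, True, False, True, False, True, False, True]

score = sum(responses)   # True counts as 1, so this tallies positive responses
THRESHOLD = 10           # preset threshold from the example above

if score > THRESHOLD:
    print(f"Score {score}/20 exceeds {THRESHOLD}: consider seeking medical advice.")
else:
    print(f"Score {score}/20 is at or below {THRESHOLD}: no action suggested by this quiz.")
```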

Gone (or disappearing rapidly) are the days when patients sat in a clinician’s waiting room with a clipboard balanced on their laps, diligently providing ‘tick and flick’ responses to paper-based questionnaires using a blunt pencil. For some, the waiting rooms have also disappeared in deference to the necessity for COVID isolation or the sheer convenience of engaging with their own smartphones or tablet devices at home to provide this information outside the physical confines of a healthcare facility.

Constituting a “patient’s voice” or point of view, electronic patient reported outcome (and experience) measures (e-PROMs and e-PREMs) harness digital devices to elicit health data emanating directly from the patient, without interpretation of responses by clinicians or anyone else (5). Digital devices offer unprecedented access to a “blank canvas” on which pertinent and timely e-PROM-derived information can be captured. The key difference between a health quiz in a magazine (or on TikTok) and responding to one or more “formal” e-PROM assessments lies in the validation and testing that underpins e-PROMs, and in how the resultant patient-sourced data are then applied to effective clinical decision-making for individual health improvement (or for health system evaluation) (5-7).

In this issue, de Moraes et al. report on the development and preliminary evaluation of a novel health scorecard, implemented as a new cross-platform-compatible mHealth app, which captures responses to several self-administered e-PROMs (representing selected health domains) to quantitate an individual’s health and wellbeing status (4). For each domain considered in this study, the authors applied a three-tiered numeric scoring system to yield a poor, good or excellent rating. A single composite (Magenta) score was derived as the mean of the scores tabulated across all of the domains investigated. Guidance was offered to study participants by means of defined decision trees suggesting evidence-based interventions for health improvement based on categorisation of reported scores, with follow-up assessment between 3 and 5 months later to gauge any change in reported health and wellbeing status.
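
As a minimal sketch of this compose-then-categorise pattern (in Python): the domain names, 0–1000 scale and tier cut-offs below are illustrative assumptions, inferred loosely from the example scores quoted later in this editorial, and are not the validated instruments or thresholds applied by de Moraes et al.

```python
from statistics import mean

def tier(score: float) -> str:
    """Map a 0-1000 score to a three-tiered rating (illustrative cut-offs only)."""
    if score < 500:
        return "poor"
    if score < 750:
        return "good"
    return "excellent"

# Illustrative domain scores; the actual instruments, scale and cut-offs used
# by de Moraes et al. are described in their paper, not reproduced here.
domain_scores = {
    "sleep": 780,
    "mental health": 460,
    "physical activity": 620,
    "nutrition": 700,
    "smoking": 900,
}

composite = mean(domain_scores.values())  # single summary ("Magenta"-style) score
for domain, score in domain_scores.items():
    print(f"{domain}: {score} ({tier(score)})")
print(f"composite: {composite:.0f} ({tier(composite)})")
```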

A recent cursory Google search on the topic “domains of health” identified anywhere between 3 and 27 domains of health and wellbeing. de Moraes et al. leverage a selection of evidence-based e-PROMs representing six health-related domains for inclusion in this new scorecard, domains deemed to offer the greatest potential to realise health benefits from early intervention and reflecting the setting and context for this research, namely a Brazilian private healthcare organisation. In addition to internationally recognised e-PROMs, the authors incorporated localised measures when considering specific health domains such as nutrition status, applying dietary recommendations published by the Brazilian Ministry of Health (4).

de Moraes and colleagues offer this study as a preliminary investigation, positing that further research is needed. The age of the study cohort reflects “digital natives” in their early thirties engaging with a private healthcare service. Distinct from this demographic, younger and older persons may face challenges in engaging with e-PROM technologies without assistance, as might persons from different socioeconomic backgrounds (8,9). The heterogeneous nature of the study group (i.e., presenting for case or disease management, health and wellbeing, etc.) demands experimental designs and sufficient sample sizes across target groups to facilitate robust longitudinal statistical analysis of outcomes. Lessons can be learned from prior systematic reviews regarding study designs suitable to demonstrate PROM and e-PROM efficacy (10,11). For example, a contemporary Cochrane review of 116 randomised trials found only low- to moderate-certainty evidence supporting the effectiveness of PROM feedback in improving health outcomes (10). Key risks identified in the reviewed studies included performance and detection biases.

Aggregation of existing e-PROMs in this new composite incarnation requires rigorous re-testing to assert that validity, reliability and usability are maintained across constituent e-PROMs. Variation in the order of question presentation, mix of question types across e-PROMs, onscreen response methods and overall completion times may all affect the clinical validity of the aggregated results and the usability of the new tool (5,7,12). The Magenta score calculated in this study is based on the mean across all six domains investigated; the course of some health conditions or treatments may yield average scores which fail to reflect changes (or minimal clinically important differences) within or between health domains (13,14). For example, a sleep score may increase over time (>750) owing to a worsening of mental health in some cases (<500), with or without a reduction in physical activity. Similarly, sleep may be disrupted (<500) for a patient who changes habits and reduces smoking (>500).
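
A brief hypothetical illustration of this masking effect follows; the figures are invented for the purpose and imply nothing about the scores actually observed by de Moraes et al.

```python
from statistics import mean

# Hypothetical baseline and follow-up domain scores (0-1000 scale, invented
# for illustration): the composite mean is unchanged even though individual
# domains have moved in clinically meaningful but opposing directions.
baseline  = {"sleep": 600, "mental health": 700, "physical activity": 650}
follow_up = {"sleep": 800, "mental health": 450, "physical activity": 700}

print("baseline composite: ", round(mean(baseline.values())))    # 650
print("follow-up composite:", round(mean(follow_up.values())))   # 650

# Per-domain change tells a different story than the unchanged composite.
for domain in baseline:
    delta = follow_up[domain] - baseline[domain]
    print(f"{domain}: {delta:+d}")
```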

Opportunities exist to leverage emergent technologies to augment the “patient’s voice” constituted by e-PROMs. Many corporate health systems are already designed to capture, assimilate and report on e-PROM data as part of a patient’s electronic medical record (5,7). Assistive technologies built into modern digital devices support equitable access to e-PROMs for people with varying physical abilities (e.g., vision, dexterity), offering spoken-command interaction and alternative means of data entry and screen navigation (15). Patient generated health data (PGHD) such as physical activity (steps) or sleep duration monitored by smartphone or wearable sensors could further “amplify” the patient’s voice by (unobtrusively) contributing quantitative health data; the “silent” accumulation of such PGHD on the patient’s own digital devices has been likened to “grains of sand” with the potential to accumulate as “clinical pearls” that further inform health improvement (16).

Potential also exists to harness emergent (and widely lauded) generative artificial intelligence (AI) to improve the breadth, depth and context of data captured by e-PROMs, refining score calculation algorithms and decision trees (17). AI-driven analysis of responses to e-PROM questions may suggest additional lines of questioning to elicit more information during real-time engagements with “Chat”-style e-PROMs. Entirely new vistas of research open up when considering the possible uses of AI in eliciting patient reported outcomes, albeit demanding a completely new set of “sums” driven by vastly more complex AI-aware e-PROMs.


Acknowledgments

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the editorial office, mHealth. The article did not undergo external peer review.

Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at https://mhealth.amegroups.com/article/view/10.21037/mhealth-23-38/coif). The author has no conflicts of interest to declare.

Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. World Health Organization. The making of the health and environment scorecards. 2022. Accessed July 9, 2023. Available online: https://www.who.int/news-room/feature-stories/detail/the-making-of-the-health-and-environment-scorecards
  2. McDowell I. Measuring Health: A Guide to Rating Scales and Questionnaires (3rd Edition). Oxford: Oxford University Press, 2009.
  3. Ratzan SC, Weinberger MB, Apfel F, et al. The Digital Health Scorecard: A New Health Literacy Metric for NCD Prevention and Care. Glob Heart 2013;8:171-9. [Crossref] [PubMed]
  4. de Moraes VY, Silva RP, Kawagoe CK, et al. Monitoring health as an opportunity to categorize preventative and early-treatment actions in a self-care journey: our experience with a Healthcare Magenta Scorecard. mHealth 2023;9:25. [Crossref] [PubMed]
  5. Agarwal A, Pain T, Levesque JF, et al. Patient-reported outcome measures (PROMs) to guide clinical care: recommendations and challenges. Med J Aust 2022;216:9-11. [Crossref] [PubMed]
  6. Bull C, Teede H, Watson D, et al. Selecting and Implementing Patient-Reported Outcome and Experience Measures to Assess Health System Performance. JAMA Health Forum 2022;3:e220326. [Crossref] [PubMed]
  7. Glenwright BG, Simmich J, Cottrell M, et al. Facilitators and barriers to implementing electronic patient-reported outcome and experience measures in a health care setting: a systematic review. J Patient Rep Outcomes 2023;7:13. [Crossref] [PubMed]
  8. McCabe E, Rabi S, Bele S, et al. Factors affecting implementation of patient-reported outcome and experience measures in a pediatric health system. J Patient Rep Outcomes 2023;7:24. [Crossref] [PubMed]
  9. Miranda RN, Bhuiya AR, Thraya Z, et al. An Electronic Patient-Reported Outcomes Tool for Older Adults With Complex Chronic Conditions: Cost-Utility Analysis. JMIR Aging 2022;5:e35075. [Crossref] [PubMed]
  10. Gibbons C, Porter I, Gonçalves-Bradley DC, et al. Routine provision of feedback from patient-reported outcome measurements to healthcare providers and patients in clinical practice. Cochrane Database Syst Rev 2021;10:CD011589. [PubMed]
  11. Ishaque S, Karnon J, Chen G, et al. A systematic review of randomised controlled trials evaluating the use of patient-reported outcome measures (PROMs). Qual Life Res 2019;28:567-92. [Crossref] [PubMed]
  12. Aiyegbusi OL. Key methodological considerations for usability testing of electronic patient-reported outcome (ePRO) systems. Qual Life Res 2020;29:325-33. [Crossref] [PubMed]
  13. Orr MN, Klika AK, Piuzzi NS. Patient reported outcome measures: Challenges in the reporting! Ann Surg Open 2021;2:e070. [Crossref] [PubMed]
  14. Sedaghat AR. Understanding the Minimal Clinically Important Difference (MCID) of Patient-Reported Outcome Measures. Otolaryngol Head Neck Surg 2019;161:551-60. [Crossref] [PubMed]
  15. Calvert MJ, Cruz Rivera S, Retzer A, et al. Patient reported outcome assessment must be inclusive and equitable. Nat Med 2022;28:1120-4. [Crossref] [PubMed]
  16. Seneviratne MG, Connolly SB, Martin SS, et al. Grains of Sand to Clinical Pearls: Realizing the Potential of Wearable Data. Am J Med 2023;136:136-42. [Crossref] [PubMed]
  17. Cruz Rivera S, Liu X, Hughes SE, et al. Embedding patient-reported outcomes at the heart of artificial intelligence health-care technologies. Lancet Digit Health 2023;5:e168-73. [Crossref] [PubMed]
doi: 10.21037/mhealth-23-38
Cite this article as: Baxter C. Health scorecards and electronic patient reported outcome measures (e-PROMs): the sum of us? mHealth 2023;9:31.
