Patient Generated Health Data Use in Clinical Practice: A Systematic Review

Precision health calls for collecting and analyzing large amounts of data to capture an individual’s unique behavior, lifestyle, genetics, and environmental context. The diffusion of digital tools has led to a significant growth of patient generated health data (PGHD), defined as health-related data created, gathered or inferred by or from patients and for which the patient controls data collection and data sharing.

Purpose:

We assessed the current evidence of the impact of PGHD use in clinical practice and provide recommendations for the formal integration of PGHD in clinical care.

Methods:

We searched PubMed, Ovid, Embase, CINAHL, Web of Science, and Scopus up to May 2018. Inclusion criteria were applied, and four reviewers screened titles and abstracts and subsequently full-text articles.

Findings:

Our systematic literature review identified 21 studies that examined the use of PGHD in clinical settings. Integration of PGHD into electronic records was extremely limited, and decision support capabilities were for the most part basic.

Discussion:

PGHD and other types of patient-reported data will be part of the health care system narrative, and we must continue efforts to understand their impact on health outcomes, costs, and patient satisfaction. Nursing scientists need to lead the process of defining the role of PGHD in the era of precision health.

INTRODUCTION

Precision health calls for collecting and analyzing large amounts of data to capture an individual’s unique behavior, lifestyle, genetics, and environmental context to inform tailored and personalized delivery of health services (Akdis & Ballas, 2016). The growth of consumer technologies including smartphone apps and wearables has led to the design and use of tools that allow individual consumers to collect their own health related data. Such data pertain to their well-being and behavioral patterns as well as the environment in which they find themselves. In the US, 46% of consumers in 2016 were considered active digital health adopters, having used 3 or more categories of digital health tools (A. Adams, Shankar, & Tecco, 2016). Nearly a third of people who downloaded a health app did so because the app was recommended by their doctor and nearly a quarter of Americans owned a wearable device such as an activity tracker in 2016, up from 12% in 2015 (A. Adams et al., 2016).

The implementation of digital tools has led to a significant growth of so-called patient generated health data (PGHD). PGHD are defined by the Office of the National Coordinator for Health Information Technology (ONC) as “health-related data including health history, symptoms, biometric data, treatment history, lifestyle choices, and other information created, recorded, gathered or inferred by or from patients or their designees” (Shapiro, Johnston, Wald, & Mon, 2012). This definition emphasizes that patients, not providers, are primarily responsible for capturing or recording these data, and it is patients who direct the sharing or distribution of the data to stakeholders (Shapiro et al., 2012). PGHD from self-tracking has been envisioned as a means to bridge a gap by supplementing data from clinical visits with a rich picture of a person’s daily behaviors, environment, and lifestyle. This approach has the potential to inform better clinical decision making, with patients engaged in the decision-making process (Shapiro et al., 2012). PGHD tools are perceived as ways to capture and even “amplify” the patient voice in the health care system and strengthen the patient-provider relationship, increasing patient safety and information access (National eHealth Collaborative, 2013).

Patients may utilize a broad spectrum of platforms to capture such data, ranging from paper-based tools to wearable or implantable devices. Similarly, such platforms may have varying degrees of sophistication in how data are handled and analyzed; for example, a platform may include alerts for individual data points, predictive analytics, natural language processing, or artificial intelligence. The data may also be communicated and shared in numerous ways, including integration into the patient’s record and graphical, text-, or audio-based summaries that can be shared with clinicians and others. The use of information technology for capturing and transmitting PGHD allows new types of data to be generated outside of a clinical setting without sole reliance on self-report. These might include data related to overall physical activity, mobility, sleep quality, nutrition, social interactions, and water and air quality. Table 1 showcases the breadth of PGHD types and sources as well as potential tools to capture such data.

Table 1.

Range of PGHD types and sources

Data Type | Data Element Examples | Modality for Data Capturing (Examples)
Personal profile | Life goals, values | Online/patient portal (Kneale, Choi, & Demiris, 2016)
Preferences | Notifications; communication; delegation or identification of proxy |
Health data review | Edits/updates to health record data (e.g., list of allergies) |
Health and family history | Updates to personal and family health history and health events |
Medication information | Updates to over-the-counter medication |
  | Medication adherence | Connected medication dispensing unit (Brath et al., 2013; Forni Ogna et al., 2013)
Biometric tracking | Blood pressure | Wireless blood pressure cuff/Bluetooth to smartphone application (Ciemins et al., 2018; Evans et al., 2016)
  | Weight | Digital weight scale (Demiris et al., 2013)
  | Body temperature | Digital thermometer (Ask, Ekstrand, Hult, Lindén, & Pettersson, 2012)
  | Oxygen saturation | Wireless pulse oximeter (Velardo et al., 2017)
  | Blood glucose level | Digital glucose monitor (Lee et al., 2017)
  | Lung function | Digital spirometer (Shakkottai, Kaciroti, Kasmikha, & Nasr, 2018)
  | Heart rate | Wrist-worn activity tracking device (Thiebaud et al., 2018)
Behavioral tracking | Activity level | Pedometer watch/accelerometer (Actigraph) (Hooke, Gilchrist, Tanner, Hart, & Withycombe, 2016; Joseph, Stromback, Hagstromer, & Conradsson, 2018)
  | Calorie burning | Fitness tracker with calorie burning calculator (Franco, Fallaize, Lovegrove, & Hwang, 2016)
  | Sleep quality | Bed sensor strip with ballistocardiography sensor (Kortelainen, van Gils, & Pärkkä, 2012)
  | Daily hygiene routine | Water sensors, motion sensors (J. Chung et al., 2017)
Environmental tracking | Room temperature | Temperature sensor (Bock et al., 2016)
  | Noise | Indoor sound level sensor (Risojević, Rozman, Pilipović, Češnovar, & Bulić, 2018)
  | Luminosity | Home digital luminosity sensor (Bock et al., 2016)
  | Humidity | Indoor air quality sensor (Bock et al., 2016)
Social interactions tracking | Number of visitors | Door sensor (Skubic, Guevara, & Rantz, 2015)
  | Time spent outside the home |
  | Number of calls | Phone usage summary app (Deave et al., 2018)
  | Time spent online | Online monitoring app (Chen & Schulz, 2016)
Genetic information | Predictive and pre-symptomatic testing | Direct-to-consumer genetic testing kit
Mental health assessment | Screening for depression | Online/patient portal (Leveille, Huang, Tsai, Weingart, & Iezzoni, 2008)
  | Anxiety assessment | Smartphone app (Alyami, Giri, Alyami, & Sundram, 2017)
Symptom tracking | Symptom frequency, intensity, side effects | Online/patient portal (Kneale et al., 2016)
Patient reported outcomes | Condition-specific outcomes, quality of life | Online/patient portal (Kneale et al., 2016)
Multimedia observations | Video- or photo-recordings | Telehealth video camera (Gunter et al., 2016)
Care goals | Patient review of healthcare team goals | Personal health record (Lum et al., 2019)
Patient experience | Patient satisfaction | Online/patient portal (Kneale et al., 2016)
Legal documentation | Advance directive | Paper-based/online (Lum et al., 2019)
Ad hoc requests | Request for health data amendment | Online/patient portal (Kneale et al., 2016)
Administrative data | Contact information, caregiver(s) |
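
For readers considering how such heterogeneous data might be represented in software, the following minimal Python sketch models a single PGHD observation together with the patient-controlled collection and sharing flags emphasized by the ONC definition. It is illustrative only; the class and field names are hypothetical and are not drawn from any system described in this review.

```python
# Illustrative only: a minimal sketch of how the heterogeneous PGHD types in Table 1
# might be modeled in software. Class and field names are hypothetical and are not
# drawn from any system described in this review.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class DataType(Enum):
    """High-level PGHD categories adapted (abbreviated) from Table 1."""
    BIOMETRIC = "biometric tracking"
    BEHAVIORAL = "behavioral tracking"
    ENVIRONMENTAL = "environmental tracking"
    SYMPTOM = "symptom tracking"
    PATIENT_REPORTED_OUTCOME = "patient reported outcome"


@dataclass
class PGHDObservation:
    """One patient-generated data point plus the patient-controlled flags the ONC definition emphasizes."""
    data_type: DataType
    element: str                          # e.g., "blood pressure", "sleep quality"
    value: object                         # numeric reading, text, or structured payload
    unit: Optional[str]
    captured_at: datetime
    capture_modality: str                 # e.g., "wireless blood pressure cuff via Bluetooth"
    patient_initiated: bool = True        # the patient controls data collection ...
    shared_with_clinician: bool = False   # ... and decides whether to share


# Example: a home blood pressure reading captured by the patient but not yet shared.
bp = PGHDObservation(
    data_type=DataType.BIOMETRIC,
    element="blood pressure",
    value={"systolic": 132, "diastolic": 84},
    unit="mmHg",
    captured_at=datetime(2018, 5, 1, 7, 30),
    capture_modality="wireless blood pressure cuff via Bluetooth to smartphone app",
)
```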

While opportunities have been identified for integrating PGHD into clinical workflow and care management, concerns have also been identified. Health care providers have expressed concerns that the potential added burden of reviewing PGHD could outweigh any potential added efficiencies (Shapiro et al., 2012). In a simulation study of the changes to a health system from adding PGHD, researchers identified indirect consequences including additional time and cognitive demand and increased labor cost from the additional time required to assimilate PGHD (D. A. Steward, R. A. Hofler, C. Thaldorf, & D. E. Milov, 2010). Specifically, workdays and patient visits were extended in duration and became less predictable to schedule, with nurses’ utilization rates of the PGHD system increasing over time while physicians’ utilization rates remained relatively unchanged. The authors concluded that the impact of PGHD is nontrivial and would cause longer workdays or mandate the sacrifice of other activities. Other concerns include whether the data will be usable and of high enough quality to support decision making, what the financial impact may be, and whether there may be potential liability concerns (A. E. Chung & Basch, 2015a). For individuals unable to track PGHD because of, for example, their disease make-up, access to devices, or medical coverage, there are concerns about creating or contributing to inequities. Furthermore, questions remain about the content and frequency of data collection that would be most helpful. Concerns about the accuracy and completeness of PGHD have also been identified (Weissmann, Mueller, Messinger, Parkin, & Amann-Zalan, 2016). Clinicians may have reservations about utilizing PGHD in their clinical decision making because such data sets may be a new and unfamiliar source of information. In one study, many patients who shared self-tracking data with their providers expressed dissatisfaction with the level of provider engagement with these data (C. Chung et al., 2016).

Despite these concerns, some health systems are moving forward with efforts to use PGHD to improve care. For example, the US Department of Veterans Affairs is striving to implement an enterprise-wide capability to collect and use PGHD in order to improve the patient healthcare experience and promote shared decision making (S. Woods, N. Evans, & K. Frisbee, 2016). To date, evidence of the effectiveness of integrating PGHD into clinical settings is limited and many questions remain, such as: How can we integrate patient-generated data into the electronic health record? What strategies can be pursued to effectively mine and analyze these data to support clinical decision making? What are the barriers and challenges in the integration of patient-generated data into health information systems? How can we facilitate patient engagement and empowerment while addressing the ethical concerns associated with pervasive and ubiquitous monitoring? The purpose of this paper is to assess the current evidence on the impact of PGHD use in clinical practice and/or the use of PGHD for clinical decision making (e.g., for diagnosis, treatment, monitoring, or management) and to discuss opportunities and challenges associated with the formal integration of PGHD in clinical care.

METHODS

We conducted a systematic literature review to examine the use of PGHD in clinical practice. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2009).

Search Strategy

We searched PubMed, Ovid, Embase, CINAHL, Web of Science, and Scopus. We began with the MeSH term “Patient Generated Health Data.” Because this term was only introduced as a MeSH heading in 2018, it identified few articles. To identify all relevant studies, we therefore added several keywords and their synonyms. First, we entered keywords such as “patient”, “person”, “peer”, and “caregiver” to broaden the range of individuals who can create, record, and gather data. We also added common data-capturing modalities (e.g., self-tracking, wearable, mobile health, and m-health). In addition, we included “patient reported outcome” (PRO) as a common form of PGHD. All keywords and synonyms related to patient-generated health data and decision making were searched in May 2018. Table 2 outlines our search strategy, which was finalized after review by a Health Sciences librarian. We were broadly inclusive of digital or paper as the means of self-tracking, but limited results to articles in English, studies with human participants, and articles for which full text was available (not constrained to free article access). We augmented this search with 11 papers known to us but not returned by the keyword search.

Table 2.

Search strategy outline

Block 1 (terms combined with OR):
  “patient generated health data”[Mesh]
  “patient generated health data” OR “patient-generated health data”
  “patient generated health information” OR “patient-generated health information”
  “patient generated data” OR “patient-generated data”

Block 2 (terms combined with OR, then combined with “data” OR “information” using AND):
  “patient generated” OR “patient-generated”
  “person generated” OR “person-generated”
  “caregiver generated” OR “caregiver-generated”
  “peer generated” OR “peer-generated”

Block 3 (terms combined with OR):
  “patient reported outcome measures”[Mesh]
  “patient reported” OR “patient-reported”
  “self tracking” OR “self-tracking”
  “body-worn sensor*”
  wearable
  smartphone*
  mhealth
  “mobile health”
  “personal health record”

Blocks 1-3 were combined with OR, and the result was combined with Block 4 using AND.

Block 4 (terms combined with OR):
  “clinical decision making” OR “clinical decision-making”
  “medical decision making” OR “medical decision-making”
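
To make the Boolean structure of Table 2 concrete, the short Python sketch below assembles the blocks above into a single query string. It is illustrative only: the syntax shown loosely resembles PubMed, and the actual syntax we used differed across PubMed, Ovid, Embase, CINAHL, Web of Science, and Scopus.

```python
# Illustrative only: assembles a Boolean query mirroring the structure of Table 2.
# Real syntax differs across PubMed, Ovid, Embase, CINAHL, Web of Science, and Scopus.
pghd_phrases = [
    '"patient generated health data"[Mesh]',
    '"patient generated health data"', '"patient-generated health data"',
    '"patient generated health information"', '"patient-generated health information"',
    '"patient generated data"', '"patient-generated data"',
]

generator_terms = [
    '"patient generated"', '"patient-generated"',
    '"person generated"', '"person-generated"',
    '"caregiver generated"', '"caregiver-generated"',
    '"peer generated"', '"peer-generated"',
]

modality_terms = [
    '"patient reported outcome measures"[Mesh]',
    '"patient reported"', '"patient-reported"',
    '"self tracking"', '"self-tracking"', '"body-worn sensor*"',
    'wearable', 'smartphone*', 'mhealth', '"mobile health"',
    '"personal health record"',
]

decision_terms = [
    '"clinical decision making"', '"clinical decision-making"',
    '"medical decision making"', '"medical decision-making"',
]


def or_block(terms):
    """Join a list of terms into a parenthesized OR block."""
    return "(" + " OR ".join(terms) + ")"


# (PGHD phrases OR (generator terms AND data/information) OR modality terms)
# AND decision-making terms
pghd_block = or_block(
    pghd_phrases
    + [or_block(generator_terms) + ' AND ("data" OR "information")']
    + modality_terms
)
query = pghd_block + " AND " + or_block(decision_terms)
print(query)
```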

Selection Criteria

Given the focus of this review on actual PGHD use for clinical decision making in a clinical setting (unlike use of PGHD for the sole purpose of collecting data for a research protocol or without involvement of clinicians), inclusion was based on three main criteria: (1) The article must have been peer reviewed, and represent empirical work (data, whether qualitative or quantitative, collected as part of the study and reported in the article). This excluded opinion papers, vision statements, literature surveys, and similar pieces. It also excluded a number of papers discussing PGHD issues, and papers describing the architecture of particular systems or apps. Articles in which the data were simulated or fabricated for test purposes were also excluded. (2) We adhered to the ONC definition of PGHD, requiring that the patient initiate data collection and control data sharing. This excluded clinician-initiated data collection such as clinical tele-monitoring and most implantable devices, because in those cases the patient did not control data collection or data sharing. Patient-reported outcomes were commonly reported but most were excluded from this review because the data were retrospective and gathered at a fixed schedule mandated by the research study protocol (patients did not control the “self-monitoring” process). (3) The self-monitoring data had to have been used for clinical decision making or during a patient-clinician encounter. This excluded social media groups and online discussion forums, which typically focused on peer-to-peer interactions, and platforms for self-improvement or self-reflection.

We used the Covidence systematic review software (Veritas Health Innovation Ltd, Melbourne, Australia) to manage the review process. The software automatically removed duplicates. We required that at least two people from our team review the title and abstract of each article. When there were conflicts, we resolved them by a third person’s vote or, if the third person was also uncertain, by group discussion. We used a similar process for full-text screening, allowing one person to vote to retain an article but requiring at least two votes for exclusion, with group discussion to resolve conflicts.

Analysis

Papers were read in full by members of the research team to identify and tabulate features of the studies (such as design and sample). All papers were read by at least two team members. Two team members derived themes to describe the tabulated findings. The themes were reviewed and refined by all team members.

RESULTS

The original keyword search returned 7994 articles; screening and full-text review resulted in a final set of 21 articles included in the review. Figure 1 shows our PRISMA diagram, including the primary reason for exclusion for each article whose full text was reviewed. From the larger set of articles, we eliminated 1256 duplicates. Most of the articles in the title/abstract screen were excluded because they did not meet the PGHD definition used in this study, including more than 200 articles that included formal or informal patient-reported outcomes but missed some element, most commonly the requirement that the patient/participant controlled the timing of data collection or the decision about whether to share the data. Similarly, the most common reason for excluding studies of sensors was lack of information about whether the patient had any control over data collection or sharing. Other common reasons for exclusion were that the study did not include actual participant data (e.g., fabricated data sets, or description of architecture without actual data collection). Only 1 article was excluded for not being in English; the English-language abstract for that article appeared not to meet the PGHD definition. No articles were excluded on the basis of full text being unavailable.

Figure 1. PRISMA diagram of the article screening and selection process.

We found that, while there are articles discussing the vision or need for PGHD in clinical care (at least 40), and more than 200 articles describing data collected from patients or caregivers, empirical research meeting the ONC definition of PGHD was scarce. Table 3 summarizes the final 21 articles. Publication dates for the articles ranged from 2001 to 2018. An initial slow start (fewer than 1 article per year) was followed by an increasing number of articles starting in 2016. As shown in Table 3, study locations included the USA, UK/Europe, Australia, and Asia, with the location not specified for 2 studies. Participant ages covered the lifespan from pediatrics to older adults. Studies examined a wide variety of conditions and symptom foci (see Table 3).

Table 3.

Description of the included studies

Reference | Overview | Study/System | Findings
Adams et al. (2003) | Model information system that integrates patient health information to support monitoring and care of children with persistent asthma | |
(Remaining rows recoverable only in part.) Condition and symptom foci across the included studies comprised: physical and emotional well-being; diabetes and hypertension; post-surgical wound healing (system: cell phone camera, email); atrial fibrillation (AF) and gestational diabetes (GDM); and mild traumatic brain injury (mTBI) and post-traumatic stress disorder (PTSD) (system: an unnamed electronic survey tool supporting data collection from personal digital assistants and commercial SMS text messaging).

Notes. CONDUIT-HID= CONtrolling Disease Using Inexpensive Technology-Hypertension in Diabetes; BP=blood pressure; AF=atrial fibrillation; GDM=gestational diabetes Mellitus; EMA=ecological momentary assessment; mTBI =mild traumatic brain injury; PTSD= post-traumatic stress disorder

Most of the studies were at exploratory or developmental stages. Two studies were randomized controlled trials (RCTs), with sample sizes of 40 (Hsu et al., 2016) and 96 (Jiang et al., 2016). A third (Andy et al., 2012) described their 61-patient study as a randomized trial but did not provide information about the groups and reported the results as an observational study. Six studies were observational or cross-sectional designs (Barrett et al., 2018; Lau et al., 2013; Lv et al., 2017; Marceau et al., 2010; Miller et al., 2017; Weissmann et al., 2016). More than half (12/21, 57%) described the study as a beta test, pilot, feasibility, or case study (Adams et al., 2003; Albisser et al., 2001; Basch et al., 2007; Bauer et al., 2018; Johansen et al., 2004; Lindroth et al., 2018; Martinez et al., 2017; Peleg et al., 2017a, 2017b; Quinn et al., 2008; Smith et al., 2012). Sample sizes for these early-stage exploratory studies ranged from 4 (Johansen et al., 2004) to 142 (Albisser et al., 2001), with most having 30 or fewer participants (Table 3).

Article Quality

Our purpose was not to report the effects of PGHD per se, and most of the studies described very early-stage projects, with fewer than half of the articles being RCTs or prospective observational studies. Because of the preponderance of early, developmental, and pilot studies, we did not use a standard appraisal tool to formally evaluate study quality. However, we qualitatively examined article quality. Some of the articles met traditional quality metrics. For example, the observational study by Weissmann et al. (2016) had a fairly large sample, well-described participant characteristics, and provided detail about the study processes.

We also noted several limitations on study quality. Participant characteristics were largely unreported; as a striking example, the study by Adams et al. (2003) did not report any participant characteristics, including the number of participants. As shown in Table 3, in multiple studies the age range was not specified but was presumed to be adult based on the study description. Most studies were single center, and often a single unit within a center, with only the study by Weissmann et al. (2016) explicitly described as multi-center. Some effects were reported but not actually measured; for example, in the study by Barrett et al. (2018), potential clinical effects were only hypothesized. The PGHD systems were evolving and undergoing iterative refinement (particularly in studies described as pilot or beta-testing). The study by Albisser et al. (2001), for example, explicitly noted that the system was being actively refined while the study was being conducted. Refinement is a natural part of tool development processes but can be a challenge to reproducibility. We noted other design and methodology issues, some reported by the authors, including high attrition (Marceau et al., 2010) and recall bias and ascertainment bias (Peleg et al., 2017a).

Types of Data Collected

We extracted descriptions of the wide variety of data collected in the studies and grouped them by data elements. Not surprisingly, almost all the PGHD systems collected data about symptoms, physiological measurements, and behaviors, with the exception of the study by Hsu et al. (2016); the PGHD system Hsu evaluated focused on blood glucose values and medication adherence, although the patients also participated in virtual visits with their health care provider via videoconferencing. Some of the systems did not just ask whether symptoms were present but also included the extent of symptom interference or quality-of-life metrics. The study by Basch et al. (2007) included formal PRO measures as well as study-specific questionnaires to measure symptoms, and used validated measures of quality of life. The study by Bauer et al. (2018) included the validated instruments PHQ-9 to measure depression and GAD-7 to measure anxiety. As shown in Table 3, most studies were focused on condition-specific topics, and consequently the data collected in the systems focused on condition-specific symptoms. Also commonly reported were lifestyle and health behaviors such as activity/exercise and diet, risk behaviors (such as smoking), and preventive measures (such as foot or eye exams). Medication usage or adherence was examined in several PGHD systems (Adams et al., 2003; Andy et al., 2012; Barrett et al., 2018; Hsu et al., 2016; Marceau et al., 2010; Peleg et al., 2017a, 2017b; Quinn et al., 2008).

Many of the studies included physiologic measurements from a device, such as a blood glucose monitor (Albisser et al., 2001; Andy et al., 2012; Hsu et al., 2016; Peleg et al., 2017a, 2017b; Quinn et al., 2008; Weissmann et al., 2016), vital signs such as blood pressure or heart rate (Andy et al., 2012; Jiang et al., 2016; Peleg et al., 2017a, 2017b), body weight (Lv et al., 2017), or spirometry or peak flow (Adams et al., 2003; Jiang et al., 2016). In some studies, device data were manually entered by the patient into the PGHD system; notable exceptions were the report by Andy et al. (2012) and the reports by Peleg et al. (2017a, 2017b), which explicitly noted that the system allowed data to upload from commercial blood glucose monitors. The Weissmann et al. (2016) report also included a device reader that could pull data from blood glucose monitors. The paper by Martinez et al. (2017) used an automated blood pressure cuff that uploaded data to the Microsoft HealthVault personal health record.

Less commonly seen data element categories were contextual data, goals/preferences, and miscellaneous data. Contextual elements included patient demographics and events such as illness or pre-defined psychosocial contexts such as being at work (Albisser et al., 2001; Andy et al., 2012). Goals or preferences were occasionally reported in the PGHD system (Barrett et al., 2018; Peleg et al., 2017b). Miscellaneous data included problem-solving activities or journal functions in which the patient could choose what to document (Andy et al., 2012).

Usability and Satisfaction

Some of the studies reported that the PGHD system/app included built-in surveys or questionnaires evaluating the application itself, reactions to using the system, or issue tracking. However, despite the clearly formative nature of most of these evaluations, few of the developmental studies reported formal usability evaluations, and satisfaction or reactions were generally reported only in broad terms. Some assessed usability or satisfaction externally to the PGHD system. The study by Andy et al. (2012), for example, predominantly reported what functions were used, and noted in the conclusions that they received “user feedback including either device problems or browser compatibility problems” (p. 6 of 6). Jiang et al. (2016) used a satisfaction survey but noted that the distribution of scores was highly skewed, so scores were dichotomized as fully satisfied or less than fully satisfied (p. 6). Johansen et al. (2004) noted that families were “happy to participate” and found it “easy and convenient” (p. S1:55), and also provided qualitative comments to illustrate responses. Despite having ease of use in the title, the study by Martinez et al. (2017) did not report any ease-of-use or usability metrics. Similarly, Quinn et al. (2008) had satisfaction in their study title, but what they evaluated was satisfaction with the clinical outcome rather than satisfaction with the PGHD system. Smith et al. (2012) did not describe their survey other than to note it was a brief, forced-choice questionnaire, and reported broadly that “participants generally found the messaging program useful” (p. 300), with percentages of respondents who found selected features of the system “helpful” (p. 301).

A few of the studies were more informative. Barrett et al. (2018) described their study as a crowdsourced application and, although not describing a formal evaluation in detail, reported participant experiences from 57 patients, noting that 80% were satisfied with the sensor and found it easy to use and 81% reported feeling more confident in being able to avoid an asthma attack. Basch et al. (2007) noted that they used a satisfaction survey with items adapted from measures used in similar research, and reported “Satisfaction with the system was high (90%), but only 51% felt communication was improved” (p. 5375). Bauer et al. (2018) reported that “the app was easy to use and the amount of time was reasonable” but added detail, including a table with responses to individual items on their questionnaire. Hsu et al. (2016) conducted a qualitative exit interview and reported example user comments organized around themes of reduced anxiety, empowerment, and connecting glucose level to behavior (pp. 63-64). Marceau et al. (2010) noted satisfaction as a main study outcome, described their questionnaires as adapted for the study from previously published questionnaires, and reported not only questionnaire results but also qualitative comments, both positive and negative. Peleg et al. (2017b) reported the results of a usability survey in detail, with responses presented in a table. Weissmann et al. (2016) reported high levels of physician satisfaction (“satisfied or perfectly satisfied”) along a number of domains such as time for decision making, quality of patient interactions, speed of report generation, clarity of records, and other domains (p. 81).

PGHD Systems/Apps

Most of the PGHD systems were study-specific. A few (5) used commercial systems or included off-the-shelf components (Albisser et al., 2001; Barrett et al., 2018; Miller et al., 2016; Quinn et al., 2008; Weissmann et al., 2016). Data were predominantly manually entered; in many of the studies, patients had to manually record even device data into the PGHD system. Five studies (Andy et al., 2012; Hsu et al., 2016; Martinez et al., 2017; Quinn et al., 2008; Weissmann et al., 2016) indicated that they pulled data from a limited number of very specific devices, such as specified glucometers. Voice or phone touch-tone was used for data entry in 2 studies (Adams et al., 2003; Albisser et al., 2001). Digital images were used in the studies by Johansen (2004) and Miller (2016), and as an option for recording food intake in the study by Andy (2012). Bauer et al. (2018) included data from sensors built into the phone or tablet, such as location, movement, phone usage, and app usage data.

Data storage was predominantly not reported. A few studies discussed a study- or app-specific survey, or integration with REDCap or similar data collection tools. Data transfer methods included Bluetooth (for systems that captured data from devices to a phone) and Wi-Fi (phone to server). Web portals were reported in multiple studies. Some systems had no data transfer (data were entered and viewed directly on a central server). Data transfer methods were sometimes unspecified (“secure data transfer” or “patients could upload”).

Electronic Health Record (EHR) integration

Interestingly, 2 of the studies used paper (printed reports) for sharing data with the clinicians (Albisser et al., 2001; Basch et al., 2007). Only 4 studies claimed electronic health record (EHR) integration of PGHD (Adams et al., 2003; Holch et al., 2017; Lv et al., 2017; Martinez et al., 2017). The system examined by Peleg et al. (2017a, 2017b) interacted with EHR data in the other direction, pulling clinical data into the PGHD system. Integration with electronic health records was discussed as a potential for future development, using terms such as HL7 compatible (Martinez et al., 2017) or formatted to support semantic integration (Peleg et al., 2017a, 2017b). Andy et al. (2012) created a report formatted as an HL7 Continuity of Care Document (CCD), a national standard accepted by the U.S. Department of Health and Human Services for sharing clinical information (HL7 International).
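
As an illustration of what standards-based integration could look like, the sketch below expresses a patient-captured home blood pressure reading as an HL7 FHIR Observation resource (FHIR is a related HL7 standard; none of the reviewed systems is documented as using this exact representation, and the resource contents and codes shown should be validated against the FHIR specification before any real use).

```python
# Illustrative only: a home blood pressure reading expressed as an HL7 FHIR
# Observation resource (JSON built as a Python dict). None of the reviewed
# systems is documented as using this exact representation; codes and structure
# should be validated against the FHIR specification before use.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs",
        }]
    }],
    "code": {  # LOINC 85354-9: blood pressure panel
        "coding": [{"system": "http://loinc.org", "code": "85354-9",
                    "display": "Blood pressure panel"}]
    },
    "subject": {"reference": "Patient/example"},   # hypothetical patient identifier
    "effectiveDateTime": "2018-05-01T07:30:00Z",
    "device": {"display": "Home wireless blood pressure cuff"},
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 132, "unit": "mmHg"}},
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4",
                              "display": "Diastolic blood pressure"}]},
         "valueQuantity": {"value": 84, "unit": "mmHg"}},
    ],
}

print(json.dumps(observation, indent=2))
```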

Decision Support

The articles in this review described mostly very simple forms of decision support. The predominant form of decision support was information presentation, in a variety of summaries, reports, or status dashboards. This included progress reports stored centrally (Albisser et al., 2001), visualizations of patient data viewed by the nurse and used to adjust interview questions for face-to-face consultations (Lindroth et al., 2018), and weekly summaries correlating medication adherence and blood glucose values, with reminders to also consider diet and exercise effects and links to communication tools (Hsu et al., 2016). Blood glucose profiles, statistics, graphs, and other visualizations were also provided in reports by Weissmann et al. (2016). Johansen et al. (2004) asked families to email a summary to the burn team. Well-constructed reports and information displays are known to support communication between patients and providers and facilitate collaborative decision making. Weissmann et al. (2016), for example, explicitly noted that the reports were used during clinic visits to guide collaborative decision making, as did Hsu et al. (2016). Marceau et al. (2010) used direct patient and healthcare provider communications as the primary means of decision support. Similarly, Smith et al. (2012) used data shared with the treatment team as a primary form of decision support. Some of the reports and information presentation features specifically targeted the patient or the provider. Miller et al. (2016) presented wound images to the providers, leaving interpretation to the provider. Lau et al. (2013) specifically designed their electronic diary to support participant self-reflection, with links to communication portals that would allow people to choose to communicate with clinicians.

Also commonly reported were a variety of unspecified feedback or reminders, or simple alerts based on thresholds (such as a blood pressure reading above guideline thresholds). Patients were generally advised to consult with their clinician rather than being offered specific actionable advice. A few systems included automatically generated emails that could be sent to the providers for certain alert conditions. Examples of these alerts and feedback include the following (a minimal sketch of this kind of threshold logic appears after the list):

symptom-treatment mismatch notice to the patient, with alerts sent to a clinician (Adams et al., 2003)

out of range vital signs (Andy et al., 2012)

dashboard showing asthma control, medication adherence, as well as notification of local air pollution levels (Barrett et al., 2018)

patient alerted to contact clinician if symptom severity of grade 3 or higher was reported (Basch et al., 2007). This study had no automated reporting to clinicians.

app (for patients) plus dashboard (for care managers) that included the ability to graph findings over time; the dashboard flagged patients with specified alerts, such as persistent symptoms, isolation based on movement/communication, or a patient response indicating thoughts of self-harm. Care managers and clinicians responded to patients by phone (Bauer et al., 2018)

alerts for symptoms that passed critical thresholds, with feedback message about when and what to report to the transplant coordinator (Jiang et al., 2016)

dashboard alert for nurse case manager if individual blood pressure measurements cross a specified critical level (Lv et al., 2017)

feedback about how entered blood glucose value compared to patient-specific target (Quinn et al., 2008)
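
The sketch below illustrates the kind of threshold-based alerting summarized in the list above. It is a minimal, hypothetical example: the cutoff values and alert wording are not taken from any of the reviewed systems or from clinical guidelines.

```python
# Illustrative only: threshold-based alerting of the kind reported in the reviewed
# studies. Thresholds and alert wording are hypothetical, not taken from any
# specific system or guideline.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Alert:
    severity: str      # "notify_patient" or "notify_clinician"
    message: str


def check_blood_pressure(systolic: int, diastolic: int) -> Optional[Alert]:
    """Return an alert when a self-measured reading crosses illustrative thresholds."""
    if systolic >= 180 or diastolic >= 110:
        return Alert("notify_clinician",
                     "Very high reading; alert sent to care team.")
    if systolic >= 140 or diastolic >= 90:
        return Alert("notify_patient",
                     "Above target; please discuss with your clinician.")
    return None  # within range: no feedback beyond routine summaries


# Example: a patient-entered reading that prompts advice to contact a clinician.
alert = check_blood_pressure(systolic=152, diastolic=96)
if alert is not None:
    print(alert.severity, "-", alert.message)
```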

Patient education was specifically called out as a form of feedback or advice in some of the studies. Albisser et al. (2001) provided self-management instructions, and Andy et al. (2012) provided standardized educational messages. Similarly, the system evaluated by Quinn et al. (2008) provided patient feedback/education about nutrition, lifestyle, stage of change, and self-management skills. Sometimes the messages were somewhat tailored: Adams et al. (2003) provided behavioral reinforcement education tailored to the patient data, and Barrett et al. (2018) tailored education based on guidelines.

Three of the more recent studies provided actionable advice coupled with clinician notifications. The system evaluated by Holch et al. (2017) provided immediate targeted advice based on local and national guidelines for low- to moderate-severity events; for severe events, the system advised the patient to contact the hospital and an email was sent to clinicians. The system evaluated by Martinez et al. (2017) used a protocol to evaluate data and provided feedback to adjust medications if blood pressure was not controlled, along with alerts sent to diabetes care nurses. The system evaluated by Peleg et al. (2017a, 2017b) included a formal clinical decision support system that provided feedback based on patient data and clinical guidelines, but gave patients control over how to use the system.

A care manager (often a nurse care manager) or other intermediary was an important part of the decision support workflow in several studies. Adams et al. (2003) triaged alerts into level 1 (high priority), with alerts sent directly to a care manager, and level 2 (lower priority), with alerts reported into a document that could be reviewed by the care manager at their convenience. The care manager determined when to contact the primary care provider. In the system used by Albisser et al. (2001), the case worker was the primary day-to-day reviewer of data in the system, and providers reviewed printed reports biweekly or monthly. The system examined by Andy et al. (2012) gave the patient the ability to initiate a message with a case manager. In the study by Basch et al. (2007), the primary intermediary was the nurse at a clinic visit; however, only 1 in 7 of the nurses reported that they discussed PGHD findings with patients “frequently”, with time as the biggest barrier to discussing the data with patients. In the study by Johansen et al. (2004), research staff acted as the intermediary: patients were asked to send emails to the burn team, but those emails were delivered to the research staff and then collated and forwarded by research staff to the burn team. The collated emails added a checklist for the burn team to use in evaluating the image quality. Responses from the burn team were sent to the research staff, who then forwarded messages back to the family. The importance of nurses as an intermediary continued into more recent studies. In the study by Lv et al. (2017), nurse case managers and registered dieticians actively accessed the dashboard, contacting patients as needed using system-supported bi-directional secure messaging. In the study by Martinez et al. (2017), diabetes care nurses phoned patients between office visits and when an alert was generated.

DISCUSSION

In this review we examined the scientific literature to understand the extent to which the vision of using PGHD to inform clinical decision making has been realized. The literature consisted predominantly of developmental and feasibility studies; studies that examine impact or outcomes are just emerging. The PGHD systems were highly diverse in terms of what data were collected and how data were collected, stored, and shared. Despite the rapid growth in personal sensors (such as activity trackers) and generally positive attitudes about the “quantified self” in the popular literature, we found only limited usage of these devices in the studies. This slow start and gradual growth aligns with the PGHD adoption curve projected by the Office of the National Coordinator for Health IT (Cortez, Hsii, Mitchell, Riehl, & Smith, 2018), which suggested that we are currently in an early adopter stage for PGHD in clinical care and research.

The scarcity of empirical research that included both PGHD and clinical processes was similar to that reported in a recent synthesis of PGHD information quality (P. West, M. Van Kleek, R. Giordano, M. Weal, & N. Shadbolt, 2017). Because of our narrow focus and the scarcity of literature that met our review criteria, we also examined the excluded studies at a high level to evaluate why they were returned by the keyword search but excluded. We saw that authors sometimes used PGHD keywords to represent data collection methods, such as interviews or questionnaires aimed at patients or caregivers (A. E. Chung & Basch, 2015b; Peeples, Iyer, & Cohen, 2013). Other excluded papers included study protocols, a number of scale or instrument development or validation studies, and system architecture descriptions. Some used only fabricated or synthetic data and lab testing. We also excluded a number of drug studies or intervention evaluations in which the “patient-reported information” was limited to intervention effects or “reportable” drug adverse effects.

In terms of clinical decision support features for PGHD, we identified in most cases a very basic level of decision support. This rudimentary form of clinical decision support may reflect the emergent state of PGHD systems (Shameer et al., 2017). Developers may also be intentionally avoiding actionable recommendations because such functionality might place the devices into the realm of a “medical device” per FDA definitions and therefore subject them to additional oversight, which can be prohibitive for devices still in the developmental process (Tung et al., 2018). Personal devices that are low-cost enough for widespread use (consumer-grade devices) are in some cases known to have issues with accuracy and precision (P. West, M. Van Kleek, R. Giordano, M. Weal, & N. Shadbolt, 2017). Finally, data from these devices can be difficult to use in rigorous research studies, with no standard formats yet defined and data from many devices often stored in a manner that is proprietary to the system developer (Quinn et al., 2008a).

Our findings highlight that efforts to integrate PGHD to support clinical decision making have grown in recent years; however, further work is needed to allow for broader application and use. Our recommendations fall under the following categories: research; policy; system design, EHR integration, and regulating hardware and software; engaging the clinical workforce; and consumer education.

Research

Findings from our systematic review highlight the need to further explore several areas to ensure clinical decisions can be made appropriately when PGHD are used. Research using rigorous methods and larger sample sizes is needed to evaluate the impact of PGHD on, for example, health outcomes or cost of care. Further research should address the quality, accuracy, and reliability of the data produced in various settings and case scenarios. Data accuracy and reliability will be increasingly important as more individuals decide to share their data and providers use them to guide care (Sitapati et al., 2017; Tung et al., 2018). Researchers have described an anticipated enhancement of patient engagement using these technologies; however, this assumption should be directly assessed (Y. R. Park et al., 2018). Unintended consequences have also been suggested, such as the potential for increased patient anxiety due to a heightened awareness of health decline (Harrison, Koppel, & Bar-Lev, 2007; D. A. Steward, R. A. Hofler, C. Thaldorf, & D. E. Milov, 2010). These unanticipated consequences need to be closely monitored and further described to help mitigate poor outcomes or to identify who may benefit most from using PGHD. Other areas for further exploration include how PGHD influences shared decision making, care coordination, new models of patient-centered care delivery, healthcare utilization, and workflow and provider efficiencies. Usability studies will help to integrate the patient voice, elucidate user issues and satisfaction with the mobile and sensing tools, and determine how to meaningfully provide feedback to patients and families (S. S. Woods, N. C. Evans, & K. L. Frisbee, 2016).

The analytic processes for assessing PGHD are another area primed for further development. As we move from historically aggregated, population-based data to individual, longitudinal data, more advanced methodologies need to be applied to identify an individual’s patterns, changes in patterns, and outliers. Advanced methodologies for interpreting PGHD include, for example, predictive analytics (the branch of analytics that uses various techniques to predict future events based on existing large data sets), machine learning (the use of algorithms by computer systems to complete tasks relying on inferences over time), deep learning (which focuses on learning data representations rather than specific tasks), artificial intelligence, and other complex analyses (Bhavnani et al., 2017; Peake, Kerr, & Sullivan, 2018b; Shameer et al., 2017). We also recommend proactively making patients and families part of the analytic team to make better sense of the data.
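
As one deliberately simple illustration of individual-level pattern analysis, the sketch below flags days on which a self-tracked value departs sharply from that person's own recent baseline. The window size, z-score cutoff, and example data are hypothetical choices for the sketch; real PGHD analytics would more likely rely on the advanced methods cited above.

```python
# Illustrative only: flag outliers in an individual's longitudinal PGHD stream by
# comparing each new value with that person's own recent baseline. Window size and
# z-score cutoff are arbitrary choices for the sketch, not validated parameters.
from statistics import mean, stdev


def flag_outliers(values, window=14, z_cutoff=2.5):
    """Return indices of values that deviate strongly from the preceding window."""
    outliers = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(values[i] - mu) / sigma > z_cutoff:
            outliers.append(i)
    return outliers


# Example: daily step counts with an abrupt drop that might signal a change in health status.
steps = [7200, 6900, 7500, 7100, 7400, 6800, 7300,
         7000, 7600, 7200, 6900, 7100, 7400, 7300,
         2100]  # day 15: abrupt decline
print(flag_outliers(steps))  # -> [14]
```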

In the included studies we found variability in the data that were presented. We recommend that authors use, and journals require, a standardized reporting framework to assess the quality of the produced data. For example, the Mobile Health Evidence Reporting and Assessment (mERA) reporting framework has been adapted to support health system evaluation of technology promoting the capture and use of PGHD to deliver patient-centered care (Agarwal et al., 2016).

Policy

Policy will help to guide and determine the future of digitally enabled healthcare. Based on our findings, there are several areas where policy development is important to further examine and provide guidance for the use of PGHD in health care delivery. Policy areas include, but are not limited to, guiding interoperability of devices and systems; establishing standards around tracking modalities; addressing issues of liability and privacy; and informing reimbursement structures. Tracking modality issues may include, for example, determining the frequency or intervals of tracking and analysis, methods of measuring, and how providers should manage the data. When technological advances occur too quickly for existing healthcare practices to keep up, a mismatch can occur between development and the preparedness of the system to effectively integrate and utilize the data (Bhavnani et al., 2017). Liability issues include determining who is responsible for analyzing the data (the provider or the vendor of the digital tool) and to whom data analysis can be delegated. Potential liability may be reduced or mitigated by establishing policies and procedures for handling PGHD and maintaining transparency about the use of the patient’s information (HIMSS, 2014). It will be important to determine the delegation of responsibilities for review of certain types of PGHD, for example to designees such as a nurse, care manager, or other staff, and to establish guidelines for responding to alerts or concerning data. Despite the value of PGHD in extending or expanding care for individuals, there is a tension in that this approach to care management and delivery is not yet reimbursed by current payment structures, limiting the integration of PGHD in practice. There is a need for these innovations to align with institutional objectives and for business cases that incorporate payment models and value-based reimbursements (Bhavnani et al., 2017). Establishing a reimbursement structure could promote broader use or more rapid uptake. With clinical measures increasingly tied to performance and payment metrics, ensuring that data accurately reflect the health status of the patient population is critical (P. West et al., 2017).

System Design, EHR integration, Regulating hardware and software

As PGHD tools become more widely available and formally integrated into standard processes of care, applying principles of user-centered design can facilitate the implementation of systems that more effectively address stakeholder and workflow needs (Poole, 2013). The integration of PGHD into electronic health record systems has not been fully examined, and efforts to date highlight the need for wide adoption of interoperability standards in the industry (Mandel, Kreda, Mandl, Kohane, & Ramoni, 2016). When considering regulating PGHD-related hardware and software, several challenges have been identified. Many mobile applications or sensors on the market are considered “lifestyle devices” and do not undergo FDA approval. The FDA has adopted new strategies to address the growing concern for regulation (U.S. Food and Drug Administration, 2013, 2015). Although the FDA reviews medical devices, it does not require that a device be rigorously tested to show whether it has an impact on health outcomes (IMS Institute for Healthcare Informatics, 2015).

Engaging the Clinical Workforce

In addition to integrating data into EHRs, there is a need for clinical workforce training on the interpretation of PGHD. Establishing best practices for integration into clinical workflow is essential. For example, real-time alert systems that align with the health system’s workflow can help providers and staff quickly sift through a large quantity of data to identify when follow-up action is needed (National eHealth Collaborative, 2013). W. Adams et al. (2003) established protocols and built algorithms to determine responses to alerts: Level 1 alerts required an immediate response, whereby a nurse was alerted and the patient/parent was notified to seek medical care, while Level 2 alerts were reviewed by a study nurse. All alerts and their corresponding responses were entered into the electronic health record (a minimal sketch of this kind of tiered routing appears after this paragraph). Providers will also need guidance for identifying tools to recommend to their patients when patients choose to take advantage of self-tracking options. For example, a framework has been developed to assist healthcare professionals in recommending quality applications to match patients’ needs for diabetes self-management (Hale, Capra, & Bauer, 2015).
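
The sketch below illustrates tiered routing of PGHD alerts through a nurse or care-manager intermediary, loosely modeled on the Level 1/Level 2 protocol described above. The triage rule, function names, and messages are simplified and hypothetical; they are not the actual algorithm used by W. Adams et al. (2003).

```python
# Illustrative only: tiered routing of PGHD alerts through a nurse/care-manager
# intermediary, loosely modeled on the Level 1 / Level 2 protocol described by
# Adams et al. (2003). The routing rule shown here is hypothetical and simplified.
from enum import Enum


class AlertLevel(Enum):
    LEVEL_1 = 1  # high priority: immediate nurse notification, patient advised to seek care
    LEVEL_2 = 2  # lower priority: added to a report reviewed at the care manager's convenience


def route_alert(level: AlertLevel, alert_text: str) -> str:
    """Return a description of where a triaged alert goes; all alerts are also charted."""
    chart_note = f"Documented in EHR: {alert_text}"
    if level is AlertLevel.LEVEL_1:
        return f"Notify nurse care manager now; advise patient/parent to seek care. {chart_note}"
    return f"Append to daily review report for the care manager. {chart_note}"


# Example routing of two hypothetical alerts.
print(route_alert(AlertLevel.LEVEL_1, "Symptom-treatment mismatch with severe symptoms"))
print(route_alert(AlertLevel.LEVEL_2, "Missed controller medication dose reported"))
```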

Implications for Nursing Science

As nurse scientists frequently examine the biological underpinnings of symptoms that are inherently self-reported or captured by patients outside clinical settings, PGHD systems can become powerful tools for capturing or predicting vulnerability to changes in health. As Hickey et al. (2019) point out, nurse scientists can integrate precision health to better understand disease burden and facilitate symptom management and improvement of quality of life. Given their comprehensive focus on health and well-being in different settings, nurses are uniquely poised to assist patients in capturing information about their physiological, mental, and cognitive well-being as well as exposure to environmental parameters (aspects of what is referred to as “phenotypic characterization” in the Nursing Science Precision Health Model (Hickey et al., 2019)). Nursing scientists can use their holistic lens, as reflected in the Nursing Science Precision Health Model, to lead the process of defining the role of PGHD in the era of precision health.

Consumer Education

Health consumers will need education in how to select accurate and reliable tools, interpret their data, and discuss and understand expectations of how their data will inform clinical decision making or lifestyle choices. In this context, we must remain aware of the potential for widening health disparities and be proactive in identifying strategies to mitigate this potential unwanted outcome, such as actively seeking to reduce digital divides and developing novel ways to assure digital data privacy for small populations (Zhang et al., 2017). Challenges include not only the level of access to digital tools and the necessary infrastructure but also challenges of health literacy and “data literacy”, the extent to which users understand the meaning of their data, how the data are stored and transmitted, and who has or may have access to them (Lor, Koleck, Bakken, Yoon, & Dunn Navarra, 2019; van der Vaart & Drossaert, 2017).

Limitations of our Review

In this review we pursued a narrow focus, requiring that the study include interaction with clinicians for decision making but that data collection and sharing be patient-initiated. Requiring a clinician-patient decision-making interaction excluded social media platforms and similar emerging forms of patient-initiated health data. We tightly adhered to the ONC definition of PGHD, but the use of this term has clearly evolved over time and other, broader definitions exist. Because of our tight adherence to this specific definition, we excluded many studies in which patient-generated data were collected for the purposes of a research study without actual use in clinical practice; however, the findings of those studies may inform the next step of actual translation of this work into clinical settings. In particular, we excluded many studies that used patient-reported outcomes but did not meet the nuances of the selected PGHD definition, most often because the data were collected only at investigator-specified intervals or only at the prompting of a clinician during a clinic visit. Our choice of search terms may have also limited our findings. We chose many synonyms for “patient generated health data” but still found surprisingly few articles that included sensors or monitoring technologies, for example. It is possible that had we searched for specific types of sensors (such as actigraph or activity tracking) without looking for PGHD phrasing, we might have found more relevant literature.

CONCLUSION

Our systematic literature review found few studies that implement the full scope and intent of the ONC definition of PGHD. Integration of PGHD into electronic records was extremely limited, and decision support capabilities were for the most part rudimentary. PGHD will be part of the health care system narrative, and we must continue efforts to understand its impact on health outcomes, costs, efficiency, and patient satisfaction. This will require an iterative design and implementation process with patients, health care providers, and researchers. To integrate PGHD into daily practice effectively, policies and guidelines will be needed to accommodate the vast array of data types and use-case scenarios. We conclude that the use of PGHD in clinical practice is at a promising but early stage; its growth appears inevitable, but further work is needed for widespread adoption and seamless integration into healthcare systems. Nursing scientists need to be at the forefront of this research and lead the process of defining the role of PGHD in the era of precision health.

Footnotes

Declarations of interest: none

References