
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Does a Smartphone App Help Patients with Cancer Take Oral Chemotherapy as Planned?


Structured Abstract

Background:

Patients prescribed oral chemotherapy receive less support for adherence and monitoring of symptoms from oncology clinicians than do patients prescribed traditional infusion chemotherapy, resulting in poor adherence, lower-quality care, and worse disease outcomes. No theory-based, efficacious interventions exist to promote adherence and symptom monitoring for patients prescribed oral chemotherapy.

Objectives:

The primary aims of this study were to (1) develop a patient-centered, smartphone mobile application (app) to facilitate adherence to oral chemotherapy and symptom management for patients with cancer; and (2) test the effect of the app on improving adherence to oral chemotherapy, symptoms, quality of life (QOL), and quality of care in a randomized controlled trial (RCT).

Methods:

A multidisciplinary research team worked with key stakeholders to develop the mobile app, soliciting feedback on app content, usability, and patient-centeredness from 4 groups: patients/families (n = 8); oncology clinicians (n = 8); cancer practice administrators (n = 8); and representatives from the health system, community, and society (n = 8), as well as patients (n = 10) and oncology clinicians (n = 8) from the Massachusetts General Hospital.

Then, from February 18, 2015, to October 31, 2016, 181 patients with diverse malignancies prescribed oral chemotherapy enrolled in an RCT to receive the mobile app intervention or standard oncology care. The primary outcomes were adherence and self-reported symptoms and QOL. Adherence was measured by the Medication Event Monitoring System Cap (MEMSCap) and by self-report. The secondary outcomes were patient perceptions of quality of care and utilization (ie, hospitalizations and emergency department visits). Patients completed the self-report questionnaires at baseline before randomization and at 12 weeks.

Results:

Feedback from stakeholders and patient participants greatly informed intervention development and showed that the app was perceived as useful and acceptable. The final app incorporated features including a treatment plan, reminder system, symptom reporting modules, and patient resources. Patient-reported data were transmitted to the oncology team via HIPAA-compliant email on a weekly basis. The mobile app intervention group and control group did not differ over time with respect to the primary outcomes of adherence, self-reported symptoms, and overall QOL, or in the secondary outcomes of quality of care and utilization. In examining specific domains of QOL, patients in the mobile app group had a smaller reduction in social well-being over time (Mdiff = 1.67; SE, 0.74; F1,161 = 5.13; P = .025; 95% CI, −3.12 to −0.21). Subgroup analyses showed that patients with poor self-reported adherence and high anxiety at baseline who were randomized to the app had improved MEMSCap adherence rates compared with the standard care group. Finally, older patients randomized to the app reported improved QOL compared with those receiving standard care.

Conclusions:

Feedback from stakeholders and patient partners was instrumental in optimizing relevancy, feasibility, and acceptability of the study methods and app intervention. Across all patients, the mobile app was not efficacious in improving adherence or symptoms. However, patients at greater risk for nonadherence may benefit.

Limitations:

Use of daily MEMSCap as the primary study outcome may have raised participant awareness of adherence across both study groups, perhaps diminishing intervention effects. Additionally, generalizability of study findings is limited due to the restricted diversity of this well-educated sample at an academic institution.

Background

Cancer care delivery has shifted in the past decade, with a substantial increase in the prescription of oral cancer therapies as an alternative to traditional intravenous chemotherapy. In 2010, approximately 16% of patients receiving cancer treatment were prescribed oral agents, and this figure is expected to surpass 25% in coming years, given advances in the study of tumor genetics and the number of oral chemotherapy agents in current development.1 Patients overwhelmingly prefer oral administration to intravenous due to the enhanced convenience of home administration, the mitigation of problems related to intravenous access such as pain or discomfort, and an increased sense of control of the chemotherapy environment.2 In fact, patients prescribed oral chemotherapies report less interference in their daily activities, corresponding to better quality of life (QOL).2,3

Patients and oncology clinicians have encountered unique challenges as cancer care becomes increasingly delivered in the outpatient and home setting.4,5 While patients prescribed traditional intravenous chemotherapy receive direct supervision in infusion centers, where they are monitored and treated for symptoms and side effects, individuals prescribed oral chemotherapy take their medications at home with limited oversight, monitoring, and support from their oncology clinicians.6,7 The toxicities of oral chemotherapy are equivalent to those of intravenous chemotherapy, including nausea, vomiting, fatigue, and diarrhea,8 yet the lack of regular contact with the oncology team is a barrier to proper use of these regimens.5,9 For example, symptoms such as difficulty swallowing, nausea, and vomiting may interfere with taking oral agents if not treated appropriately. Patients and their families often must assess and manage symptoms on their own and, in turn, may not adhere to the treatment regimen as intended.

Patient adherence is often defined as taking a medication as prescribed regarding daily amount, dosage, and frequency; it is vital to the efficacy of an oral chemotherapy regimen.10 Importantly, poor adherence to oral anticancer treatment is associated with poor survival rates and with disease progression.10-14 Despite the importance of adherence for optimal cancer outcomes, several systematic literature reviews have shown that adherence rates to oral chemotherapy in patients with cancer vary widely, with adherence ranging from as high as 100% to as low as 16%.15-18 These estimates vary based on patient sample, medication type, follow-up period, assessment measure, and calculation of adherence. Various patient, provider, treatment-related, and health care system factors are associated with treatment adherence,19 including patient health beliefs regarding treatment efficacy, cognitive impairments, inadequate social support, psychological distress, poor communication with providers, adverse effects of treatment, and difficulty accessing care or costs of medications.15,16,19-23 More specifically, studies have shown that patients who are male, older, living alone,24 nonwhite,25 of low socioeconomic status,26 treated in community vs academically based centers,26 or depressed are more likely to be nonadherent.27 In addition, presence of severe side effects,28 greater complexity of cancer treatment (eg, variable dosing schedules), and greater length of time on treatment are associated with poor adherence to oral chemotherapy.24,29,30 Other potential barriers to oral chemotherapy adherence may include patient forgetfulness, misunderstanding of dosing instructions, and attitudes toward the effectiveness of the chemotherapy.31,32 In addition, patients with elevated distress are more likely to struggle with adherence.33,34 In our own longitudinal investigation of patients receiving chemotherapy for advanced non–small cell lung cancer, approximately 30% had heightened baseline 
anxiety symptoms, which significantly predicted the occurrence of chemotherapy dose delays and reductions.35 Given that 10% to 25% of patients receiving cancer treatment become clinically depressed,36,37 many patients on oral chemotherapy will experience psychological distress that could interfere with adherence.

There is a critical need to overcome the challenges associated with the fragmentation of care related to oral chemotherapy administration, with specific attention to medication adherence and symptom management.38 Recently updated standards from the American Society of Clinical Oncology (ASCO) and Oncology Nursing Society now include comprehensive guidelines for prescribing, documenting, and monitoring patient treatment with chemotherapy, including oral agents.1 These standards include recommendations for discussing and documenting a chemotherapy treatment plan based on the type of medication, dosage, anticipated duration of treatment, and goals of therapy. Furthermore, the ASCO Quality Oncology Practice Initiative has been examining quality metrics for oral chemotherapy administration pertaining to documentation of treatment plan, patient consent and education, and ongoing monitoring of oral agents.39 Despite these guidelines, very few interventions to improve adherence and monitoring for patients prescribed oral chemotherapies have been tested. In a recently published systematic review, we identified only 12 adherence interventions for patients with cancer, with some resulting in mixed findings40-42 and most lacking methodological rigor, with nonrandomized designs and small sample sizes.18 Thus, theory-based interventions that are accessible to patients in order to promote adherence and symptom management are critically needed.

Mobile health (mHealth) technology provides an opportunity for support and monitoring in a minimally burdensome, maximally accessible approach.43 Evidence suggests that interventions delivered via mobile technologies can improve health behaviors in patients with cancer.44 In addition, mobile smartphones allow for ecological momentary assessments by facilitating repeated evaluation of participants' symptoms and adherence behaviors in real time, which may enhance the provision of care for patients prescribed oral chemotherapies. Smartphone mobile applications (apps) may be an ideal platform for administering a supportive intervention that promotes adherence and symptom management for patients prescribed oral chemotherapy. Thus, with support from the Patient-Centered Outcomes Research Institute (PCORI), we conducted a 2-phase study to develop a patient-centered mobile app to assess symptoms, side effects, and adherence to oral chemotherapy that is feasible and efficacious for use with oncology patients.

In phase 1, we developed an acceptable and feasible patient-centered mobile app informed by qualitative feedback from key stakeholders, patients, and oncology clinicians. In phase 2, we conducted a randomized controlled trial (RCT) to demonstrate feasibility and evaluate the efficacy of the mobile app in improving adherence as well as patient-reported clinical outcomes. Aim 1 of phase 2 was to test feasibility based on rates of completion of symptom reports in the mobile app. We hypothesized that at least 75% of participants assigned to the mobile app intervention would complete symptom surveys for at least 9 of the 12 study weeks. With respect to evaluating the efficacy of the mobile app in improving primary outcomes (aim 2), we hypothesized that patients prescribed oral chemotherapy for cancer who were randomly assigned to the mobile app intervention would report better medication adherence, fewer symptoms and side effects, and improved QOL compared with the control group (ie, patients receiving standard care). The third aim of phase 2 was to evaluate the efficacy of the mobile app in improving quality of oncology care. We hypothesized that patients who were randomly assigned to use the mobile app would report greater satisfaction with medical care and have fewer emergency department visits and hospitalizations compared with the control group. Finally, we explored treatment heterogeneity by examining whether particular patient demographic and clinical characteristics (eg, cancer type, age, gender, baseline self-reported adherence) moderated the effect of the study intervention, thereby identifying any key subgroups of participants who may have responded differently to the mobile app.

Participation of Patients and Other Stakeholders in the Design and Conduct of Research and Dissemination of Findings

In accordance with PCORI Methodology Standard PC-1, we engaged individuals representing the population of interest (ie, patients with cancer, their family members, clinicians, administrators, and policymakers) in formulating research questions; defining characteristics of the intervention, study design, and outcomes; monitoring study progress; and developing plans for dissemination and implementation. To include a representative, diverse, and comprehensive group of stakeholders,45 we identified 4 core stakeholder groups (Figure 1) by drawing from a population-based model for patient-centered care from the Medical College of Wisconsin.46 We selected stakeholders from across the United States (13 states), thus reaching outside our local academic medical community. We identified patients and family members from the Massachusetts General Hospital (MGH) Cancer Center Patient and Family Advisory Council. To be eligible, the stakeholder must have been able to represent the interests and perspectives of at least 1 of the 4 groups. Members of the investigative team (Drs Greer, Temel, Pirl, Safren, Lennes, Jethwani, and Buzaglo) organized a list of stakeholders from these 4 cancer community groups. We contacted stakeholders to explain their involvement and study procedures. Thirty-two stakeholders assisted with the study, representing the following 4 key stakeholder groups: (1) oncology patients and family members (n = 8); (2) oncology clinicians (n = 8); (3) cancer practice setting administrators (n = 8); and (4) representatives of the health system, community, and society (n = 8). Stakeholders were involved in the study as research collaborators/consultants and were remunerated up to $1000 for their time and effort. These stakeholders were involved in both phase 1 and phase 2 of the study.
In addition to the stakeholder groups, we enrolled 10 MGH patients prescribed oral chemotherapy and 8 MGH oncology clinicians as participants during phase 1 of the study to review the mobile app wireframes (ie, screen blueprints) and provide feedback. These patients and clinicians were considered study participants, and they each signed IRB-approved, HIPAA-compliant consent forms before participation. Relevant characteristics of these participants are presented in Table 1. The specific involvement of the stakeholders, as well as the patient and clinician participants, in each study phase is detailed below.

Figure 1. Stakeholder Groups and Engagement.


Table 1. Phase 1: Characteristics of MGH Patients and Oncology Clinicians.


Phase 1

First, we conducted a pretrial planning interview with the first 4 stakeholder groups to solicit feedback about the proposed study topic, design, and intervention to ensure relevancy, acceptability, and the potential for dissemination. The patient/family interview took place in person at the MGH Cancer Center, and the 3 other group interviews occurred as teleconference calls. Specifically, we addressed the following topics: (1) perceived importance of monitoring oral chemotherapy remotely; (2) barriers to communication between patients and the oncology team regarding management of side effects and medication adherence; (3) the potential role of the mobile app to address barriers to quality of cancer care; (4) the potential feasibility, acceptability, and usability of an mHealth intervention; and (5) system barriers and facilitators to implementation. We identified consistent themes about the planned intervention and study design from these interviews. Feedback from this stage was integral in informing the development of the mHealth intervention. For example, stakeholders recommended a symptom monitoring feature with interpretable graphics, emphasized the importance of distinguishing urgent vs nonurgent symptoms within the symptom reporting module, provided guidance on optimizing patient–physician communication while minimizing burden, and suggested methods for promoting participant engagement with incentivizing app features. We incorporated each of these recommendations into the final mobile app version. The interview guides for these focus groups are presented in Appendix A and a summary of feedback is presented in Table 2.

Table 2. Phase 1 and Phase 2 Feedback From 4 Stakeholder Focus Groups.


Next, we met individually with the 10 MGH patient and 8 oncology clinician participants to review the app content using wireframes created by the research and design teams (see Figure 2). Using semistructured interview guides (see Appendix B), we solicited feedback in 3 domains: (1) components of the mobile app, (2) feasibility and usability of the app, and (3) weekly in-app symptom assessments. We integrated feedback from this stage regarding the aesthetics, frequency of push notifications, and incorporation of the patient's treatment plan into the mobile app design (see Appendix C).

Figure 2. Wireframes (Screen Blueprints) for CORA Mobile App.


Finally, after developing the beta version of the app, we invited members of the initial patient and family stakeholder group (n = 8) to participate in user acceptance testing. Research and development staff observed stakeholders during their initial interactions with the mobile app and asked them to complete specific tasks (eg, “How do you think you would go about adding your oral chemotherapy medication into this app?”). Stakeholders were asked to share general and specific feedback about task intuitiveness. We further refined the app based on their responses. In summary, feedback from key stakeholder groups as well as patient and clinician participants in phase 1 had a significant impact regarding maximizing the patient experience, optimizing patient–clinician communication within the app, and refining study procedures for phase 2.

Phase 2

The 4 key stakeholder groups from phase 1 were also involved as research collaborators/consultants for phase 2 of the study, during which we tested the efficacy of the mobile app intervention in an RCT. We maintained consistent communication with stakeholders throughout the RCT in the form of surveys, quarterly newsletters, a midstudy luncheon, and a final presentation and focus group. At the initiation of the RCT, we emailed stakeholders a survey to collect feedback regarding participant recruitment and retention, as well as clinician engagement. We then distributed a newsletter summarizing the recommendations we received and describing how we had incorporated stakeholder feedback into our study procedures (see Appendix D). We also sent biannual newsletters that described updates about overall study progress, including participant accrual, upcoming stakeholder engagement opportunities, recent press highlights, study-related presentations or publications, and any other relevant information. We held a midstudy luncheon at MGH (and via teleconference call) for all stakeholders, during which we discussed study progress and facilitated initial conversations about dissemination. Last, we conducted final focus groups with each of the initial 4 key stakeholder groups at the end of the study to present preliminary results and discuss plans for dissemination and implementation. The patient/family stakeholder group participated in person at a luncheon while the other 3 groups from across the country participated via teleconference. Feedback from this final engagement was instrumental in informing the next steps for this project. Table 2 displays a summary of feedback from these final stakeholder focus groups. 
For example, stakeholders recommended examining the role for social support in monitoring adherence and symptoms, suggested the option to integrate information directly into the electronic health record (EHR), provided ideas for implementation with involvement of pharmacy groups, and encouraged dissemination via society newsletters, organizational webinars, posts, and listservs.

Methods

Phase 1

Study Design

We used an mHealth intervention development framework47 to guide the creation of our smartphone-based, patient-centered intervention with maximum usability, acceptability, and feasibility. In phase 1, we (ie, the investigative team of oncologists, psychiatrists, and psychologists) developed the mobile app intervention through an iterative process with the Partners Center for Connected Health, patients, clinicians, and the 4 key stakeholder groups (n = 32) described previously. Stakeholder groups also provided feedback regarding the design and implementation of the RCT in phase 2 of the study. See the Stakeholder Engagement section for an in-depth description of the iterative, multistep process we undertook to ensure usability and feasibility of the mobile app intervention. Briefly, we first led focus groups with key stakeholders to solicit feedback regarding the study design, clinically relevant content, and functionality of the mobile app. We then worked with our technology partners to create screen blueprints (known in the software development industry as wireframes) of the proposed mobile app. Next, we presented the wireframes to MGH patients (n = 10) and clinicians (n = 8) to solicit feedback on the content, design, and patient centeredness of the intervention. After incorporating this feedback and refining the mobile app content, we invited patients and families from the original stakeholder group to participate in user acceptance testing with the beta version of the app to assess task intuitiveness and to share general feedback. We further modified the app based on feedback from each stage of this process in order to ensure optimal usability and feasibility for testing in the RCT. The Dana-Farber/Harvard Cancer Center IRB approved the study.

Forming the Study Cohort

Stakeholder Groups

Drawing from a model of population-based patient-centered care, the investigative team identified stakeholders from various cancer community groups across a diverse range of expertise. The 32 stakeholders formed 4 key stakeholder groups: patients and families (n = 8); oncology clinicians (n = 8); cancer practice administrators (n = 8); and representatives of the health system, community, and society (n = 8). Stakeholders included pharmacists, health care leaders, lawyers, and patient advocates. These individuals were consultants on the study and not participants; therefore, we collected no demographic or other personal information from them.

MGH Patients

Patients were eligible to participate if they had a cancer diagnosis, had a current or past prescription for oral chemotherapy, and were the primary owner and user of a smart mobile phone with an iOS or Android operating system. Eligibility criteria also included age ≥ 18, ability to respond to survey questions in English, and a performance status ≤ 2 on the Eastern Cooperative Oncology Group (ECOG) measure.48 We implemented the age criterion to maximize the likelihood that patients were administering their own medications. We chose an ECOG performance status ≤ 2 to ensure that patients had sufficient functioning to participate in the study. We required patients to be receiving their care at MGH or a community affiliate (ie, Mass General/North Shore Cancer Center, Mass General West, or Mass General Cancer Center at Emerson-Bethke). We excluded patients with comorbid acute psychiatric symptoms or neurological dysfunction that would interfere with consent and participation. Additionally, we excluded patients who were enrolled in oral chemotherapy clinical trials because the strict adherence monitoring of drug trials could influence the proposed study outcomes. After screening the EHR to determine preliminary eligibility and obtaining permission from the patient's treatment team, a trained research assistant (RA) either contacted the patients by phone or approached them in private clinic settings within MGH Cancer Center to explain the study and invite the patient to complete the eligibility screen.

MGH Oncology Clinicians

Oncology clinicians included board-certified oncologists or nurse practitioners who maintained at least 25% clinical practice at one of the study sites. Study staff directly approached and recruited 8 oncology clinicians to participate in qualitative interviews either in person or over the telephone.

Study Setting

The patient and family member stakeholder focus groups, as well as the individual MGH patient and clinician interviews, took place in person on site at the MGH Cancer Center in order to optimize involvement and feedback. Interviews with the remaining 3 stakeholder groups took place via teleconference call to accommodate individuals who resided throughout the United States.

Intervention

We did not administer an intervention during phase 1.

Follow-up

Stakeholders continued to provide feedback into phase 2 regarding the mobile app content. During this period, we solicited feedback primarily in the form of email surveys (eg, see Appendix D), biannual newsletters, a midstudy luncheon, and final focus groups (see Table 2).

Study Outcomes

The primary outcome for phase 1 was the development of a patient-centered mobile app to assess symptoms, side effects, and adherence to oral chemotherapy that is feasible for use with oncology patients. The criterion for success was that the mobile app met standards for usability, acceptability of delivery, and patient-centeredness per expert evaluation and qualitative feedback from interviews with oncology patients, clinicians, and key stakeholders.

Data Collection and Sources

A trained clinical psychologist and psychology postdoctoral fellows administered all individual and group interviews. The semistructured interviews with the stakeholder groups, MGH patients, and MGH oncology clinicians were audio-recorded and stored on the secure, encrypted MGH server. See Appendices A and B for the interview guides used during phase 1 of the study.

Analytic and Statistical Approaches

Study staff reviewed all interviews so that the feedback obtained could inform modifications and refinements to the mobile app. Specifically, trained research assistants transcribed the feedback from the interviews to generate a complete list of comments, impressions, and recommendations from the MGH patient and clinician participants in phase 1 as well as from the 4 key stakeholder groups. After the completion of each focus group, we shared a summary of the focus group results with the stakeholders via email and again elicited any final feedback. The investigative team, including the study staff who conducted the interviews, then reviewed these reports for comprehensiveness and accuracy. Finally, in close collaboration with the technology experts at the Partners Center for Connected Health, the investigative team decided by consensus how to modify the app and optimize user engagement in consideration of all feedback generated from the interviews. The investigative team also considered the impact of stakeholders' recommendations on the scientific integrity and feasibility of the study.

Conduct of the Study

During phase 1 of the study protocol, we submitted 1 amendment to the IRB. Specifically, on July 10, 2014, we submitted an amendment proposing to change the format of stakeholder focus groups to individual or group interviews in order to best accommodate the scheduling needs of our collaborators. This amendment was approved by the IRB on July 15, 2014.

Phase 2

Study Design

We enrolled patients with diverse malignancies who were prescribed oral chemotherapy to participate in a nonblinded, randomized, parallel assignment efficacy trial of the mobile app intervention compared with standard oncology care (ClinicalTrials.gov identifier NCT02157519). Patients were receiving care at the MGH Cancer Center or a community affiliate. Independent of the research team, the study statistician developed a computer-generated randomization scheme stratified by cancer type (hematologic malignancy vs solid tumor) to ensure that relatively equal proportions of diagnoses were represented in each study group. The Dana-Farber/Harvard Cancer Center Office of Data Quality then randomly assigned participants 1:1 to either the mobile app intervention group or the standard oncology care control group. The first 5 patients who were randomized to the mobile app intervention participated in beta testing. After they completed the study, research staff conducted a semistructured 20-minute interview to gather their feedback on the app's feasibility, usability, and aesthetics, allowing users to suggest revisions. The research team also interviewed 5 oncology clinicians whose patients were randomized to the mobile app regarding their conversations about the app with patients and helpfulness of the symptom reporting feature.
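The stratified randomization described above can be illustrated with a short sketch. This is not the study statistician's actual scheme, which the report does not specify; the use of permuted blocks, the block size, the stratum sizes, and the seeds below are all hypothetical, shown only to make the idea of stratified 1:1 assignment concrete.

```python
import random

def make_allocation_list(n_per_stratum, block_size=4, seed=0):
    """Build a 1:1 allocation sequence for one stratum using randomly
    permuted blocks, one common way to implement stratified randomization.
    All parameters are illustrative, not taken from the study protocol."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_per_stratum:
        # Each block contains equal numbers of both arms, then is shuffled,
        # so group sizes stay balanced throughout accrual.
        block = ["app"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_per_stratum]

# A separate list per stratum keeps the arms balanced within each cancer type.
strata = {
    "hematologic": make_allocation_list(20, seed=1),
    "solid_tumor": make_allocation_list(20, seed=2),
}
```

Keeping the allocation lists with an office independent of the research team, as the study did via the Dana-Farber/Harvard Cancer Center Office of Data Quality, conceals upcoming assignments from recruiting staff.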

Forming the Study Cohort

Eligibility criteria for patients in the RCT during phase 2 of the study were nearly identical to those of MGH patients participating in phase 1 development (see Phase 1: Forming the Study Cohort: MGH Patients). However, for the RCT, patients were also required to have a current and active prescription for oral chemotherapy in order to enroll.

After screening the EHR to determine potential eligibility of patients, a study staff member obtained permission from the patient's oncology team to approach the patients and explain the study. On receiving permission from the oncology clinicians, an RA contacted the patients by telephone or approached them in a private clinic room to assess interest and complete a brief screen to confirm eligibility. Our recruitment protocol addressed Methodology Standard PC-2 by systematically identifying all patients who were prescribed oral chemotherapy via the EHR. Unlike other methods of recruitment that we considered, such as patient self-referral or clinician referral, systematic searching of the EHR eliminated any selection bias in the screening and enrollment of participants. Furthermore, once they were enrolled, we used the same standard operating procedures with all participants to ensure that there were no biases in retention. To address the representativeness of participants, we recruited patients at 3 community affiliate sites in addition to the main academic hospital site. This approach facilitated the enrollment of participants who chose not to receive care at a tertiary medical center for financial, geographical, or other reasons.

Study Setting

We recruited patients from MGH Cancer Center or one of the community affiliates listed previously. Study visits took place in conjunction with scheduled outpatient oncology appointments.

Interventions

Mobile App Intervention

Patients randomly assigned to receive the mobile app intervention met with the RA to download the mobile app (Chemotherapy Assistant [CORA]) to their personal smartphone. RAs oriented the patients on how to use the app, enter their treatment plan, and complete weekly symptom and adherence reports. They instructed participants to use the mobile app for approximately 12 weeks. The mobile app intervention consisted of several elements that we had developed and refined based on stakeholder and participant feedback in phase 1. The essential app components included the medication treatment plan and reminder features, a symptom and adherence reporting module that was transmitted weekly to the respective oncology clinician, and educational resources (see Appendix E). Push notifications reminded patients to take their medications and complete weekly symptom and adherence reports. Push notifications are pop-up messages that appear on the mobile device to remind the user to engage with the app. No extra staffing was required, as patients who reported serious symptoms (eg, fever) were instructed within the app to call their oncology clinician or go to the nearest emergency department. Patients were informed during the consent process and app orientation process that their reporting would not be monitored in real time, so that there was no expectation of an immediate response. The research team encouraged oncology clinicians to follow up on the weekly symptom reports based on their clinical judgment, though no data were collected from clinicians about how such reports may have affected their clinical care. Patients were asked to store their oral chemotherapy medication in a Medication Event Monitoring System cap and bottle.

Standard Oncology Care

Patients randomly assigned to standard care did not receive the mobile app but rather received care as usual from their oncology clinicians. These participants were also asked to store their oral chemotherapy medication in the Medication Event Monitoring System Cap and bottle.

Follow-up

Patients completed the baseline self-report surveys before randomization. Subsequently, study staff contacted patients by telephone or during a routine clinic visit to have them complete an identical survey 12 (±3) weeks after the baseline assessment. Patients had the option to complete surveys on paper or via REDCap,49 an electronic HIPAA-compliant survey tool. On completion of the postassessment survey, RAs instructed participants on how to delete the mobile app from their smartphone.

We followed up with participants at 2 weeks and 6 weeks postbaseline to ensure that they completed study procedures per the protocol. Additionally, if a patient randomized to the mobile app group did not complete a weekly symptom report during the first active week of the study, a study staff member called to make sure the mobile app was working properly. Attrition was minimal in our study.

Study Outcomes: Primary Outcome Measures

Adherence to Oral Chemotherapy Medication

We employed a multimethod assessment of adherence given that all sources of measurement (eg, self-report, pill counts, pharmacy refill data, and electronic monitoring) have different strengths and limitations with potential for bias. The assessment therefore included remote electronic monitoring devices and self-report instruments as follows:

  1. Medication Event Monitoring System (MEMS) Cap. The MEMSCap records the date and time that the pill bottle is opened and medication is taken. These data were stored on the MEMSCap and collected by the study team postassessment. MEMSCaps are widely used in adherence monitoring and have been used in patients with cancer.50
  2. Morisky Medication Adherence Scale (MMAS-4). The MMAS-4 is a brief, self-report, validated measure to assess medication-taking behavior over the past week. Patients are asked to respond to each of 4 items with a “yes” or “no.” The 4-item scale has good sensitivity in identifying nonadherent individuals.51
  3. Pill Diary. We provided patients with a weekly log to keep track of medication doses that they took without using the MEMSCap. Usage of the pill diary was optional, but all patients received one as a backup for documenting adherence.
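The report does not specify the exact algorithm for converting MEMSCap opening events into an adherence rate, but a common approach is to score the proportion of prescribed doses accompanied by a bottle opening. The sketch below illustrates that calculation; the function name and the once-daily dosing default are illustrative assumptions, not the study's scoring code.

```python
from datetime import date


def mems_adherence_rate(opening_dates, start, end, doses_per_day=1):
    """Percentage of prescribed doses accompanied by a bottle opening.

    opening_dates: dates recorded by the electronic cap
    start, end: monitored interval (inclusive)
    """
    days_monitored = (end - start).days + 1
    prescribed = days_monitored * doses_per_day
    # Count at most `doses_per_day` openings per day; extra openings on
    # the same day do not indicate extra doses taken.
    taken = sum(
        min(sum(1 for d in opening_dates if d == day), doses_per_day)
        for day in (date.fromordinal(start.toordinal() + i)
                    for i in range(days_monitored))
    )
    return 100 * taken / prescribed
```

For example, 2 dose-days covered out of 4 monitored days yields a 50% adherence rate, regardless of repeated same-day openings.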

Symptoms and Side Effects

To assess symptoms, patients completed the MD Anderson Symptom Inventory (MDASI), a 19-item instrument that assesses the most common physical and psychological symptoms related to cancer. The MDASI assesses the severity of symptoms at their worst in the past 24 hours on a 0-to-10 scale, with 0 being “not present” and 10 being “as bad as you can imagine.” Two subscales are computed to measure interference and severity of symptoms. The measure has been validated in patients with diverse malignancies, and its test–retest and internal consistency reliability have been confirmed.52 The MDASI demonstrated strong reliability in this sample (severity α = .93; interference α = .94).

Quality of Life

We administered the Functional Assessment of Cancer Therapy–General (FACT-G), a 27-item questionnaire that assesses physical, social/family, emotional, and functional well-being during the previous week, to assess QOL. The validated measure utilizes a 5-point scale from 0 (not at all) to 4 (very much). It has sound psychometric properties, is used widely in patients with cancer,53 and showed good reliability in this sample (α = .70).

Study Outcomes: Secondary Outcome Measures

Treatment Satisfaction

The Functional Assessment of Chronic Illness Therapy–Treatment Satisfaction–Patient Satisfaction (FACIT-TS-PS) is a 29-item questionnaire that assesses patient satisfaction with doctor and staff communication, competence, and confidence, as well as trust in providers and overall satisfaction. Higher scores indicate greater satisfaction. The FACIT-TS-PS has high validity and reliability,54 and the instrument demonstrated strong reliability in this sample (α = .91). To reduce questionnaire burden on patients, we administered 5 subscales of the FACIT-TS-PS, which assess satisfaction with (1) clinician explanations, (2) interpersonal treatment, (3) comprehensiveness of care, (4) nurse communication, and (5) confidence and trust in the doctor and treatment staff.

Urgent Visits

We administered the Resource Utilization Questionnaire, an adapted 3-item questionnaire, to inquire about the number of emergency department visits and hospital admissions in the past 3 months.

Potential Moderators: Measures for Exploratory Analyses

Sociodemographics

Participants reported their gender, race, ethnicity, religion, marital status, smoking history, income, and level of education on a baseline demographic questionnaire. Research staff collected data from the EHR on patients' age, cancer diagnosis, ECOG performance status, therapy dosing schedule (continuous dosing vs interval dosing), type of oral therapy (targeted therapy vs oral chemotherapy), number of concomitant medications, and duration of oral therapy treatment.

Mood

The Hospital Anxiety and Depression Scale (HADS) was designed for medical patients and demonstrates adequate psychometric properties for use among individuals with cancer.55 Composed of 14 items, the instrument contains 2 subscales that measure anxiety and depression symptoms in the past week, with scores ranging from 0 (no distress) to 21 (maximum distress). A threshold of >7 indicates clinically significant anxiety or depression, and a score of >11 indicates definitive anxiety or depression.55 A trained study psychologist followed up with all patients who scored >11 on the depression subscale. The HADS demonstrated strong reliability in this sample (α = .94).

Social Support

The Multidimensional Scale of Perceived Social Support (MSPSS) is a 12-item questionnaire that assesses perceived social support on a 1-to-7 scale, with 1 being “very strongly disagree” and 7 being “very strongly agree.” Three subscales, each comprising 4 items, are computed to assess perceived social support from family, friends, and significant others. The MSPSS has adequate test–retest and internal reliability, and high factorial validity.56 The MSPSS demonstrated strong reliability in this sample (α = .96).

Health Literacy

The Rapid Estimate of Adult Literacy in Medicine is a 2- to 3-minute assessment of medically relevant vocabulary (66 total words) that has been shown to correlate well with other measures of various literacy skills.57

App Usability

The study team adapted the App Usability Questionnaire from the System Usability Scale (SUS),58 a validated, easily administered scale. We adapted the 10-item SUS to a simplified 6-item Likert scale, with responses ranging from “strongly disagree” to “strongly agree.” Final scores can range from 0 to 30, with higher scores indicating greater perceived usability. Scores above 21 (ie, 70% of the total score) indicate good usability.59
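To relate the 0-to-30 raw scale to the percentage bands used when interpreting results, a short helper can convert a raw score to a percentage and band. This is an illustrative sketch only; the report does not specify the exact scoring procedure, and the band labels follow the thresholds described in the text.

```python
def usability_band(raw_score, max_score=30):
    """Convert a raw 0-30 usability score to a percentage and band.

    Bands follow the report's interpretation: >=70% acceptable,
    >=80% good, >=90% superior; below 70% is considered poor.
    """
    pct = 100 * raw_score / max_score
    if pct >= 90:
        band = "superior"
    elif pct >= 80:
        band = "good"
    elif pct >= 70:
        band = "acceptable"
    else:
        band = "poor"
    return pct, band
```

Under this mapping, a raw score of 21 corresponds to exactly 70%, the lower bound of the acceptable range.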

Data Collection and Source

The study RA called all participants at the time of postassessment and reminded them to complete the self-report questionnaire and return the electronic pill bottle, either at their next clinic visit or by mail. RAs checked questionnaires in real time for incomplete items and solicited clarification from participants. Of the 181 patients randomized in this trial, 12 did not complete postassessment questionnaires. Of these patients, 7 withdrew from the study: 4 patients opted to discontinue because of study burden, 2 were unable to continue because their phone became incompatible with the mobile app, and 1 became too ill to continue in the study. An additional 4 patients died before completing postassessment questionnaires, and 1 patient was lost to follow-up.

We were unable to retrieve MEMS data from 11 patients: 9 did not return their MEMS pill bottle to our study team, data from 1 participant's bottle could not be downloaded due to a technical issue, and 1 participant's MEMSCap was lost in the mail.

Analytic and Statistical Approaches

We used SPSS (Version 22.0; SPSS) to conduct statistical analyses, first with all available baseline and follow-up data and then using multiple imputation to account for missing data. We described demographic and clinical characteristics with measures of central tendency or percentages (see Table 3).

Table 3. Phase 2: Sociodemographic, Clinical, and Psychosocial Characteristics in the Full Sample and by Study Group.

Table 3

Phase 2: Sociodemographic, Clinical, and Psychosocial Characteristics in the Full Sample and by Study Group.

Aim 1: To Implement a Patient-Centered Mobile App to Assess Symptoms, Side Effects, and Adherence to Oral Chemotherapy That is Feasible for Use With Oncology Patients

To assess the feasibility of participants using the mobile app, we examined completion rates of symptom reports during the 12-week study period. The app was considered feasible if 75% of participants assigned to the intervention completed at least 75% of possible symptom reports (ie, 9 or more of the 12 weekly reports). We examined participants' perception of app usability by interpreting the means and SDs on the app usability questionnaire. Scores of 70% to 79% were considered acceptable, those of 80% to 89% good, and those of 90% or above superior.

Aim 2: To Evaluate the Efficacy of the Mobile Application in Improving Adherence and Patient-Reported Clinical Outcomes

For tests of aim 2, we examined between-group differences in changes in the primary outcomes from baseline to the 12-week follow-up assessment using linear regression models. We created difference scores (post minus baseline) and conducted each model by regressing the change in each outcome (dependent variable) on study group assignment (independent variable) and interpreting the unstandardized coefficients, represented by the capital letter B. We considered estimates statistically significant based on a 2-sided α of 0.05 and 95% confidence intervals. We included the change in perceived social support on the MSPSS as a covariate in all models for 2 reasons. First, we selected this for use as a covariate a priori, due to the documented relationship between social support and adherence. Second, we observed a marginally significant difference in perceived social support over time on the MSPSS, such that patients assigned to the mobile app intervention reported larger decrements in perceived social support compared with those assigned to the standard care group (MeanDiff = 0.39; SEDiff = 0.20; t(162) = 1.90; P = .060).
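The difference-score approach can be sketched briefly. With a lone binary predictor (1 = mobile app, 0 = standard care), the OLS coefficient B equals the between-group difference in mean change; the covariate adjustment for MSPSS change is omitted here for brevity, so this is a simplified illustration rather than the study's SPSS model.

```python
from statistics import mean


def group_effect_on_change(baseline, post, group):
    """Unstandardized effect (B) of group assignment on a change score.

    baseline, post: outcome values at the two assessments
    group: 1 = mobile app intervention, 0 = standard care
    With a single binary predictor, the regression coefficient equals
    the difference in mean change (post minus baseline) between groups.
    """
    change = [p - b for b, p in zip(baseline, post)]
    app = [c for c, g in zip(change, group) if g == 1]
    ctrl = [c for c, g in zip(change, group) if g == 0]
    return mean(app) - mean(ctrl)
```

A positive B indicates a more favorable change (or smaller decline) in the intervention group relative to standard care.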

Aim 3: To Evaluate the Efficacy of the Mobile Application in Improving Quality of Oncology Care

For tests of aim 3, we examined between-group differences in changes in the secondary outcomes, conducted in an identical fashion to tests of aim 2.

Exploratory Aim: To Determine Whether Particular Patient Demographic and Clinical Characteristics Moderate the Effect of the Study Intervention

We also conducted tests of treatment response heterogeneity with the goal of determining whether the treatment effect of the mobile app varied across levels of baseline characteristics and other factors. We prespecified subgroups of interest in the study design based on our previous research.18 To identify moderators of the treatment effect, we first examined demographic and clinical characteristics known to be related to poorer adherence. These factors included being less educated or less health literate, not being married or partnered, having lower perceived social support, having higher anxiety or depression, and reporting memory problems. We also examined demographic and clinical factors that have been inconsistently related to adherence (ie, gender, age, number of concomitant medications, duration of oral therapy treatment), or those theoretically believed to influence the treatment effect or overall adherence (ie, type of cancer [hematologic malignancy vs solid tumor], therapy dosing schedule [continuous dosing vs interval dosing], type of oral therapy [targeted therapy vs oral chemotherapy], and functional performance status [ECOG]).

To conduct subgroup analyses, we first created interaction terms (study condition by subgroup characteristic) and regressed each outcome on the interaction term, the subgroup characteristic, and study group assignment, controlling for change in perceived social support (per the MSPSS). Given that tests for interactions usually have limited power, and that the lack of a significant interaction does not definitively eliminate the possibility of treatment heterogeneity, we further probed interaction terms with α < 0.10 to examine the effects of study group assignment on the outcome across levels of the moderator.60 For categorical moderators, we examined the effect of study group assignment on the outcome for each subgroup. For continuous moderators, we used an empirical cutoff when applicable, or applied the Johnson-Neyman technique in the PROCESS macro for SPSS,61 which uses iterative approximation to calculate regions of significance and identify the optimal cutoff.
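Probing an interaction reduces to simple arithmetic on the model coefficients: in a model of the form outcome ~ b0 + b1·group + b2·w + b3·(group·w), the conditional effect of group assignment at moderator value w is b1 + b3·w. The sketch below illustrates this simple-slopes calculation with hypothetical coefficients (the values are not taken from the study's tables).

```python
def conditional_effect(b_group, b_interaction, moderator_value):
    """Simple-slopes probe of a group-by-moderator interaction.

    For outcome ~ b0 + b1*group + b2*w + b3*(group*w), the effect of
    group assignment at moderator value w is b1 + b3*w.
    """
    return b_group + b_interaction * moderator_value


# Hypothetical coefficients: the intervention effect grows with the
# moderator, the pattern a Johnson-Neyman probe would then localize by
# solving for the region of w where the conditional effect is significant.
effects = [conditional_effect(-10.0, 0.27, w) for w in (40, 55, 70)]
```

The Johnson-Neyman technique extends this by using the coefficient variance-covariance matrix to find the exact moderator values where the conditional effect crosses the significance threshold.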

Power Analyses

Using the effect size estimates from our prior pilot investigation, we had 80% power to detect a statistically significant improvement in adherence rates from 0.70 to 0.90 with a sample size of 150 patients (75 patients per group). While we originally aimed to enroll 180 participants in the study based on this power analysis, we increased the accrual goal to 220 participants to account for attrition. The larger sample size also helped increase power to explore potential moderators (ie, identify subgroups of patients who may respond differently to the mobile app intervention).
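This power calculation can be checked with a standard approximation for comparing two proportions, using Cohen's arcsine-transformed effect size h. The sketch below is a generic textbook formula, not necessarily the authors' exact method.

```python
import math
from statistics import NormalDist


def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions,
    via Cohen's arcsine-transformed effect size h."""
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    return NormalDist().cdf(h * math.sqrt(n_per_group / 2) - z_crit)


# 75 patients per group, detecting an adherence improvement from 0.70 to 0.90
power = power_two_proportions(0.90, 0.70, 75)
```

Under this approximation, 75 patients per group yields power exceeding the planned 80% for detecting an improvement from 0.70 to 0.90.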

Missing Data Analyses

Data were missing at postassessment for 12 participants. Reasons for missing data were as follows: withdrawal (n = 7); death (n = 4); lost to follow-up (n = 1). Due to a clerical error in administering the MDASI, data on this measure were missing for 31 participants at the baseline assessment. In addition, 1 participant did not complete the baseline survey and therefore had missing data on most baseline measures. We first conducted statistical analyses using all available baseline and follow-up data and then, to address missing data concerns, we repeated the analyses with imputed data using the Multiple Imputation method in SPSS.62
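Multiple imputation analyzes each of the m completed data sets separately and then combines the results; SPSS pools the per-data-set coefficients via Rubin's rules. A minimal generic sketch of that pooling step (not the study's code):

```python
from statistics import mean, variance


def pool_rubin(estimates, variances):
    """Combine one coefficient across m imputed data sets (Rubin's rules).

    estimates: the coefficient estimated in each imputed-data analysis
    variances: its squared standard error from each analysis
    Returns the pooled point estimate and its total variance.
    """
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    w = mean(variances)              # within-imputation variance
    b = variance(estimates)          # between-imputation variance
    total = w + (1 + 1 / m) * b      # total variance
    return q_bar, total
```

The between-imputation component inflates the pooled variance to reflect uncertainty about the missing values, which is why marginally significant results can shift in either direction after imputation.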

Conduct of the Study

Over the course of this study we amended the protocol (Appendix F) to restructure the assessment timeline; add a resource utilization questionnaire; add the MD Anderson Symptom Inventory questionnaire; add an optional pill diary; add community cancer clinic affiliates in North Shore, Emerson, and MGH West as study sites; add an app usability questionnaire; and increase accrual from 180 to 220 participants to ensure that at least 180 patients were randomized.

Results

Phase 1

Participant Characteristics

We previously described stakeholder involvement, and characteristics of MGH patients and oncology clinicians are presented in Table 1.

Final Mobile App Intervention

To meet criteria for phase 1, aim 1, we developed a patient-centered mobile app to assess symptoms, side effects, and adherence to oral chemotherapy that is feasible for use with oncology patients. We successfully created the app content with input from key stakeholders, the research team, and technology experts. Summary findings of feedback from the individual interviews with MGH patients and clinicians as well as the 4 stakeholder focus groups during phase 1 are presented in Table 2 and Appendix C. In addition, example feedback in email communication with stakeholders and the research team's response is presented in Appendix D.

The mobile app, CORA, was written primarily in JavaScript and developed on the Titanium platform (versions 3.5 and 5.0) to ensure cross-platform functionality on both Apple iOS and Android devices. CORA was supported by a PHP application with a MySQL database and hosted on a LAMP server that met HIPAA Security Rule requirements. The entire study team participated in code reviews and quality assurance testing with each code release. We included MGH oncology patients in usability testing and beta testing to ensure app usability for implementation in the RCT. Qualitative feedback from interviews with oncology patients, family members, clinicians, and key stakeholders indicated that the app was feasible and acceptable for use in this patient population. Table 4 illustrates examples of how we incorporated feedback from stakeholders into the final mobile app. CORA underwent 7 version updates: 4 to address integration with third-party smartphone operating systems and 3 to fix software bugs or make minor improvements. The final version of CORA is organized in functional modules (Appendix E), including a medication treatment plan with a timeline and reminder system, reporting features for adherence and symptoms along with graphics, educational resources and recipes, integrated wearable fitness tracking with Fitbit, and a section for notes and questions.

Table 4. Mobile App Modifications Based on Stakeholder Feedback.

Table 4

Mobile App Modifications Based on Stakeholder Feedback.

Phase 2

Participant Characteristics

Of the 696 potentially eligible patients screened via the EHR, 196 were not approached for the study because the oncologist denied our request to approach the patient (n = 64) or did not respond to our request (n = 134). We therefore approached 500 patients in clinic, 178 (35.6%) of whom did not own a smartphone and 110 (22.0%) of whom declined to participate. Reasons patients cited for refusal included lack of interest in the intervention (n = 43), lack of interest in participating in any research (n = 25), discomfort using their smartphone (n = 25), belief that the study would be too burdensome or would disrupt current treatment (n = 16), and concerns about the security of their data (n = 1). The remaining 212 patients enrolled in the study and were scheduled to complete baseline assessments at their next outpatient oncology visit. The baseline visit occurred on average 36 days (SD, 49 days) after enrollment. During this time, 28 participants dropped out of the study and 3 were lost to follow-up (see CONSORT flow diagram, Figure 3); 181 completed baseline assessments and were then randomized to either the mobile app (n = 91) or standard care (n = 90). Of these 181 patients, we had recruited 173 from MGH and 8 from community affiliate sites. A total of 169 patients completed the postassessment survey at the 12-week follow-up. Reasons for incomplete assessments included patient withdrawal (n = 7), death (n = 4), and loss to follow-up (n = 1). MEMSCap data were available on 170 patients; 9 patients did not return their pill bottle, 1 cap was lost in the mail, and 1 cap did not have retrievable data. No study-related adverse events occurred over the course of the study. As noted, 1 participant's MEMSCap data were lost in the mail (Table 5); however, confidentiality was not breached because no identifiable participant information was in the envelope or on the bottle.
Table 6 presents Patient, Intervention, Comparison, Outcome, and Time frame (PICOT) descriptors, and Appendix G references results tables submitted to ClinicalTrials.gov.

Figure 3. CONSORT Flow Diagram.

Figure 3

CONSORT Flow Diagram.

Table 5. Adverse Events Overview.

Table 5

Adverse Events Overview.

Table 6. PICOT Descriptors.

Table 6

PICOT Descriptors.

Within the sample, patients were 53.30 years of age on average (SD, 12.91), slightly more than half were women (53.6%), the majority were Caucasian (87.8%), and 80.1% were partnered (see Table 3 for demographic characteristics of the sample). Participants were well educated, with 23.2% having graduated from college and an additional 44.8% having an advanced degree. Approximately a third of patients had a hematologic malignancy (33.1%), followed by non–small cell lung cancer (18.2%), breast cancer (14.4%), and high-grade gliomas (11.6%). Most patients (66.9%) were prescribed targeted therapies (ie, agents that specifically target cancer cells with known oncogenic mutations), while the remainder were prescribed other oral chemotherapies (33.1%). Appendix H lists the types of oral therapies patients were prescribed. Patients had been taking oral therapies for an average of 12.70 months (SD, 20.87). At the baseline assessment, 21.5% of patients (n = 39) reported problems with adherence to oral therapies. Of the 170 participants with MEMSCap data available, 52.9%, 22.4%, and 12.9% of patients were less than 90%, 70%, and 50% adherent over the course of the study, respectively.

Aim 1: To Implement a Patient-Centered Mobile App to Assess Symptoms, Side Effects, and Adherence to Oral Chemotherapy That is Feasible for Use With Oncology Patients

In phase 2, tests of aim 1 showed that the feasibility aim was not met, with only 34% of patients assigned to the mobile app completing the adherence and symptom reports on more than 75% of the total possible study weeks. On average, patients assigned to the mobile app condition completed 15.92 (SD, 14.15) reports over the course of the study (median = 14.00; IQR = 5.00 to 21.00). Patients completed the adherence and symptom reports on a mean of 6.43 weeks (SD, 3.86) out of the 12 study weeks (57.1% of possible weeks). The average app usability rating was acceptable (mean, 71.22; SD, 17.36), with 23.1% of patients reporting acceptable usability (scores 70-79), 21.2% reporting good usability (scores 80-89), and 15.4% reporting superior usability (scores 90-100). On average, patients used the app for 59 minutes and 32 seconds (SD, 1 hour, 8 minutes, and 15 seconds) over the course of the 12-week study period and accessed the app on 21.75 discrete days (SD, 21.24 days) out of 84 possible days. The medication treatment plan timeline was the most frequently visited page of the app, followed by the educational library, the symptom graph review, the ad hoc symptom reporting module, and the free notes section. The most frequently reported symptoms in the app were fatigue and disturbed sleep.

Aim 2: To Evaluate the Efficacy of the Mobile App in Improving Adherence and Patient-Reported Clinical Outcomes

Tests of aim 2 evaluated the efficacy of the mobile app in improving adherence as measured by MEMSCap, patient-reported adherence, change in symptom severity and interference, and change in QOL (Table 7). These analyses showed that the mobile app intervention group and usual care control group did not differ with respect to the primary outcomes of MEMSCap adherence rates, self-reported adherence, symptoms, or overall QOL. Specifically, at the postassessment, 23.3% of patients in the standard care group and 13.8% of patients in the mobile app intervention reported poor adherence; however, this difference was not statistically significant. Study groups also did not differ with respect to objective MEMSCap adherence rates, change in symptom severity or interference, or overall QOL. We observed a significant effect of group assignment on change in social and family well-being on the FACT-G; patients in the mobile app intervention had a smaller reduction in social and family well-being from baseline to postassessment (Mchange = −0.55; SE, 0.53) compared with the standard care group (Mchange = −2.22, SE, 0.50; Mdiff = 1.67, SE, 0.74, F(1,161) = 5.13, P = .025, 95% CI [−3.12 to −0.21]).

Table 7. Differences in Primary Outcomes by Study Group.

Table 7

Differences in Primary Outcomes by Study Group.

Aim 3: To Evaluate the Efficacy of the Mobile Application in Improving Quality of Oncology Care

Tests of aim 3 evaluated the efficacy of the mobile app in improving secondary outcomes of quality of oncology care (Table 8). Study groups did not differ significantly with respect to satisfaction with clinicians and treatment or the number of emergency department visits or hospitalizations. We observed a marginally significant difference in the change in the Satisfaction with Interpersonal Treatment subscale on the FACIT-TS-PS; patients in the mobile app intervention had a slight improvement in their satisfaction on average (Mchange = 0.07; SE, 0.13) compared with those in the standard care group, who had a slight reduction in satisfaction (Mchange = −0.29, SE, 0.13; Mdiff = −0.35, SE, 0.18, F(1,159) = 3.67, P = .057, 95% CI [−0.72 to 0.01]).

Table 8. Differences in Secondary Outcomes by Study Group.

Table 8

Differences in Secondary Outcomes by Study Group.

Exploratory Aim: To Determine Whether Particular Patient Demographic and Clinical Characteristics Moderate the Effect of the Study Intervention

The exploratory aim addressed treatment heterogeneity by testing efficacy of the mobile app intervention within key patient subgroups. We first examined the presence of an interaction between study group assignment and the proposed baseline moderator factors in predicting primary outcomes. In tests of moderation, we did not find evidence for moderation by the following factors: gender, education, health literacy, relationship status, depression, perceived social support, type of cancer (hematologic malignancy vs solid tumor), type of oral therapy (chemotherapy vs targeted therapy), duration of oral therapy treatment, number of concomitant medications, therapy dosing schedule (continuous dosing vs interval dosing), and functional performance status (ECOG).

We did find evidence of moderation by the following factors: baseline self-reported adherence (MMAS-4), baseline anxiety (HADS-Anxiety), and patient age. Linear regression models examining self-reported adherence at baseline as a potential moderator showed a significant interaction between group assignment and baseline self-reported adherence in predicting the MEMSCaps adherence rate (B = 26.04; SE, 9.65; P = .008; 95% CI, 6.97-45.10; Table 9). Further examination of the effect of group assignment on the MEMSCaps adherence rate at levels of the moderator (good adherence vs poor adherence) revealed that among patients with poor self-reported adherence at baseline (Table 10), those who were randomized to the mobile app intervention had improved adherence on the MEMSCaps (mean, 86.23; SE, 7.72) compared with those in the standard care control (mean, 63.94, SE, 6.46; B = 22.30, SE, 10.06, P = .034, 95% CI [1.78-42.82]; Figure 4). Self-reported adherence at baseline was not a moderator of the other primary study outcomes (all P > .10).

Table 9. Linear Regression Examining Baseline Self-reported Adherence as a Moderator of the Effect of the Mobile App Intervention on the Objective Adherence Rate per the MEMSCaps (n = 158).

Table 9

Linear Regression Examining Baseline Self-reported Adherence as a Moderator of the Effect of the Mobile App Intervention on the Objective Adherence Rate per the MEMSCaps (n = 158).

Table 10. Differences in Primary Outcomes by Study Group in Patients With Self-reported Poor Adherence at the Baseline Assessment on the MMAS-4.

Table 10

Differences in Primary Outcomes by Study Group in Patients With Self-reported Poor Adherence at the Baseline Assessment on the MMAS-4.

Figure 4. Differences in MEMSCaps Adherence Rates Between Study Groups Moderated by Self-reported Adherence (Good vs Poor) at Baseline.

Figure 4

Differences in MEMSCaps Adherence Rates Between Study Groups Moderated by Self-reported Adherence (Good vs Poor) at Baseline.

Linear regression models examining anxiety as a potential moderator showed a significant interaction between group assignment and self-reported anxiety (HADS-Anxiety subscale) at the baseline assessment in predicting the MEMSCaps adherence rate (B = 17.55; SE, 8.84; P = .049; 95% CI, 0.08-35.02; Table 11). Probing at the levels of this moderator (low anxiety vs high anxiety) indicated that among patients with high anxiety at baseline (Table 12), those randomized to the mobile app intervention had improved adherence on the MEMSCaps (mean, 85.46; SE, 5.57) compared with those in the standard care control group (mean, 69.39, SE, 5.19; B = 16.08, SE, 7.76, P = .044, 95% CI [0.41-31.74]; Figure 5). Baseline anxiety was not a moderator of the other primary outcomes (all P > .10).

Table 11. Linear Regression Examining Baseline Anxiety as a Moderator of the Effect of the Mobile App Intervention on the Objective Adherence Rate Measured With the MEMSCap (n = 158).

Table 11

Linear Regression Examining Baseline Anxiety as a Moderator of the Effect of the Mobile App Intervention on the Objective Adherence Rate Measured With the MEMSCap (n = 158).

Table 12. Differences in Primary Outcomes by Study Group in Patients With Self-reported High Anxiety on the Baseline Assessment on the Hospital Anxiety and Depression Scale-Anxiety Subscale.

Table 12

Differences in Primary Outcomes by Study Group in Patients With Self-reported High Anxiety on the Baseline Assessment on the Hospital Anxiety and Depression Scale-Anxiety Subscale.

Figure 5. Differences in MEMSCaps Adherence Rates Between Study Groups Moderated by Anxiety (Low vs High) at Baseline.

Figure 5

Differences in MEMSCaps Adherence Rates Between Study Groups Moderated by Anxiety (Low vs High) at Baseline.

Finally, in linear regression models to test whether age was a potential moderator, we found a significant interaction between study group assignment and age predicting change in overall QOL on the FACT-G (B = 0.27; SE, 0.13; P = .041; 95% CI, 0.01-0.52; see Table 13).

Table 13. Linear Regression Examining Patient Age as a Moderator of the Effect of the Mobile App Intervention on Change in QOL on the FACT-G (n = 162).

Table 13

Linear Regression Examining Patient Age as a Moderator of the Effect of the Mobile App Intervention on Change in QOL on the FACT-G (n = 162).

Using the Johnson-Neyman technique in the PROCESS macro for SPSS,61 we identified the optimal age cutoff as 55 years (older than 55 years vs 55 years or younger). Patients older than 55 years (Table 14) who were randomized to the mobile app intervention reported improved overall QOL (mean, 1.93; SE, 1.93) compared with those in the standard care control (mean, −3.90, SE, 1.68), B = 5.84, SE, 2.57, P = .027; 95% CI, 0.70-10.98 (Figure 6). Age was not a significant moderator of the effect of group assignment on other primary outcomes (all P > .10).

Table 14. Differences in Primary Outcomes by Study Group in Patients >55 Years of Age.

Table 14

Differences in Primary Outcomes by Study Group in Patients >55 Years of Age.

Figure 6. Differences in the Change in Overall Quality of Life Between Study Groups Moderated by Age.

Figure 6

Differences in the Change in Overall Quality of Life Between Study Groups Moderated by Age.

Missing Data Analyses

The rate of missing data at the postassessment time point was 6.6% for the self-report questionnaires and 6.1% for the MEMSCap data. To account for these missing data, as well as the baseline MDASI data missing due to a clerical error, we conducted all analyses in an identical fashion using Multiple Imputation.62 Tables 15 to 22 display the findings, which generally corroborate the available-case analyses. Specifically, the only significant main effect of the intervention on a primary outcome was the same: participants in the mobile app group reported a smaller reduction in social well-being over time than did those receiving standard care. In addition, the marginally significant group difference in satisfaction with interpersonal treatment (a secondary outcome) became statistically significant with Multiple Imputation, favoring the intervention. The subgroup analyses were likewise essentially replicated for the effect of the intervention on objective (MEMSCap) adherence in patients who reported adherence problems at baseline. However, the moderator effects of anxiety on adherence and of age on QOL became marginally significant with Multiple Imputation.

Table 15. Differences in Primary Outcomes by Study Group Using Multiple Imputation (Pooled Results From 10 Data Sets).

Table 16. Differences in Secondary Outcomes by Study Group Using Multiple Imputation (Pooled Results From 10 Data Sets).

Table 17. Linear Regression With Multiple Imputation Examining Baseline Self-reported Adherence as a Moderator of the Effect of the Mobile App Intervention on the Objective Adherence Rate per MEMSCaps (Pooled N = 181).

Table 18. Differences in Primary Outcomes by Study Group (Using Multiple Imputation) in Patients With Self-reported Poor Adherence at the Baseline Assessment on the MMAS.

Table 19. Linear Regression With Multiple Imputation Examining Baseline Anxiety as a Moderator of the Effect of the Mobile App Intervention on the Objective Adherence Rate per MEMSCaps (Pooled N = 181).

Table 20. Differences in Primary Outcomes by Study Group (Using Multiple Imputation) in Patients With Self-reported High Anxiety on the Baseline Assessment per the HADS-Anxiety.

Table 21. Linear Regression With Multiple Imputation Examining Patient Age as Moderator of the Effect of the Mobile App Intervention on Change in QOL on the FACT-G (Pooled N = 181).

Table 22. Differences in Primary Outcomes by Study Group (With Multiple Imputation) in Patients > 55 Years of Age.

Discussion

In this study, we developed an acceptable, patient-centered mobile app for adherence and symptom management for patients with diverse malignancies who were prescribed oral chemotherapy. However, patients assigned to the intervention group did not meet the a priori feasibility criterion for completing the weekly reports of adherence and symptoms. Moreover, the mobile app did not lead to significant improvements in the primary and secondary outcomes of adherence per MEMSCaps, symptoms, overall QOL, perceptions of quality of care, and health care utilization as hypothesized. Patients in the mobile app intervention reported a smaller reduction in social and family well-being over the course of the study than did those in the usual care control condition. It is possible that the app relieved some of the burden that caregivers generally experience in the context of home-based care, which may have improved patients' QOL in the social domain. Furthermore, patients who are struggling with medication adherence or have elevated anxiety may benefit from such an app to improve oral chemotherapy adherence, which may, in turn, improve therapeutic efficacy and influence treatment outcomes. Finally, older patients may find this mobile app helpful for their overall QOL, potentially by connecting them with resources for managing symptoms and by providing education about their illness and resources for improving health (eg, recipes, activity tracking).

Decisional Context

The study results underscore the importance of clinicians proactively assessing adherence to treatment in the modern era of oral cancer therapeutics: 22.0% of patients reported difficulties taking these medications at baseline. However, only 13.8% of patients in the mobile app group reported adherence problems at postassessment, whereas 23.3% of patients in the standard care group continued to report difficulties. Although this difference was not statistically significant, proactive and systematic monitoring of adherence and symptoms through mobile technologies not only emphasizes the value and importance of adherence for patients but also may serve as an extra layer of support, helping the care team and patients communicate effectively about the administration of oral chemotherapy and management of symptoms.

Treatment adherence is a significant public health concern and a persistent challenge for a health care system that aims to optimize treatment outcomes. Health care decision makers may find these study results beneficial in that they suggest the potential for improving treatment outcomes for specific populations of patients who may be at greater risk. A mobile app provides a minimally burdensome, cost-effective, low-resource approach for patients who are receiving care outside of the hospital or infusion center. Such an approach could be offered to patients who endorse difficulties taking oral chemotherapy as instructed, have anxiety symptoms, or are older. The mobile app intervention reduces variation in practice by administering validated instruments to assess adherence and symptoms in a systematic manner. Moreover, using this intervention to target patients at greater risk for adherence problems would ideally reduce variation in treatment outcomes across patient subpopulations.

Study Results in Context

The current study results highlight the potential for an adherence intervention to promote medication taking in patients at greater risk for nonadherence. However, we did not observe a significant benefit of the mobile app intervention in improving the outcomes of adherence, symptoms, QOL, or perceptions of quality of care overall between the 2 study groups. Several factors may have contributed to the null findings: (1) most patients had high self-reported adherence at baseline and therefore had little or no room for improvement; (2) the app included multiple features to enhance patient engagement, which perhaps diffused its target focus on adherence and symptoms; (3) the sample was quite heterogeneous with respect to cancer types, stages, and oral treatments; (4) using MEMSCaps to assess adherence to oral chemotherapy regimens that include multiple intermittent breaks between cycles is challenging; and (5) oncology clinicians were not required to follow up with patients regarding their weekly adherence and symptom reports but rather could respond based on their clinical judgment. Further rigorous qualitative study with the intervention patients who participated in this study and their oncology clinicians would help elucidate the possible reasons the mobile app did not have its intended benefits on the outcomes.

In our recently published systematic review, we identified only 12 adherence intervention studies for patients with cancer, and most of them had a high risk of bias due to methodological limitations such as a small sample size or nonrandomized designs.18 To overcome these prior limitations, we implemented an adequately powered, randomized trial with 181 patients diagnosed with diverse malignancies. Furthermore, we employed a more robust measure of adherence, including both objective monitoring with MEMSCaps and self-reported adherence, methods utilized in only 2 previous adherence intervention studies.41,63 Finally, in our clinical trial, we examined clinically meaningful outcomes relevant to the patient experience, such as QOL, symptoms and side effects, and satisfaction with treatment18,64 in addition to adherence.

Studies to date have not targeted multiple adherence factors at the patient, provider, and systems levels, with the few intervention trials mostly focused on reminder systems. For example, a randomized 3-group pilot study by Spoelstra and colleagues42 showed no differences in adherence rates following an automated voice response (AVR) system alone compared with AVR combined with adherence management or with AVR combined with adherence and symptom management. In another randomized 3-arm trial, investigators did find differences in adherence when effects from 2 patient information program interventions were pooled in comparison with the control group.65 Otherwise, the few intervention studies with improved outcomes for adherence were nonrandomized. Specifically, in one nonrandomized study of patients with advanced non–small cell lung cancer who were prescribed oral chemotherapy, participants in the treatment monitoring program had higher rates of adherence as recorded by pill count and self-report than did a retrospective standard care control group.66 In addition, a nonrandomized study involving intensified multidisciplinary pharmaceutical care showed that patients with colorectal and breast cancer had higher daily adherence rates than did a standard care comparison group.41 Within this context, the advances of our clinical trial are reflected not only in the randomized design, sample size, and selection of outcomes, but also in the intervention components. That is, we designed the mobile app to target patient factors (ie, reminder system, education library, energy tracking), treatment factors (ie, symptom monitoring and management strategies), and clinician factors (ie, proactive communication with cancer care clinicians).

With respect to mHealth interventions, most studies have focused on management of long-term conditions such as diabetes, HIV, and asthma. For example, findings from a recent meta-analysis revealed improvements in treatment adherence following mobile text messaging for patients with chronic illness, but this review did not include studies with oncology patients and is therefore limited in generalizability.67 The authors of another meta-analysis of mHealth interventions concluded that certain mobile phone messaging interventions may improve the self-management of long-term illness; however, significant gaps exist in this work, requiring further research.68 While we observed no intervention effects overall in our study sample, our findings extend the growing literature suggesting that an adherence intervention delivered through mobile modalities may be beneficial for disease self-management among patients with cancer who are at greater risk for nonadherence. However, prospective follow-up study is needed to confirm that the app is indeed effective for those with poor baseline adherence and higher anxiety before broader dissemination and implementation of the intervention.

Implementation of Study Results

The key to successful implementation of the mobile app intervention in this study was incorporating the voices of patients, family members, clinicians, cancer practice administrators, and health care representatives throughout every phase of development and testing. While stakeholder engagement is common in earlier stages of research, we incorporated stakeholders throughout the research process, including dissemination and implementation, which occurs less frequently.69 For example, our final stakeholder focus groups proved to be instrumental in brainstorming the next steps for this study and how to disseminate results to audiences outside the scientific community. Moreover, the stakeholders were enthusiastic and willing to participate, and many did not want reimbursement but rather sought to help make the research more meaningful, relevant, and feasible in real-world care settings. The positive experiences with our stakeholder engagement in this study informed the development of a patient/family advisory council specifically for supportive care research at the MGH Cancer Center, which meets 3 to 4 times per year. Investigators from our Cancer Outcomes Research Program now present studies to and gather feedback from the council about clinical relevance, whether interventions are timed effectively, and how to optimize delivery to patients, families, and the care team.

As an example of how stakeholder feedback enhanced acceptability and implementation of the mobile app intervention by MGH oncology clinicians, we drew on qualitative feedback from clinician stakeholders. Specifically, we learned that the optimal frequency for the care team to receive patient reports of adherence and symptoms would be no more than weekly. To translate this intervention successfully in typical care settings, such patient symptom reports would ideally be integrated seamlessly with the EHR for ongoing tracking and documentation. Unfortunately, we were unable to deliver the reports in this manner as our institution was in the process of converting to a new EHR system at the time of study implementation. Regardless, it is also important to note that we did not achieve our a priori feasibility threshold of most participants in the intervention group completing 75% of the weekly adherence and symptom reports during the study period. Perhaps the proposed feasibility criterion was not the most appropriate measure for how patients engaged with the mobile app, as some patients may have chosen not to complete the adherence and symptom reports on weeks they were feeling well or had nothing to endorse. Follow-up qualitative study with patients assigned to the intervention is needed to discern the optimal frequency of communication with the cancer care team and whether reports should be recommended at certain time intervals, triggered by clinically meaningful events, or delivered primarily at the patient's discretion.

Another primary concern for implementation across care settings relates to maintaining the mobile app functionality over time. Specifically, because mobile devices and operating systems are constantly evolving and continuously upgraded, the app software must also be adjusted and maintained. During our trial, we occasionally needed to loan backup tablet devices to participants when we encountered glitches in the system due to smartphone upgrades, for example.

A final key barrier to implementation of the intervention is the extent to which patient populations, particularly those at risk for poor medication adherence, own smartphones, since the intervention was specifically designed for mobile technology. Given that many people keep their smartphones with them for most of the day, this technology represents an ideal modality, particularly for real-time reporting of symptoms with responsive logic to teach behavioral strategies for management. Although not all patients in the oncology setting have access to smartphones, ownership continues to grow at a rapid pace across populations, including older patients and those of lower socioeconomic status. Within our study, of the 500 patients approached to assess for eligibility, 322 (64.4%) had access to a smartphone.

Generalizability

As noted in the Results section, the patient sample was predominantly white and married (or partnered), approximately equally distributed across genders, and very well educated, with a wide range in age and representation of hematologic and solid tumor cancer types. The study findings would therefore likely generalize to similar patients who seek care at an urban comprehensive cancer center like MGH. Further study is needed to test the efficacy of the intervention in larger, more racially and ethnically diverse samples in both community and rural cancer care settings. Moreover, the extent to which patients' high education levels may have selected for a more knowledgeable and motivated sample, and perhaps contributed to the high adherence rates and null findings overall, requires further research.

Subpopulation Considerations

Although we must interpret subgroup analyses with caution given the reduction in sample sizes, the mobile app intervention appeared to be more efficacious for patients with certain risk factors. Specifically, based on prior theory and evidence, we examined particular subgroups likely to have adherence problems, such as patients who reported poor adherence or heightened anxiety at baseline. Analyses of these subgroups revealed that those assigned to the mobile app intervention had significantly higher objective adherence estimates (per the MEMSCaps) over the study period than did those who received standard care alone. Such findings are theoretically consistent and would be meaningful to consider for potential translation into patient care. We also observed that older patients who received the mobile app intervention reported significantly higher QOL (per the FACT-G) than did older patients in the standard care group. These findings certainly warrant further investigation for confirmation before broader dissemination and implementation of the intervention.

Study Limitations

The study has several limitations that may have influenced the results. First, an unfortunate clerical error resulted in loss of data on the MDASI, one of the main study outcomes. However, analyses with imputed data on the full sample did not reveal a different pattern of findings from the available case analyses with respect to intervention effects on symptom severity or interference. In addition, although we used the current “gold standard” for measuring medication adherence with the MEMSCaps as the primary outcome, such monitoring in the control group likely raised awareness and improved adherence,70 potentially diluting the effect of the intervention. Moreover, using MEMSCaps to monitor medication adherence for patients with interval dosing schedules (eg, 2 weeks on, 1 week off) was challenging, especially in defining critical periods for when the patient was supposed to be taking the medication. The study team therefore had to compare data from the MEMSCaps against EHR documentation of planned breaks in the medications to ensure patients were not penalized for missing doses those days. Again, any error in adherence measurement would likely bias against intervention effects. Finally, the study took place at an academic institution with a fairly homogeneous patient population with respect to race, ethnicity, level of education, and socioeconomic status, which may limit generalizability of findings to other care settings and populations.
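The adherence calculation under interval dosing can be made concrete with a small sketch. The code below is a hypothetical illustration, not the study team's actual scoring procedure: it counts cap openings only on days the drug was prescribed, with planned off-cycle breaks (as documented in the EHR) excluded from the denominator.

```python
from datetime import date, timedelta

def adherence_rate(openings, start, end, breaks, doses_per_day=1):
    """MEMSCap-style adherence: cap openings on prescribed days divided by
    the doses expected on those days. `openings` is a list of dates; `breaks`
    is a list of (start, end) ranges for planned off-cycle periods."""
    def on_break(d):
        return any(b0 <= d <= b1 for b0, b1 in breaks)
    counts = {}
    for d in openings:
        counts[d] = counts.get(d, 0) + 1
    expected = taken = 0
    day = start
    while day <= end:
        if not on_break(day):  # only days in the critical dosing period count
            expected += doses_per_day
            taken += min(counts.get(day, 0), doses_per_day)  # cap extras
        day += timedelta(days=1)
    return taken / expected if expected else float("nan")

# Hypothetical 2-weeks-on / 1-week-off cycle; one missed dose on day 6
start, end = date(2016, 1, 1), date(2016, 1, 21)
breaks = [(date(2016, 1, 15), date(2016, 1, 21))]  # planned off week
openings = [date(2016, 1, 1) + timedelta(days=i) for i in range(14) if i != 5]
print(round(adherence_rate(openings, start, end, breaks), 2))  # → 0.93
```

Without the break exclusion, the same record would score 13/21 (about 0.62), illustrating how unadjusted scoring penalizes patients for planned off-cycle days.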

Future Research

Follow-up research is needed to test the benefit of the mobile app in populations at high risk for poor adherence as well as across both academic and community oncology care settings. Ideally, future studies should sample patients who are poorly adherent at baseline, thereby minimizing type II error. A hybrid efficacy-effectiveness study would be a useful design for further intervention testing and implementation. In addition, to augment the utility of the intervention, investigators may want to examine whether integrating patient-reported data from the mobile app into the EHR helps enhance communication with the care team vs employing a primary triage clinician (eg, oncology nurse) to review and respond to the patient reports vs having a completely stand-alone app that records and stores data natively on the smartphone, which patients can choose to share with clinicians at their discretion. Moreover, a follow-up study could explore how oncology clinicians utilized the weekly adherence and symptom reports to inform care in their patient encounters as well as patient perceptions of why the app did not affect the primary and secondary outcomes. Expanding the mobile app to help patients monitor and manage multiple medications simultaneously may also enhance its usefulness. Finally, in future studies, the inclusion of multiple longitudinal assessments of adherence and symptoms would be needed to discern the impact of the intervention over longer follow-up periods, especially given prior research showing that medication adherence tends to wane over time.18

Conclusions

To our knowledge, this study represents the first examination of the development and testing of a mobile app to improve adherence to oral chemotherapy. With critical feedback from key constituent stakeholders throughout every phase of the project, we first successfully created a patient-centered mobile app, incorporating features to support adherence, symptom management, and communication with the oncology care team. We then conducted a randomized clinical trial to test the benefits of the mobile app vs standard care for improving symptoms and adherence to oral chemotherapy, in a sample of 181 patients with diverse malignancies. Although the mobile app did not have significant effects on the primary and secondary outcomes in the entire sample overall, subgroup analyses demonstrated that the intervention shows promise for patients who may be at risk for poor adherence, such as those who report having problems with medication adherence or anxiety. In addition, the mobile app may positively impact QOL among older patients. Further work is needed to confirm the effectiveness of the intervention in these subpopulations across oncology care settings and to explore the utility of the mobile app in sustaining optimal adherence over longer periods of time. A key factor to ensure successful dissemination and implementation of the mobile app will be the seamless integration of patient-reported data with existing EHR systems. As cancer care continues to evolve with orally administered agents, the innovative use of technology through this mobile app may foster communication with the care team and serve as an extra layer of support for patients to understand and adhere to their recommended treatments.

References

1.
Neuss MN, Polovich M, McNiff K, et al. 2013 updated American Society of Clinical Oncology/Oncology Nursing Society chemotherapy administration safety standards including standards for the safe administration and management of oral chemotherapy. Oncol Nurs Forum. 2013;40(3):225-233. [PubMed: 23619103]
2.
Liu G, Franssen E, Fitch MI, Warner E. Patient preferences for oral versus intravenous palliative chemotherapy. J Clin Oncol. 1997;15(1):110-115. [PubMed: 8996131]
3.
Borner MM, Schoffski P, de Wit R, et al. Patient preference and pharmacokinetics of oral modulated UFT versus intravenous fluorouracil and leucovorin: a randomised crossover trial in advanced colorectal cancer. Eur J Cancer. 2002;38(3):349-358. [PubMed: 11818199]
4.
Banna GL, Collovà E, Gebbia V, et al. Anticancer oral therapy: emerging related issues. Cancer Treat Rev. 2010;36(8):595-605. [PubMed: 20570443]
5.
Given BA, Spoelstra SL, Grant M. The challenges of oral agents as antineoplastic treatments. Semin Oncol Nurs. 2011;27(2):93-103. [PubMed: 21514479]
6.
Barton D. Oral agents in cancer treatment: the context for adherence. Semin Oncol Nurs. 2011;27(2):104-115. [PubMed: 21514480]
7.
Aisner J. Overview of the changing paradigm in cancer treatment: oral chemotherapy. Am J Health Syst Pharm. 2007;64(9)(suppl 5):S4-S7. [PubMed: 17468157]
8.
Hartmann JT, Haap M, Kopp HG, Lipp HP. Tyrosine kinase inhibitors—a review on pharmacology, metabolism and side effects. Curr Drug Metab. 2009;10(5):470-481. [PubMed: 19689244]
9.
Bedell CH. A changing paradigm for cancer treatment: the advent of new oral chemotherapy agents. Clin J Oncol Nurs. 2003;7(suppl 6):5-9. [PubMed: 14705494]
10.
Al-Barrak J, Cheung WY. Adherence to imatinib therapy in gastrointestinal stromal tumors and chronic myeloid leukemia. Support Care Cancer. 2013;21(8):2351-2357. [PubMed: 23708821]
11.
Makubate B, Donnan PT, Dewar JA, Thompson AM, McCowan C. Cohort study of adherence to adjuvant endocrine therapy, breast cancer recurrence and mortality. Br J Cancer. 2013;108(7):1515-1524. [PMC free article: PMC3629427] [PubMed: 23519057]
12.
Wu EQ, Johnson S, Beaulieu N, et al. Healthcare resource utilization and costs associated with non-adherence to imatinib treatment in chronic myeloid leukemia patients. Curr Med Res Opin. 2010;26(1):61-69. [PubMed: 19905880]
13.
Ganesan P, Sagar TG, Dubashi B, et al. Nonadherence to imatinib adversely affects event free survival in chronic phase chronic myeloid leukemia. Am J Hematol. 2011;86(6):471-474. [PubMed: 21538468]
14.
Yoshida C, Komeno T, Hori M, et al. Adherence to the standard dose of imatinib, rather than dose adjustment based on its plasma concentration, is critical to achieve a deep molecular response in patients with chronic myeloid leukemia. Int J Hematol. 2011;93(5):618-623. [PubMed: 21523339]
15.
Ruddy K, Mayer E, Partridge A. Patient adherence and persistence with oral anticancer treatment. CA Cancer J Clin. 2009;59(1):56-66. [PubMed: 19147869]
16.
Partridge AH, Avorn J, Wang PS, Winer EP. Adherence to therapy with oral antineoplastic agents. J Natl Cancer Inst. 2002;94(9):652-661. [PubMed: 11983753]
17.
Escalada P, Griffiths P. Do people with cancer comply with oral chemotherapy treatments? Br J Community Nurs. 2006;11(12):532-536. [PubMed: 17170677]
18.
Greer JA, Amoyal N, Nisotel L, et al. A systematic review of adherence to oral antineoplastic therapies. Oncologist. 2016;21(3):354-376. [PMC free article: PMC4786357] [PubMed: 26921292]
19.
Osterberg L, Blaschke T. Adherence to medication. N Engl J Med. 2005;353(5):487-497. [PubMed: 16079372]
20.
Verbrugghe M, Verhaeghe S, Lauwaert K, Beeckman D, Van Hecke A. Determinants and associated factors influencing medication adherence and persistence to oral anticancer drugs: a systematic review. Cancer Treat Rev. 2013;39(6):610-621. [PubMed: 23428230]
21.
Balkrishnan R. Predictors of medication adherence in the elderly. Clin Ther. 1998;20(4):764-771. [PubMed: 9737835]
22.
Vik SA, Maxwell CJ, Hogan DB. Measurement, correlates, and health outcomes of medication adherence among seniors. Ann Pharmacother. 2004;38(2):303-312. [PubMed: 14742770]
23.
Jacobs JM, Pensak NA, Sporn NJ, et al. Treatment satisfaction and adherence to oral chemotherapy in patients with cancer. J Oncol Pract. 2017: 13(5):e474-e485. [PubMed: 28398843]
24.
Noens L, van Lierde MA, De Bock R, et al. Prevalence, determinants, and outcomes of nonadherence to imatinib therapy in patients with chronic myeloid leukemia: the ADAGIO study. Blood. 2009;113(22):5401-5411. [PubMed: 19349618]
25.
Partridge AH, Wang PS, Winer EP, Avorn J. Nonadherence to adjuvant tamoxifen therapy in women with primary breast cancer. J Clin Oncol. 2003;21(4):602-606. [PubMed: 12586795]
26.
Lebovits AH, Strain JJ, Schleifer SJ, Tanaka JS, Bhardwaj S, Messe MR. Patient noncompliance with self-administered chemotherapy. Cancer. 1990;65(1):17-22. [PubMed: 2293862]
27.
Richardson JL, Marks G, Johnson CA, et al. Path model of multidimensional compliance with cancer therapy. Health Psychol. 1987;6(3):183-207. [PubMed: 3595545]
28.
Lash TL, Fox MP, Westrup JL, Fink AK, Silliman RA. Adherence to tamoxifen over the five-year course. Breast Cancer Res Treat. 2006;99(2):215-220. [PubMed: 16541307]
29.
Lee CR, Nicholson PW, Souhami RL, Deshmukh AA. Patient compliance with oral chemotherapy as assessed by a novel electronic technique. J Clin Oncol. 1992;10(6):1007-1013. [PubMed: 1588365]
30.
Levine AM, Richardson JL, Marks G, et al. Compliance with oral drug therapy in patients with hematologic malignancy. J Clin Oncol. 1987;5(9):1469-1476. [PubMed: 3625261]
31.
Winterhalder R, Hoesli P, Delmore G, et al. Self-reported compliance with capecitabine: findings from a prospective cohort analysis. Oncology. 2011;80(1-2):29-33. [PubMed: 21606661]
32.
Eliasson L, Clifford S, Barber N, Marin D. Exploring chronic myeloid leukemia patients' reasons for not adhering to the oral anticancer drug imatinib as prescribed. Leuk Res. 2011;35(5):626-630. [PubMed: 21095002]
33.
Goodwin JS, Zhang DD, Ostir GV. Effect of depression on diagnosis, treatment, and survival of older women with breast cancer. J Am Geriatr Soc. 2004;52(1):106-111. [PMC free article: PMC1853251] [PubMed: 14687323]
34.
Ayres A, Hoon PW, Franzoni JB, Matheny KB, Cotanch PH, Takayanagi S. Influence of mood and adjustment to cancer on compliance with chemotherapy among breast cancer patients. J Psychosom Res. 1994;38(5):393-402. [PubMed: 7965928]
35.
Greer JA, Pirl WF, Park ER, Lynch TJ, Temel JS. Behavioral and psychological predictors of chemotherapy adherence in patients with advanced non-small cell lung cancer. J Psychosom Res. 2008;65(6):549-552. [PMC free article: PMC4028043] [PubMed: 19027443]
36.
Pirl WF. Evidence report on the occurrence, assessment, and treatment of depression in cancer patients. J Natl Cancer Inst Monogr. 2004(32):32-39. [PubMed: 15263039]
37.
Walker J, Holm Hansen C, Martin P, et al. Prevalence of depression in adults with cancer: a systematic review. Ann Oncol. 2013;24(4):895-900. [PubMed: 23175625]
38.
Wood L. A review on adherence management in patients on oral cancer therapies. Eur J Oncol Nurs. 2012;16(4):432-438. [PubMed: 22051845]
39.
Quality Oncology Practice Initiative. Quality improvement. American Society of Clinical Oncology. Published 2016. Accessed May 25, 2017. http://www.instituteforquality.org/
40.
Schneider SM, Hess K, Gosselin T. Interventions to promote adherence with oral agents. Semin Oncol Nurs. 2011;27(2):133-141. [PMC free article: PMC3653175] [PubMed: 21514482]
41.
Simons S, Ringsdorf S, Braun M, et al. Enhancing adherence to capecitabine chemotherapy by means of multidisciplinary pharmaceutical care. Support Care Cancer. 2011;19(7):1009-1018. [PMC free article: PMC3109264] [PubMed: 20552377]
42.
Spoelstra SL, Given BA, Given CW, et al. An intervention to improve adherence and management of symptoms for patients prescribed oral chemotherapy agents: an exploratory study. Cancer Nurs. 2013;36(1):18-28. [PubMed: 23235499]
43.
Steinhubl SR, Muse ED, Topol EJ. Can mobile health technologies transform health care? JAMA. 2013;310(22):2395-2396. [PubMed: 24158428]
44.
Darlow S, Wen KY. Development testing of mobile health interventions for cancer patient self-management: a review. Health Informatics J. 2016;22(3):633-650. [PubMed: 25916831]
45.
Concannon TW, Meissner P, Grunbaum JA, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27(8):985-991. [PMC free article: PMC3403141] [PubMed: 22528615]
46.
Meurer LN. MCW population-based model for patient-centered care. Published 2008. Accessed May 9, 2017. http://www.mcw.edu/FileLibrary/User/facdev/PopulationHealthModelworksheetforTeamswebsite.pdf
47.
Whittaker R, Merry S, Dorey E, Maddison R. A development and evaluation process for mHealth interventions: examples from New Zealand. J Health Commun. 2012;17(suppl 1):11-21. [PubMed: 22548594]
48.
Oken MM, Creech RH, Tormey DC, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol. 1982;5(6):649-655. [PubMed: 7165009]
49.
Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. [PMC free article: PMC2700030] [PubMed: 18929686]
50.
Stanton AL, Petrie KJ, Partridge AH. Contributors to nonadherence and nonpersistence with endocrine therapy in breast cancer survivors recruited from an online research registry. Breast Cancer Res Treat. 2014;145(2):525-534. [PubMed: 24781972]
51.
Morisky DE, Green LW, Levine DM. Concurrent and predictive validity of a self-reported measure of medication adherence. Med Care. 1986;24(1):67-74. [PubMed: 3945130]
52.
Cleeland CS, Mendoza TR, Wang XS, et al. Assessing symptom distress in cancer patients: the M.D. Anderson Symptom Inventory. Cancer. 2000;89(7):1634-1646. [PubMed: 11013380]
53.
Cella DF, Tulsky DS, Gray G, et al. The Functional Assessment of Cancer Therapy scale: development and validation of the general measure. J Clin Oncol. 1993;11(3):570-579. [PubMed: 8445433]
54.
Peipert JD, Beaumont JL, Bode R, Cella D, Garcia SF, Hahn EA. Development and validation of the Functional Assessment of Chronic Illness Therapy Treatment Satisfaction (FACIT TS) measures. Qual Life Res. 2014;23(3):815-824. [PubMed: 24062239]
55.
Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983;67(6):361-370. [PubMed: 6880820]
56.
Zimet GD, Dahlem NW, Zimet SG, et al. The Multidimensional Scale of Perceived Social Support. J Pers Assess. 1988;52:30-41.
57.
Davis TC, Crouch MA, Long SW, et al. Rapid assessment of literacy levels of adult primary care patients. Fam Med. 1991;23(6):433-435. [PubMed: 1936717]
58.
Brooke J. SUS: a ‘quick and dirty’ usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland IL, eds. Usability Evaluation in Industry. Taylor & Francis; 1996:189-194.
59.
Bangor A, Kortrum P, Miller J. An empirical evaluation of the system usability scale. Int J Hum Comput Interact. 2008;24(6):574-594.
60.
Wang R, Ware JH. Detecting moderator effects using subgroup analyses. Prev Sci. 2013;14(2):111-120. [PMC free article: PMC3193873] [PubMed: 21562742]
61.
Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis. Guilford Press; 2013.
62.
Rubin DB. Multiple Imputation for Nonresponse in Surveys. Wiley; 1987.
63.
Krolop L, Ko YD, Schwindt PF, Schumacher C, Fimmers R, Jaehde U. Adherence management for patients with cancer taking capecitabine: a prospective two-arm cohort study. BMJ Open. 2013;3(7). [PMC free article: PMC3717446] [PubMed: 23872296]
64.
Mathes T, Antoine SL, Pieper D, Eikermann M. Adherence enhancing interventions for oral anticancer agents: a systematic review. Cancer Treat Rev. 2014;40(1):102-108. [PubMed: 23910455]
65.
Ziller V, Kyvernitakis I, Knöll D, Storch A, Hars O, Hadji P. Influence of a patient information program on adherence and persistence with an aromatase inhibitor in breast cancer treatment—the COMPAS study. BMC Cancer. 2013;13:407. [PMC free article: PMC3844591] [PubMed: 24006873]
66.
Gebbia V, Bellavia M, Banna GL, et al. Treatment monitoring program for implementation of adherence to second-line erlotinib for advanced non-small-cell lung cancer. Clin Lung Cancer. 2013;14(4):390-398. [PubMed: 23313173]
67.
Thakkar J, Kurup R, Laba TL, et al. Mobile telephone text messaging for medication adherence in chronic disease: a meta-analysis. JAMA Intern Med. 2016;176(3):340-349. [PubMed: 26831740]
68.
de Jongh T, Gurol-Urganci I, Vodopivec-Jamsek V, Car J, Atun R. Mobile phone messaging for facilitating self-management of long-term illnesses. Cochrane Database Syst Rev. 2012;(12):CD007459. doi:10.1002/14651858.CD007459.pub2 [PMC free article: PMC6486189] [PubMed: 23235644] [CrossRef]
69.
Concannon TW, Fuster M, Saunders T, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692-1701. [PMC free article: PMC4242886] [PubMed: 24893581]
70.
Pagoto SL, McDermott MM, Reed G, et al. Can attention control conditions have detrimental effects on behavioral medicine randomized trials? Psychosom Med. 2013;75(2):137-143. [PMC free article: PMC3570637] [PubMed: 23197844]

Publication List

• Fishbein JN, Nisotel LE, Macdonald JJ, et al. Mobile application to promote adherence to oral chemotherapy and symptom management: a protocol for design and development. JMIR Res Protoc. 2017;6(4):e62. doi:10.2196/resprot.6198 [PMC free article: PMC5418526] [PubMed: 28428158] [CrossRef]
• Greer JA, Amoyal N, Nisotel L, et al. A systematic review of adherence to oral antineoplastic therapies. Oncologist. 2016;21(3):354-376. [PMC free article: PMC4786357] [PubMed: 26921292]

Acknowledgment

Research reported in this report was partially funded through a Patient-Centered Outcomes Research Institute® (PCORI®) Award (#IHS-1306-03616). Further information is available at: https://www.pcori.org/research-results/2013/does-smartphone-app-help-patients-cancer-take-oral-chemotherapy-planned

Original Project Title: Mobile Application for Improving Symptoms and Adherence to Oral Chemotherapy in Patients with Cancer
PCORI ID: IHS-1306-03616
ClinicalTrials.gov ID: NCT02157519

Suggested citation:

Greer JA, Jacobs J, Ream M. (2019). Does a Smartphone App Help Patients with Cancer Take Oral Chemotherapy as Planned? Patient-Centered Outcomes Research Institute (PCORI). https://doi.org/10.25302/4.2019.IHS.130603616

Disclaimer

The views, statements, and opinions presented in this report are solely the responsibility of the author(s) and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors, or its Methodology Committee.

Copyright © 2019. Massachusetts General Hospital (The General Hospital Corp.). All Rights Reserved.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits noncommercial use and distribution provided the original author(s) and source are credited. (See https://creativecommons.org/licenses/by-nc-nd/4.0/.)

Bookshelf ID: NBK596248 | PMID: 37851845 | DOI: 10.25302/4.2019.IHS.130603616
