SCRIP: Scholarly Research In Progress 2018


Volume 2 • November 2018

Scholarly Research In Progress


Table of contents

2. A Comparison of Documented Indications for Primary Cesarean Section to National Guidelines: A Retrospective Review
Zaira S. Chaudhry, Catherine Yu, and Brian D. Wilcox

6. Transitioning from Femoral to Radial Access for Primary Percutaneous Coronary Intervention in the Acute Treatment of STEMI: A Community Medical Center Experience
Stephen Long, Nancy Krempasky, and Stephen Voyce

8. Thyroid Carcinoma in Northeast Pennsylvania – American College of Surgeons Commission on Cancer: Integrated Network Cancer Program Committee: 2016 Quality Metrics
Michael C. Morgan

12. Intraoperative Identification of a De Garengeot Hernia: A Rare Finding and Discussion of Operative Approach
Antonio Adiletta, Matthew T. Fisher, and Timothy J. Farrell

14. A Quality Improvement Initiative to Increase Treatment Rates for Severe Hypertension in the Labor & Delivery Unit at Moses Taylor Hospital
Antonio Adiletta, Andrea Borba, Zaira Chaudhry, and Ian Coote

17. Epiploic Appendagitis: A Case Report and Review of Literature
Nicole Marianelli and Timothy Farrell

19. “Nothing to Disclose”: Quantification of Conflicts of Interest in Medscape
Ambica C. Chopra, Stephanie D. Nichols, and Brian J. Piper

24. Association of ADHD Diagnosis Among Young Adolescents and Specialty of Managing Physician with Decision for Treatment
Adriana Feliz, Bo La Jung, and Mariamawit Yilma

31. Recurrent Cellulitis and Complex Regional Pain Syndrome Type 1
Eduardo Ortiz

34. Improving Lipid Management in Patients with Chronic Kidney Disease at Northeast Pennsylvania Nephrology Associates
Shane Warnock, Jordan Chu, Holly Corkill, and Sarah McDonald

37. Migraines: Current and Future Therapies
Irene Kotok

42. Not Just Another Nursemaid’s: An Enigmatic Pediatric Humeral Fracture
Merly Cepeda, Charles Bay, Alexis Rice, Cameron Rutledge, and Ariel Zhang

45. Neurobasis of Post-Traumatic Stress Disorder
Shane Warnock

52. The Neural Basis of a Bilingual Brain
Anjanie Khimraj

56. Pharmacoepidemiology and Public Policy Regarding Opioid Retail Distribution in Washington State from 2006–2016
Sarah Eidbo, Hunter Obeid, and Amitha Sundaram

66. The Gut-Brain Axis and Its Effects on Stress, Anxiety, Depression, and Memory
Marc Incitti, Ashanti Littlejohn, Kimberly Saint Jean, Rebhi Rabah, Brian Sacks, Julia Callavini, Elizabeth Kuchinski, and Mushfiq Tarafder

71. Inpatient Dermatology Consultations at Guthrie Robert Packer Hospital: A 10-year Retrospective Chart Review
Eduardo Ortiz and John Pamula

74. A Case Report of a Salivary Duct Carcinoma
Jena Patel, Elise Zhao, Kathleen Heeley, and Mark Frattali

77. Antibiotic Stewardship: Defining a True Penicillin Allergy
Hannah Snyder, John Orr, Alexandra Lucas, James Palma-D’Souza, and Amber Khan

80. Predicting the Progression of Anaplasmosis and Babesiosis into Pennsylvania Using Lyme Disease as a Model
Jessica J. Wang, Stephanie D. Nichols, Kenneth L. McCall, and Brian J. Piper

83. An Analysis of Stimulant Retail Drug Sales throughout Various Regions of the United States from 2006–2016: Is There an Association between Retail Stimulant Sales and ADHD Diagnoses Trends?
Christy Ogden and Brian J. Piper

90. Epigenetics: The Impact of Gene Expression on Neuropathic Pain, A Review
Matthew A. Jones Jr.

95. Past, Present, and Future of Lie Detection
Mystie Chen, Robert Gordon, Katherine Shoemaker, and Jessica Wang

101. The Association of Acculturation and Health Care Utilization
Brandon Cope and Michael Tracy

108. Marginalized Populations in Northeastern Pennsylvania: A Targeted Literature Review
Charles O.A. Bay and Ida Castro

119. 2019 Summer Research Immersion Program

120. Finding Your Way: Opportunities for Student Funding


A message from the editor-in-chief

As the Journal of Scholarly Research in Progress (SCRIP) enters its second year of publication, I would like to offer thanks to our readers, our contributors, our faculty reviewers, and our student editors for their support of the journal and its mission: to promote and disseminate student scholarly activity at Geisinger Commonwealth School of Medicine. As the editor-in-chief, I would like to share a few things with you.

First, we received more than 30 high-quality submissions from our students — three times the number received last year. These submissions included literature reviews, case reports, and original research manuscripts on topics ranging from improving lipid management in patients with chronic kidney disease to examining marginalized populations in northeast Pennsylvania. Those submitting authors who have had their work accepted should be proud of their achievement.

Second, I take the responsibility of sustaining and building upon the SCRIP’s quality and success very seriously. Suggestions from our contributors and readers to further develop and/or improve the journal are more than welcome; if you would like to share your thoughts, email me at slobo01@som.geisinger.edu.

Last, I would like to take this opportunity to invite students who have an interest in being involved in the editorial work of the journal. The role of the student editor is to assist in the writing, peer review, publication, and editing of content for the journal. It is a great way to build your academic scholarship portfolio, and I believe that this would help to ensure the journal’s growth and sustainability. Potential candidates may send their updated CV (including all relevant research and/or creative scholarship experience, as well as all relevant writing, editing, or peer critique experience) to scrip@som.geisinger.edu with the subject “Application for Student Editor.”

Sincerely,

Sonia Lobo, PhD
Editor-in-Chief

Student editors
Warren Acker, MD Class of 2020
Sana Chughtai, MD Class of 2021
Ilya Frid, MD Class of 2021
Jonathan Livezey, MD Class of 2021
Jena Patel, MD Class of 2019

Acknowledgments
The SCRIP would not be possible without the contributions of faculty volunteers committed to the review and assessment of submitted articles. Their feedback provides student authors with an opportunity to strengthen their writing and to respond to critiques. We gratefully acknowledge the following faculty members for their support in providing peer review:

John Arnott, PhD
Jennifer Boardman, PhD
Patrick Boyd, PhD
James Caggiano, MD, FAAP
Christian Carbe, PhD
Carmine Cerra, MD
Kathleen Doane, PhD
Anthony Gillott, MD, FACS
Elizabeth Kuchinski, MPH
Jun Ling, PhD
William McLaughlin, PhD
Brian Piper, PhD
Felix Cyamatare Rwabukwisi, MD, MPH
Margrit Shoemaker, MD
Olapeju Simoyan, MD
Janet Townsend, MD
Gabi Waite, PhD
Mark White, MD, MPH
William Zehring, PhD

Office of Research & Scholarship
Phone: 570-504-9662
Sonia Lobo, PhD, Associate Dean for Research & Scholarship, Associate Professor of Biochemistry
Michele Lemoncelli, Administrative Assistant to the Associate Dean for Research & Scholarship
Katie Pasqualichio, Grants Specialist
Laura E. Mayeski, MT(ASCP), MHA, Manager of Research Compliance
Thomas Majernick, MS, Manager of Research Education Resources

Design and production
Heather M. Davis
Brian Foelsch
Jessica L. Martin
Elizabeth Zygmunt

On the cover: The image shows the eye of a Drosophila strain that contains a reporter gene near the chromosome 3R telomere. Heterozygosity produces a red-colored eye. The eye shown is trans-heterozygous with the telomere reporter over an allele of PRKN, indicating that telomeric silencing increases in a parkin background. Image acquired by Greg Shanower, PhD.



Scholarly Research In Progress • Vol. 2, November 2018

A Comparison of Documented Indications for Primary Cesarean Section to National Guidelines: A Retrospective Review

Zaira S. Chaudhry1*, Catherine Yu, and Brian D. Wilcox

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: zchaudhry@som.geisinger.edu

Abstract

Objective: To evaluate the rate of and indications for primary cesarean sections performed at a single hospital in northeast Pennsylvania between March 2014 and March 2015.

Methods: This was a retrospective chart review of all patients who underwent a primary cesarean section at Moses Taylor Hospital (MTH) in Scranton, Pennsylvania, between March 2014 and March 2015. After obtaining Institutional Review Board approval, medical records were queried using the diagnostic/procedural codes corresponding to cesarean births. The resulting charts were screened, and data collection was limited to charts documenting a primary cesarean birth. Data collection included maternal demographics, gravidity/parity, estimated gestational age, estimated fetal weight, preoperative diagnosis and evaluation, maternal complications, and neonatal outcomes. Where applicable, guidelines issued by the American College of Obstetricians & Gynecologists (ACOG) were used to assess the adequacy of documented indications for cesarean delivery.

Results: There were 2065 patients who gave birth at MTH between March 2014 and March 2015 without a history of a previous cesarean delivery. Of these, 451 patients (21.8%) experienced primary cesarean births. The primary cesarean group had a mean age of 28.8 ± 5.5 years. The average gestational age at the time of cesarean section was 38.4 ± 2.6 weeks. The average birthweight was 3224.0 ± 740.6 grams. The three most common indications cited for cesarean section were nonreassuring fetal status (24.0%), labor arrest disorders (23.1%), and fetal malpresentation (20.0%). Maternal request accounted for 9.3% of primary cesareans. In 33.0% of charts, the documentation did not adequately support the preoperative diagnosis or did not meet criteria for the preoperative diagnosis according to published ACOG guidelines.

Conclusion: In our sample, the preoperative indications for 33.0% of primary cesarean births were inconsistent with the documentation in the patients’ medical records or did not meet criteria per ACOG guidelines. Our findings suggest that stricter implementation of and/or adherence to ACOG’s guidelines might lead to a decreased primary cesarean birth rate in our community setting.

Introduction

According to the Centers for Disease Control and Prevention’s National Vital Statistics Report on births, 31.9% of all births in the United States in 2016 were cesarean sections (1). This represents a considerable increase since 1996, when the rate was 20.7% (1). The increase is concerning because of the risks associated with cesarean births. Cesarean deliveries are associated with adverse long-term maternal complications, such as chronic pelvic pain and abdominal adhesions (2, 3). Cesarean births are also associated with an increased risk of complications in subsequent pregnancies, including, but not limited to, placenta previa, placental abruption, stillbirth, premature birth, low birth weight, and neonatal intensive care unit (NICU) admission (4, 5). Moreover, cesarean deliveries are associated with an increased risk of adverse short-term maternal outcomes (e.g., admission to the intensive care unit (ICU), blood transfusion, hysterectomy, mortality) and neonatal outcomes (e.g., respiratory morbidity, laceration) (6). In light of these findings, the American College of Obstetricians & Gynecologists (ACOG) developed a statement on the safe prevention of primary cesarean births in an effort to reduce the primary cesarean section rate (7). It has been recommended that cesarean sections only be performed when there is a clear benefit that might outweigh the additional risks associated with this mode of delivery (8). It is important to note that few indications for cesarean delivery are absolute (e.g., placenta previa, vasa previa, or cord prolapse); most indications are subjective, as they largely depend on the clinician’s interpretation of the patient’s case.

Having a primary cesarean section sets the course for subsequent cesarean deliveries in future pregnancies in most patients: over 90% of initial cesarean births lead to repeat cesarean sections in subsequent pregnancies (8). It is also estimated that approximately 60% of cesarean births are primary—that is, a woman’s first cesarean delivery regardless of parity (9). Therefore, to effectively reduce the cesarean delivery rate, efforts should be targeted at preventing primary cesarean deliveries. The present study sought to explore primary cesarean births at a northeast Pennsylvania hospital. We conducted a retrospective chart review to determine the rate of primary cesarean section and evaluated the documented indications for primary cesarean delivery in this setting.

Materials and Methods

This was a retrospective chart review of all patients who underwent a primary cesarean section at Moses Taylor Hospital in Scranton, Pennsylvania, between March 2014 and March 2015. After obtaining Institutional Review Board approval, patient charts were queried using the International Classification of Diseases (ICD) diagnostic codes and Current Procedural Terminology (CPT) codes corresponding to cesarean births during the specified dates. These charts were screened, and data collection was limited to charts documenting a primary cesarean birth. If the index cesarean section was not a patient’s first cesarean birth, the patient was excluded from this study. Only charts meeting the following inclusion criteria were included in the review: 1) the patient must have given birth at Moses Taylor Hospital between March 2014 and March 2015; 2) the mode of delivery must have been a cesarean section; and 3) the index cesarean section must have been the patient’s first.

The primary objective was to determine the rate of primary cesarean births and evaluate their indications. The secondary objective was to evaluate maternal and neonatal outcomes following primary cesarean section. Maternal outcomes were evaluated by determining the rate of perioperative maternal complications. Neonatal outcomes were evaluated by determining the proportion of neonates admitted to the NICU and the rate of respiratory morbidity. The following data points were collected from each eligible patient chart: maternal demographics (age, race, gravidity, parity), estimated gestational age, estimated fetal weight, preoperative diagnosis and evaluation (indication for cesarean section), maternal outcome (hemorrhage, transfusion, anemia, uterine rupture, deep vein thrombosis, pulmonary embolism, ICU admission), and neonatal outcome (birthweight, Apgar scores, NICU admission, respiratory morbidity).

To assess the accuracy and validity of the documented indications for cesarean section, the guidelines set forth by ACOG to safely prevent primary cesarean births were used to verify the preoperative diagnoses (7, 10). Cases where the documentation clearly matched the indications set forth by ACOG were designated as “confirmed.” For example, if maternal and fetal conditions allow, ACOG recommends at least 2 hours of pushing in multiparous women and at least 3 hours of pushing in nulliparous women before diagnosing arrest of labor in the second stage (arrest or failure of descent) (7). Cases in which the documented indication for cesarean delivery was arrest or failure of descent were designated as “confirmed” if the applicable criterion was met, whereas cases in which patients were not allowed to push as long as ACOG recommends were designated as “not confirmed.” Likewise, ACOG defines suspected fetal macrosomia warranting cesarean delivery as an estimated fetal weight of at least 5000 grams in women without diabetes and at least 4500 grams in women with diabetes (7). Cases in which the documented indication was suspected fetal macrosomia were designated as “confirmed” if the applicable threshold for estimated fetal weight was met, whereas cases not meeting these thresholds were designated as “not confirmed.” Equivocal cases were reviewed in detail by the senior author, who made the final decision on whether they met ACOG indications. Descriptive statistics were calculated using Microsoft Excel (Microsoft Corporation, Redmond, Washington).
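The confirmation logic above reduces to a pair of numeric thresholds. As a minimal illustrative sketch, assuming a hypothetical dictionary representation of a chart abstraction (the field names are ours, not the study database’s), the two worked examples could be encoded as:

```python
# Illustrative sketch only: applies the two ACOG criteria quoted above to a
# hypothetical chart record; this is not the authors' actual review workflow.

def confirm_indication(case: dict) -> bool:
    """Return True if the documented indication meets the ACOG criterion
    quoted in the text; raise for indications not modeled here."""
    indication = case["indication"]
    if indication == "second_stage_arrest":
        # ACOG: >= 2 h of pushing (multiparous) or >= 3 h (nulliparous)
        # before diagnosing arrest of labor in the second stage.
        required_hours = 2 if case["parity"] > 0 else 3
        return case["hours_pushing"] >= required_hours
    if indication == "suspected_macrosomia":
        # ACOG: estimated fetal weight >= 5000 g without diabetes,
        # >= 4500 g with diabetes.
        threshold_g = 4500 if case["has_diabetes"] else 5000
        return case["efw_grams"] >= threshold_g
    raise ValueError("equivocal or unmodeled indication; review manually")

# A nulliparous patient who pushed 2.5 hours before a diagnosis of
# second-stage arrest would be designated "not confirmed":
print(confirm_indication({"indication": "second_stage_arrest",
                          "parity": 0,
                          "hours_pushing": 2.5}))  # False
```

In the study itself, equivocal cases were resolved by the senior author rather than by an automated rule; the sketch only illustrates the two criteria spelled out above.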

Results

Out of 2065 patients without a prior history of cesarean section who gave birth at MTH between March 2014 and March 2015, there were 451 (21.8%) primary cesarean births. The primary cesarean group had a mean age of 28.8 ± 5.5 years. Mean gravidity and parity were 1.9 ± 1.5 and 0.4 ± 1.0, respectively. The demographics of our sample are presented in Table 1. A trial of labor was attempted in 58.3% of cases. The average gestational age at the time of cesarean section was 38.4 ± 2.6 weeks. The average birthweight was 3224 ± 741 grams.

Table 1. Demographics for patients who underwent primary cesarean section at MTH between March 2014 and March 2015

The three most common indications for cesarean section were nonreassuring fetal status (24.0%), labor arrest disorders (23.1%; failure to progress, arrest or failure of descent, arrest or failure of dilatation, prolonged active or latent phase, unfavorable cervix, failed induction), and fetal malpresentation (20.0%). Cases of fetal malpresentation included breech presentation (n = 84), transverse/complex presentation (n = 3), compound presentation (n = 2), and face/brow presentation (n = 1). Maternal requests accounted for 9.3% of primary cesareans in our sample. A complete list of the indications cited for cesarean section is included in Table 2.

In our review, 33.0% of charts did not provide adequate documentation in support of the preoperative diagnosis or meet criteria for the preoperative diagnosis according to published ACOG guidelines. The indications with the lowest percentage of confirmed diagnoses were suspected macrosomia (22.7%), failed induction (25.0%), and failure to progress (28.1%). The most common indication, nonreassuring fetal status, was confirmed in 52.8% of cases. At least one maternal complication occurred in 19.7% of mothers, the most common being postoperative anemia (19.1%). Maternal complication rates are presented in Table 3. Adverse neonatal outcomes, presented in Table 4, included 1-minute Apgar score < 6 (10.0%), 5-minute Apgar score < 6 (2.3%), NICU admission (13.1%), and respiratory morbidity (7.1%).



Discussion

The cesarean delivery rate in the United States reached its highest level, 32.9%, in 2009 (1). Despite modest improvements, the 2016 rate of 31.9% is still considerably high (1). Moreover, the national rate of primary cesareans was 21.8% in 2016 (1). Given the increased morbidity associated with cesarean delivery and the fact that only 12.4% of women choose a vaginal birth after a previous cesarean, it is prudent to investigate the necessity of primary cesarean sections in order to reduce the rate if safely possible (1). In our sample, the primary cesarean rate was 21.8%; this rate is comparable to the previously reported national rate.

Table 2. Documented indications for primary cesarean delivery and the percentage of diagnoses that were confirmed via review of chart documentation organized by increasing percentage of confirmed diagnoses

Our data showed that 33.0% of charts did not provide accurate documentation or meet ACOG’s published guidelines for indications for primary cesarean section. The preoperative indication with the highest absolute number of discrepancies was nonreassuring fetal status. Interpretation of fetal heart rate tracings is well known to vary widely among observers. The variability is especially pronounced in the classification of decelerations (early, late, and variable), which is essential for categorizing tracings as reassuring versus nonreassuring (11). Future studies could investigate whether discrepancies in interpretation occurred and perhaps implement institution-wide mandatory training for physicians and nurses, which has been shown to improve consensus in the interpretation of tracings (12). The indication with the highest percentage of discrepancy between preoperative and confirmed diagnosis was large for gestational age/suspected macrosomia. This finding is not unexpected, given that current methods for diagnosing suspected macrosomia, including fundal height and ultrasonography, are not highly accurate (13). Furthermore, upon review of fetal ultrasound documentation, the estimated fetal weight did not meet ACOG’s criteria for suspected fetal macrosomia in the majority of cases where this was documented as the indication for cesarean delivery.

Table 3. Maternal complication rates after primary cesarean delivery

Table 4. Neonatal complication rates after primary cesarean delivery

The results of the present study parallel previously published findings on the indications for primary cesarean delivery. In their multicenter retrospective cohort study, Boyle et al. reported that the top three indications for primary cesarean delivery were failure to progress (35.4%), nonreassuring fetal heart rate tracing (27.3%), and fetal malpresentation (18.5%) (14). Likewise, Barber et al. reported that the increase in primary cesarean rates observed at their academic institution between 2003 and 2009 was attributable to the following documented indications: nonreassuring fetal status (32%), labor arrest disorders (18%), multiple gestation (16%), suspected macrosomia (10%), preeclampsia (10%), maternal request (8%), maternal-fetal conditions (5%), and other obstetric conditions (1%) (15). Moreover, a study by Metz et al. reported a threefold variation in primary cesarean delivery rates between laborists within the same institution without observed differences in patient characteristics or short-term neonatal outcomes, a finding that highlights how individual physician decision-making can influence primary cesarean rates (16).

The findings of the present study must be viewed within the context of its limitations. First, this study was subject to the inherent limitations of a retrospective design. We were, therefore, unable to assess the accuracy of the data or explore physician decision-making. Second, cases where the indication for cesarean section could not be confirmed may reflect inadequate documentation rather than a lack of justification for the procedure. Third, given that the present study only analyzed data from a single institution in a community hospital setting, the generalizability of these findings may be limited. Finally, this study did not evaluate long-term maternal or neonatal complications of primary cesarean section. Despite these limitations, the present study adds to the current literature on primary cesarean births, as it highlights the need to improve physician compliance with ACOG’s guidelines for the safe prevention of primary cesarean delivery. Future studies should explore variations in physician decision-making as well as interventions to safely reduce primary cesarean births.

Conclusion

In our sample, the indications for approximately one-third of primary cesarean births could not be verified based on the information in the patients’ medical records. It is unclear how much of this is attributable to inadequate documentation versus a true lack of justification for the procedure. Our findings suggest that stricter implementation of and/or adherence to ACOG’s guidelines might lead to a decreased primary cesarean birth rate in our community setting. Further research is necessary to fully elucidate the overall rate of compliance with ACOG’s guidelines for the safe prevention of primary cesarean births.

References

1. Martin JA, Hamilton BE, Osterman MJK, Driscoll AK, Drake P. Births: final data for 2016. National Vital Statistics Reports. 2018;67(1).

2. Almeida ECS, Nogueira AA, Candido dos Reis FJ, Rosa e Silva JC. Cesarean section as a cause of chronic pelvic pain. Int J Gynecol Obstet. 2002;79(2):101-104.

3. Lyell DJ, Caughey AB, Hu E, Daniels K. Peritoneal closure at primary cesarean delivery and adhesions. Obstet Gynecol. 2005;106(2):275-280.

4. Kennare R, Tucker G, Heard A, Chan A. Risks of adverse outcomes in the next birth after a first cesarean delivery. Obstet Gynecol. 2007;109(2):270-276.

5. Taylor LK, Simpson JM, Roberts CL, Olive EC, Henderson-Smart DJ. Risk of complications in a second pregnancy following caesarean section in the first pregnancy: a population-based study. Med J Aust. 2005;183(10):515-519.

6. Souza JP, et al. Caesarean section without medical indications is associated with an increased risk of adverse short-term maternal outcomes: the 2004–2008 WHO Global Survey on Maternal and Perinatal Health. BMC Med. 2010;8(71):1-10.

7. American College of Obstetricians and Gynecologists. Safe prevention of the primary cesarean delivery. Obstetric Care Consensus No. 1. Obstet Gynecol. 2014;123:693-711.

8. Spong CY, et al. Preventing the first cesarean delivery: summary of a joint Eunice Kennedy Shriver National Institute of Child Health and Human Development, Society for Maternal-Fetal Medicine, and American College of Obstetricians and Gynecologists workshop. Obstet Gynecol. 2012;120(5):1181-1193.

9. Osterman MJK, Martin JA. Primary cesarean delivery rates, by state: results from the revised birth certificate, 2006–2012. National Vital Statistics Reports. 2014;63(1). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr63/nvsr63_01.pdf.

10. American College of Obstetricians and Gynecologists. Practice Bulletin No. 116: management of intrapartum fetal heart rate tracings. Obstet Gynecol. 2010 Nov;116(5):1232-40.

11. Santo S, Ayres-de-Campos D. Human factors affecting the interpretation of fetal heart rate tracings: an update. Curr Opin Obstet Gynecol. 2012;24(2):84-88.

12. Govindappagari S, Zaghi S, Zannat F, Reimers L, Goffman D, Kassel I, et al. Improving interprofessional consistency in electronic fetal heart rate interpretation. Am J Perinatol. 2016;33(8):808-813.

13. Haragan AF, Hulsey TC, Hawk AF, Newman RB, Chang EY. Diagnostic accuracy of fundal height and handheld ultrasound-measured abdominal circumference to screen for fetal growth abnormalities. Am J Obstet Gynecol. 2015;212(6):820.e1-820.e8.

14. Boyle A, Reddy UM, Landy HJ, Huang CC, Driggers RW, Laughon SK. Primary cesarean delivery in the United States. Obstet Gynecol. 2013 Jul;122(1):33-40.

15. Barber EL, Lundsberg LS, Belanger K, Pettker CM, Funai EF, Illuzzi JL. Indications contributing to the increasing cesarean delivery rate. Obstet Gynecol. 2011 Jul;118(1):29-38.

16. Metz TD, Allshouse AA, Gilbert SAB, Doyle R, Tong A, Carey JC. Variation in primary cesarean delivery rates by individual physician within a single-hospital laborist model. Am J Obstet Gynecol. 2016 Apr;214(4):531.e1-531.e6.


Scholarly Research In Progress • Vol. 2, November 2018

Transitioning from Femoral to Radial Access for Primary Percutaneous Coronary Intervention in the Acute Treatment of STEMI: A Community Medical Center Experience

Stephen Long1*, Nancy Krempasky2, and Stephen Voyce2

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2Geisinger Community Medical Center, Scranton, PA 18510
*Correspondence: slong01@som.geisinger.edu

Abstract

One mainstay of acute treatment for ST-segment elevation myocardial infarction (STEMI) is primary percutaneous coronary intervention (PPCI), delivered via either femoral or radial artery access. Ferrante et al. have shown that radially accessed coronary intervention is associated with a reduction in mortality and major adverse cardiac events (MACE) when compared to femoral access. A systemwide transition to radial access, however, is complicated in practice by a physician learning curve, which could potentially impact efficiency and prolong the time between arrival at the cardiac catheterization lab and activation of an interventional device (CCL2D time). This retrospective observational study compares the percentage of PCIs administered via radial access to the average ED2D (emergency department arrival to device activation) and CCL2D times in 6-month intervals from 2013–2017 at Geisinger Community Medical Center (GCMC). These parameters are also examined for each of the individual operators at GCMC. This study shows no statistically significant change in either ED2D or CCL2D time as the transition to radial artery access was initiated at GCMC. The results indicate the feasibility of a transition to radial artery access for the acute treatment of STEMI in a community medical center setting. Further analysis of mortality and MACE rates in this patient population will likely reaffirm the clinical benefit of radial access seen in the literature.

Introduction

Patients presenting with ST-segment elevation myocardial infarction (STEMI) can be treated acutely with primary percutaneous coronary intervention (PPCI) delivered via either the femoral artery (FA) or the radial artery (RA). Several comparative studies have shown that RA access is associated with less vascular access site bleeding and greater patient comfort (1), as well as a lower occurrence of major adverse cardiac events (MACE) and decreased overall mortality in STEMI patients (2). These studies have typically been performed by experienced RA operators. A systemwide transition to RA access in STEMI-PPCI is limited by the learning curve for the physician and catheterization lab staff to become proficient with the procedure in an emergent setting. This retrospective observational study examines the time course of an FA-to-RA access transition at Geisinger Community Medical Center from 2013 to 2017. Furthermore, we examine the effect of such a transition on two time-sensitive STEMI-PPCI performance parameters. The results of this study are intended to determine the time course of adoption of RA access as a primary strategy in STEMI-PPCI and, by examination of two performance metrics, whether the transition to RA access at a community medical center affects the time needed to deliver PPCI.

Materials and Methods

Records for a total of 339 STEMI cases at GCMC over a 4-year timespan from 2013 to 2017 were divided into 6-month intervals. For each interval, the percentage of PPCI cases accessed radially was determined, and the average ED2D (time between patient arrival at the Emergency Department and activation of a PCI device) and CCL2D (time between patient arrival at the cardiac catheterization lab and activation of a PCI device) times were calculated. Additionally, the STEMI cases were stratified by which of the four primary operators at GCMC performed the procedure. The percentage of procedures performed via RA access, the average ED2D times, and the average CCL2D times were calculated for each operator and interval. Linear regression was used to determine the statistical significance of each data set.
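Both performance metrics were aggregated by 6-month interval and tested with linear regression. A minimal sketch of that aggregation, assuming a hypothetical list of case records (the field names are ours) and using SciPy’s standard linregress, might look like the following; it illustrates the described analysis and is not the authors’ code.

```python
# Sketch of the 6-month-interval aggregation and trend test described above.
from datetime import date
from scipy.stats import linregress

def half_year(d: date) -> int:
    """Index 6-month intervals from the start of 2013 (0 = Jan-Jun 2013)."""
    return (d.year - 2013) * 2 + (0 if d.month <= 6 else 1)

def interval_stats(cases: list) -> dict:
    """Per interval: (percent radial access, mean CCL2D time in minutes)."""
    buckets = {}
    for c in cases:
        buckets.setdefault(half_year(c["date"]), []).append(c)
    out = {}
    for k in sorted(buckets):
        grp = buckets[k]
        pct_radial = 100.0 * sum(c["access"] == "radial" for c in grp) / len(grp)
        mean_ccl2d = sum(c["ccl2d_min"] for c in grp) / len(grp)
        out[k] = (pct_radial, mean_ccl2d)
    return out

def trend(stats: dict, metric_index: int):
    """Regress one metric against interval number (slope, intercept, r, p, stderr)."""
    x = list(stats)
    y = [v[metric_index] for v in stats.values()]
    return linregress(x, y)
```

A near-zero r (such as the study’s r = 0.0774 for CCL2D) indicates no meaningful linear trend, while the radial access percentage’s r = 0.9266 reflects the steady transition.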

Results

The use of RA access for STEMI-PPCI increased linearly over the years studied. As shown in Figure 1A, there was a steady, linear increase in the percentage of cases performed via RA access, from 12.00% in 2013 to 61.54% in 2017 (r = 0.9266). During this time, there was no statistically significant change in ED2D time, as demonstrated in Figure 1B. To better define whether the increased use of RA access affected the actual PPCI procedure time, we analyzed the temporal data for CCL2D. Figure 1B shows that there was no statistically significant change in CCL2D time while the transition from FA to RA access progressed (mean time of 26.44 min in 2013 to 25.00 min in 2017) (r = 0.0774). The performance parameters for the four main operators in the GCMC Cardiac Catheterization Lab were then examined to determine whether each of the physicians contributed to the observed adoption of radial access. As shown in Figure 2A, one physician (Physician C) was an early adopter of RA-PPCI, while each of the remaining three operators in the lab steadily increased their percentage of RA access cases. Figure 2B reveals that no operator experienced a statistically significant change in CCL2D time.



Figure 1: Performance parameters of the GCMC Cardiac Catheterization Lab from 2013 to 2017, including Percent Radial Access (A), ED Door to Device Activation Time (B; red), and CCL Arrival to Device Activation Time (B; blue). N values are indicated in bold over the data points.

Figure 2: Performance parameters of the individual operators in the GCMC Cardiac Catheterization Lab from 2013 to 2017, including Percent Radial Access (A) and CCL Arrival to Device Activation Time (B).

Discussion

Multiple studies suggest that RA access is associated with fewer MACE than FA access in PPCI. The results of this study demonstrate the clinical viability of transitioning from femoral to radial access for STEMI-PPCI in a community hospital setting. The percentage of cases performed radially increased linearly over the time studied and currently exceeds 60%. Importantly, there appears to be no significant effect on the efficiency of the PPCI procedure, as measured by CCL2D time. This appears to be consistent for the lab as a whole and for each individual physician. In short, the results of this study support the concept that community hospital cardiac catheterization labs can successfully transition to an RA access strategy for STEMI-PPCI without compromising ED2D and CCL2D times, two critical quality measures.

This study is limited by its retrospective, observational nature and its focus on catheterization lab performance in particular. It does not account for variations in patient presentation that may have influenced the choice of vascular access site. Additionally, the number of elective, non-emergent diagnostic/PCI cases performed via radial artery access by the lab and individual physicians prior to embarking on radial artery access STEMI-PPCI likely influenced the rate of the transition. Lastly, this study only investigated the performance of the GCMC cardiac catheterization lab in emergent STEMI-PPCI cases and did not ascertain the effect the transition to radial artery access may have had on mortality and adverse event rates in this group of patients.

References

1. Hildick-Smith DJ, Lowe MD, Walsh JT, et al. Coronary angiography from the radial artery – experience, complications and limitations. Int J Cardiol. 1998 May 15;64(3):231-9.

2. Ferrante G, Rao SV, Jüni P, da Costa BR, et al. Radial versus femoral access for coronary interventions across the entire spectrum of patients with coronary artery disease. JACC Cardiovasc Interv. 2016 Jul 25;9(14):1419-34.



Scholarly Research In Progress • Vol. 2, November 2018

Thyroid Carcinoma in Northeast Pennsylvania – American College of Surgeons Commission on Cancer: Integrated Network Cancer Program Committee: 2016 Quality Metrics

Michael C. Morgan1*

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: mmorgan@som.geisinger.edu

Abstract

To ensure adherence to current treatment algorithms for thyroid carcinoma, Standard 4.6 of the American College of Surgeons Commission on Cancer was evaluated for all thyroid nodule cases graded Bethesda 3 through 5 within the northeast Pennsylvania region for 2016. A retrospective review of this patient cohort (n = 16) was conducted to monitor compliance with the current standard. The analysis was performed not only to determine whether the diagnostic evaluation was adequate and the treatment plan concordant with evidence-based, nationally recognized guidelines, but also to maximize patient outcomes while minimizing the removal of benign organs. Molecular genetic testing and follow-up pathology were employed in all cases within the study, and it was determined that all patients with known carcinogenic mutations did indeed have surgery, while those patients without mutations were and continue to be closely observed with ultrasound follow-up per the current treatment guidelines.

Introduction

A thyroid nodule is an identifiable swelling within an otherwise normal thyroid gland. In the United States, the incidence of thyroid nodules among the adult population varies between 1% and 10%. Thyroid nodules are approximately 4 times more common in women. Incidence increases throughout life and is greatest in endemic goitrous regions. Clinically, the major significance of a thyroid nodule is the possibility of a malignant neoplasm. Fortunately, the ratio of benign neoplasms to thyroid carcinoma is nearly 10:1, and although less than 1% of solitary thyroid nodules are malignant, this still represents approximately 15,000 new cases of thyroid carcinoma per year. Most of these cases are indolent, with more than 90% of affected patients living 20 years after their initial diagnosis (1).

Thyroid carcinomas are relatively uncommon in the United States, accounting for approximately 1.5% of all cancer diagnoses. The relative frequencies of the major subtypes of thyroid carcinoma are as follows: papillary carcinoma (> 85% of cases), follicular carcinoma (5% to 15% of cases), medullary carcinoma (5% of cases), and anaplastic carcinoma (< 5% of cases). Various genetic alterations in growth factor receptor signaling pathways account for malignancy in all thyroid carcinomas except medullary carcinoma, which is not derived from the thyroid follicular epithelium (1).

Evaluation of several clinical criteria can give insight into the nature of a given thyroid nodule. Features associated with an increased risk of malignancy include solitary nodules greater than 4 cm; calcifications; hypervascularity or ill-defined borders on ultrasound; nodules in younger patients and in males; a history of radiation to the head and neck, especially during the first two decades of life; and, less commonly, nodules that do not readily take up radioactive iodine in imaging studies, i.e., cold nodules (1).

The gold standard for evaluation of a thyroid nodule is fine needle aspiration (FNA) cytology, a widely used tool that carries a high degree of sensitivity, specificity, and diagnostic accuracy (2). Surgical resection may be necessary for more definitive morphological information regarding a given nodule’s nature and neoplastic potential. However, it is crucial that the indications for surgery are aligned properly so that benign organs are not removed from patients without malignant disease.

In patients with thyroid carcinoma, prognostication can also be determined using cancer management guidelines from the American Thyroid Association (ATA). Risk stratification is important in determining the appropriate extent of surgery (total thyroidectomy vs. lobectomy) and radioactive iodine (RAI) administration, and in managing the intensity of follow-up (3). As noted above, most thyroid cancers are indolent; therefore, most patients are at low risk for disease recurrence after cancer removal. These patients can be treated by lobectomy and are unlikely to benefit from RAI ablation (3). On the other hand, patients with high-risk cancers benefit more from total thyroidectomy, which facilitates postoperative RAI administration followed by more aggressive disease monitoring (4). Management of well-differentiated carcinomas also includes suppression of endogenous TSH levels. Depending on the risk of the malignancy, aggressive thyroid hormone replacement therapy may be necessary to keep TSH levels as low as possible without inducing symptoms of hyperthyroidism. Serum thyroglobulin levels are used to monitor tumor recurrence.

The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC) is a widely used tool in thyroid nodule management. Broadly, the Bethesda System is a guideline that assigns a cancer risk to a given category. Samples that fall under TBSRTC diagnostic categories 3, 4, and 5 undergo further molecular testing to identify known genetic mutations and to optimize the course of clinical management (2). Category 3 (atypia of undetermined significance or follicular lesion of undetermined significance) comprises thyroid FNAs that do not fit into the benign, malignant, or suspicious category; however, the atypia present is greater than that of benign change. According to TBSRTC, the cancer risk for this category lies between 5% and 15%. The recommended management is repeat FNA after a sufficient time gap. Category 4 (follicular neoplasm or suspicious for a follicular neoplasm) is aimed at identifying a nodule that may be a follicular neoplasm. Carcinomas have cytomorphologic features that distinguish them from benign follicular nodules but are not distinguishable from a follicular adenoma. The cancer risk for this category is approximately 15% to 30%, with the TBSRTC recommendation being lobectomy. Category 5 (suspicious for malignancy) includes cases where only a few characteristic features of thyroid carcinoma are present. The nuclear changes present can be subtle, and this is especially true of the follicular variant of papillary thyroid carcinoma; such cases can therefore be difficult to distinguish from a benign follicular nodule. Sixty to 75% of these cases prove to be papillary carcinomas, and the remainder are usually follicular adenomas. TBSRTC recommends lobectomy or total thyroidectomy (2). The risk of malignancy and typical management for each diagnostic category of a Bethesda-graded lesion are summarized in Table 1.
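For readers who want the category definitions above in one place, the following sketch encodes the risk ranges and typical TBSRTC management quoted in this section as a simple lookup table. The labels and structure are our own simplification for illustration, not a clinical decision tool.

```python
# Bethesda categories 3-5 as described in the text (TBSRTC); the range for
# category 5 is the quoted share of cases proving to be papillary carcinoma.
BETHESDA = {
    3: {"name": "atypia/follicular lesion of undetermined significance",
        "cancer_risk": (0.05, 0.15),
        "management": "repeat FNA after a sufficient time gap"},
    4: {"name": "follicular neoplasm/suspicious for a follicular neoplasm",
        "cancer_risk": (0.15, 0.30),
        "management": "lobectomy"},
    5: {"name": "suspicious for malignancy",
        "cancer_risk": (0.60, 0.75),
        "management": "lobectomy or total thyroidectomy"},
}

def indeterminate(grade: int) -> bool:
    """Categories 3-5 are the indeterminate grades sent for molecular testing."""
    return grade in BETHESDA
```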

New genomics can be used to analyze data derived from FNA by searching for known genetic components with implicated roles in thyroid cancers, and this newly mined data is changing treatment algorithms. For instance, in a given nodule that is positive for a known mutation, the rate of malignancy is between 80% and 90%; in nodules negative for known mutations, the rate of malignancy drops to less than 5% (4). ThyroSeq® is one of several new diagnostic tools being used to refine cancer probability in thyroid nodules with indeterminate FNA cytology (Bethesda 3, 4, and 5) (6). This technology analyzes a sample for 112 genes and other genetic components with implicated roles in thyroid cancer. Based on the genetic analysis, thyroid nodules are stratified into those that are most likely benign versus those that have a high probability of being malignant. Correct identification of a thyroid nodule with indeterminate FNA cytology allows for appropriate management while minimizing morbidity.

According to National Comprehensive Cancer Network (NCCN) guidelines, if molecular testing is negative, in conjunction with benign characteristics on ultrasound and cytopathology indicating a risk of malignancy of less than 5%, then observation is recommended (7). Based on ultrasound findings, imaging can be repeated anywhere from 12 to 24 months later. FNA may be repeated for any new or suspicious findings, or evidence of growth. In nodules of the Bethesda 5 category with negative mutation testing, surgery may be downgraded from a total to a partial thyroidectomy (7). For nodules that test positive for known mutations, the type and level of mutations further refine the probability of cancer and estimate cancer aggressiveness and risk. An isolated RAS or RAS-like mutation predicts a high probability (approximately 80%) of low-risk cancer, such as well-differentiated microinvasive thyroid carcinomas and NIFTP (noninvasive follicular neoplasm with papillary-like features) (6). Many of these nodules may be managed by therapeutic lobectomy, which is currently recommended by the ATA guidelines for low-risk papillary and follicular carcinomas and NIFTP (3). A positive test for an isolated BRAF V600E or BRAF V600E-like mutation confers a very high (> 95%) probability of cancer of intermediate or high risk, depending on the size of the nodule. Molecular testing positive for multiple high-risk mutations is virtually diagnostic for cancer and predicts an elevated risk of disease recurrence (6). Most of these patients would likely benefit from total thyroidectomy and, in some cases, regional lymph node dissection (3).

This study evaluated adherence to standard protocol and current guidelines for patients with suspected thyroid carcinoma, using recommendations from the ATA, the NCCN, and Standard 4.6 of the American College of Surgeons Commission on Cancer, in northeast Pennsylvania for the year 2016. To uphold Standard 4.6, which monitors the compliance of cancer programs with evidence-based guidelines, an in-depth analysis was conducted and the results were presented to the cancer committee to verify that program patients were evaluated and treated in accordance with evidence-based national treatment guidelines.
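The guideline-driven triage just described follows a simple precedence: BRAF V600E-like or multiple high-risk mutations outrank isolated RAS-like mutations, which outrank a negative panel. Below is a hedged sketch using an assumed representation of the mutation panel output (not ThyroSeq’s actual report format), omitting the sonographic and size criteria the guidelines also require.

```python
# Sketch of the mutation-based triage quoted above; hypothetical input format.
def triage(mutations: set) -> str:
    if not mutations:
        # Negative molecular testing: malignancy risk < 5% -> observation,
        # with imaging repeated at roughly 12-24 months per NCCN.
        return "observe; repeat imaging in 12-24 months"
    if "BRAF_V600E" in mutations:
        # > 95% cancer probability; multiple high-risk mutations are
        # described in the text as virtually diagnostic for cancer.
        return "total thyroidectomy; consider lymph node dissection"
    if any(m.startswith("RAS") for m in mutations):
        # ~80% probability of low-risk cancer (e.g., NIFTP) -> lobectomy.
        return "therapeutic lobectomy"
    return "individualized review"
```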

Materials and Methods

A retrospective cohort of patients from 2016 with a diagnosis of thyroid nodule graded Bethesda 3 through 5 was followed (n = 16). In each case, the thyroid nodule was biopsied using FNA, graded by pathology using the Bethesda classification system, and then sent for molecular genetic analysis using ThyroSeq. Further analysis involved reviewing the molecular testing results of each case, particularly focusing on those cases that were positive for mutations associated with increased cancer risk. In cases where molecular testing was negative, observation with ultrasound follow-up is currently recommended and was utilized. In cases that were positive, lobectomy or total thyroidectomy was performed, per the current standard. Final pathology on the surgical specimen was then interpreted as either positive or negative for carcinoma. The flow of this study and the current standard of care are shown in Figure 1. This framework of analysis was used for all 16 cases.

Table 1. Bethesda system simplified (8)



Figure 1. Study design framework

Results

Cancer risk associated with the molecular testing results for each case was analyzed and is described in Table 2 and Figure 2. Each case was analyzed separately and monitored closely for compliance with current treatment standards. All surgical cases were confirmed with final pathology; for patients who were positive for mutations associated with increased cancer risk, follow-up treatment is detailed below.

Two cases aside, all patients graded as Bethesda 3 (n = 9) were found to have no identifiable gene mutations or fusions associated with malignancy. The treatment schedule for these cases was observation and ultrasound follow-up within 6 to 12 months. The treatment for the two positive cases was thyroidectomy with extensive ultrasound follow-up. Of the cases that tested negative for high cancer risk mutations, one patient was noted to have a low-risk TSHR mutation on genetic analysis.

For patients graded as Bethesda 4 (n = 4), all but one case were found to have identifiable gene mutations and/or fusions associated with malignancy. The treatment for the positive cases was thyroidectomy, with follow-up I-131 radiation therapy in one case. For all positive cases graded Bethesda 4, malignancy was confirmed on final pathology. Interestingly, for the single negative case, strong expression of the PTH gene was detected, suggestive of parathyroid tissue. Despite this unusual finding, the treatment plan was not thyroidectomy but rather continued follow-up and observation.

All cases graded Bethesda 5 (n = 3) were found to have identifiable gene mutations and/or fusions associated with malignancy. All of these patients were treated with thyroidectomy and extensive follow-up, with confirmation of malignancy on final pathology. Interestingly, all thyroid carcinomas found in this subgroup of patients were of the papillary type, with a quarter of the cases being of the follicular variant. It was determined that all patients with known mutations found on genetic analysis (n = 8) should and did indeed have surgery, while patients without mutations should be closely observed.

Table 2. Bethesda 3 – 5 Patient Cohort for 2016

Figure 2. Bethesda 3 – 5 Patient Cohort for 2016



Discussion

Reviewing the ATA recommendations for each Bethesda category was important in ensuring that the first course of therapy was concordant with evidence-based national treatment guidelines. It is currently recommended that total thyroidectomy may be preferred in patients with indeterminate nodules that are positive for known mutations specific for carcinoma (3). Because of the increased risk of malignancy seen in Bethesda 3 through 5, Bethesda grading was deemed the appropriate main selection criterion for this study, and as such, only cases graded 3 through 5 were included. Correct identification of a thyroid nodule’s potential for malignancy allows for appropriate management and follow-up while minimizing morbidity. This distinction is also important because it provides greater specificity to the study, especially in terms of narrowing the surgical versus observation groups when refining treatment standards for the future.

Through further analysis, it was determined that false negatives could not be ruled out based on biopsy location. In three cases, malignancy was incidentally present on final pathology on the side opposite the one biopsied. For example, despite positive genetic results for a suspicious right-sided nodule, final pathology incidentally came back positive on the left side after surgery. Therefore, false negatives cannot be ruled out, as there was a high probability of diffuse disease in this small study (3 out of 8), since it was discovered only after surgery that these patients had multifocal disease. Because of this, nodules with benign findings on one side cannot be dismissed. In a similar way, the finding that there were no false positives was also potentially misleading: although all patients deemed positive on genetic analysis did indeed have carcinoma, some of the malignancies were found in the other lobe, without confirmation of malignancy from the site of initial biopsy.

An obvious and major weakness of this study was the small sample size (n = 16). Findings were limited, as a highly specific patient population was analyzed in a relatively small urban area of northeast Pennsylvania, and only cases graded by pathology as Bethesda 3 through 5 in the year 2016 qualified for this study. Another limitation was one case graded Bethesda 3 in which mutations were not detected on genetic analysis; although this case was likely benign, the analysis was limited by a borderline-low number of thyroid epithelial cells in the specimen. Although the sample size is much too small to make global recommendations, we strongly advise molecular testing for lesions graded Bethesda 3 through 5, per the current standard. Our findings strongly indicate that if the goal is to select appropriate therapy for cases with positive mutations on genetic analysis, surgery is warranted, as all cases with positive mutations did indeed have malignancy on final pathology. Areas for further investigation include identifying mutations that carry a higher risk for carcinoma within a larger sample size and better predicting the course of intervention based on certain mutations (i.e., selecting patients for hemithyroidectomy vs. total thyroidectomy given the mutations present or lack thereof).

In summary, adherence to clinical practice guidelines across the spectrum of care for patients with suspected thyroid carcinoma was excellent for this cohort in the year 2016. Through detailed retrospective chart review, it was determined that all patients with known mutations found on genetic analysis should and did indeed have surgery, while patients without mutations continue to be closely observed with ultrasound follow-up. This finding is concordant with the recommendations of the ATA and the NCCN and fulfills Standard 4.6 for 2016.

Acknowledgments
Mark A. Frattali, MD; Michael A. Yoder, MD

References

1. Kumar V, Abbas AK, Aster JC. Robbins and Cotran Pathologic Basis of Disease. 9th ed. Philadelphia: Elsevier Saunders; 2015. p. 1092-1100.

2. Renuka IV, Saila Bala G, Aparna C, Kumari R, Sumalatha K. The Bethesda system for reporting thyroid cytopathology: interpretation and guidelines in surgical treatment. Indian J Otolaryngol Head Neck Surg. 2012 Dec;64:305-11.

3. Haugen BR, Alexander EK, Bible KC, Doherty GM, Mandel SJ, Nikiforov YE, Pacini F, Randolph GW, Sawka AM, Schlumberger M, Schuff KG, Sherman SI, Sosa JA, Steward DL, Tuttle RM, Wartofsky L. 2015 American Thyroid Association Management Guidelines for Adult Patients with Thyroid Nodules and Differentiated Thyroid Cancer: The American Thyroid Association Guidelines Task Force on Thyroid Nodules and Differentiated Thyroid Cancer. Thyroid. 2016 Jan;26(1):1-133.

4. Yip L, Nikiforova MN, Yoo JY, McCoy KL, Stang MT, Armstrong MJ, Nicholson KJ, Ohori NP, Coyne C, Hodak SP, Ferris RL, LeBeau SO, Nikiforov YE, Carty SE. Tumor genotype determines phenotype and disease-related outcomes in thyroid cancer: a study of 1510 patients. Ann Surg. 2015 Sep;262(3):519-525.

5. Nikiforov YE. Role of molecular markers in thyroid nodule management: then and now. Endocr Pract. 2017 Aug;23(8):979-988.

6. Nikiforova MN, Yip L, Duvvuri U, Chiosea S, Kuriloff DB, Borrelli N, Hodak S, Urmacher C, Nikiforov YE. Multiple high-risk mutations detected in thyroid FNA samples are associated with aggressive cancer. Proceedings of the 86th Annual Meeting of the American Thyroid Association; 2016 Sep 21–25; Denver, CO. Available from: https://thyroseq.com/assets/img/poster210.jpg.

7. nccn.org [Internet]. Fort Washington, PA: National Comprehensive Cancer Network; c2017 [cited 2017 Oct 10]. Available from: https://www.nccn.org/.

8. Wu H, Swadley M. The Bethesda System for Reporting Thyroid Cytopathology: into the clinic. Pathology and Laboratory Medicine International. 2015 Jul;7:47-54.



Scholarly Research In Progress • Vol. 2, November 2018

Intraoperative Identification of a De Garengeot Hernia: A Rare Finding and Discussion of Operative Approach

Antonio Adiletta1*, Matthew T. Fisher2, and Timothy J. Farrell3

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2Geisinger Wyoming Valley Medical Center, Wilkes-Barre, PA 18711
3Geisinger Community Medical Center, Scranton, PA 18510
*Correspondence: aadiletta@som.geisinger.edu

Abstract

The presence of an incarcerated vermiform appendix within a femoral hernia defect, a De Garengeot hernia, is distinctly different from an inguinal hernia containing the appendix, an Amyand’s hernia. The De Garengeot hernia is a rare finding with few reported cases. We present a 35-year-old female with a painful groin mass palpable below the inguinal ligament. An ultrasound of the groin revealed a thin-walled fluid collection medial to the femoral vessels. No additional imaging was obtained at the time. Intraoperatively, the patient was found to have her distal appendix incarcerated within the transected hernia sac, altering the planned surgical procedure. We present a unique operative approach for managing a De Garengeot hernia.

Introduction

Femoral hernias are a rare type of hernia, making up fewer than 5% of groin hernias. A De Garengeot hernia is defined as a femoral hernia containing the vermiform appendix. It is an even rarer diagnosis, with fewer than 100 cases reported in the literature; the incidence varies between 0.5% and 5% of all femoral hernias (1). We present a case of an intraoperatively diagnosed De Garengeot hernia in a 35-year-old female and a novel surgical approach used to repair the defect.

Case Presentation

Our patient is a 35-year-old female who presented with a painful, tender right groin lump of 6 weeks’ duration. An ultrasound revealed a thin-walled fluid collection medial to the femoral vessels (Figure 1). She was diagnosed with a femoral hernia and offered a repair. Because her symptoms had improved between onset and the clinic visit, and in anticipation of the upcoming summer, she opted for elective repair. However, her symptoms worsened prior to her surgery date, resulting in an expedited operation.

The patient was taken to the operating room for repair of her hernia. An incision was made overlying her groin bulge. She was found to have an obvious hernia sac below the inguinal ligament, which was dissected from the femoral vessels and pubic tubercle. Due to the small size of the defect (< 1 cm), the sac and contents could not be reduced into the abdomen. The decision was made to ligate the sac at the level of the defect. After ligation and upon examination, the distal tip of the appendix was found to be within the hernia sac and transected. The decision was made to extend the skin incision laterally in anticipation of accessing the peritoneal cavity. The external oblique, internal oblique, and transversalis layers were incised in the direction of their respective fibers. The cecum was identified and brought out of the incision. A window was made at the base of the appendix, which was ligated, and the completion appendectomy was performed. The mesoappendix was then separately ligated. The cecum was returned to the abdomen, and the three muscle layers were closed individually. Due to a small amount of spillage, the decision was made to perform a primary repair of the femoral defect. Prolene® suture was used to perform the herniorrhaphy, approximating the inferior portion of the inguinal ligament to Cooper’s ligament. The patient tolerated the procedure well and was discharged home the same day. The final pathology confirmed acute appendicitis. The patient reported mild abdominal bloating at her 2-week postoperative visit that had resolved by her 4-week postoperative visit.

Figure 1. A thin-walled fluid sac medial to the femoral vessels diagnostic of a femoral hernia

Discussion
The De Garengeot hernia is a rare variant of the femoral hernia. There are several theories attempting to explain the predisposition to developing a De Garengeot hernia. One suggests that patients have a large or overriding cecum within the pelvis that pushes the appendix into the femoral ring. Another suggests a variable anatomic rotation of the bowel during fetal development. Finally, a relatively mobile cecum may predispose the appendix to migrate into the femoral canal (2). Due to the rarity of the pathology, there is no clear standard surgical approach: reported cases describe open approaches with or without mesh and laparoscopic approaches with or without mesh. The choice of approach is often predicated on whether the pathology is known preoperatively or identified intraoperatively.

Klipfel et al. performed an emergent right inguinotomy on a 67-year-old female with an incarcerated femoral hernia. The vermiform appendix was found incarcerated within the hernia with signs of necrosis. A conventional appendectomy was not possible because of the length of the appendix and because the cecum was inaccessible through the inguinal incision. A laparoscopic appendectomy was performed, and the femoral hernia was repaired with a biological mesh using the Rives technique. The patient reportedly recovered quickly and was discharged 2 days later. At her 1-month follow-up, there were no signs of complications or recurrence (3).

Sinraj et al. described a 38-year-old female who presented with a 15-year history of right inguinal swelling and 2 days of pain and vomiting. She was found to have an incarcerated femoral hernia. During repair via an inguinal approach, an inflamed appendix was discovered within the hernia. An appendectomy was performed and the femoral ring was closed with polypropylene mesh. Postoperatively, the patient developed a superficial surgical site infection and was treated with intravenous antibiotics (4).

Shiihara et al. described a two-way surgical approach to treat a De Garengeot hernia. A 74-year-old female was found to have a 3 cm x 3 cm mass below the right inguinal ligament. Computerized tomography (CT) revealed a swollen, enhancing appendix within the femoral hernia, leading to the diagnosis of a De Garengeot hernia. The incarcerated hernia sac was first reduced laparoscopically, then ligated and resected, and a laparoscopic appendectomy was performed. A hernioplasty was then performed using a mesh plug via an anterior approach to prevent preperitoneal contamination. The pathology report revealed acute purulent appendicitis. The postoperative course was uneventful, and the patient was discharged on postoperative day 8 (5).

Unlike other cases reported in the literature, our patient was found to have an incidental De Garengeot hernia discovered intraoperatively, thus altering our planned surgical approach. We performed an open completion appendectomy through the same inguinal incision used to perform the herniorrhaphy. The abdominal cavity was entered and the cecum easily identified, allowing the appendectomy to be performed. This approach proved safe for the patient while still allowing the procedure to be performed on an outpatient basis, without significant postoperative complications or short-term recurrence.

References
1. Phillips AW, Aspinall SR. Appendicitis and Meckel's diverticulum in a femoral hernia: simultaneous De Garengeot and Littre's hernia. Hernia. 2012;12:727-729.
2. Akbari K, Wood C, Hammad A, Middleton S. De Garengeot's hernia: our experience of three cases and literature review. BMJ Case Reports. 2014.
3. Klipfel A, Venkatasamy A, Nicolai C, Roedlich MN, Veillon F, Brigand C, et al. Surgical management of a De Garengeot's hernia using a biologic mesh: a case report and review of literature. International Journal of Surgery Case Reports. 2017;39:273-275.
4. Sinraj AP, Anekal N, Rathnakar SK. De Garengeot's hernia – a diagnostic and therapeutic challenge. Journal of Clinical and Diagnostic Research. 2016;10:19-20.
5. Shiihara M, Kato T, Kaneko Y, Yoshitoshi K, Ota T. De Garengeot hernia with appendicitis treated by two-way-approach surgery: a case report. Journal of Surgical Case Reports. 2017;7:1-3.




A Quality Improvement Initiative to Increase Treatment Rates for Severe Hypertension in the Labor & Delivery Unit at Moses Taylor Hospital
Antonio Adiletta1, Andrea Borba1*, Zaira Chaudhry1, and Ian Coote1

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: aborba@som.geisinger.edu

Abstract
Hypertension affects 12% to 22% of pregnancies and poses substantial morbidity and mortality to the mother and fetus. Hypertensive disease is the culprit in approximately 10% of maternal deaths in the United States. The present study sought to improve treatment of patients presenting with severe hypertension in the peripartum period at the Moses Taylor Hospital (MTH) Labor & Delivery and Mom & Baby unit. This was accomplished through a quality improvement initiative involving the development of a detailed protocol in compliance with the American College of Obstetricians and Gynecologists (ACOG) guidelines and its dissemination to nursing staff. The protocol defined severe hypertension and its physiologic effects, outlined shortcomings in the treatment of peripartum hypertension at MTH, and summarized ACOG's guidelines for the treatment of severe hypertension in pregnancy. We achieved a measurable improvement in treatment rates for consecutive severe hypertensive readings.

Introduction
Hypertension affects 12% to 22% of pregnancies and poses substantial morbidity and mortality to the mother and fetus (1). Hypertensive disease is the culprit in approximately 10% of maternal deaths in the United States (2). The term "hypertensive disease during pregnancy" encompasses gestational hypertension, preeclampsia, eclampsia, and any of these superimposed on chronic hypertension preceding pregnancy. Given the risks hypertension poses to the mother and fetus, the American College of Obstetricians and Gynecologists (ACOG) has issued practice guidelines for managing hypertension in pregnancy. ACOG recommends that patients with severe hypertension, defined as a systolic reading ≥160 mmHg and/or a diastolic reading ≥110 mmHg, be treated with a first-line antihypertensive agent within 60 minutes (3).

A previous chart review in 2014 at Moses Taylor Hospital's (MTH) Labor & Delivery and Mom & Baby unit found that mothers with preeclampsia who had 2 or more consecutive severe hypertension readings had only a 15.6% treatment rate with first-line agents (4). Moreover, even among patients with 5 or more consecutive severe hypertension readings, the rate of treatment was only 41.7% (4). In light of these findings, the present study sought to improve treatment of patients presenting with severe hypertension in the peripartum period at the MTH Labor & Delivery and Mom & Baby unit through a quality improvement initiative involving the development of a detailed protocol in compliance with ACOG's guidelines and its dissemination to nursing staff.

Materials and Methods
Quality improvement intervention
In September/October 2017, a Read and Sign policy was developed and disseminated to the Women's Health nursing staff at MTH. The policy defined severe hypertension and its physiologic effects, outlined shortcomings in the treatment of peripartum hypertension at MTH, and summarized ACOG's guidelines for the treatment of severe hypertension in pregnancy. Specific guidelines stated in the Read and Sign policy included:
• Defining severe hypertension as any systolic pressure ≥160 mmHg and/or diastolic pressure ≥110 mmHg
• Requiring any severe hypertensive reading to be recorded and repeated within 15 minutes; in the event that a consecutive reading met the requirements for severe hypertension, nursing staff were to report the findings to the physician and inquire about administering first-line treatment
• In accordance with ACOG's recommendations, administering first-line antihypertensive treatment (IV labetalol, IV hydralazine, or PO nifedipine for those without IV access) within 60 minutes of the second consecutive severe hypertensive reading

Screening and data collection
Pre-intervention data was collected by retrospective review of charts from January 2017 to March 2017 of patients with diagnostic codes indicating any hypertensive disease during pregnancy. Documented vital signs during hospitalization were reviewed, and any episode of severe hypertension was recorded. The Medication Administration Report was then reviewed to determine whether any treatment was given for the documented episode of hypertension. Post-intervention data was collected by retrospective review of charts from November 2017 to January 2018 of patients with diagnostic codes indicating any hypertensive disease during pregnancy. Again, vital signs were screened and severe hypertensive episodes were recorded along with any administered treatments. In addition, nursing documentation was reviewed to determine whether a recheck of the hypertensive blood pressure was performed within 15 minutes of the initial reading.

Statistical methods
Compilation of data and statistical analyses were performed using Microsoft Excel (Microsoft Corporation, Redmond, Washington, USA). Sample means were compared using a two-tailed, two-sample t-test. Proportions were compared using an "N-1" chi-squared test. An a priori significance level of P < 0.05 was selected.
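For readers unfamiliar with the "N-1" chi-squared test, the sketch below shows one way to compute it in Python: it is Pearson's chi-squared statistic for a 2x2 table scaled by (N-1)/N. This is an illustrative sketch, not part of the study's analysis, and the event counts in the usage example are hypothetical placeholders rather than the study's data.

```python
import math
from scipy.stats import norm

def n_minus_1_chi_squared(x1, n1, x2, n2):
    """Compare two proportions with the 'N-1' chi-squared test:
    Pearson's chi-squared for a 2x2 table scaled by (N-1)/N."""
    N = n1 + n2
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / N
    # Pearson chi-squared for a 2x2 table, written in terms of proportions
    chi2 = (p1 - p2) ** 2 / (pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    chi2_adj = chi2 * (N - 1) / N  # the 'N-1' adjustment
    # Two-sided p-value: P(chi2 with 1 df > chi2_adj) = 2 * (1 - Phi(sqrt(chi2_adj)))
    p_value = 2 * (1 - norm.cdf(math.sqrt(chi2_adj)))
    return chi2_adj, p_value

# Hypothetical counts: 15 of 104 pre-intervention events treated
# versus 18 of 96 post-intervention events treated.
chi2_adj, p = n_minus_1_chi_squared(15, 104, 18, 96)
print(f"chi2 = {chi2_adj:.3f}, p = {p:.3f}")
```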



Results
There were 37 patients in the pre-intervention group and 33 patients in the post-intervention group. There was a statistically significant difference in age, with the pre-intervention group being older, on average, than the post-intervention group. Moreover, there was considerable variability in the racial breakdown of both groups. Demographic variables for both groups are presented in Table 1.

To account for patients with multiple hypertensive episodes, treatment rates were calculated according to hypertensive events and not on a per-patient basis. Hypertensive events were defined as 2 or more blood pressure readings meeting the threshold for severe hypertension (systolic ≥160 mmHg and/or diastolic ≥110 mmHg). Figure 1 illustrates the breakdown of consecutive severe hypertensive readings in both groups.

A stratified analysis demonstrated no statistically significant difference in treatment rates between the pre-intervention and post-intervention groups for 2, 4, and 5 or more consecutive severe blood pressure readings (P > 0.05). However, there was a statistically significant difference in treatment rates for 3 consecutive severe blood pressure readings, with a higher rate noted in the post-intervention group (P = 0.03). The overall treatment rates did not differ significantly between groups. A stratified comparison of treatment rates is presented in Table 2. In the post-intervention group, 65.10% of severe hypertensive readings were reassessed within 15 minutes; the remaining 34.90% were not.

Table 1. Sample demographics

Table 2. Comparison of treatment rates




Figure 1. Consecutive severe hypertensive readings

Discussion
Our pre-intervention data showed an overall treatment rate (≥2 consecutive hypertensive readings) of 14.42%; the post-intervention overall treatment rate increased to 18.75%, an increase of 4.33 percentage points following implementation of our intervention. However, this difference did not reach statistical significance (P = 0.46). There was a statistically significant increase in pre-intervention versus post-intervention treatment rates for 3 consecutive hypertensive readings (P = 0.03).

Our study was limited by the number of charts we were able to review given the timeline of our project. This in turn limited our sample size, which may have affected our ability to reach statistical significance when comparing overall treatment rates. Another limitation was the lack of standardized education for our Read and Sign intervention and the lack of incentive to adhere to the protocol; there was no method to ensure that staff at Moses Taylor read and comprehended the new protocol. It is also worth noting that our pre-intervention group had a greater mean age than our post-intervention group. Given this statistically significant difference in age between groups, the possibility that patient age influenced physician decision-making cannot be ruled out. Although physicians may have been more or less likely to treat patients on the basis of their age, this study was not adequately powered to perform this type of subgroup analysis. Future studies may want to evaluate whether patient demographics (e.g., age, race) influence physician decision-making in the setting of severe hypertension in the peripartum period.

Conclusion
Our study addressed the need to improve the treatment of severe hypertension in pregnancy at MTH. We were able to achieve a measurable improvement in treatment rates for consecutive hypertensive readings. Future studies may want to focus on creating a standardized method to ensure proper comprehension of the Read and Sign policy as well as a means of incentivizing staff to comply with the protocol.

Acknowledgments
We would like to thank Commonwealth Health and the following individuals for their assistance and support throughout this project: Tina Carey, Maria Tagliaferri, Mary Sewatsky, MD, Joan Kies, Margrit Shoemaker, MD, and the Women's Health clinical staff at MTH.

References
1. American College of Obstetricians and Gynecologists (ACOG) Committee on Obstetric Practice. ACOG practice bulletin. Diagnosis and management of preeclampsia and eclampsia. Int J Gynaecol Obstet. 2002;77:67-75.
2. Creanga AA, Berg CJ, Syverson C, Seed K, Bruce FC, Callaghan WM. Pregnancy-related mortality in the United States, 2006-2010. Obstet Gynecol. 2015 Jan;125(1):5-12.
3. American College of Obstetricians and Gynecologists (ACOG). Hypertension in pregnancy. Report of the American College of Obstetricians and Gynecologists' Task Force on Hypertension in Pregnancy. Obstet Gynecol. 2013 Nov;122(5):1122-31.
4. Wilcox B, Chaudhry ZS, DiBileo A. Evaluating Preeclampsia Quality Measures Through a Retrospective Chart Review at Moses Taylor Hospital. Presentation at the Geisinger Commonwealth School of Medicine Research Symposium, July 29, 2014.



Epiploic Appendagitis: A Case Report and Review of Literature
Nicole Marianelli1* and Timothy Farrell2

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2 Geisinger Community Medical Center, Scranton, PA 18510
*Correspondence: nmarianelli@som.geisinger.edu

Abstract
Epiploic appendagitis is caused by infarction of colonic fat outpouchings, leading to abdominal pain that can mimic the presentation of other abdominal processes; it lacks pathognomonic findings that would otherwise aid in its diagnosis. Diagnosis can be made through abdominal computed tomography (CT), but requires familiarity with the disease and a high index of suspicion in both the emergency room physician and the radiologist. It can be managed conservatively with nonsteroidal anti-inflammatory medications or with definitive surgery to remove the infarcted appendage.

Introduction
Epiploic appendagitis is an infarction of an epiploic appendage, a small peritoneal pouch arising from the colon containing fat and small vessels. Infarction can occur due to torsion or spontaneous thrombosis of the venous draining system (1). Clinically, patients present with acute onset of sharp abdominal pain and are typically otherwise healthy (2, 3). Their presentations can mimic diverticulitis, appendicitis, and other abdominal pathology, which can make diagnosis difficult. Due to the lack of pathognomonic clinical findings, as well as variable familiarity with the disease on imaging, misdiagnosis can frequently occur, leading to unnecessary hospitalization and medical treatment as well as increased patient costs (1). Familiarity with radiologic imaging of this condition as well as a high index of suspicion can lead to more accurate diagnosis.

Case Description
A 39-year-old female presented to the Emergency Department with right lower quadrant abdominal pain for 4 days that was gradually increasing in severity. She also complained of nausea, anorexia, and constipation. The patient was previously healthy and had no prior abdominal surgeries. On physical exam, the patient had moderate right lower quadrant tenderness on palpation and guarding. Bowel sounds were normal and the abdomen did not appear distended. Blood work revealed no significant abnormalities. Computed tomography (CT) of the abdomen and pelvis was completed with IV contrast and revealed a 5 x 3 x 2.5 cm well-circumscribed, bilobed, fat-density soft tissue mass within the mesentery in the right lower abdomen, adjacent to the medial margin of the proximal ascending colon (Figure 1). There was also surrounding stranding, inflammatory changes, and probable fluid adjacent to the mesentery. The patient was taken to the operating room the following day for exploratory laparotomy, which revealed an infarcted portion of omentum consistent with epiploic appendagitis (Figure 2 and Figure 3). The patient had an uneventful recovery period and was discharged home after 2 days.

Figure 1. CT scan of the abdomen and pelvis with contrast prior to surgery reveals a well-circumscribed soft tissue mass (red arrow) within the mesentery adjacent to the right ascending colon.

Figure 2. Intraoperative image shows the infarcted epiploic appendage measuring approximately 5.8 cm in length.

Figure 3. Gross specimen pathology confirmed a benign, nodular portion of adipose tissue with fat necrosis and reactive changes, indicating an infarcted epiploic appendage.



Discussion
Epiploic appendages are outpouchings of fat from the colonic serosa ranging from 1 to 2 cm thick and 0.5 to 5 cm long, containing small arterioles and venules (3). They extend from the cecum to the rectosigmoid colon and increase in both size and number distally (4). Proposed physiologic functions include colon support, immune response, and colonic absorption (3). Pathologically, these structures can twist on their vascular stalk, leading to infarction, venous thrombosis, and necrosis. If untreated, they may become fibrotic or calcified and detach as a loose body within the abdominal cavity (5).

The literature reports that epiploic appendagitis occurs most commonly at the rectosigmoid junction, but it can also occur in the ileocecal region and the ascending, transverse, and descending colon (6). Epiploic appendagitis can affect people of any age, but is most common in the fourth and fifth decades of life and is more common in males than females (3, 7). It is hypothesized that it can occur following intense exercise and is thought to be more common in patients with larger peritoneal cavities, such as those who are obese (5).

As stated, epiploic appendagitis can be a diagnostic challenge for emergency room physicians as well as radiologists because its presentation can mimic other pathologic abdominal processes. Clinically, it presents most frequently as lower abdominal pain that is localized, sharp, and nonmigratory (3, 7). Vomiting, fever, and leukocytosis are usually absent, which can help differentiate this disease from other abdominal pathologies. Occasionally, a mass can be felt. The differential diagnosis for this presentation includes appendicitis, acute diverticulitis, neoplastic processes, and other intra-abdominal processes.

Patients presenting with abdominal pain typically undergo imaging, such as ultrasound, X-ray, and CT scan. In the absence of pathology, epiploic appendages are not visible on imaging (5). Ultrasound findings suggestive of this disease include an oval, hyperechoic, noncompressible mass surrounded by a hypoechoic ring (1, 8). Abdominal CT scan will show fat-containing bodies that are attached to the colon with hyperattenuated peripheral rims (1, 8).

Epiploic appendagitis is considered a self-limited disease that will resolve with conservative management, including oral nonsteroidal anti-inflammatory medications (9). The appendages can calcify, detach, and remain as free bodies in the peritoneum (5, 10). Rarely, adherence to other abdominal structures can lead to bowel obstruction or intussusception (11). Given the benign nature of this disease, misinterpretation of imaging studies and misdiagnosis can lead to unnecessary surgery, which ultimately provides the diagnosis. Unnecessary surgical intervention can lead to increased cost and use of hospital resources as well as increased morbidity to the patient. Some recommend that conservative therapy be attempted first, with surgery reserved for patients who fail conservative therapy or who have new or worsening symptoms (7, 9, 12). Others prefer laparoscopic exploration with ligation and removal of the appendage to provide definitive treatment and prevent complications (3, 9). Thus, both conservative management and surgical intervention can be appropriate courses of action given the patient presentation, clinical course, and physician and patient preference.

In conclusion, epiploic appendagitis is a relatively common yet often missed cause of acute abdominal pain. Radiologic findings can help suggest the diagnosis to clinicians and guide treatment. A high index of suspicion and increased clinician awareness can prevent unnecessary surgery and waste of hospital resources.

Disclosures
Patient consent was obtained. No competing interests to declare.

References
1. Mollà E, Ripollés T, Martínez MJ, Morote V, Roselló-Sastre E. Primary epiploic appendagitis: US and CT findings. European Radiology. 1998;8(3):435-8.
2. Rao PM, Wittenberg J, Lawrason JN. Primary epiploic appendagitis: evolutionary changes in CT appearance. Radiology. 1997;204(3):713-7.
3. Sand M, Gelos M, Bechara FG, Sand D, Wiese TH, Steinstraesser L, et al. Epiploic appendagitis – clinical characteristics of an uncommon surgical diagnosis. BMC Surgery. 2007;7(1):11.
4. Arshad Rashid SN, Suhail Yaqoob Hakim, Manzoor Ahamad Chalkoo. Epiploic appendagitis of caecum: a diagnostic dilemma. German Medical Science. 2012;10.
5. Ghahremani GG, White EM, Hoff FL, Gore RM, Miller JW, Christ ML. Appendices epiploicae of the colon: radiologic and pathologic features. RadioGraphics. 1992;12(1):59-77.
6. Macari M, Laks S, Hadju C, Babb J. Caecal epiploic appendagitis: an unlikely occurrence. Clinical Radiology. 2008;63(8):895-900.
7. Ozdemir S, Gulpinar K, Leventoglu S, Uslu HY, Turkoz E, Ozcay N, Korkmaz A. Torsion of the primary epiploic appendagitis: a case series and review of literature. American Journal of Surgery. 2010;199(4):453-8.
8. Rioux M, Langis P. Primary epiploic appendagitis: clinical, US, and CT findings in 14 cases. Radiology. 1994;191(2):523-6.
9. Legome EL, Belton AL, Murray RE, Rao PM, Novelline RA. Epiploic appendagitis: the emergency department presentation. Journal of Emergency Medicine. 22(1):9-13.
10. Ross JA, McQueen A. Peritoneal loose bodies. British Journal of Surgery. 1948;35(139):313-7.
11. Kessler SE, Martin G. Epiploic appendagitis: a benign process at risk of unnecessary hospitalization and interventions. Journal of General Internal Medicine. 2017;32(6):711.
12. Vinson DR. Epiploic appendagitis: a new diagnosis for the emergency physician. Two case reports and a review. Journal of Emergency Medicine. 17(5):827-32.



“Nothing to Disclose”: Quantification of Conflicts of Interest in Medscape
Ambica C. Chopra1*, Stephanie D. Nichols2, and Brian J. Piper1

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2 Husson University School of Pharmacy, Bangor, ME 04401
*Correspondence: achopra@som.geisinger.edu

Abstract
Background: Online medical reference websites are computerized informational resources utilized by health care providers to enhance their education and to provide quality care to patients. However, these resources may not adequately reveal pharmaceutical–physician interactions and their potential conflicts of interest (CoIs). This investigation: 1) evaluates the correspondence of two well-utilized CoI databases, ProPublica's Dollars for Docs (PDD) and the Centers for Medicare and Medicaid Services Open Payments (CMSOP); and 2) quantifies CoIs among Medscape authors.

Methods: Two data sets were used: the 100 most common drugs and the top 50 causes of death. These topics were entered into Medscape. The authors (N = 139) were then input into CMSOP and PDD, and compensation and number of payments were determined for 2013–2015. A subset of highly compensated authors that reported "Nothing to disclose" was examined. Data analysis and figures were prepared with GraphPad Prism and Excel.

Results: There was a high degree of similarity between CMSOP and PDD for compensation (R2 > 0.998, p < 0.0001) and number of payments (R2 > 0.992, p < 0.0001). The amount received was 1.4% higher in CMSOP ($4,059,194) than in PDD ($4,002,891). The articles where the authors had received the greatest compensation were in neurology (Parkinson's disease = $1,810,032.07), oncology (acute lymphoblastic leukemia = $616,726.90), and endocrinology (Type I diabetes = $377,388). Two authors that reported "Nothing to disclose" had received appreciable and relevant compensation.

Conclusions: CMSOP and PDD produce equivalent results. CoIs were common among Medscape authors, but self-reporting may be an inadequate reporting mechanism. Recommendations are offered on how to mitigate or reduce pharmaceutical–physician interactions in influential point-of-care electronic resources.

Introduction
A conflict of interest (CoI) may occur when a person has competing interests. In the context of patient care, CoI can arise in a multitude of ways. There may be extensive financial relationships between some providers (physicians, nurse practitioners, physician assistants) and industry (1). Financial relationships exist between providers and pharmaceutical, medical device, and biotechnology companies (2). A CoI occurs when a provider accepts gifts, which can take the form of meals, drug samples, and other items (2). Other examples of CoI include acting as a promotional speaker or writer for products, or having a financial investment in a pharmaceutical or medical product company (2). Many investigations have concluded that providing incentives to providers to market certain products or medications increases the company's revenues as well as society's spending on health care (2). In 2016, US prescription sales were $448.2 billion, a 5.8% increase from the previous year (3).

Although it is impossible, and undesirable, to eradicate profit-driven medical companies, attempts to mitigate pharmaceutical–physician interactions have been underway since 1990 (4). Many self-regulatory initiatives, such as the American Medical Association's guidelines for gifts to a physician from industry, were employed to reduce pharmaceutical–physician interactions in medical centers (4). In order to prevent bias in journal publications, the Journal of the American Medical Association began requiring papers that described industry-sponsored studies to provide their raw data to independent biostatisticians (5). From 1993 to 2005, several states (California, Maine, Vermont, West Virginia, District of Columbia, and Minnesota) enacted laws aimed at increasing transparency, which necessitated disclosure of industry payments to physicians (4, 6). Several more states proposed similar legislation in 2006 (6). The Sunshine Act, introduced in 2007, "required pharmaceutical and medical device makers to collect, track, and report financial relationships with physicians and teaching hospitals…" (4).

Even with the increased transparency, CoI between industry and physicians continued and took more shapes and forms than simple gift-giving. Pharmaceutical companies and device makers influence physicians' drug-prescribing patterns and clinical decisions through strategic marketing and social psychological processes (4). When this finding came to light, many medical schools and practitioners began limiting their interactions with pharmaceutical representatives (7). Organizations such as ProPublica used investigative journalism methods to compile the number of payments physicians received (8). The Centers for Medicare and Medicaid Services (CMS) created the National Physician Payment Transparency Program, which required drug manufacturers and device makers to publicly report every payment made to physicians, podiatrists, dentists, optometrists, chiropractors, and teaching hospitals (9). The purpose was to create greater transparency and discourage future industry–physician interactions. However, even with increased transparency and awareness, certain pharmaceutical–physician interactions may go unnoticed.

Medscape is a medical reference site that people of varied education levels can use for medical advising and as a source of continuing education (10). The writers of its articles on diseases, symptoms, and drugs are health care professionals, including physicians, pharmacists, and nurses. Medscape was founded in 1995 and was defined as an "online resource for better patient care" for health professionals and interested consumers (10). It sought to provide peer-reviewed and edited information by renowned clinicians in fields such as AIDS, other infectious diseases, urology, and surgery (10). Medscape started as a low-budget experiment by SCP Communications Inc. and therefore launched on its own without any significant partnerships. After the launch, Medscape contained a couple thousand articles that attracted a few hundred people (10). Unlike other publishers, who sought to make money through advertisements and/or subscriptions, Medscape sought to attract large audiences by showcasing trusted publications as well as its own content. Medscape's revenue came from receiving articles from publishers and publishing them on its online site; the revenue was then shared by Medscape and the publishers. Within a year, Medscape persuaded 14 journals to participate (10). Now, there are 20 societies and over 20 publishers and data suppliers who provide publications to the Medscape database (10). Originally, Medscape's primary target was American clinicians, but it was available to anyone who registered for a free account on the website. This open-door policy gave both physicians and non-physicians equal access to health-care–related materials. Medscape eventually created its own journal of medicine, Medscape General Medicine (MedGenMed), which published material from 1999 to 2009 (11). During this time, MedGenMed published peer-reviewed original articles, clinical trials, reviews, public health commentaries, editorials, and other material (11). Medscape currently provides medical news, journals, and Continuing Medical Education to health care professionals and non–health-care professionals. The articles and publications are all peer-reviewed and written by leading physicians in America (10).

If an author of a well-used site is receiving funding from an outside company to advocate for a certain drug or treatment modality, it may put patients at risk of biased guidance or encourage practices that are inconsistent with evidence-based medicine. Medscape does provide some limited CoI disclosure information in terms of specific companies and involvement, but does not disclose the quantity of remuneration. A $10 lunch may have a different potential for influence than compensation sufficient to purchase a vacation, a car, or a new home. One-on-one time may also be more influential than group meetings or seminars.

The two databases that report pharmaceutical–physician interactions are the Centers for Medicare and Medicaid Services Open Payments (CMSOP) and ProPublica's Dollars for Docs (PDD). The Physician Payment Sunshine Act established the National Physician Payment Transparency Program, also known as Open Payments. It is part of section 6002 of the Affordable Care Act, which requires medical device manufacturers to report payments given to physicians or teaching hospitals (12). It also discloses any ownership or investments that the physician might hold in any medical companies. The goal of this program is to show the extent and nature of industry–physician interactions and to discourage such interactions in the future. The Open Payments transparency program was established in 2014 and includes a national database that shows the payments physicians have received from pharmaceutical and medical device companies (13).

The payments reported are greater than $10 (12). The PDD database was released in 2010 by ProPublica, an American nonprofit investigative journalism organization (14). PDD contained 30,000 payments made to approximately 17,000 American physicians (1). ProPublica obtained this data from several companies and makes the database publicly available.

CoIs have previously been quantified for some influential materials (e.g., 15). In three-quarters of the working groups responsible for writing the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the "bible" of psychiatry, a majority of members had ties to the pharmaceutical industry (16). Over one-quarter (26.2%) of the authors of Goodman and Gilman's Pharmacological Basis of Therapeutics, a cornerstone pharmacology textbook in medical and pharmacy schools alike, had an undisclosed patent (17). Examination of 6 medical textbooks revealed an unreported $20 million and 677 undisclosed patents, primarily to the authors of Harrison's Principles of Internal Medicine (18). Inaccurate disclosures, defined as a failure to report receipt of >$5,000, were made by 89.3% of clinical practice guideline authors (19). However, no prior research has systematically examined point-of-care computerized resources like Medscape.

Therefore, the objectives of this study were to 1) determine whether PDD and CMSOP produce results that are consistent with each other; and 2) quantify the extent to which Medscape authors received external funding from medical device companies. As some have found that self-reported CoI disclosure is inadequate (19), qualitative observations were made on a subset of authors that reported "Nothing to disclose" on industry-reported disclosures.

Materials and Methods

Procedures
An account on Medscape was created. A list of the 100 most common drugs in the US was compiled, and an article pertaining to each was found on Medscape. PDD and CMSOP searches were conducted, and the number of payments and amounts received were recorded for 2013, 2014, and 2015 (the most recent year available) (8, 9, 20). Next, the top 50 causes of death in the US in 2014 were analyzed, and each was matched with a Medscape article (21). Each of the 50 causes was typed into either the Medscape search bar or the Google search bar; if the search was conducted on Google, the cause was typed followed by the word "Medscape." This technique assisted in screening for relevant Medscape articles. The longest and most descriptive article discussing each cause of death was recorded in Microsoft Excel along with its authors, coauthors, and editors. For example, one of the top causes of death is prostate cancer (malignant), and this was matched to a Medscape article titled "Metastatic and Advanced Prostate Cancer." All except 2 causes of death were successfully matched (96.0%).

After recording the relevant data in Excel, each clinician's name was input into PDD, and the years, number of payments, and payment amounts were recorded, along with the author's gender. The authors, coauthors, and editors were then searched on CMSOP, and the years, number of payments, and payment amounts received were recorded. If an author was not found, the author's name was searched on the National Provider Index (NPI), as each health care provider has a unique 10-digit NPI code (22). Medscape does report on CoI disclosures; the medications and medical devices listed in PDD were examined among authors or editors who reported "Nothing to disclose." All information examined and reported is publicly available. This study was approved by the Institutional Review Board of The Wright Center of Scranton.

Data analysis
A Coefficient of Determination (R2) test was conducted to determine the association between CMSOP and PDD. A chi-squared (χ2) or Fisher's exact test was used to determine whether there was a significant association between the two data sets (drugs or causes of death) and the presence of a database entry. Two chi-squared tests were completed, one for CMSOP and the other for PDD. The calculations for the chi-squared tests were performed with GraphPad (23). GraphPad Prism and Microsoft Excel were utilized for graph construction. A p < 0.05 was considered statistically significant.
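To illustrate the coefficient-of-determination comparison described above, the following minimal Python sketch computes R2 between per-author compensation totals drawn from the two databases. The numbers are hypothetical placeholders, not the study's data, and the computation merely stands in for what the authors did in GraphPad.

```python
import numpy as np

# Hypothetical per-author annual compensation totals (USD) as they
# might appear in each database; not the study's actual data.
pdd = np.array([86242, 12500, 340, 0, 4071, 29521], dtype=float)
cmsop = np.array([86360, 12510, 342, 0, 4075, 29540], dtype=float)

# Coefficient of determination (R^2) from the Pearson correlation,
# mirroring the paper's comparison of the two CoI databases.
r = np.corrcoef(pdd, cmsop)[0, 1]
r_squared = r ** 2
print(f"R^2 = {r_squared:.4f}")  # values near 1 indicate close agreement
```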

Results
For the drug articles, there were no authors; instead, the Medscape Drugs and Diseases Advisory Board (N = 8) was analyzed, with 75.0% of its members found on both ProPublica's Dollars for Docs (PDD) and the Centers for Medicare and Medicaid Services Open Payments (CMSOP). Of the 50 causes of death, 48 had relevant matching articles on Medscape, resulting in 131 unique authors (17.6% female). If authors of multiple articles were counted once per article, there were 161 total authors, excluding the pharmacists and nurses who contributed to the articles but who were ineligible for PDD and CMSOP.

An R2 test was used to evaluate the correlation between the CoI databases. For the total number of payments reported in 2013, 2014, and 2015, the R2 values were 0.997, 0.998, and 0.992, respectively. For the payment amounts received in 2013, 2014, and 2015, the R2 values were 1.000, 0.999, and 0.998, respectively. Figure 1 shows the association between databases for compensation for all years. These findings indicate a very high correspondence between CMSOP and PDD for both measures.

Figure 1. The total payments received by Medscape writers in 2013 (A), 2014 (B), and 2015 (C), as reported by ProPublica's Dollars for Docs (PDD) and the Centers for Medicare and Medicaid Services Open Payments (CMSOP). The scatterplots depict the payments physicians received in each database by year.

The chi-squared test explored associations between whether an entry existed and the authors/advisory board. The two-tailed chi-squared test was not significant (χ2(1) = 1, p = 0.656), indicating that the two variables were independent of each other. The total amount received was 1.4% higher in CMSOP ($4,059,194) than in PDD ($4,002,891) for 2013 to 2015. Figure 2 depicts the total amount received by Medscape authors for drugs and causes of death for each year and shows a high degree of similarity, independent of the CoI database.

The highest-earning clinician-authors for the 50 causes of death were examined for each year. For 2013, Dollars for Docs and Open Payments revealed that the highest-earning author was SRB, who received $86,242 in some form from industry and contributed to the Medscape article titled "Parkinson Disease." For 2014 and 2015, SRB again received the most, with $302,080 and $305,356, respectively. In 2014 and 2015, GTG received the second most, with $207,428 and $154,474, and contributed to "Type 1 Diabetes Mellitus." In 2015, KS received $148,202 and contributed to the Medscape articles titled "Acute Lymphoblastic Leukemia" and "Multiple Myeloma." A ranking of the highest-compensated articles, where compensation is summed over each article's authors and editors, is shown in Table 1. Oncology articles occupied 6 of the 10 highest rankings (2–4, 6, 7, and 9); other areas included neurology (1), endocrinology (5), pulmonology (8), and cardiology (10).

Further examination was made with PDD among the authors or editors who reported "Nothing to disclose." A Parkinson's disease author received compensation related to a device used in assessing Parkinson's, DaTscan™ Ioflupane ($4,071), and to several Parkinson's disease pharmacotherapies (apomorphine: $15,462; carbidopa/levodopa: $17,721; rasagiline: $29,521) in 2015. A Type I diabetes author who reported "Nothing to disclose" received compensation in 2015 related to three drug treatments for diabetes: dapagliflozin ($75), liraglutide ($26,100), and exenatide ($128,000).

Discussion
There are several key findings in this study. Examination of PDD and CMSOP revealed the total amounts received by physician-authors, both on the Medscape Drugs and Diseases Advisory Board and for the top-causes-of-death articles. Under every author on a Medscape article, there is a disclosure statement. While some authors reported money received from pharmaceutical companies or medical device manufacturers, others did not. This discrepancy could result from the authors correctly interpreting that their compensation was not relevant to the article; however, self-reporting CoI may in some cases also be inadequate (19).

An R2 test was used to analyze the association between the total number of payments in PDD and CMSOP. Since the R2 values were extremely close to 1, there was a strong positive correlation between the number of payments as reported by these CoI databases. The R2 tests for the number of payments and payment amounts demonstrated that PDD and CMSOP produce congruent results that closely mirror each other despite minor differences. One difference is that PDD rounds each payment to the dollar, whereas CMSOP shows the cent values as well. Because PDD and CMSOP are consistent with each other, both databases appear reliable and to have accurately compiled pharmaceutical–physician transactions. Another difference is that CMSOP shows payments up to 2016, whereas as of April 2018, PDD only shows payments up to 2015.
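As a quick illustration of why PDD's dollar rounding barely matters for this comparison, the hedged sketch below simulates cent-precision payments (CMSOP-style), rounds them to whole dollars (PDD-style), and shows that the resulting R2 is still effectively 1. The payment values are simulated, not taken from either database.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated payments with cent precision (CMSOP-style reporting)
cmsop_style = rng.uniform(10, 5000, size=200).round(2)
pdd_style = cmsop_style.round(0)  # PDD-style dollar rounding

r = np.corrcoef(cmsop_style, pdd_style)[0, 1]
print(f"R^2 = {r**2:.6f}")  # effectively 1: rounding is negligible
print(f"total difference = ${abs(cmsop_style.sum() - pdd_style.sum()):.2f}")
```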

Figure 2. Total compensation for causes of death and drugs received by Medscape authors from 2013 to 2015, as reported by ProPublica's Dollars for Docs and the Centers for Medicare and Medicaid Services Open Payments conflict of interest databases.

On the other hand, PDD provides a substantial level of detail about the specific medications for which authors received compensation. The neurologist who wrote the Parkinson's article received over $100,000 from companies related to Parkinson's treatments and diagnosis; however, the majority of remuneration was related to antiepileptic drugs. The endocrinologist who wrote the diabetes article received the preponderance of compensation for anti-diabetes pharmacotherapies. This study was not designed to determine whether the presentation of Parkinson's, diabetes, or oncology topics in Medscape differed in any way between authors with and without CoIs; these types of questions could be examined in future research.

Because Figures 1 and 2 showed that PDD and CMSOP mirrored each other, one chi-squared test was done, as the results would be equivalent for each database; the PDD database was used. There was a lack of association, indicating that having an entry on PDD or CMSOP is independent of whether the individual was a Medscape clinician-author or advisory board member.

Table 1. Top 10 highest-compensated articles in Medscape, as reported by the Centers for Medicare and Medicaid Services Open Payments. The number of Open Payments–eligible authors or editors is shown in parentheses. *Includes compensation from an author reporting "Nothing to disclose."

Two key limitations should be noted. First, since only physicians are searchable in the CoI databases, authors who are pharmacists, scientists, or nurses were not recorded or compared; we do not have a thorough understanding of industry interactions among non-physicians. Second, the presence of potential CoIs was appreciable, but the goal of this study was not to determine whether CoIs impacted Medscape content.

Medscape is presumed to be an unbiased site that the general public and clinicians use for medical advising. It reaches a wide audience in part due to its nature as a freely available internet resource. If an author of a well-used site is receiving funding from an outside company to advocate for a certain drug or treatment, this may put clinicians and patients at risk of misguidance or of receiving information that is incongruent with evidence-based medicine. Self-reported CoIs, or their absence, should be interpreted with caution. For example, the author of an article on Type I diabetes may not believe that compensation related to a drug for Type II diabetes is relevant. The time frame covered by disclosures is also important. The intent of this report was not to impugn the reputation of any individual or organization, but to explore CoI disclosures and more critically evaluate the presented information. In conclusion, given the prevalence of potential CoIs, and occasional instances of inaccuracies, further research evaluating other point-of-care online resources (e.g., UpToDate) may be warranted.

Disclosures
ACC and SDN have no disclosures. In the past 3 years, BJP has received research support and travel funds from the Center for Wellness Leadership, a nonprofit organization, for a medical marijuana study; has received travel funds from the National Institute on Drug Abuse; and is a Fahs-Beck Fellow.

Acknowledgments
Software to complete this research was provided by the National Institute of Environmental Health Sciences (NIEHS T32-ES007060-31A1). No specific funding was received for this study. We thank ProPublica for making the Dollars for Docs database publicly available. An earlier version of this paper was completed as part of a course requirement for Readings in Biomedical Sciences with Darina Lazarova, PhD.

References
1. Norris S, et al. Characteristics of Physicians Receiving Large Payments from Pharmaceutical Companies and the Accuracy of their Disclosures in Publications: an Observational Study. BMC Medical Ethics. 2012;13:24. doi:10.1186/1472-6939-13-24.
2. Institute of Medicine (US) Committee on Conflict of Interest in Medical Research, Education, and Practice; Lo B, Field MJ, editors. Conflict of Interest in Medical Research, Education, and Practice. Washington (DC): National Academies Press (US); 2009. Chapter 6, Conflicts of Interest and Medical Practice.
3. Schumock G, et al. National Trends in Prescription Drug Expenditures and Projections for 2017. American Journal of Health-System Pharmacy. 2017;74(15):1158.
4. Patwardhan A. Physicians-Pharmaceutical Sales Representatives Interactions and Conflict of Interest: Challenges and Solutions. INQUIRY: The Journal of Health Care Organization, Provision, and Financing. 2016;53.
5. Hirsch L. Conflicts of Interest, Authorship, and Disclosures in Industry-Related Scientific Publications: The Tort Bar and Editorial Oversight of Medical Journals. Mayo Clinic Proceedings. 2009;84(9):811-821.
6. Ross J, et al. Pharmaceutical Company Payments to Physicians: Early Experiences with Disclosure Laws in Vermont and Minnesota. JAMA. 2007;297(11):1216-1223.
7. Zipkin DA, et al. Interactions Between Pharmaceutical Representatives and Doctors in Training: A Thematic Review. Journal of General Internal Medicine. 2005;20(8):777-786.
8. Ornstein C, et al. Dollars for Docs. ProPublica. 2016.
9. Open Payments Data. Open Payments. CMS. 2016.
10. Frishauf P. Medscape – The First 5 Years. Medscape General Medicine. 2005;7(2):5.
11. Romaine M, et al. So Long but Not Farewell: The Medscape Journal of Medicine (1999–2009). The Medscape Journal of Medicine. 2009;11(1):33.
12. Marshall D, et al. Disclosure of Industry Payments to Physicians: An Epidemiologic Analysis of Early Data from the Open Payments Program. Mayo Clinic Proceedings. 2016;91(1):84-96.
13. Maya B, et al. Does the Open Payments Database Provide Sunshine on Neurosurgery? Neurosurgery. 2016;79(6):933-938.
14. Jones R, et al. We've Updated Dollars for Docs. Here's What's New. ProPublica. 2016.
15. Liu J, et al. Payments by US pharmaceutical and medical device manufacturers to US medical journal editors: Retrospective observational study. BMJ. 2017;359:j4619.
16. Cosgrove L, et al. Comparison of DSM-IV and DSM-5 panel members' financial associations with industry: A pernicious problem persists. PLoS Medicine. 2012;9:e1001190.
17. Piper B, et al. A quantitative analysis of undisclosed conflicts of interest in pharmacology textbooks. PLoS One. 10:e0133261.
18. Piper B, et al. Undisclosed Conflicts of Interests among Biomedical Textbook Authors. AJOB Empirical Bioethics. 2018. doi:10.1080/23294515.2018.1436095.
19. Andreatos N, et al. Discrepancy between financial disclosures of authors of clinical practice guidelines and reports by industry. Medicine. 2017;96:e5711.
20. Top 200 Drugs to Memorize. Pharmacy Tech Test. 2017.
21. Top Causes of Death in America. Graphiq. 2014.
22. NPPES NPI Registry. National Plan & Provider Enumeration System. CMS. 2018.
23. Motulsky H, et al. QuickCalcs. GraphPad Software. 2018.



Association of ADHD Diagnosis Among Young Adolescents and Specialty of Managing Physician with Decision for Treatment
Adriana Feliz1, Bo La Jung1*, and Mariamawit Yilma1

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: bjung@som.geisinger.edu

Abstract
Prescription drug abuse continues to be a growing issue in the United States, leading to what is known as the opioid epidemic. The three most abused classes of prescription drugs in the United States are opioids, central nervous system (CNS) depressants, and CNS psychostimulants. CNS psychostimulants are the dominant treatment option for attention deficit hyperactivity disorder (ADHD) and are known to carry a high risk for habituation. The National Ambulatory Medical Care Survey (NAMCS) was used to perform a secondary data analysis assessing whether the specialty of the managing physician influences the treatment option chosen. This research hypothesized that specialists (psychiatrists) would have a higher rate of prescribing CNS psychostimulant medication, among other treatment options, for patients with ADHD compared to family medicine and pediatric physicians. Further, it hypothesized that there is no association between the specialty of the managing physician and prescribing CNS psychostimulant versus CNS non-psychostimulant medication. Our results demonstrate an association between managing physician and treatment type (chi-squared = 11.7674, p = 0.0082), including a statistically significant difference between specialties in choosing to provide no treatment versus combination therapy (behavioral therapy and CNS medication) (OR = 8.4977; CI, 2.0373–35.4450; p = 0.0033). However, there was no statistically significant difference between the specialty of the ADHD managing physician and the choice to prescribe CNS psychostimulant over non-psychostimulant medication (OR = 1.0386; CI, 0.2362–4.5678; p = 0.96). Overall, this study found that the specialty of the ADHD managing physician did not affect the odds of prescribing medication for ADHD over providing other treatment options, nor did it affect the odds of prescribing habit-forming CNS psychostimulants over CNS non-psychostimulants.
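For readers who want to see how odds ratios like those reported above are derived, here is a minimal Python sketch computing an odds ratio and a Wald confidence interval from a 2x2 table. The counts are hypothetical placeholders chosen only for illustration; they are not the NAMCS data, and the 95% z-value is an assumption about the interval the authors used.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald confidence interval from a 2x2 table
    (rows = physician specialty, columns = treatment decision).
    Assumes all four cells are nonzero."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not the study's data):
# a, b = psychiatrists choosing combination therapy vs. no treatment
# c, d = generalists choosing combination therapy vs. no treatment
print(odds_ratio_ci(24, 6, 10, 21))
```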

Introduction
Prescription drug abuse and its related health consequences are a significant health problem in the United States. The Centers for Disease Control and Prevention (CDC) estimates that 91 Americans die every day from a prescription-drug-related unintentional overdose (1). According to the National Institute on Drug Abuse, the three most abused classes of prescription drugs in the United States are opioids, commonly used for pain treatment; central nervous system (CNS) depressants, commonly used for anxiety and sleep disorders; and CNS psychostimulants, commonly used to treat attention deficit hyperactivity disorder (ADHD) (2). The use of prescription drugs with a high risk for habituation may lead to the development of a substance use disorder (SUD) (2). There are many known drivers of the increased use of prescription drugs, including provider clinical practices, insufficient oversight to curb inappropriate prescribing, and a belief by many people that prescription drugs are not dangerous (3).

Several retrospective and prospective studies reveal that the core symptoms associated with ADHD are risk factors for the development of a SUD (4). ADHD is a brain disorder marked by an ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development (5). Patients with ADHD have a 6.2 times higher risk of developing a SUD, and individuals with ADHD experience an earlier age of onset and a longer duration of SUDs (4). During the past 50 years, both health insurance providers and the National Registry of Mental Health have steadily reported a significant increase in the number of mental health disorder diagnoses among children and adolescents (4).

Although many medical specialties are licensed to diagnose and treat ADHD, research has not yet provided a definitive answer as to which specialty is best suited for the treatment of ADHD. Research studies found that, overall, primary care clinicians take a rule-based approach to prescribing, with a focus on treatment optimization, while psychiatrists take the child's view into account and value psychosocial approaches when determining treatment (7). Interestingly, one study identified that physicians who focus on treating broader patterns of impairment were wary of the potential side effects of long-term treatment with CNS psychostimulants (7). Psychiatrists scored high on their focus on symptom control and preference for long-term medication use, while pediatricians reported using more rule-based approaches (7). Current studies have been unable to determine the effects of these changing treatment patterns on children's health outcomes and their families (8).

ADHD has no known cure, and management relies on treatment options such as medication, psychotherapy, and counseling to reduce symptoms and improve daily life (7). A combination treatment approach has been found to be the best and safest approach for managing the symptoms of ADHD in young adolescents (7). However, CNS psychostimulants have been the dominant treatment due to their immediate effect on symptom control. These stimulants increase energy, attention, and alertness, and also elevate blood pressure, heart rate, and respiratory rate (8). CNS psychostimulant medications were provided, prescribed, or continued at about 80% of ADHD visits among young adolescents in 2012 to 2013 (6). Despite their benefits in effective and immediate symptom relief, CNS psychostimulants can be very addictive, carrying a high risk for abuse and dependence.

Due to the risk for habituation of psychostimulants and the increased risk of SUD associated with the core symptoms of ADHD, it is critical to determine the specialty most suited to treat ADHD among young adolescents. This research proposed that the most suitable specialty could be identified by analyzing data from the National Ambulatory Medical Care Survey (NAMCS). Identifying such a specialty could help policymakers craft policies to ensure that the most suitable specialty manages patients with ADHD, with a focus on reducing the risk of developing a SUD. Recent studies on ADHD diagnosis and treatment showed an overall increase in ADHD diagnoses among young adolescents, with psychostimulants remaining the dominant treatment, and highlighted differences between specialties of managing physicians with regard to treatment approaches (rule-based or informal). The goals of this research study were to determine whether the specialty of the provider influences the choice of treatment option (behavioral therapy, CNS medication, combination therapy, no intervention) for ADHD, and whether there is an association between the specialty of the managing physician (psychiatrist, pediatrician, family medicine physician) and prescribing CNS psychostimulants over non-psychostimulants in young adolescents (ages 4 to 17 years). We hypothesized that psychiatrists would have a higher rate of prescribing CNS medication for patients with ADHD compared to general physicians (family medicine physicians and pediatricians), and that there would be no association between the specialty of the managing physician and prescribing CNS psychostimulant versus CNS non-psychostimulant medication.

Materials and Methods
Study design and description of human subjects
The participants in the National Ambulatory Medical Care Survey (NAMCS), conducted by the National Center for Health Statistics (NCHS) in 2015, served as the study population for this research project. NAMCS is a national probability sample survey of non-federal office-based physicians. The parent study used a cross-sectional survey design. The NAMCS physician sample is composed of doctors of allopathic medicine (MDs) and doctors of osteopathic medicine (DOs). The basic sampling unit for the NAMCS is the physician-patient encounter, or visit. Only visits to the offices of non-federally employed physicians classified by the American Medical Association (AMA) or the American Osteopathic Association (AOA) as "office-based, patient care" were included. The parent study population was refined further by including only subjects 4 to 17 years of age; this age range was defined as young adolescence. Four years of age was chosen as the lower limit because the American Academy of Pediatrics guidelines for the diagnosis and treatment of ADHD begin at this age.

Process of data cleaning and analysis
The data used for this analysis came from secondary analysis of the NAMCS database for the year 2015. First, the data set was obtained in SPSS format and converted into a Microsoft Excel 2013 spreadsheet for analysis. Second, the data were restricted by age, including only patients 4 to 17 years old. Third, variables that would not contribute findings relevant to the objective of the study were deleted. Fourth, patients seen by family medicine physicians, pediatricians, or psychiatrists were selected for the study. Fifth, patients with a diagnosis of ADHD were identified and isolated using International Classification of Diseases, Ninth Revision (ICD-9) codes. After all five data-cleaning steps, the sample size for the study was 155 participants. In the absence of a survey codebook with precise definitions for CNS psychostimulant and CNS non-psychostimulant medications, we created a medication classification list and used it to assign each ADHD medication to one of two groups: CNS psychostimulant or CNS non-psychostimulant medication. Finally, other treatment options, such as mental health counseling and psychotherapy, were identified and grouped as behavioral therapy. This led to treatment options being defined as CNS medication only (CNS psychostimulants and CNS non-psychostimulants), behavioral therapy only (mental health counseling and psychotherapy), both treatments, or neither treatment. The survey questions associated with these variables are listed in Table 1.
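For readers who want to reproduce this style of cohort selection, the five data-cleaning steps can be scripted rather than performed by hand in a spreadsheet. The following sketch is illustrative only and is not the authors' code: it uses Python with pandas, a hypothetical file name, and hypothetical column names (AGE, SEX, SPECIALTY, DIAG1-DIAG3), since the actual NAMCS variable names depend on the survey release.

import pandas as pd

# Step 1: load the SPSS-format data set (reading .sav files requires pyreadstat).
visits = pd.read_spss("namcs2015.sav")

# Step 2: restrict to patients 4 to 17 years old.
young = visits[visits["AGE"].between(4, 17)]

# Step 3: drop variables that do not contribute to the study objective.
young = young[["AGE", "SEX", "SPECIALTY", "DIAG1", "DIAG2", "DIAG3"]]

# Step 4: keep visits managed by family medicine, pediatrics, or psychiatry.
cohort = young[young["SPECIALTY"].isin(
    ["FAMILY MEDICINE", "PEDIATRICS", "PSYCHIATRY"])]

# Step 5: flag ADHD using ICD-9 codes (314.00/314.01; survey files often
# store diagnosis codes without the decimal point).
is_adhd = (cohort[["DIAG1", "DIAG2", "DIAG3"]]
           .astype(str)
           .apply(lambda col: col.str.startswith("314"))
           .any(axis=1))
adhd_cohort = cohort[is_adhd]
print(len(adhd_cohort))  # the authors arrived at 155 visits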

Table 1. National Ambulatory Medical Care Survey questions and survey order

Exposure variable
The exposure variable for this study was the specialty of the ADHD managing physician (associated with survey question #343). To create a larger comparison sample, the variable "General Medicine" was created by merging family medicine physician and pediatrician data. "General Medicine" data were then compared to "Psychiatrist" data.

Outcome variables
The first outcome variable for this study was the type of ADHD treatment. This variable was defined as a patient receiving CNS medication (associated with survey questions #250–279), behavioral therapy (associated with survey questions #188 and #191), combination therapy, meaning both medication and behavioral therapy (associated with survey questions #188, #191, and #250–279), or no treatment. CNS medication includes CNS psychostimulants and non-psychostimulants. Behavioral therapy was defined as a combination of categories from the codebook, including psychotherapy and mental health counseling. To allow for a more interpretable analysis, this first outcome variable was subdivided into two variables: the first categorized patients as receiving behavioral therapy only or CNS medication only, and the second categorized patients as receiving no treatment or combination therapy. The second outcome variable for this study was the type of CNS medication (associated with survey questions #250–279), defined as receiving either CNS psychostimulants or CNS non-psychostimulants. ADHD medications classified as CNS psychostimulants were lisdexamfetamine, methylphenidate, dexmethylphenidate, dextroamphetamine-amphetamine, and dextroamphetamine. ADHD medications classified as CNS non-psychostimulants were clonidine and guanfacine. Of note, all drugs were converted to generic names for ease of grouping.
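Because no codebook entry distinguished the two drug classes, the medication classification list amounts to a simple lookup table. A minimal sketch in Python, using only the generic names given above (illustrative, not the authors' actual list):

# Classification list for ADHD medications, as described in the text.
PSYCHOSTIMULANTS = {
    "lisdexamfetamine", "methylphenidate", "dexmethylphenidate",
    "dextroamphetamine-amphetamine", "dextroamphetamine",
}
NON_PSYCHOSTIMULANTS = {"clonidine", "guanfacine"}

def classify(generic_name: str) -> str:
    """Assign an ADHD medication to one of the two CNS groups."""
    name = generic_name.strip().lower()
    if name in PSYCHOSTIMULANTS:
        return "CNS psychostimulant"
    if name in NON_PSYCHOSTIMULANTS:
        return "CNS non-psychostimulant"
    return "unclassified"

print(classify("Methylphenidate"))  # CNS psychostimulant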

Descriptive variables
The descriptive variables in this study include continuous, dichotomous, and categorical variables. The continuous variable was age (associated with survey question #3), with a range from 4 to 17 years old. Age was further subdivided into two categories (pre-teen, 4 to 12 years old, and teen, 13 to 17 years old) to form a dichotomous variable for the study. The dichotomous variable sex (associated with survey question #6) was included in the study with the options of male or female. The categorical variable race (associated with survey question #10) was separated into white only; black/African American only; Asian only; Native Hawaiian/Pacific Islander only; American Indian/Alaska Native only; more than one race; or not identified.

Statistical analysis
The data were analyzed using Microsoft Excel 2013, Epi Info 7, and IBM SPSS Statistics. After the data were coded and categorized, descriptive and analytical methods including frequency tables, pie charts, bar graphs, the chi-squared test of independence, simple logistic regression, and multivariate logistic regression were used. A chi-squared analysis with an alpha value of 0.05 was used to determine whether the specialty of physician and type of treatment are independent of one another. Data were analyzed to determine descriptive statistics, such as the proportions for each variable. Simple logistic regression was used to determine the odds ratio and 95% confidence interval to assess whether there is an association between the specialty of physicians and the prescription of CNS medication. Simple logistic regression was also used to test the association between the specialty of managing physician and the treatment option. Finally, multivariate logistic regression was used to determine whether confounding variables such as age and gender influence the association between specialty of physician and prescription of CNS medication for treatment of ADHD.

Ethical considerations
The National Center for Health Statistics (NCHS) is legally bound to assure the confidentiality of all responses. NCHS assures its participants that the survey will be treated with respect and that the responses they provide will not put them at risk. It takes all necessary precautions to protect the confidentiality of any identifiable information by releasing information only to those individuals or organizations that were mentioned to the respondent before he or she participated in the study. To minimize the risk of breaching the confidentiality of respondents, names, addresses, and any personal information that could directly identify individuals are kept only on internal files unless absolutely needed. The information released to the public is de-identified. This study relies entirely on data collected through NCHS; it imposed no physical risks, and the potential for breaching the confidentiality of de-identified survey data is minimal. Patient consent was implied when participants (physicians) completed and submitted the survey.

Results

Descriptive analysis
The study population consisted of 155 children diagnosed with ADHD. Univariate analysis of age was performed, with ages ranging from a minimum of 4 to a maximum of 17 years old. The most frequent age in the study was 9 years old, indicated by the mode value (Table 2). The mean age of 11.4323 and the median age of 11.0 suggest an approximately symmetric distribution. Among the 155 study participants currently diagnosed with ADHD, the majority, 101 participants (65%), were male, with only 54 participants (35%) being female (Figure 1 and Table 3).
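The descriptive statistics reported in this section (and in Tables 2 through 5) are one-way tabulations that can be reproduced with a few lines of pandas. The miniature data frame below is a hypothetical stand-in for the 155-row analysis file, which is not reproduced in this article:

import pandas as pd

# Hypothetical five-row stand-in for the cleaned 155-visit data set.
df = pd.DataFrame({
    "AGE": [9, 11, 14, 6, 17],
    "SEX": ["M", "M", "F", "M", "F"],
    "SPECIALTY": ["PEDIATRICS", "PSYCHIATRY", "PEDIATRICS",
                  "FAMILY MEDICINE", "PSYCHIATRY"],
})

print(df["AGE"].agg(["min", "max", "mean", "median"]))  # univariate age analysis
print(df["AGE"].mode())                                 # most frequent age
print(df["SEX"].value_counts(normalize=True))           # proportion male vs female
print(df["SPECIALTY"].value_counts())                   # visits per specialty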


Study participants were primarily identified as white, with 98 participants. African American/black was the next most represented racial group, with 13 participants. Two participants identified as more than one race, and the smallest represented racial group was Asian, with only 1 participant. Forty-one participants did not identify with any racial group (Figure 2 and Table 4), illustrating the uneven racial distribution of ADHD patients in this study population. When stratifying patient visits by physician specialty, the data show that the majority of visits among young adolescents with ADHD, age 4 to 17 years old, were managed by a pediatrician (94 children, 61%), followed by a psychiatrist (53 children, 34%) and then a family medicine physician (8 children, 5%) (Figure 3 and Table 5). Regarding the distribution of treatment options prescribed for young adolescent ADHD patients, 122 participants were prescribed medication, 42 were prescribed behavioral therapy (mental health counseling or psychotherapy intervention), 29 received combination therapy (medication and behavioral therapy), and 21 received no treatment (Figure 4). The types of treatment received by participants were further separated into 3 age groups (4 to 5 years old, 6 to 11 years old, and 12 to 17 years old) (Figure 5). Among children 4 to 5 years old, 75% received medication only, 25% received both treatments, and none received behavioral therapy only or no treatment. Among children 6 to 11 years old, 58.4% received medication only, 20.7% received both treatments, 6.5% received behavioral therapy only, and 14.4% received neither treatment. Within the 12 to 17 years old category, 60.8% received medication only, 9.5% received behavioral therapy only, 16.2% received both treatments, and 13.5% received neither treatment. Overall, 60% of the participants received medication only, 7.75% received behavioral therapy only, 18.71% received both treatments, and 13.54% received neither treatment.

Table 4. Frequency table showing distribution of ADHD patients by race

Table 2. Univariate analysis of age of study participants

Figure 3. ADHD visits stratified by physician specialty

Figure 1. Gender distribution of ADHD patients

Table 5. Frequency table showing ADHD visits stratified by physician specialty

Table 3. Frequency table showing gender distribution of ADHD patients

Figure 2. Distribution of ADHD patients by race

Figure 4. Distribution of treatment options prescribed for young adolescent ADHD patients



Figure 5. Types of treatments (no therapy, CNS medication only, both medication and behavioral therapy) stratified by age groups of ADHD patients

Analytical tests
To test the first hypothesis and compare the association of the specialty of the physician with the types of treatment prescribed for ADHD patients, a chi-squared test of independence was conducted. The exposure variable (ADHD managing physician) and outcome variable (types of treatment) were compared to discover whether the type of treatment prescribed can be correlated with the specialty of the physician. The null hypothesis was that the specialty of the diagnosing physician and ADHD treatment preferences are independent; the alternative hypothesis was that they are not independent. Bivariate analysis of the chi-squared test results comparing specialty of physicians and types of treatment prescribed for ADHD patients is shown in Table 6. The chi-squared statistic was 11.7674 with 3 degrees of freedom. The p-value was 0.0082, with alpha set at 0.05.

To further test the first hypothesis, namely the association between the specialty of the physician and the type of prescribed treatment, two bivariate simple logistic regression tests were conducted. The first tested the association between the exposure variable, specialty of physicians (general medicine and psychiatry), and the type of prescribed treatment (CNS medication or behavioral therapy). General physicians and behavioral therapy were used as the baseline for comparison. This regression shows an odds ratio of 0.9050, confidence interval of 0.2523 to 3.2462, and p-value of 0.8783 (Table 7). A second bivariate simple logistic regression tested the association between the specialty of physician and the type of prescribed treatment (both CNS medication and behavioral therapy, or no therapy). General physicians and no therapy were used as the baseline for comparison. Results show an odds ratio of 8.4997, confidence interval of 2.0373 to 35.4450, and p-value of 0.0033 (Table 8).

To test the association between the specialty of managing physician and the type of prescribed medication, a bivariate simple logistic regression test was performed (Table 9). This tested the association between the exposure variable, specialty of physicians (general medicine and psychiatry), and the outcome variable, type of prescribed medication (CNS stimulant medication or CNS non-stimulant medication). General physicians and CNS non-stimulant medications were used as the baseline for comparison. Results show an odds ratio of 1.0386, confidence interval of 0.2362 to 4.4678, and p-value of 0.9600.
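As a quick check, the reported p-value follows directly from the chi-squared statistic and its degrees of freedom. A one-line verification in Python with scipy:

from scipy.stats import chi2

# Upper-tail probability of the chi-squared distribution: the p-value for the
# reported statistic (11.7674) with 3 degrees of freedom.
p_value = chi2.sf(11.7674, df=3)
print(round(p_value, 4))  # 0.0082, below the alpha level of 0.05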

Multivariate logistic regression was performed to test the association between the exposure variable, specialty of physicians (general medicine and psychiatry), and type of prescribed medication (CNS stimulant medication or CNS non-stimulant medication) while controlling for potential confounders: gender (male or female) and age group (pre-teen or teenager) (Table 10). General physicians, CNS non-stimulant medications, female, and pre-teen were used as the baseline for comparison. Results show an odds ratio of 1.0984, confidence interval of 0.2456 to 4.4912, and p-value of 0.9023.
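For reference, the odds ratios and 95% confidence intervals in Tables 7 through 10 are the exponentiated coefficients of a logistic regression. The sketch below shows the general recipe with statsmodels on randomly generated 0/1-coded stand-in data; it illustrates the method only and will not reproduce the reported estimates:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Stand-in 0/1-coded data: psychiatrist (vs general medicine), male (vs female),
# teen (vs pre-teen), and stimulant (vs non-stimulant) for 155 visits.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "psychiatrist": rng.integers(0, 2, 155),
    "male": rng.integers(0, 2, 155),
    "teen": rng.integers(0, 2, 155),
    "stimulant": rng.integers(0, 2, 155),
})

# Multivariate logistic regression controlling for gender and age group.
X = sm.add_constant(df[["psychiatrist", "male", "teen"]])
fit = sm.Logit(df["stimulant"], X).fit(disp=0)

print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals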

Table 6. Chi-squared test results comparing specialty of physicians and types of treatment prescribed for ADHD patients

Table 7. Bivariate simple logistic regression testing the association between the specialty of managing physicians and types of prescribed treatment options (CNS medication or behavioral therapy)

Table 8. Bivariate simple logistic regression testing the association between the specialty of managing physicians and types of prescribed treatment options (both CNS medication and behavioral therapy, or no therapy)



Table 9. Bivariate simple logistic regression testing the association between the specialty of managing physicians and types of prescribed treatment options (CNS stimulant medication or CNS non-stimulant medication)

Table 10. Multivariate logistic regression testing the association between the specialty of managing physicians and types of prescribed treatment options (CNS stimulant medication or CNS non-stimulant medication) by controlling for age groups and gender

Discussion
Descriptive analysis
The descriptive analysis results describe the distribution of the study population by age, race, gender, specialty of physician, and treatment option. Since the majority of the study sample was male, the external validity of the study may be limited in terms of generalizability to the general population. We hypothesize that males predominated in the study population because the core symptoms of ADHD may overlap with typical behavior patterns of young males. The lack of diversity among the represented racial groups also poses problems for external validity, as the study population was predominantly white, creating barriers to generalizability. The distribution of treatment options prescribed for young adolescent ADHD patients illustrates that the most commonly prescribed treatment was CNS medication, followed by behavioral therapy (mental health counseling or psychotherapy intervention); the two behavioral therapy subgroup options were used with equal frequency. Even when the types of treatment received were separated into different age groups, medication was the most used treatment in every age group.

Analytical analysis
To test the first hypothesis, a chi-squared test and simple logistic regression analyses were conducted. The chi-squared test produced a p-value of 0.0082, which is less than the 0.05 significance level, so we reject the null hypothesis. Thus, there is a relationship between the specialty of a physician and the type of ADHD treatment regimen for patients age 4 to 17 years old. To investigate the association between the specialty of physicians and the treatment options further, two separate simple logistic regressions were performed.

Simple logistic regression testing the association between the specialty of physicians and the prescription of treatment options (combination therapy vs no therapy) indicated that the odds of receiving combination therapy among children managed by a specialty physician (psychiatrist) were 8.4997 times higher than among those managed by general practitioners. As the confidence interval does not include the value 1 and the p-value is less than 0.05, there is a statistically significant difference between the specialty of the prescribing physician and the prescription of combination therapy. For the association between the specialty of physicians and the prescription of treatment options (CNS medication vs behavioral therapy), the confidence interval includes the value 1 and the p-value is greater than 0.05, so there is no statistically significant difference between the specialty of the prescribing physician and the prescription of CNS medication vs behavioral therapy.

To test the second hypothesis, we conducted a simple logistic regression testing the association between the specialty of physicians and medication options (CNS stimulant vs CNS non-stimulant medications). The results indicated that the odds of a CNS stimulant ADHD medication prescription among children managed by a specialty physician (psychiatrist) were 1.0386 times higher than among those managed by general practitioners. As the confidence interval includes the value 1 and the p-value is greater than 0.05, there is no statistically significant difference between the specialty of the prescribing physician and the prescription of CNS stimulant ADHD medications.

The study included a multivariate analysis to test whether gender and age confounded the association between the specialty of physicians and prescribed medications (CNS stimulant vs CNS non-stimulant). The results revealed that the odds of a CNS stimulant prescription among children managed by a specialty physician were 1.0984 times higher than among those managed by general medicine practitioners after controlling for age group and gender. As the confidence interval includes 1 and the p-value is greater than 0.05, we conclude that there is no statistically significant difference between the specialty of the physician and the prescription of CNS stimulants after controlling for these potential confounders. Although the results were not statistically significant, the slight increase in the odds ratio after controlling for age group and gender suggests these covariates are possible positive confounders.

The limitations of this study include challenges during the data-cleaning process. It was difficult to isolate ADHD medications from the medication list on the original survey: each patient had multiple medications labeled as different variables, and there was no codebook for the numerous ADHD medications. To address this difficulty, an ADHD medication codebook was created to ease the cleaning process. Further, labeling for each diagnosis on the original survey could not be found, making it difficult to identify patients diagnosed with ADHD. ICD-9 codes were used to review and identify all diagnoses listed in the data set, and a numerical system was created to simplify the identification of patients with ADHD diagnoses. Another limitation was the small sample size for the family medicine specialty, which created difficulty for the logistic regression analysis. To mediate this issue, the pediatric and family medicine specialties were merged to create a new variable named General Medicine. This allowed for a larger sample size and a more accurate logistic regression analysis. The study also lacked diversity in gender and race, creating limitations in generalizability to other populations, as white and male participants were overrepresented. Reasons for this may include disparities in socioeconomic status interfering with access to health care, variable cultural definitions of mental health diagnosis and treatment, and behaviors of males that mirror the core symptoms of ADHD.

Conclusion
Prescription drug abuse and its related health consequences continue to be significant health problems in the United States. Psychostimulants remain the dominant form of treatment for ADHD and carry a high risk for the development of substance use disorder, so it is important to be aware of how frequently these medications are prescribed. This study indicated that the specialty of the managing physician did not affect the odds of prescribing medication for the treatment of ADHD, nor did it affect the odds of prescribing habit-forming CNS psychostimulant medication. Because general medicine practitioners and specialists have the same odds of prescribing CNS medication, both can be targets for public health initiatives aimed at reducing the prescription of habit-forming CNS psychostimulant medication. Public health initiatives can use the results of this study to educate family medicine physicians, pediatricians, and psychiatrists, raising awareness that medication remains the dominant treatment choice and promoting alternative treatment methods to reduce dependence on habit-forming CNS medications as the gold standard for treatment of ADHD. Future research can address the limitations of this study. It is suggested that future research replicate this study with more diverse study populations to increase generalizability, and use data from diagnosing physicians to determine whether the specialty of the physician affects the rate of ADHD diagnosis. It is also recommended to increase the sample size in each comparison category to increase the power of the study and reduce the risk of Type II error.

Acknowledgments We would like to express special thanks to Professor Catherine Freeland, MPH, who assisted us throughout this community health research.

References
1. Understanding the Epidemic. Drug Overdose. National Center for Injury Prevention and Control. 2017. Available at: https://www.cdc.gov/drugoverdose/epidemic/index.html. Accessed Aug. 17, 2017.
2. Prescription Drugs & Cold Medicines. National Institute on Drug Abuse. 2017. Available at: https://www.drugabuse.gov/drugs-abuse/prescription-drugs-cold-medicines. Accessed Aug. 23, 2017.
3. Addressing Prescription Drug Abuse in the United States: Current Activities and Future Opportunities. Journal of Drug Addiction, Education and Eradication. 2015;11(1):75-110.
4. Jain S, Jain R, Islam J. Do stimulants for ADHD increase the risk of substance use disorder? Current Psychiatry. 2011;(8):20-24.
5. Garfield C, Dorsey E, Zhu S, et al. Trends in Attention Deficit Hyperactivity Disorder Ambulatory Diagnosis and Medical Treatment in the United States, 2000–2010. Academic Pediatrics. 2012;12(2):110-116. doi:10.1016/j.acap.2012.01.003.
6. Merten E, Cwik J, Margraf J, Schneider S. Overdiagnosis of mental disorders in children and adolescents (in developed countries). Child and Adolescent Psychiatry and Mental Health. 2017;11(1). doi:10.1186/s13034-016-0140-5.
7. NIMH. Attention Deficit Hyperactivity Disorder. 2017. Available at: https://www.nimh.nih.gov/health/topics/attention-deficit-hyperactivity-disorder-adhd/index.shtml. Accessed Aug. 17, 2017.
8. Albert M, Rui P, Ashman JJ. Physician Office Visits for Attention-deficit/Hyperactivity Disorder in Children and Adolescents Aged 4–17 Years: United States, 2012–2013. National Center for Health Statistics. 2017. Retrieved June 7, 2017, from https://www.cdc.gov/nchs/products/databriefs/db269.htm.


Scholarly Research In Progress • Vol. 2, November 2018

Recurrent Cellulitis and Complex Regional Pain Syndrome Type 1 Eduardo Ortiz1*

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509. *Correspondence: eortiz@som.geisinger.edu

Abstract
Complex regional pain syndrome (CRPS) is a chronic neurologic condition involving the limbs. It is characterized by edema, allodynia, hyperalgesia, and abnormalities of blood flow, with autonomic and motor dysfunction, and frequently occurs following trauma. We present the case of a 47-year-old female with CRPS of the left lower leg that developed following a knee injury. After 8 months of persistent pain and edema, she presented to the Emergency Department with cellulitis of the left lower leg, ultimately necessitating an above-the-knee amputation. Two years later, following a right ankle sprain, she developed CRPS of the right leg and recurrent cellulitis. Alterations in blood flow and inflammation observed in CRPS ultimately result in trophic changes that may predispose to ulceration and recurrent infection, as observed in our patient. Genetic factors and autoimmunity may also play a role. Studies have shown that some patients develop immunoglobulin G autoantibodies directed toward autonomic neurons. This finding suggests that some individuals are predisposed to developing CRPS, which may explain the bilateral involvement in our patient after isolated traumatic events, as well as the more rapid progression of disease observed with the second episode, as circulating autoantibodies may already have been present.

Introduction
Complex regional pain syndrome (CRPS) is a chronic and painful neurologic condition involving the limbs that presents with edema, allodynia, hyperalgesia, and abnormalities of blood flow, with autonomic and motor dysfunction, frequently following trauma (1). The pathophysiology of CRPS is thought to be related to altered sympathetic nervous system function accompanying an exaggerated inflammatory response following trauma. CRPS type 1, previously referred to as reflex sympathetic dystrophy, occurs in patients without evidence of peripheral nerve injury and is the most common form of CRPS, responsible for 90% of cases. In contrast, CRPS type 2 presents after partial injury to a major peripheral nerve, with similar clinical findings of allodynia and hyperalgesia (2, 3). Epidemiologically, CRPS reportedly occurs up to 3 times more frequently in women than in men. Onset is related to a traumatic inciting event, most commonly fracture or surgery (3). Although causal relationships have not been established, identified risk factors include menopause and osteoporosis, a history of migraine headaches and asthma, angiotensin-converting enzyme (ACE) inhibitor therapy, and cigarette smoking. Cigarette smoking in particular has been linked to poorer prognosis (1). The clinical presentation of CRPS is multifaceted, with a combination of sensory changes, autonomic symptoms, motor symptoms, inflammation, and trophic changes (3). The differential diagnosis is thus broad, and the work-up includes ruling out emergent conditions such as compartment syndrome and deep vein thrombosis. Other conditions that may share clinical features with CRPS include peripheral neuropathy, peripheral vascular disease, Raynaud phenomenon, erythromelalgia, and acute infection of the skin, muscle, or joint.

Case Presentation
A 47-year-old female presented with progressive pain, edema, and limited range of motion of the left lower leg over the course of 4 months following a fall that injured her left knee. Repeated radiographs and ultrasound were negative for fractures and clots. MRI demonstrated generalized subcutaneous edema and minimal patchy areas of bone marrow edema (Figure 1). A nuclear medicine 3-phase bone scan of the knee showed diffuse uptake (Figure 2). These findings, in combination with the clinical presentation, were suggestive of complex regional pain syndrome. Despite temporary relief obtained after placement of a thoracic spinal cord stimulator, the patient continued to have persistent pain and edema over the next 8 months, after which she presented to the Emergency Department with worsening pain, diffuse erythema, and an ulceration with yellow drainage on the anterior aspect of the left lower leg. The patient was admitted for her first episode of cellulitis, treated successfully with intravenous vancomycin and piperacillin-tazobactam, and discharged on oral cephalexin. Two weeks following discharge and still on oral antibiotics, the patient reported increasing hypersensitivity and burning pain with episodic shooting. She was admitted again for cellulitis at an outside hospital just a few weeks later. After another 2 episodes of cellulitis over several months, managed with intravenous cefazolin followed by oral cephalexin based on wound cultures that demonstrated methicillin-sensitive Staphylococcus aureus (MSSA), the decision was made to perform an above-the-knee amputation 4 months after the last episode due to continued pain, ulceration, limited range of motion, and repeated infections. One month prior to the amputation, a culture of a persistent leg ulcer demonstrated only normal skin flora. Gross inspection of the surgical specimen identified numerous irregularly shaped, superficially located ulcers ranging in size from 0.2 x 0.2 cm to 2.5 x 1.2 cm. The femoral, posterior tibial, and anterior tibial artery lumens were patent without observable calcification. A popliteal lymph node measuring 1.5 x 1.3 x 0.5 cm was identified, the cut surface of which was tan-white and smooth. No microscopic examination of the amputated limb was performed.




Two years later, the patient presented to the Emergency Department with a right-sided ankle sprain sustained 5 days prior during a fall. Within a week, she returned to the Emergency Department for progressive symptoms and persistent edema. Presumptively diagnosed with cellulitis, the patient was initially managed with intravenous antibiotics that were subsequently discontinued when joint aspirate returned negative for growth, and improvement was seen with nonsteroidal anti-inflammatory drugs and compression. However, she presented 6 months later to the Emergency Department after developing increasing hypersensitivity to pain, edema, erythema, and superficial ulcerations with drainage (Figure 3 and Figure 4). An attempt at outpatient management with oral trimethoprim-sulfamethoxazole was unsuccessful, and she returned a week later and was admitted for inpatient intravenous treatment with vancomycin and piperacillin-tazobactam. She experienced another episode of cellulitis 3 months later that was again unsuccessfully managed with outpatient oral cephalexin, necessitating inpatient treatment with intravenous cefazolin for resolution.

Figure 1. MRI of left ankle showing generalized subcutaneous edema and minimal patchy areas of bone marrow edema

Figure 2. Three-phase nuclear medicine bone scan demonstrating diffuse uptake by the left knee, in addition to diffuse peri-articular uptake at the left ankle. Focal uptake at the right hindfoot attributed to arthritis or post-traumatic changes.

Figure 3. Photograph of left lower leg demonstrating erythema, edema, and weeping superficial erosions and ulcerations

Figure 4. Photograph of left lower leg



Discussion
The clinical severity of CRPS is highly variable, with only 7% of patients presenting with severe complications, including infections, ulcers, chronic edema, dystonia, and myoclonus (3). The majority of these complications were observed in this patient. Additionally, the presentation of CRPS varies depending on the stage, as the affected limb may alternate between chronic (cold) and acute (warm) phases. The cold phase is characterized by increased sympathetic function leading to vasoconstriction, producing a cold, cyanotic, and clammy limb. This is thought to be secondary to up-regulation and hypersensitivity of adrenergic receptors that occur as a compensatory response to the reduction of catecholamines in the warm phase. During the warm phase, an exaggerated inflammatory response produces vasodilation, edema, erythema, and warmth as a result of pro-inflammatory cytokines, including interleukin (IL)-1β, IL-2, IL-6, tumor necrosis factor-α (TNF-α), calcitonin gene-related peptide, bradykinin, and substance P (1). A study investigating the vascular abnormalities observed in CRPS type 1 demonstrated that, compared to the unaffected contralateral limb, the affected limb shows increased perfusion in the warm phase, while the cold phase shows the opposite, with cooler temperature and decreased blood flow (4). These alterations in blood flow and inflammation ultimately result in the trophic changes observed in CRPS, predisposing to complications such as ulceration and recurrent infection, as observed in our patient.

Genetic factors and autoimmunity have also been hypothesized to play a role in the development of CRPS. Studies have shown that some patients develop immunoglobulin G autoantibodies directed towards autonomic neurons and somatic peripheral nerves (1, 5). These antigens include sympathetic and myenteric plexus neurons (6). Additionally, autoantibodies directed against alpha-1a adrenergic receptors have been identified in patients with CRPS (5). These findings suggest that some individuals are predisposed to developing CRPS, which may explain the bilateral involvement in our patient that occurred after isolated traumatic events years apart. This may also explain the more rapid progression of disease observed with the second episode, as circulating autoantibodies may already have been present.

References
1. Goh EL, Chidambaram S, Ma D. Complex Regional Pain Syndrome: A Recent Update. Burns & Trauma. 2017;5(2).
2. Stanton-Hicks M, Jänig W, Hassenbusch S, Haddox JD, Boas R, Wilson P. Reflex sympathetic dystrophy: changing concepts and taxonomy. Pain. 1995;63(1):127-133.
3. Raja SN, Grabow TS. Complex Regional Pain Syndrome I (Reflex Sympathetic Dystrophy). Anesthesiology. 2002;96(5):1254-1260.
4. Wasner G, Schattschneider J, Heckmann K, Maier C, Baron R. Vascular abnormalities in reflex sympathetic dystrophy (CRPS I): mechanisms and diagnostic value. Brain. 2001;124(3):587-599.
5. Dubuis E, Thompson V, Leite MI, Blaes F, Maihofner C, Greensmith D, et al. Longstanding complex regional pain syndrome is associated with activating autoantibodies against alpha-1a adrenoceptors. Pain. 2014;155(11):2408-2417.
6. Kohr D, Tschernatsch M, Schmitz K, Singh P, Kaps M, Schafer KH, et al. Autoantibodies in complex regional pain syndrome bind to a differentiation-dependent neuronal surface autoantigen. Pain. 2009;143(3):246-251.



Scholarly Research In Progress • Vol. 2, November 2018

Improving Lipid Management in Patients with Chronic Kidney Disease at Northeast Pennsylvania Nephrology Associates Shane Warnock1*, Jordan Chu1, Holly Corkill1, and Sarah McDonald1

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509. *Correspondence: swarnock@som.geisinger.edu

Abstract
Objectives: Following the evidence-based clinical guidelines for statin therapy in patients with chronic kidney disease (CKD) established by the Kidney Disease: Improving Global Outcomes (KDIGO) organization, the primary aims of this project were to assess the percentage of patients treated within guidelines at Northeast Pennsylvania Nephrology Associates and to develop interventions for improvement as needed. Methods: An initial data collection and evaluation of 185 randomly selected patients meeting KDIGO criteria for statin therapy was performed. Patients not on a statin and with no documented contraindication were identified as the target population for intervention. Differences in statin therapy between patient populations were evaluated with chi-squared tests. Interventions included patient and physician education, as well as a pop-up survey built into the office's medical record system. Post-intervention data collection and evaluation were performed to assess intervention effectiveness. Results: Initial evaluation revealed that 27% of patients meeting KDIGO guidelines for statin therapy, with no documented contraindications, were not on a statin at the time of data collection. Further analysis revealed a significant difference (P < 0.05) between male and female patients, with 38% of females meeting KDIGO guidelines not on a statin vs only 19% of males. Post-intervention data collection was limited to 76 patients of a single physician due to inconsistent utilization of the designed interventions throughout the office. For the physician implementing the interventions, 22 patients (29%) were not on statin therapy without contraindication prior to intervention. This was reduced to 15 patients (20%) post-intervention. Further analysis revealed that significant differences in the treatment of male and female patients (P < 0.05) were provider-dependent. Conclusion: The analysis of patients from a single provider indicated that the interventions in this study were effective in increasing the number of CKD patients treated within KDIGO guidelines; however, physician utilization of the interventions was underwhelming, and further education and collaborative work are needed to reach all patients of this office. Future work will include reevaluating the designed interventions and assessing physician adherence to their use while reanalyzing effectiveness.

Introduction
Chronic kidney disease (CKD), defined as the presence of at least 3 months of albuminuria or of impaired kidney function (an estimated glomerular filtration rate < 60 mL/min/1.73 m2), has been shown to be associated with an increased risk of cardiovascular disease. Cardiovascular mortality increases linearly with decreasing glomerular filtration rate (GFR), and rates of coronary heart disease and myocardial infarction are high, comparable to those seen in patients with diabetes. Abnormal lipid metabolism is commonly seen in patients with CKD, and although CKD patients should be considered at high risk for cardiovascular disease, they often go unevaluated for dyslipidemia and undertreated with the cholesterol-lowering agents recommended by evidence-based clinical guidelines (1, 2). In previous practice, guidelines for lipid management were largely based on specific cholesterol level targets. However, the Kidney Disease: Improving Global Outcomes (KDIGO) organization has established evidence-based clinical guidelines for lipid management in patients with CKD based on cardiovascular risk assessment instead. This risk-assessment approach parallels guidelines published by the American College of Cardiology and the American Heart Association. KDIGO recommends statin therapy for lipid management for all CKD patients stage 3 or greater who are 50 years of age or older (or 18 to 49 years of age with a history of diabetes mellitus, coronary artery disease, or previous stroke) and not dependent on dialysis (1). Although these guidelines have been clinically proven to benefit CKD patients, a common clinical issue with statin use is discontinuation of therapy due to symptoms of myalgia. While clinical trial data do not suggest that the side effects of statins significantly reduce compliance, in clinical practice patients are often intolerant of these effects and decide to end therapy. In the event of a patient experiencing myalgia, it is recommended to stop the offending statin until the myalgia resolves before beginning therapy with another statin (3). Given the high risk of cardiovascular disease in patients with CKD, physicians should utilize the KDIGO guidelines and work closely with each patient to formulate a plan of care that involves choosing the right statin on an individual basis. With that in mind, an assessment of the percentage of patients treated within the KDIGO guidelines at Northeast Pennsylvania Nephrology Associates was conducted. Following the initial assessment, the need for interventions for improvement was evaluated, and interventions were developed and administered as needed.

Materials and Methods
During Phase 1, initial data collection was performed, evaluating 211 patients for age, sex, disease status (HTN, DM, CKD stage, GFR, proteinuria), smoking history, and lipid management status (statin use, statin allergy, other lipid medications used, and previous statin use). Using the KDIGO guidelines for lipid management in patients with CKD, the patient list was narrowed to 185 total patients (105 male, 80 female) who met the criteria for statin therapy. Of these 185, patients who were not on a statin and did not have a documented contraindication were identified as the target group for intervention. Chi-squared analysis was used to evaluate the results of this data collection. Three interventions were designed to improve statin use in CKD patients meeting KDIGO guidelines. Patient education pamphlets detailing statin benefits, side effects, and utility in these patients' health care were designed and distributed to CKD patients in the Northeast Pennsylvania Nephrology Associates office. A physician educational information sheet detailing the Phase 1 results of statin use within the office was distributed to the nephrologists of Northeast Pennsylvania Nephrology Associates. Finally, an EMR pop-up survey was designed and implemented that alerted physicians to a patient's statin status and encouraged them either to place the patient on a statin or to document the reason the patient was not on one.

After these interventions had been in place for more than 15 business days, data were collected from 76 patients using the same parameters as in Phase 1. These data were then compared to the Phase 1 results to evaluate intervention effectiveness, again using chi-squared analysis.

Results and Discussion
Analysis of the initial data collection of 211 patients revealed 185 patients, 105 males and 80 females, meeting KDIGO criteria for statin therapy. As seen in Figure 1, of these 185 patients, 133 (72%) were currently receiving statin therapy, 2 (1%) had a documented statin allergy, and 50 (27%) were not on a statin and had no documented contraindication. These results were summarized and presented to the office's providers, emphasizing the 27% of patients not treated per KDIGO guidelines despite no apparent contraindication and highlighting the opportunity to improve care in this group through the interventions described above. Once the statin therapy survey had been in the office's EMR system for more than 15 business days, the second round of data collection was completed. However, it was discovered at this point that only 1 provider was utilizing the survey and patient information sheet, while the other 3 providers in the office were bypassing the survey and not handing out the information sheets. This limited the ability of the study to assess the effectiveness of the designed interventions in increasing statin therapy per KDIGO guidelines. Analysis was performed on the 76 patients of the single provider utilizing the interventions, and the results, presented in Figure 2, supported the effectiveness of the interventions. For this provider, prior to intervention, 22 patients (29%) were not treated within KDIGO guidelines despite having no documented contraindication. This number was reduced to 15 patients (20%) post-intervention. Furthermore, the survey indicated that of these remaining 15 patients, 11 declined statin therapy due to personal preference and 4 were not offered statin therapy due to physician preference. The post-intervention results are encouraging in that, with continuing patient education on statin therapy, the number of patients treated per KDIGO guidelines may be increased further.

Figure 1. Lipid management in patients meeting KDIGO criteria for statin therapy

Figure 2. Post-intervention lipid management in patients meeting KDIGO criteria for statin therapy

Lastly, an important incidental finding of this study was the observed difference in the appropriate use of statin therapy between male and female patients. Further analysis of the initial data collection found the difference to be statistically significant (chi-squared, P < 0.05). As seen in Figure 3, 38% of females meeting KDIGO criteria for statin therapy without documented contraindication were not taking a statin, while only 19% of males fell into this category. In exploring why this discrepancy occurred, no sex difference was observed among the 11 patients who declined statin therapy during the post-intervention data collection (6 female, 5 male). Providers were also compared to one another on their use of statin therapy in male and female patients: Provider 1 showed no statistically significant difference, while Provider 2 did (both by chi-squared test, P < 0.05). The percentage of patients indicated for statin therapy, with no documented contraindication, who were not on a statin was 18% for males and 22% for females for Provider 1, versus 36% of males and 54% of females for Provider 2. These findings point to a provider-specific undertreatment of female patients with statin therapy when indicated, as well as an undertreatment of all patients per KDIGO guidelines. Future work on this project will focus on collaboration with the office's providers to further educate them on the importance of lipid management in this patient population and on the discrepancies observed in the treatment of female patients. Additionally, the method of implementation of the patient survey in the office's EMR system is being readdressed to improve physician use of the intervention.
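The sex difference above is a standard chi-squared test on a 2x2 table of statin status by sex. The counts below are approximate reconstructions from the reported percentages (roughly 30 of 80 females and 20 of 105 males not on a statin), so the sketch illustrates the test rather than reproducing the authors' exact computation:

from scipy.stats import chi2_contingency

# Rows: female (n = 80), male (n = 105).
# Columns: on a statin or contraindicated, not on a statin.
table = [[50, 30],
         [85, 20]]

chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(round(chi2_stat, 2), round(p_value, 4))  # p < 0.05, consistent with the text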



Patient education over subsequent office visits is also an important next step and will involve editing the patient educational handout into a clearer, more understandable sheet. Once the necessary adjustments are made, another round of data collection will be completed.

Figure 3. Lipid management in male vs female patients meeting KDIGO criteria for statin therapy

Conclusion
The largest barriers to improving lipid management in CKD patients meeting KDIGO criteria for statin therapy appear to be both patient- and provider-centered. Patients showed hesitancy about being placed on a statin, while providers displayed some resistance to following the guidelines outlined by the KDIGO group. Intervening with patient education handouts and an in-chart survey on statin therapy was helpful for the one provider utilizing these tools; however, the number of patients declining statin therapy remained substantial, and further work will continue to increase patient education.

Acknowledgments We would like to acknowledge and thank John Prior, DO, for his mentorship throughout this project, the key role he took in the implementation of the patient survey in his office’s EMR system, and his continued interest to work with us moving forward. Additionally, we thank the entire staff of the office of Northeast Pennsylvania Nephrology Associates for their support in the collecting of patient data for this project.

References 1.

Kidney Disease: Improving Global Outcomes (KDIGO) Lipid Work Group. KDIGO Clinical Practice Guideline for Lipid Management in Chronic Kidney Disease. Kidney Int Suppl. 2013;3:259-305.

2. Rosenson R. Overview of the Management of Chronic Kidney Disease in Adults. In: UpToDate, Curhan G, ed. Waltham (MA): UpToDate; 2017.
3. Rosenson R. Statins: Actions, Side Effects, and Administration. In: UpToDate, Freeman M, ed. Waltham (MA): UpToDate; 2018.


Scholarly Research In Progress • Vol. 2, November 2018

Migraines: Current and Future Therapies Irene Kotok1*

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509. *Correspondence: ikotok@som.geisinger.edu

Abstract
Migraine headaches are debilitating and affect a significant percentage of the adult and pediatric population. Migraine episodes contribute significantly to lost productivity and medical costs in adult populations, and effective long-term approaches to treatment or prevention have been elusive. With continued research, the understanding of migraine pathophysiology has evolved significantly, from the original conception of a vascular disorder to one of neurological origin. About 20 years ago, the initial connection was made between calcitonin gene-related peptide (CGRP) receptors and migraines, ushering in new drug discovery efforts with small-molecule receptor antagonists and monoclonal antibodies. Currently, at least 4 pharmaceutical compounds in this new class are being reviewed by the FDA for final approval, and more are in the pipeline in Phase II and Phase I trials.

Introduction Migraine is one of the world’s most common neurologic disorders with pervasively debilitating effects. According to the Global Burden of Disease Survey, it affects 17% of females and 9% males worldwide, and accounts for 2.9% of all of years lost to disability, earning it seventh place among specific causes of disability (1). As such, migraines present significant societal costs in terms of health care and productivity loss (1). Migraines may begin in early childhood, with sharp increase in prevalence from 10 to 14 years old; prevalence continues to increase steadily until 35 to 39 years of age. According to the International Classification of Headache Disorders, migraines are classified as either episodic or chronic. Episodic or acute migraines occur less than 15 days per month and are estimated to affect about 12% of the adult population with more than a third of those requiring some form of preventive therapies (2). Up to 2.5% of patients with episodic migraine within a year progress to chronic type (3). Chronic migraines are defined as migraine headaches lasting longer than 15 days per month. These affect at least 1% of the adult population with 100% need of preventive management (2). Migraine is a chronic neurological disorder characterized by severe headaches that are accompanied by autonomic nervous system abnormalities (4). A migraine attack is generally divided into 4 key phases (5). The premonitory phase, often referred to as prodrome, precedes the main migraine episode. Prodromal symptoms include fatigue, neck pain, and difficulty concentrating, and can appear hours to days before the headache (5). Further, migraine episodes are classified as being with or without an aura. While the majority of patients have migraine headaches without aura, those who do have them with auras experience a variety of neurologic disturbances such as scotomas, somatosensory changes, motor deficits, or language disturbances. Auras typically start within an hour of the headache onset, and last

less than an hour. Migraine episodes can also be followed by a postdromal phase with physiologic sequelae manifesting as drowsiness, irritability, heightened sensory perception, difficulty concentrating, and, for certain individuals, even euphoria. Postdromes can last anywhere from a few hours to several days (5). The International Classification of Headache Disorders, third edition (ICDH-3), defines migraine headache as a headache attack that lasts between 4 and 72 hours either untreated or unsuccessfully treated. In addition, the headache must have 2 of the following 4 characteristics: unilateral location, pulsating in quality, moderate or severe pain intensity, aggravation by physical activity such as walking or climbing stairs or avoidance of such. Required diagnostic criteria include either nausea and vomiting or photophobia, phonophobia during the headache. Lastly, the headache is not accounted for by any other ICHD-3 diagnosis. Migraine headaches are often misdiagnosed due to the focus on severity and quality of pain. Migraine headaches can be moderate, bilateral, and constant; thus, it is important to pay attention to any associated prodrome or accompanying symptoms (6). Current treatment options: Prevention and acute abortive options Migraine prevention is key to the treatment of the disorder in order to minimize the debilitating effects of the condition and the corresponding societal burden imposed by it. While a number of options are available for migraine prevention, their utilization is limited due to side effects such as cognitive slowing, drowsiness, or weight gain, as well as incomplete efficacy (1). Since none of the medications currently used for prevention of migraines have been developed specifically for that purpose, they have side effects that patients are not willing to tolerate long term, resulting in low adherence to treatment. Adherence to migraine prophylactic medications in clinical practice has been reported between 27% and 32% at 6 months with steady declines over time. In addition, migraines are often underdiagnosed and undertreated (3). Acute migraine treatments are for those who do not qualify for the preventive treatment or those who do qualify but fail preventive measures. Currently available treatment options are limited to triptans, ergots, NSAIDs, combinations of acetaminophen-aspirin-caffeine or sumatriptan-naproxen, antiemetic agents, single-pulse transcranial magnetic stimulation (TMS), with triptans and NSAIDs being the leading options. According to clinical trials, none of the options are completely effective for everyone with acute migraine episode, and none provide relief in under 2 hours. In clinical practice, the slow and incomplete response to triptans causes dissatisfaction among a significant number of patients (6). Triptans are the only class of pharmacological agents specifically indicated for acute migraine treatment (1). They are not without the side effects and their use is limited due 37



Current treatment options: Prevention and acute abortive options

Migraine prevention is key to the treatment of the disorder, in order to minimize its debilitating effects and the corresponding societal burden. While a number of options are available for migraine prevention, their utilization is limited by side effects such as cognitive slowing, drowsiness, or weight gain, as well as by incomplete efficacy (1). Because none of the medications currently used for the prevention of migraines were developed specifically for that purpose, they carry side effects that patients are unwilling to tolerate long term, resulting in low adherence to treatment. Adherence to migraine prophylactic medications in clinical practice has been reported between 27% and 32% at 6 months, with steady declines over time. In addition, migraines are often underdiagnosed and undertreated (3).

Acute migraine treatments are for those who do not qualify for preventive treatment or those who qualify but fail preventive measures. Currently available options are limited to triptans, ergots, NSAIDs, combinations of acetaminophen-aspirin-caffeine or sumatriptan-naproxen, antiemetic agents, and single-pulse transcranial magnetic stimulation (TMS), with triptans and NSAIDs being the leading options. According to clinical trials, none of the options is completely effective for every acute migraine episode, and none guarantees relief within 2 hours. In clinical practice, the slow and incomplete response to triptans causes dissatisfaction among a significant number of patients (6).

Triptans are the only class of pharmacological agents specifically indicated for acute migraine treatment (1). They are not without side effects, and their use is limited by vasoconstrictive effects (7). However, the primary concern with frequent use of triptans is not the associated side effects, but rather the potential for developing medication-overuse headaches in those who use them more than 10 days per month (6). Triptans are selective serotonin agonists that target serotonin 1B and 1D receptors, causing vasoconstriction of cranial blood vessels, reduced transmission of pain signals, and inhibition of vasoactive peptide release in the trigeminal dorsal horn. Triptans are most commonly administered orally. However, oral administration has drawbacks in patients whose migraine attacks are associated with nausea and vomiting (8). Non-oral preparations of triptans, such as intranasal sprays, subcutaneous injectables, and rectal suppositories, are available to achieve therapeutic levels more quickly when the gastrointestinal route is not optimal (6). A comprehensive meta-analysis of intranasal sumatriptan, the most frequently prescribed triptan, concluded that it is effective as a treatment for acute migraine but was associated with a sixfold increase in the risk of taste disturbance compared to placebo (8).


NSAIDs, in both over-the-counter and prescription formulations, have been found to be effective in some patients and, for others, to have additive benefit when taken with a triptan. Chronic use can lead to gastric irritation and excessive bleeding (6). Dihydroergotamine (DHE) tablets in combination with caffeine are used as an abortive agent for acute migraine attacks, and intravenous DHE is commonly used for refractory migraines, with positive outcomes for a number of patients. Nausea, dizziness, and, at times, residual headaches are common side effects associated with DHE. Due to its vasoconstrictive effects, DHE is contraindicated in individuals with peripheral vascular disease or coronary artery disease (6, 14, 15).

Certain lifestyle factors and medications have been identified as triggers for migraine attacks. Irregular sleep, irregular caffeine intake, and high stress levels have been linked to migraines. In women of reproductive age, migraine attacks tend to occur either just prior to or following the menstrual period. Furthering the connection between hormonal balance and migraines, oral contraceptives and postmenopausal hormone therapy have been found to exacerbate migraines. Nasal decongestants, SSRIs, and PPIs have also been found to adversely affect migraines. Moderating the doses of these medications, or discontinuing them, can significantly decrease the frequency and intensity of migraines (6).

Antiemetics play an important role as adjunctive therapy in patients with severe nausea and vomiting associated with an acute attack. Parenteral administration is commonly used for patients presenting to the Emergency Department. In less acute settings, non-oral formulations of antiemetics are used to improve efficacy and the therapeutic response to migraine medications (6).

Specific guidelines for the initiation of prophylactic therapy for migraines are lacking. The decision to start therapy is based on headache frequency, severity, intensity, and response to therapies for acute attacks. Generally, preventive therapies are started if migraines occur once per week or have a total duration of 4 days or more per month. Currently, there are no medications specifically approved for preventive management; all current preventive migraine therapies were initially introduced to the market for other indications. Thus, choosing the right medication depends on how well it works as a migraine prophylactic as well as on any associated conditions the patient may have. The following classes of medication are currently available for migraine prophylaxis: tricyclic antidepressants, beta-blockers, anticonvulsant agents, candesartan (an angiotensin receptor blocker found to be effective), flunarizine (a non-specific calcium antagonist), botulinum toxins, nonprescription therapies (coenzyme Q10, magnesium, melatonin, petasites, riboflavin), and supraorbital nerve stimulation (6).

Elucidating the pathophysiology of migraines has not been straightforward, largely due to the multifactorial nature of the disorder. Late in the 19th century, migraines were initially classified as a disease of the nervous system. Then, in the mid- to late 20th century, with the introduction of the triptan class of pharmaceuticals for the treatment of acute migraine, the disorder was reclassified as vascular (1, 7). The new classification was primarily supported by the work of Harold Wolff and his collaborators, who demonstrated that infusion with ergotamine "decreased the amplitude of extracranial artery pulsation in parallel with a decrease in headache severity" (7). That connection furthered the notion that the pulsatile nature of the migraine headache was due to pulsating vessels. Current research has shed some light on the pathophysiology of migraines, making it evident that migraines have a neurological basis and that vasodilation is neither necessary nor sufficient to elicit migraine pain; vasodilation is thus an epiphenomenon rather than the cause of migraines (1). Furthermore, imaging studies have demonstrated that sumatriptan does not constrict extracranial arteries (7). And while significant strides have been made in understanding migraine pathophysiology, "the etiology and pathogenesis of migraine are still not yet completely understood" (4).

New treatment targets

Transcranial magnetic stimulation (TMS) applies a magnetic field, created by an electrical current passing through a coil placed around the head. Pulsing the magnetic field near neural tissue induces an electrical current in the brain. Currents of a certain size, duration, and location can depolarize neural tissue and generate an action potential that is then propagated by the body's normal nerve conduction mechanisms. TMS was initially introduced into the realm of medicine in the mid-1980s; however, its application to the treatment of migraines has been a more recent development. In 2010, a well-designed, two-part, randomized, double-blind, parallel-group, sham-controlled study of single-pulse TMS demonstrated statistically significant pain-free rates for the TMS device vs the sham device at all time points. Since then, additional trials have supported the effectiveness of single-pulse TMS in the treatment of acute migraines, and no clinically significant adverse effects have been reported (9, 10). Currently, FDA approval has been granted for a handheld device, applied in the occipital region, for patient-delivered therapy of acute migraines with aura (6).




Current efforts in the search for effective acute and preventive treatments for episodic and chronic migraines focus on the following main areas: calcitonin gene-related peptide (CGRP) antagonism, serotonin 5HT1F agonists (ditans), acid-sensing ion channels, glutamate receptor antagonists, orexin receptor antagonists (rexants), nitric oxide synthase inhibitors, transient receptor potential (TRP) channels, and various approaches to neuromodulation (7). Of these, the therapies addressing CGRP antagonism are closest to market, with Phase III clinical trials either concluded or in their final stages, and they have the potential to change the landscape of both acute and preventive migraine management.

CGRP is a 37-amino-acid signaling neuropeptide that exists in alpha and beta forms. It was originally discovered in 1982 as a byproduct of alternative splicing of the calcitonin gene. The neurotransmitter is predominantly expressed in sensory neurons, with particular concentration in perivascular trigeminal nerve fibers and the spinal trigeminal nucleus. Once released into the bloodstream, CGRP acts as a potent vasodilator of extracranial and intracranial vasculature and centrally modulates vascular nociception (11). Research into CGRP has established its role as one of the most potent peripheral microvascular vasodilators; however, no direct link has been established for the peptide's role in the physiological regulation of blood pressure in humans. CGRP has also been connected to a number of physiologic and pathologic processes, such as pulmonary hypertension, heart failure, myocardial infarction, coronary artery disease, atherosclerosis, vessel remodeling, sepsis, neurogenic inflammation and pain, arthritis, skin irritation and wound healing, diabetes and obesity, and aging. The role of CGRP in each of these areas is known to varying degrees, and much more research must be done to establish the exact function of the peptide (16).

A number of studies have established a direct cause-and-effect relationship between the peptide and migraine headaches, with several demonstrating that exogenous CGRP infusion triggers migraine episodes in migraine sufferers. It was also found that CGRP levels in blood and saliva were elevated in individuals with headache and facial pain disorders, and this elevation was not limited to pain associated with migraines; the phenomenon was observed in trigeminal neuralgia, cluster headache, chronic paroxysmal hemicrania, and rhinosinusitis. These findings on CGRP's role in migraine attacks made it an ideal target for new therapy development (11, 12).

New class of migraine medications

Small-molecule CGRP receptor antagonists were the first successful therapy developed for the treatment of acute migraine attacks, being both effective and well tolerated. However, when tried as a preventive measure, they were found to have hepatotoxicity that was prohibitive to further development. Further research into migraine preventive therapies led to the development of CGRP monoclonal antibodies (mAbs), which were found to be both effective and free of hepatotoxic side effects (11, 12). Currently, there are 4 CGRP mAbs targeting either the ligand or the receptor for migraine preventive therapy.

One of these therapies, erenumab, gained FDA approval in May 2018; the other 3 have either completed Phase 3 studies and entered the FDA approval process or are nearing completion of Phase 3 studies with plans to file for FDA approval in the near future. A summary highlighting key features of each mAb follows (11, 19).

ALD403, or eptinezumab, is a genetically engineered, desialylated, fully humanized anti-CGRP IgG1 antibody with a half-life of 31 days (Alder Biopharmaceuticals). AMG334, erenumab, is a human IgG2 monoclonal antibody against the CGRP receptor with a half-life of 21 days (Amgen and Novartis). This product was approved by the FDA following the results of 2 Phase 3 trials that demonstrated statistically significant reductions in migraine days, both in populations that had found some relief with products already on the market and in populations resistant to currently available therapies. The first trial, STRIVE (Study to Evaluate the Efficacy and Safety of Erenumab in Migraine Prevention), was a multicenter, randomized, double-blinded, placebo-controlled, parallel-group study with 955 patients enrolled, of whom 89.9% completed the trial. Study patients were recruited from across the globe and distributed among 3 groups (70 mg erenumab, 140 mg erenumab, and placebo). The study demonstrated a dose-dependent effect, with rates of adverse effects similar between the erenumab groups and the placebo group; the most common were nasopharyngitis and upper respiratory tract infection. The second trial, LIBERTY, is a Phase 3b, multicenter, randomized, 12-week, double-blind, placebo-controlled study evaluating the safety and efficacy of erenumab in patients with episodic migraine who have failed up to 4 prior preventive treatments. The primary endpoint was a 50% reduction in monthly migraine days from baseline over the last 4 weeks of treatment (weeks 9-12), with a 52-week open-label extension study to follow (11, 20).

LY2951742, galcanezumab, is a fully humanized anti-CGRP IgG4 mAb with a half-life of 28 days (Eli Lilly & Co), and TEV48125, fremanezumab, is a fully humanized anti-CGRP IgG2a mAb with a 40- to 48-day half-life (Teva). The key difference among these entities is that AMG334 binds the CGRP receptor, while the other three mAbs bind various sites on the ligand. The route of administration for ALD403 is intravenous infusion, while the other three entities are being evaluated for subcutaneous delivery. ALD403 is made using yeast, a novel method with the potential to impact scalability and product costs, and is dosed as a quarterly infusion; the remaining entities are monthly injections. The Eli Lilly and Teva compounds are also being evaluated in clinical trials as treatments for cluster headaches. Amgen and Novartis have completed two large Phase 3 randomized clinical trials (RCTs) of AMG334 that demonstrated statistically significant reductions in the number of migraine days vs placebo, with filing for approval being completed this year. Eli Lilly is completing a Phase 3 RCT in migraine prevention evaluating 2 doses vs placebo. Alder Biopharmaceuticals has two pivotal Phase 3 RCTs ongoing, one for episodic and one for chronic migraine. Teva has Phase 3 RCTs ongoing for episodic and chronic migraine indications, with September 2018 as the target completion date.
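The 50% responder endpoint used in LIBERTY and similar trials can be made concrete with a toy calculation; the patient counts and migraine-day values below are invented purely for illustration:

# Illustrative calculation of the responder endpoint described above: the
# proportion of patients achieving a >= 50% reduction in monthly migraine
# days (MMD) from baseline. All values are invented for illustration.

baseline_mmd = [10, 8, 12, 9, 15, 7]   # hypothetical baseline MMD
week9_12_mmd = [4, 7, 5, 3, 14, 2]     # hypothetical MMD at weeks 9-12

responders = [
    post <= 0.5 * base
    for base, post in zip(baseline_mmd, week9_12_mmd)
]
responder_rate = sum(responders) / len(responders)
print(f"50% responder rate: {responder_rate:.0%}")  # 67% in this toy data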





For the acute treatment of migraine attacks, ubrogepant, a small-molecule CGRP receptor antagonist, is being evaluated in Phase 3 RCTs (11).

Future of migraine treatments

Physicians treating migraine headaches, and the patients affected by them, are awaiting approval of the CGRP antagonists that have completed Phase 3 studies and are currently under FDA review. There has also been a resurgence of interest in developing a small-molecule CGRP receptor antagonist as a preventive therapy without the hepatotoxicity discovered during initial exploration (11). Studies have demonstrated that exogenous infusion of pituitary adenylate cyclase-activating peptide (PACAP) can induce migraine headaches, similar to what is observed with CGRP. PACAP and its receptors continue to be a focus of ongoing research as a potential target for the treatment and prevention of migraine headaches (17, 18).

Conclusion

Migraine treatment options have been static for many years, with a number of prophylactic remedies limited in their efficacy and safety profiles, and some never officially approved by the FDA for the management of migraines. The discovery of CGRP's role in migraine pathophysiology has brought a promising new drug class for the treatment and prevention of both acute and chronic migraines. In clinical trials, the monoclonal antibodies targeting CGRP or its receptor have demonstrated results superior to currently available therapies. One key advantage of the new class is that, unlike triptans, these drugs are not contraindicated in patients with cardiovascular disease. While the CGRP monoclonal antibodies have initially found more clinical success with respect to side effects, pharmaceutical companies are now revisiting the small-molecule antagonists as potential pharmacologic agents, this time without the hepatotoxic side effects originally observed. Even with more than 20 years of CGRP research, there are still many unanswered questions about the peptide's overarching role in human physiology. It has been implicated in atherosclerosis, heart failure and myocardial infarction, pulmonary hypertension, diabetes, obesity, wound healing, and arthritis, so the long-term effects of these drugs still need to be examined. In the meantime, this class of drugs promises much-needed relief to the hundreds of thousands who suffer from the debilitating effects of migraine headaches.

Acknowledgments

I would like to thank Margrit Shoemaker, MD, assistant chair of Internal Medicine at GCSOM, for her unwavering support and guidance, and for the invaluable review and comments on the manuscript.

References

1. Tso AR, Goadsby PJ. Anti-CGRP Monoclonal Antibodies: The Next Era of Migraine Prevention? Current Treatment Options in Neurology. 2017;19(8).

2. Cohen JM, Dodick DW, Yang R, Newman LC, Li T, Aycardi E, et al. Fremanezumab as Add-On Treatment for Patients Treated With Other Migraine Preventive Medicines. Headache: The Journal of Head and Face Pain. 2017 Jan;57(9):1375-84.

3. Woolley J, Bonafede M, Maiese B, Lenz R. Migraine Prophylaxis and Acute Treatment Patterns Among Commercially Insured Patients in the United States. Headache: The Journal of Head and Face Pain. 2017;57(9):1399-1408.
4. Hou M, Xing H, Cai Y, Li B, Wang X, Li P, et al. The effect and safety of monoclonal antibodies to calcitonin gene-related peptide and its receptor on migraine: a systematic review and meta-analysis. The Journal of Headache and Pain. 2017 Jul;18(1).
5. Dussor G, Yan J, Xie JY, Ossipov MH, Dodick DW, Porreca F. Targeting TRP Channels for Novel Migraine Therapeutics. ACS Chemical Neuroscience. 2014;5(11):1085-96.
6. Charles A. Migraine. New England Journal of Medicine. 2017 Oct;377(6):553-61.

7. Tso AR, Goadsby PJ. New Targets for Migraine Therapy. Current Treatment Options in Neurology. 2014;16(11).

8. Menshawy A, Ahmed H, Ismail A, Abushouk AI, Ghanem E, Pallanti R, et al. Intranasal sumatriptan for acute migraine attacks: a systematic review and meta-analysis. Neurological Sciences. 2017;39(1):31-44.
9. Lan L, Zhang X, Li X, Rong X, Peng Y. The efficacy of transcranial magnetic stimulation on migraine: a meta-analysis of randomized controlled trials. The Journal of Headache and Pain. 2017;18(1).
10. Barker AT, Shields K. Transcranial Magnetic Stimulation: Basic Principles and Clinical Applications in Migraine. Headache: The Journal of Head and Face Pain. 2016;57(3):517-24.
11. Schuster NM, Rapoport AM. Calcitonin Gene-Related Peptide-Targeted Therapies for Migraine and Cluster Headache. Clinical Neuropharmacology. 2017;40(4):169-74.
12. Hong P, Wu X, Liu Y. Calcitonin gene-related peptide monoclonal antibody for preventive treatment of episodic migraine: A meta-analysis. Clinical Neurology and Neurosurgery. 2017;154:74-8.
13. Messali A, Sanderson JC, Blumenfeld AM, Goadsby PJ, Buse DC, Varon SF, et al. Direct and Indirect Costs of Chronic and Episodic Migraine in the United States: A Web-Based Survey. Headache: The Journal of Head and Face Pain. 2016;56(2):306-22.
14. Nagy AJ, Gandhi S, Bhola R, Goadsby PJ. Intravenous dihydroergotamine for inpatient management of refractory primary headaches. Neurology. 2011 Feb;77(20):1827-32.
15. Schenkat DH, Schulz LT, Johnson BD. Dihydroergotamine-Induced Vasospastic Angina in a Patient Taking a Calcium Channel Blocker. Annals of Pharmacotherapy. 2011;45(7-8):1026.



16. Russell FA, King R, Smillie S-J, Kodji X, Brain SD. Calcitonin Gene-Related Peptide: Physiology and Pathophysiology. Physiological Reviews. 2014;94(4):1099-142.
17. Walker CS, Sundrum T, Hay DL. PACAP receptor pharmacology and agonist bias: analysis in primary neurons and glia from the trigeminal ganglia and transfected cells. British Journal of Pharmacology. 2014;171(6):1521-33.
18. Guo S, Vollesen AL, Olesen J, Ashina M. Premonitory and nonheadache symptoms induced by CGRP and PACAP38 in patients with migraine. Pain. 2016;157(12):2773-81.
19. Novartis and Amgen announce FDA approval of Aimovig(TM) (erenumab), a novel treatment developed specifically for migraine prevention. https://novartis.gcs-web.com/Novartis-and-Amgen-announce-FDA-approval-of-Aimovig-erenumab-a-novel-treatment-developed-specifically-for-migraine-prevention.
20. Goadsby PJ, Reuter U, Hallström Y, Broessner G, Bonner JH, Zhang F, et al. A Controlled Trial of Erenumab for Episodic Migraine. New England Journal of Medicine. 2017;377(22):2123-32.



Scholarly Research In Progress • Vol. 2, November 2018

Not Just Another Nursemaid's: An Enigmatic Pediatric Humeral Fracture

Brandon Cope1* and Michael Tracy2

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2Coordinated Health Scranton Orthopedics, Scranton, PA 18519
*Correspondence: bcope@som.geisinger.edu

Abstract

Medicine is both an art and a science, and limiting unnecessary costs is where this dichotomy is particularly evident. Physicians of times past relied exclusively on their ability to elicit the history of present illness and incorporate physical findings in order to reason toward a diagnosis. Before the advent of X-ray examination and other imaging modalities, a nursemaid's elbow was an example of an injury that did not "need" imaging studies: with the history and physical examination alone, physicians were able to treat a relatively debilitating injury in 15 minutes or less. This case is important because, in spite of this historically familiar injury and its well-documented bedside reduction maneuvers, a physician must avoid the temptation to abandon pertinent physical exam techniques and a critical differential of possible injuries.

Figure 1. Left proximal humeral shaft fracture suffered by the patient

Introduction


In the medical literature, discussions of radial head subluxation (RHS), or nursemaid's elbow, often highlight the ability to diagnose and treat the injured patient without obtaining imaging studies. This case reiterates the importance of a thorough physical examination and the use of appropriate imaging when point tenderness is elicited in the injured limb. With point tenderness of the arm, a child with a presumed RHS should be evaluated using radiographic imaging to prevent additional, potentially destructive physical manipulation of the upper extremity.


Case Presentation

A 4-year-old male presented to the emergency room with an injury to his left upper extremity after falling while playing on the monkey bars at the park. Upon presentation, the patient refused to use his arm and held it in a pronated position with slight flexion at the elbow. A presumptive diagnosis of RHS was made. However, extensive manipulation of the elbow resulted in no symptomatic or functional relief. The patient was discharged from the Emergency Department and instructed to follow up in the orthopaedic clinic.

Plain radiographs were taken in the orthopaedic clinic (Figure 1). The radial head was found to be concentrically reduced, and a humeral shaft fracture was identified and treated nonsurgically. The fracture was followed with serial radiographs. By 6 months post-injury, the fracture was well healed radiographically, and the patient had returned to all activities without pain or limitation. A good clinical outcome was achieved by applying the principles of management of proximal humeral shaft fractures with limited (< 20°) valgus angulation.

Discussion

Clinical diagnosis of RHS can be problematic for several reasons. The classic presentation of elbow pain in a child after a traction injury to the extended, pronated elbow is not the only possible mechanism of injury (1-5); the classic traction injury may account for only 60% of cases (2, 4). Additionally, a history obtained from a young child may be incomplete. Lastly, because RHS can occur in the setting of abuse, the caregiver's account of the injury may be unclear as well (6).

Pain localization from a child with RHS can range from the wrist to the shoulder (7). Whether this is due to the age of the patient, fear, the traumatic experience, or truly localized pain in these regions is unclear. A cooperative child with an isolated, closed arm injury, pain with movement, and acceptable alignment of the bones could have either a closed humeral shaft fracture or RHS. These are factors that may contribute to misdiagnosis and put the patient's health at risk. Misdiagnosing a humeral shaft fracture could have severe consequences, such as secondary deep vein injuries, profunda brachii artery injury, or radial nerve damage. These injuries are frequently caused by a fractured segment of bone rather than the direct trauma, and iatrogenic manipulation could provoke such damage. If vascular injuries occur secondary to the fracture, the patient must go to surgery (8). In a study by Niver et al. (2013) examining the prevalence of secondary nerve damage, as many as 31% of humeral fractures had accompanying radial nerve damage; of these patients, 40% lost function in the affected arm, which can ultimately affect a patient's daily life and livelihood (9, 10). Morbidity secondary to RHS misdiagnosis and mismanagement is identified in many case studies in the literature.



A study conducted by Cohen-Rosenblum and Bielski (11) warned about misdiagnosis leading to displacement of a fracture and the need for possible surgical intervention. They recommend that patients with elbow pain lacking the typical history of abrupt longitudinal traction undergo elbow imaging prior to any attempts at reduction. The authors emphasized that it is a "common clinical mistake to treat all elbow injuries as a nursemaid's elbow."


In the orthopaedic community there is a lack of consensus over the decision to send a patient for a diagnostic X-ray. Some authors suggest diagnostic X-ray (5, 12, 13), while others argue for a clinical diagnosis of RHS and reduction without imaging (4, 14, 15), asserting that foregoing radiographic evaluation streamlines patient care. Some studies have even encouraged reduction attempts spaced 15 minutes apart until the patient is asymptomatic (18).
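The decision rule this case and the cited literature argue for can be summarized in a brief sketch; the function name and inputs are hypothetical, and this is not a validated clinical protocol:

# A minimal sketch of the triage logic this case argues for; names are
# hypothetical and this is not a validated clinical protocol.

def radiograph_before_reduction(classic_traction_history,
                                point_tenderness,
                                ecchymosis_or_edema):
    """Return True if plain radiographs should precede any reduction
    attempt for a presumed radial head subluxation (RHS)."""
    # Point tenderness, ecchymosis, or edema suggests an occult fracture,
    # so image first regardless of how typical the history seems.
    if point_tenderness or ecchymosis_or_edema:
        return True
    # A nonclassic history (no abrupt longitudinal traction) also warrants
    # imaging prior to manipulation, per Cohen-Rosenblum and Bielski (11).
    return not classic_traction_history

print(radiograph_before_reduction(True, True, False))   # True: image first
print(radiograph_before_reduction(True, False, False))  # False: reduction reasonable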


A major limitation of the surrounding literature is that the vast majority of studies are retrospective and include data based on cases with a confirmed discharge diagnosis of radial head subluxation (4, 5, 14-17). This is useful for assessing cases of RHS with a more traditional traction injury, or with more typical features of radial head subluxation. However, the literature is sparse for cases like this one, in which RHS is misdiagnosed, reduction of the presumed subluxation is attempted but fails to restore function, and the subsequently obtained X-ray reveals an occult fracture of the upper extremity. Thus, until recently, there had been little knowledge of the incidence of misdiagnosis in presumed cases of RHS or of injuries secondary to mismanagement. An attempt to answer this question led to a study by the Institutional Review Board Committees (IRBC) examining the prevalence of missed fractures in children with a clinical diagnosis of RHS (18). This has been the only study to attempt to describe the prevalence of fractures, or the risk of a fracture as it relates to the reported mechanism of injury, in children presenting with the classic flexed elbow/pronated wrist position. The study analyzed all upper extremity injuries over the course of a year in children less than 6 years of age who presented in the clinical RHS posture (n = 136). Exclusions were made for patients with point tenderness, ecchymosis, erythema, and edema. Despite these exclusions, the IRBC study included 1 patient presenting in the classic flexed elbow/pronated wrist position with an underlying humeral shaft fracture, and 3 other upper extremity fractures were also misdiagnosed. This is very significant in regard to the findings of our case: our patient fit into this category exactly. The IRBC study disclosed that 2 of the patients with upper extremity fractures mimicking RHS had a question of point tenderness that was dismissed because the mechanism of injury (traction) was so characteristic and the general presentation so indicative of RHS. In other words, the study underscored how important point tenderness may be in cases of occult extremity fractures, even in the face of an apparently convincing, albeit wrong, diagnosis of RHS. The IRBC proposed that a larger series of children is needed to delineate the study's significance, concluding that a study is necessary that includes all upper extremity injuries in children under age 6 presenting with point tenderness, ecchymosis, erythema, and edema. Such cases could provide valuable information surrounding diagnostic criteria and evaluation methods, including any inadequacies in eliciting point tenderness.

Conclusion

In the current climate of cost-cutting, financial awareness, and an unstable health care system, even the ordering of plain radiographs requires justification. Classically, RHS is an example of an injury in which good history-taking, observational skills, clinical examination, and experience can be used to avoid the need for further imaging studies. However, this case serves as a warning against prematurely settling on a diagnosis of RHS when developing a differential for elbow pain, and it highlights the importance of radiographic imaging in distinguishing between pathologies, as well as the potentially serious complications if an underlying fracture is missed. Both the IRBC study and this case highlight the need to consider humeral shaft fractures within the differential diagnosis.

Disclosures

Consent was given by the patient's mother. There are no competing interests.

Acknowledgments

The case was seen by Michael R. Tracy, MD. Dr. Tracy is a partner at Coordinated Health Scranton Orthopedics and specializes in the treatment of shoulder and elbow arthritis, rheumatoid conditions, and sports and traumatic injuries, including joint replacement surgery, arthroscopic rotator cuff repair, elbow ligament reconstruction, and operative fixation of fractures of the clavicle, shoulder, and elbow. He completed his orthopedic residency at Wake Forest University and a fellowship in shoulder and elbow surgery at the Rothman Institute at Thomas Jefferson University.

References

1. Magill HK, Aitken AP. Nursemaid's elbow. Surg Gynecol Obstet. 1954;98:753-756.

2. Diab H, Hamed MM, Allam Y. Obscure pathology of pulled elbow: dynamic high-resolution ultrasound-assisted classification. J Child Orthop. 2010;4:539-543. 3. Schutzman SA, Teach S. Upper-extremity impairment in young children. Ann Emerg Med. 1995;26:474-9. 4. Sacchetti A, Ramoska EE, Glascow C. Nonclassic history in children with radial head subluxations. J Emerg Med. 1990; 8:151-3. 5. Wong K, Troncoso A, Catello D, Salo D, Fiesseler F. Radial head subluxation: factors associated with its recurrence and radiographic evaluation in a tertiary pediatric emergency department. J Emerg Med. 2016;51(6):621-627. 6. Irie T, Sono T, Hayama Y, Matsumoto T, Matsushita M. Investigation on 2331 Cases of Pulled Elbow Over the Last 10 Years. Pediatric Reports. 2014;6(2):5090.





7. Asher MA. Dislocations of the upper extremity in children. Orthop Clin North Am. 1976;7:583-591.

8. Usama KK. Brachial artery injury analysis of 80 cases. Kufa Med Journal. 2009; 12(1):175-183. 9. Bitsch M, Hensler MK, Schroeder TV. Traumatic lesions of the axillary and brachial artery. Ugeskr Laeger. 1994; 156(26):3890-3. 10. Niver GE, Ilyas AM. Management of radial nerve palsy following fractures of the humerus. Orthop Clin North Am. 2013; 44(3):419-24. 11. Cohen-Rosenblum A, Bielski R. Elbow pain after a fall: nursemaid’s elbow or fracture? Pediatric Annals. 2016;45:214-217. 12. Frumkin K. Nursemaid’s elbow: a radiographic demonstration. Ann Emerg Med. 1985;14:690-3. 13. Snyder HS. Radiographic changes with radial head subluxation in children. J Emerg Med. 1990;8:265-9. 14. Choung W, Heinrich SD. Acute annular ligament interposition into the radiocapitellar joint in children (nursemaid’s elbow). J Pediatr Orthopaed. 1995; 15:4544. 15. Kaufman D, Leung J. Evaluation of the patient with extremity trauma: an evidence based approach. Emerg Med Clin North Am. 1999;17(1):77-95. 16. Quan L. The epidemiology and treatment of radial head subluxation. Am J Dis Child. 1985;139:1194. 17. Chang I, et al. Factors associated with radiologic tests in patients with radial head subluxation. Pan-Pacific Emerg Med Congress. 2014;PS2-13:195. 18. Macias CG, Wiebe R, Bothner J. History and radiographic findings associated with clinically suspected radial head subluxations. Pediatr Emerg Care. 2000;16:22-5.



Scholarly Research In Progress • Vol. 2, November 2018

Neurobasis of Post-Traumatic Stress Disorder

Merly Cepeda1*, Charles Bay1, Alexis Rice1, Cameron Rutledge1, and Ariel Zhang1

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: mcepeda@som.geisinger.edu

Abstract

Post-traumatic stress disorder (PTSD) is a pathological disorder that can result from exposure to a variety of traumatic events. Analysis of neurobiological abnormalities assists in mapping identifiers of PTSD and the causative and resultant structural and functional changes that accompany the disease. This review summarizes PTSD findings and explores how an assessment of brain activation patterns in PTSD patients may allow investigators to predict response to therapy, while also allowing a better understanding of how prolonged exposure (PE) therapy positively affects the brain of PTSD patients. This review also explores future directions in PTSD research, which focus on targeting the neurotransmitter glutamate as a potential form of treatment, as well as on the use of functional magnetic resonance imaging (fMRI) for laboratory diagnosis.

Introduction

Trauma can result in a debilitating pathological disorder commonly known as post-traumatic stress disorder (PTSD). Why do some people experience PTSD while others do not? Is the prevalence of PTSD related to gender or ethnicity? These are questions researchers have investigated time and time again. We usually associate PTSD with veterans, but war is not the only type of trauma that can result in this pathological disorder. Victims of PTSD have experienced, or have been in close proximity to, a wide variety of traumatic events.

The Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), states that a diagnosis of PTSD requires the patient to suffer from certain symptoms for over a month (1). A diagnosis of acute stress disorder is given to those who experience common symptoms for a shorter time span (less than a month) (2). The symptoms used to diagnose PTSD include repeated remembrance of the traumatic event, for example through nightmares, flashbacks, or recurrent traumatic thoughts or memories. According to the DSM-5, PTSD patients can also have a change in behavior (Table 1). This change in behavior consists of avoiding things that trigger a recollection or familiar impression of the trauma, such as certain activities, places, or people (1). A change in mood can also be a symptom: a patient experiencing this may show strained relationships with people, negative emotions toward the event as well as toward themselves, a lack of interest in their surroundings and themselves, and an inability to experience positive emotions (1). These patients can also experience an increase in arousal causing hyperactive behavior, meaning that they can become easily agitated, aggressive, and extremely alert at all times (1). This state of arousal can interfere with sleep to a debilitating degree and, as a consequence, produce a complete lack of concentration.

Table 1. DSM-5 post-traumatic stress disorder diagnostic criteria and symptoms required for a diagnosis of PTSD

The International Classification of Diseases, 11th version (ICD-11), proposed a distinction between PTSD and complex PTSD (CPTSD) (3). A patient with CPTSD has the common symptoms associated with PTSD in addition to other symptoms involving negative self-organization. CPTSD is more common as a result of childhood sexual trauma or childhood torture. Karatzias et al. analyzed the diagnosis of PTSD and CPTSD in a sample of 1,893 individuals in the United States (3). Using the International Trauma Questionnaire, participants were assessed for PTSD or CPTSD (3). The results showed a prevalence rate of approximately 3.3% for CPTSD and about 4.0% for PTSD, figures consistent with the approximated prevalence of PTSD in the United States of 7.8% (3). PTSD has also been studied as comorbid with depressive and anxiety disorders; according to the National Comorbidity Survey, approximately half of those diagnosed with PTSD have had experience with other depressive disorders or episodes (4). Some types of events that can result in this fear-filled disorder include motor vehicle accidents, natural or mass disasters, acts of terrorism, assault, violence, childhood trauma, and abuse (1).




In 2000, the National Violence Against Women Prevention Research Center examined the mental disorders that can develop as a result of rape (5). One of the mental disorders examined was PTSD (5). It was estimated that in the United States alone, about 3.8 million women over 18 years of age suffer from PTSD as a result of rape (5). There is also evidence that a natural disaster like an earthquake can cause inhabitants to show PTSD symptoms after the cataclysmic event. Jia et al. performed a population-based study in Sichuan, China, 15 months after the severely devastating earthquake of 2008 (6). This study consisted of 327 survivors, 175 of them young adults and the rest elderly. Over 20% of the elderly analyzed in this study showed PTSD symptoms, and approximately 8.0% of the younger adults also demonstrated some traumatic stress (6). Across various studies, researchers have shown that earthquake survivors frequently suffer from PTSD (1).

Researchers have also continued to observe the stress on victims of motor vehicle accidents. Beck and Coffey reviewed many studies examining the association of comorbid PTSD after motor vehicle accidents (7). In a sample of approximately 230 participants, about 41% were diagnosed as showing symptoms of PTSD (7). It is believed that PTSD is common after motor vehicle accidents because those suffering the accident are not aware of the symptoms of PTSD and therefore do not seek help right away (7), which can result in a more chronic PTSD that is much more difficult to overcome.

According to a pivotal study by Emory researchers Bradley et al., in a given group of PTSD patients, approximately 30% will not complete psychotherapy due to the difficulty of investing the necessary time and effort. Of the remaining patients, about 30 to 50% will remain symptomatic and impaired by their PTSD. Even with prolonged exposure therapy, a highly effective and highly utilized treatment, only about 40% of patients will achieve remission (8).

Materials and Methods

A review of the literature was performed using the PubMed, Clinical Trials, and ScienceDirect databases.

Neurobiology of PTSD

In the analysis of interactions among adverse environmental stimuli, stress management, pathology, and the neurobiology of PTSD, various influences are considered. PTSD-related changes are manifested endocrinologically, biochemically, and anatomically. Endocrine-based changes can be summed up as abnormal regulation of cortisol and thyroid hormones. Neurochemical changes include abnormal regulation of catecholamines (dopamine and norepinephrine [NE]), serotonin, and amino acids (gamma-aminobutyric acid [GABA] and glutamate). Lastly, structural changes consist of reduced hippocampal and prefrontal cortex volumes accompanied by activation changes in the amygdala and prefrontal cortex. This section will explore and elaborate on these neurobiological changes. Prior to explaining the neurological features, changes, and effects of PTSD, it must be made clear that different forms of PTSD take on different manifestations: variability in the severity and timing of psychological trauma, comorbid conditions, genetic makeup, personality, and patterns of signs and symptoms can lead to inconsistent findings in neurobiology.


Neuroendocrine, HPA, and HPT axes

The HPA axis serves as the central coordinator of the mammalian neuroendocrine stress response system (9). The inner workings of the HPA axis can be explained as follows: stress (e.g., infection, trauma, surgery) activates the HPA axis. This stimulation induces hypothalamic secretion of corticotropin-releasing hormone (CRH). CRH promotes the release of adrenocorticotropic hormone (ACTH) by the anterior pituitary gland, triggering the secretion of cortisol by the adrenal cortex. Cortisol serves as a regulator of the immune system, metabolism, and brain function in physiological stress management. The actions of cortisol include muscle wasting, increased glycogen deposition and gluconeogenesis, calcium resorption, decreased antibody production, anti-inflammatory and anti-allergenic effects, and altered neural excitability. Sustained cortisol exposure has adverse effects on hippocampal neurons, causing reduced dendritic branching and spine loss, impaired neurogenesis, and hippocampal atrophy (9).

Upon experiencing trauma (stress), cortisol levels are initially high. These levels are at first sustained due to the constant re-experiencing of the trauma through nightmares, recurring daydreams, intrusive memories, and triggers. Through the negative feedback mechanism of cortisol within the HPA axis, the high concentration of circulating cortisol binds to its receptors, simultaneously inhibiting the release of CRH and ACTH from the hypothalamus and anterior pituitary gland, respectively, and impairing the hippocampus (Table 2). Elevated cortisol levels precipitate increased cortisol receptor binding, further exacerbating sustained stress and fear responses. Despite the inhibition of secretion of both hormones, CRH levels are sustained in PTSD patients. Excess CRH in the CSF of PTSD patients has also been implicated in blunted ACTH release from the anterior pituitary, due to downregulation by the pituitary gland, as well as in hippocampal atrophy (9). Inhibition of ACTH release is causative for the decreased cortisol levels (hypocortisolism) seen in combat veterans with PTSD, and low cortisol levels at the time of a traumatic event may serve as a risk factor for the development of PTSD. In conclusion, hypocortisolism is both a result of and a risk factor for the disease.
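As a rough illustration of the feedback logic described above (a toy sketch, not a physiological model), a minimal discrete-time simulation might look like the following, with all constants invented:

# A toy discrete-time sketch of the negative feedback loop described above:
# stress drives CRH -> ACTH -> cortisol, and circulating cortisol suppresses
# both CRH and ACTH release. All constants are arbitrary illustrations, not
# physiological parameters.

def hpa_step(crh, acth, cortisol, stress):
    new_crh = max(0.0, stress - 0.8 * cortisol)      # cortisol inhibits CRH
    new_acth = max(0.0, 0.9 * crh - 0.5 * cortisol)  # ...and inhibits ACTH
    new_cortisol = 0.7 * acth                        # ACTH drives cortisol
    return new_crh, new_acth, new_cortisol

crh = acth = cortisol = 0.0
for t in range(40):
    crh, acth, cortisol = hpa_step(crh, acth, cortisol, stress=1.0)
    if t % 10 == 9:
        print(f"t={t + 1}: CRH={crh:.2f} ACTH={acth:.2f} cortisol={cortisol:.2f}")
# With the inhibition terms in place, the loop damps toward a lower
# steady-state cortisol than it would reach without them -- the qualitative
# point of the negative feedback mechanism.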



The HPT axis, while similar to the HPA axis, dictates metabolic versus anabolic states via control of thyroid hormone levels. The HPT axis is of particular interest in PTSD studies due to speculation about its role in stress-related disorders. Previous research has correlated trauma with thyroid abnormalities; however, no current studies show direct links between the HPT axis and PTSD (9). The HPT cascade parallels that of the HPA axis in that it is also activated by stress. Stress causes the secretion of thyrotropin-releasing hormone (TRH) by the hypothalamus, leading to the release of thyroid-stimulating hormone (TSH) by the pituitary gland and concluding with the secretion of tetraiodothyronine (T4) and triiodothyronine (T3) by the thyroid gland. The prohormone T4 is converted in the periphery to T3, the active form of the hormone. These hormones stimulate tissue metabolism and are critical for normal brain development, although excess T3 causes severe catabolism of muscle (decreasing body protein mass). Studies of WWII combat veterans with PTSD revealed disproportionate T3:T4 levels, with elevated T3 compared to normal T4 concentrations (9). Elevated T3 levels may correlate with anxiety in patients with PTSD via increased SNS activity, including tremors, tachycardia, and nervousness.


Table 2. Summary of neurobiological features with identified abnormalities and functional implications in patients with PTSD, adapted from (9)

Neurochemical factors of PTSD

As previously stated, neurochemical changes include abnormal regulation of catecholamines (dopamine and norepinephrine), serotonin, and amino acids (GABA and glutamate). All of these neurochemicals are found in brain circuits that regulate and integrate stress and fear responses (9). Catecholamines are a family of neurotransmitters that includes dopamine (DA) and NE. Stressors induce mesolimbic DA release, which can moderate the HPA axis response; however, no current studies show a direct link between PTSD and mesolimbic DA release specifically (9). Mesolimbic DA is implicated in fear conditioning, and patients with PTSD show increased urinary excretion of DA, indicating increased levels of the neurotransmitter.

NE, like the HPT axis, has been of particular interest in PTSD studies due to its role in mediating the autonomic stress response. The cell bodies of NE neurons lie in the locus coeruleus but project to the brain regions responsible for the stress response, including the prefrontal cortex, amygdala, hippocampus, thalamus, and hypothalamus. Patients with PTSD show increased levels of NE, along with increased ANS activity, pulse, blood pressure, arousal, startle response, and response to memories (9). Stress-induced sympathetic nervous system (SNS) activation causes the release of NE and epinephrine from the adrenal medulla, and the increased release of NE from sympathetic nerve endings redirects blood flow to organs needed for the fight-or-flight response. The interaction between NE and CRH increases the conditioning of fear as well as the emotional encoding of memories, while inducing hyperarousal. Cortisol serves as an inhibitor of NE-CRH interactions, as it does in the HPA axis.

Brain circuitry of PTSD

Characteristic structural and functional brain changes (in the hippocampus, amygdala, and the anterior cingulate, insular, and orbitofrontal cortical regions) have been identified in PTSD patients using brain imaging. These areas interconnect to mediate variable conditioning to stress and fear, and changes in them are proposed to have a direct link to PTSD. The hippocampus plays a role in memory formation and storage, memory-based fear conditioning, and the stress response. Exposure to prolonged stress, along with high initial cortisol and CRH, causes damage to the hippocampus that is manifested as reduced hippocampal volume. This loss of volume has been linked to exposure to prolonged stress and cortisol but may also serve as a potential risk factor for the development of PTSD. The medial prefrontal cortex (mPFC) and anterior cingulate cortex (ACC) are both involved in higher-level functions: attention allocation, decision-making, morality, and impulse control. Along with decreased hippocampal volume, PTSD patients also show a decrease in prefrontal cortex volume, including the ACC and mPFC. Reduced hippocampal and prefrontal cortex volumes, shown via MRI, lead to failure to terminate stress responses and an impaired ability to distinguish between safe and unsafe situations in PTSD patients (9). In conclusion, structure dictates function: prolonged exposure to trauma impacts neural structure, thereby changing the functions of the traumatized structures.

PTSD assessment and treatment




It is natural for a person to feel scared, upset, or different after a traumatic event. However, if these negative feelings last longer than a few months and start to disrupt daily life, it is time to seek help for PTSD treatment. Before seeking professional help, a person can complete a self-screening to see if they have symptoms of PTSD. A short screen like the Primary Care PTSD Screen for DSM-5 (PC-PTSD-5) can be used as a self-screening tool. However, a positive result on a PTSD self-screening does not mean the person has PTSD; a positive self-screening result requires further assessment.

A proper diagnosis of PTSD can be provided by a primary care physician or a clinician who has a complete working knowledge of PTSD. The doctor will first perform a physical examination. Patients who have symptoms of PTSD have gone through a traumatic, life-threatening event and may suffer from various bodily injuries, and a physical examination can also rule out the possibility that an injury is causing the symptoms. A psychological evaluation is also necessary for PTSD assessment, as patients with PTSD are likely to develop substance abuse and other mental disorders, such as depression, anxiety, and sleep disorders. During the psychological evaluation, the patient and the psychologist talk about the signs and symptoms, how these symptoms affect the patient's life, and the events that trigger the symptoms.

The major part of PTSD assessment uses DSM-5-based instruments, of which there are two. As mentioned above, the PC-PTSD-5 is a 5-item screen administered by a primary care physician. A patient can score from 0 to 5 on the PC-PTSD-5: a score of 0 indicates a negative result for PTSD, while a score from 1 to 5 indicates that the patient exhibits symptoms of PTSD and requires further assessment. The PC-PTSD-5 screen starts with a question asking whether the patient has experienced any events that were frightening, horrible, or traumatic. Patients who select "no" for the first question receive a negative result and do not need to continue the screen. The remaining questions assess the patient's changes in mood and cognition, such as anxiety and depression levels. A study showed that patients found the PC-PTSD-5 questions easy to understand and felt comfortable completing the screen in a primary care setting (10). Patients also preferred the screen administered by a primary care physician rather than by other health-related professionals or via self-report. Using a score of 3 as the cutoff point, the study successfully identified 94.8% of patients with PTSD (10).

To further confirm the PTSD diagnosis, patients who screen positive will complete the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5). CAPS-5 is conducted in the form of an interview; it is a 30-item instrument that takes about 40 to 60 minutes and is administered by clinicians and clinical researchers who have extensive knowledge of PTSD. CAPS-5 is able to assess symptoms from the previous week and make a current and lifetime diagnosis of PTSD. Questions on CAPS-5 address the different symptoms, their duration, the level of distress, changes in life after the traumatic events, and the severity of symptoms. Unlike the PC-PTSD-5, whose questions are dichotomous, CAPS-5 allows the administrator to rate the severity of PTSD on a scale, and by ranking the severity of symptoms it can detect improvement or decline with repeated screening. One limitation of CAPS-5 is that it can only assess one trauma (11); for patients who suffered multiple traumatic events, its results would not be accurate. CAPS-5 was also developed based on data from military veterans, and although it is used with both military and non-military patients, non-military patients might experience different symptoms and traumatic events (11).
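The PC-PTSD-5 gating and cutoff logic described above can be sketched in a few lines; the structure here is paraphrased and hypothetical, not the validated instrument:

# Illustrative sketch of the PC-PTSD-5 scoring logic described above;
# this paraphrases the screen's structure and is not the validated tool.

def score_pc_ptsd_5(trauma_exposure, item_answers):
    """Gate on the trauma-exposure question, then count 'yes' answers to
    the 5 symptom items; the study cited above used a cutoff of 3."""
    if not trauma_exposure:
        return "negative screen (no qualifying event reported)"
    score = sum(item_answers)  # 0-5 'yes' answers
    if score >= 3:
        return f"score {score}: positive screen, further assessment (e.g., CAPS-5)"
    return f"score {score}: below cutoff, clinical judgment still applies"

print(score_pc_ptsd_5(True, [True, True, True, False, False]))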

Children have the same potential for PTSD as adults. Traumatic events such as natural disasters, physical or sexual abuse, and other violent incidents can lead to PTSD in children. Screens for children with PTSD are usually behaviorally and developmentally sensitive, because children's cognitive function is still under development. The diagnostic screen is the Clinician-Administered PTSD Scale for DSM-5 – Child/Adolescent Version (CAPS-CA-5), a modified version of the CAPS-5. The child/adolescent version includes some age-appropriate questions and picture responses to help children better understand the screening questions. However, researchers have pointed out that this screen still has some major limitations (11). For example, the screen asks children to describe their experience and the effect of the event on their internal state, questions that require cognitive and language skills beyond a child's developmental stage. The diagnosis of PTSD in children is therefore usually a combination of parents' observations and clinicians' interpretation of the child's behaviors (12).

The major treatment for PTSD is psychotherapy. Currently, the three most effective psychotherapies for PTSD are cognitive processing, prolonged exposure, and eye movement desensitization and reprocessing therapies (2, 13). These are all individual therapies that last about 3 months. During cognitive processing therapy, patients learn how to modify and reduce distressing feelings in daily life: they first describe the thoughts and emotions caused by the traumatic events, then identify the impact on their life and beliefs, and finally learn to evaluate and modify the negative feelings related to the traumatic event (2). In prolonged exposure therapy, patients learn how to gradually approach negative memories or feelings related to the trauma, practicing in vivo and imaginal exposure during and outside of the therapy. This therapy also helps patients stop avoiding feelings or events that could remind them of the trauma. Because prolonged exposure focuses mainly on exposure related to the traumatic event, a patient is likely to have anxiety-related feelings during the therapy, so it is critical for therapists to establish a safe environment that makes patients feel comfortable. The third therapy is eye movement desensitization and reprocessing (EMDR), which helps the patient better process the negative feelings related to the trauma. During EMDR, the patient briefly focuses on the traumatic event while attending to a bilateral stimulation, such as a back-and-forth movement of the therapist's finger or a repeated sound; the bilateral stimulation reduces the negative feelings and memories related to the event.

Another treatment available for PTSD is medication. Patients with PTSD are likely to have other mental disorders, such as depression and anxiety (14). Currently, sertraline (Zoloft®) and paroxetine (Paxil®) are the medications approved by the FDA to treat PTSD. Both are selective serotonin reuptake inhibitors: they increase serotonin levels in the brain by blocking the reabsorption of serotonin from the synaptic cleft. Other medications commonly prescribed to PTSD patients are anti-anxiety medications and medications that help reduce sleep disturbances.

Novel research on the effects of psychotherapy



Considering our discussion of the current treatment paradigms for PTSD, we focus on prolonged exposure (PE) therapy, which practice guidelines regard as the best psychotherapy choice for treating PTSD (15, 16). The 2017 APA guideline development panel includes PE therapy as a "strongly recommended" psychotherapy, based on randomized controlled trials from 1980 to 2012 that "examined the efficacy and comparative effectiveness of psychological and pharmacological treatments for adults with posttraumatic stress disorder" (17). This systematic review covered 92 separate studies and provided the following points: 1) the most efficacious action to improve PTSD symptoms, based on strength of evidence, was exposure therapy; 2) comparative effectiveness versus other psychotherapies could not be determined due to a lack of head-to-head comparisons; and 3) PE therapy produced a statistically significant reduction in PTSD symptoms versus relaxation techniques (17). While this review helps to confirm a starting point for treating patients suffering from PTSD, psychiatrists, behavioral scientists, and researchers alike understand little about how PE therapy alters brain function and allows the improvement of symptoms (18).

Investigators have shown that greater activation in the ventral anterior cingulate cortex and medial prefrontal cortex during nonconscious fear processing is associated with poorer response to cognitive behavioral therapy (CBT) (19). In contrast, greater activation in the dorsal anterior cingulate and reduced activation in the amygdala during nonconscious fear processing indicate better response to CBT (19, 20). While these previous studies bring some light to an unclear area, neuroscientists Fonzo and Etkin highlight generalizability issues due to small sample sizes and a lack of control participants for comparing treatment effects (21). Additionally, APA practice guidelines speak to similar limitations: although the research evidence is strong for the efficacy of particular psychotherapies and pharmacological treatments for adults with PTSD, there are still significant gaps in the literature. These include the lack of randomized controlled trials for newer treatments and of data on the impact of treatments on important patient-oriented outcomes, such as quality of life, long-term treatment effects, and adverse effects and harms, along with outcomes that are not so easily quantifiable, such as moral injury, emotional regulation, identity and sense of self, and the ability to form intimate relationships (15). Obtaining stronger evidence that further supports and explains PE therapy as the best recommended treatment is a goal of multiple forthcoming research studies.

Current research suggests that psychiatrists and behavioral scientists understand little about how psychotherapy works to improve PTSD symptoms and why it leads to full PTSD remission in as few as 30% of patients who complete PTSD therapy (8). Clarifying how, and for whom, PE therapy is effective may enable researchers to increase the proportion of patients who achieve remission with it. It was with these gaps in mind that investigators at Etkin's NeuroLab measured brain activity before and after PE therapy in PTSD patients to explore: 1) brain activity patterns that predict which PTSD patients are best suited for PE therapy (21); and 2) the PTSD brain's response to psychotherapy (18).

PTSD psychotherapy outcome predicted by brain activation during emotional reactivity and regulation

PE therapy during that study period or randomized to the waiting arm as the control group. Brain activation patterns were captured via fMRI prior to the treatment period for both arms as part of the baseline, and after the treatment period to allow determination of posttreatment period changes. Investigators collected fMRI scans as they had patients complete “behavioral paradigms,” which are behavior tasks meant to probe components of emotional reactivity and regulation. Investigators described how these behavioral tasks had been used in previous studies to help probe at brain activation patterns and demonstrated their efforts to build upon information gathered by previous studies, as highlighted above. With their methods, Fonzo et al. were able to craft a model that provided a predictive value on functional brain activity characteristics that moderate response to PE therapy. This model, based on fMRI imaging, indicated activation in the dorsal anterior cingulate cortex, left and right dorsolateral prefrontal cortex, left and right frontopolar cortex, and left anterior insular cortex, enabling investigators to predict remission from PTSD with 95.5% accuracy (21). Selective effects of psychotherapy on frontopolar cortical function in PTSD In this accompanying study performed in the same lab, assessing the same participants, and utilizing the same methodological structure from the PTSD predictive model study, Fonzo et al. aimed to understand how PE therapy changes the brains of individuals with PTSD. Specifically, the investigators were assessing PE therapy’s impact on brain activation in the amygdala and anterior insula while participants processed emotional stimuli. They hoped to see increased prefrontal cortex activation during emotional processing. Initial data from fMRI scans showed regions within the prefrontal cortex, in particular the lateral frontopolar cortex (LFPC), were associated with significant increased activation in PE therapy participants. Additionally, researchers did not capture other regions of interest (amygdala and anterior insula) that have shown significant activation in previous studies. Investigators, surprised by the brain activity captured during emotional regulation activities, explored the increased activation in the LFPC and improvement in PTSD symptoms. By measuring PTSD-specific, long-term indicators of treatment success and quality of life measures, investigators were able to show significant association between increased FPC activation as a result of PE therapy. Overall, Fonzo et al. in this accompanying paper reported changes in frontopolar activation are part of the treatment benefits PTSD patients receive from PE therapy (18). It is important to note the limitations mentioned by these investigators. Of particular importance to their novel predictive model techniques were limitations related to test subjects and study power. This study did not perform the described tests on any PTSD patients who were suffered from PTSDrelated trauma but had since recovered and were healthy. The researchers comment that examining such a sample could provide insight toward brain adaptations or other markers that interact with PE therapy and influence treatment outcomes (21). Investigators also commented on the samples for this randomized trial (n = 66) and speculated on how a higher sample could extend and further validate the utility of the predictive model. 49



Discussion

To diagnose PTSD and then treat it properly on a large scale, researchers need to determine what is truly occurring in the brain. As mentioned earlier, some people have increased expression of a given neurotransmitter whereas others do not. These discrepancies may reflect the possibility that different severities of PTSD cause different degrees of overproduction of certain neurotransmitters, which may be one reason why not everyone responds equally to treatments that are already available, such as psychotherapy. The brain is a complex organ, and nothing about it is cut and dried; it is more fluid than static, always adapting to different internal and external stimuli, and PTSD is no different. Researchers do not yet fully understand what is occurring in the brain of a PTSD patient, so future studies must first examine that question before the field can move toward more effective treatment.

The first step is to determine what is happening in the brain at the molecular level. One neurotransmitter whose dysfunction is thought to be involved in the pathophysiology of PTSD is glutamate, the brain's principal excitatory neurotransmitter. In PTSD patients, continued exposure to the trauma or stress they experienced causes release of cortisol from the adrenal gland, activating the release of glutamate in the brain. Glutamate then binds the N-methyl-D-aspartate (NMDA) receptor, and this binding is "thought to play a role in consolidation of traumatic memories in PTSD" (9). Researchers believe that increased expression of glutamate could have toxic effects. Because glutamate appears to be a significant potential biomarker, researchers have studied its expression postmortem. In a study by Holmes, positron emission tomography (PET) findings showed increased availability of the glutamate receptor mGluR5 in vivo, and a relationship was also seen between mGluR5 and glucocorticoid gene expression postmortem. The postmortem findings further revealed upregulation of the scaffolding protein SHANK1, which is responsible for affixing mGluR5 to the cell surface. This study is a step toward indicating what is happening at the molecular level as well as toward a possible target for treatment.

Another neurotransmitter relevant to the pathophysiology of PTSD is GABA, an inhibitory neurotransmitter whose role is to decrease the effect of stress on the brain. It does this by inhibiting corticotropin-releasing hormone (CRH) and norepinephrine (NE) circuits, which mediate the stress and fear responses. As mentioned earlier, increased glutamate plays a role in the traumatic memories of PTSD patients. With glutamate over-expressed, GABA would have to step in and rebalance the ratio so that traumatic memories do not consume the patient. This does not happen in PTSD patients, so it is reasonable to conclude that along with the dysfunctional over-expression of glutamate there is also a dysfunctional under-expression of GABA in the brain. "GABA's effects are mediated by GABAA receptors" (9), and benzodiazepines enhance the effect of GABA at GABAA receptors, resulting in an anxiolytic response. When the brain is exposed to excessive stress, the GABAA/benzodiazepine interaction is altered, which in turn changes benzodiazepine binding sites in different parts of the brain, including the cortex, hippocampus, and thalamus. In the past, researchers focused on the amygdala as the region most affected in PTSD, but this research shows that many other parts of the brain are also involved in the biology of the disorder. Although this work is essential to understanding the pathophysiology of PTSD, treating PTSD patients with benzodiazepines has been shown not to be beneficial. These studies can nevertheless help researchers take a step forward: they indicate that treatment cannot rely on simply boosting GABA signaling, but it may be possible to treat patients by keeping glutamate receptors from being over-expressed, thereby helping to prevent the excitatory, stress-like response.

Another problem is that PTSD is not only hard to treat but, because no laboratory tests exist for these patients, also hard to diagnose. Researchers are therefore looking to develop ways to measure brain activity in a patient with PTSD. Functional magnetic resonance imaging may be one answer, as it uses blood flow in the brain to measure brain activity. Studies like the ones discussed above are so important because researchers need to know what they are looking for and which areas of the brain are actually associated with the disease. Another screening approach under consideration is electroencephalography (EEG). EEG electrodes allow investigators to examine specific areas of the brain; the more electrodes used, the finer the localization. With something like a 16-electrode EEG, if researchers can isolate the portion of the brain they believe is most involved in the pathophysiology of PTSD, they can concentrate those electrodes over that specific area. Positron emission tomography (PET) is another form of scanning that could help in diagnosing PTSD. Beyond its use in the in vivo studies of glutamate receptors, it is thought that imaging-informed approaches targeting the lateral frontopolar cortex could amplify attention to internal regulatory processes for emotional regulation as well as reduce hyperarousal symptoms.

These different studies and forms of screening are necessary for the future of treating and diagnosing PTSD. As mentioned, many PTSD patients do not respond to conservative treatment, and researchers are trying to determine why that occurs and how diagnosis can be improved so that treatment is more effective. Using fMRI to compare functional brain activity may reveal that patients with more severe symptoms exhibit different brain activity than those with milder symptoms; it can also be used simply to compare activity in a healthy brain with activity in the brains of those who have PTSD. As researchers continue to study this disorder, we are getting closer not only to a possible laboratory diagnosis but also to more effective treatments that can help a wider spectrum of individuals.



References

1. Farooqui M, Quadri SA, Suriya SS, Khan MA, Ovais M, Sohail Z, et al. Posttraumatic stress disorder: a serious post-earthquake complication. Trends Psychiatry Psychother. 2017;39(2):135-43.
2. PTSD Overview - PTSD. Washington, DC: National Center for PTSD; 2018. Available from: https://www.ptsd.va.gov/professional/PTSD-overview/index.asp.
3. Karatzias T, Cloitre M, Maercker A, Kazlauskas E, Shevlin M, Hyland P, et al. PTSD and Complex PTSD: ICD-11 updates on concept and measurement in the UK, USA, Germany and Lithuania. Eur J Psychotraumatol. 2017;8(sup7):1418103.
4. Flory JD, Yehuda R. Comorbidity between post-traumatic stress disorder and major depressive disorder: alternative explanations and treatment considerations. Dialogues Clin Neurosci. 2015;17(2):141-50.
5. Kilpatrick DG. Mental Health Impact of Rape. National Violence Against Women Prevention Research Center; 2018. Available from: https://mainweb-v.musc.edu/vawprevention/research/mentalimpact.shtml.
6. Jia Z, Tian W, Liu W, Cao Y, Yan J, Shun Z. Are the elderly more vulnerable to psychological impact of natural disaster? A population-based survey of adult survivors of the 2008 Sichuan earthquake. BMC Public Health. 2010;10:172.
7. Beck JG, Coffey SF. Assessment and treatment of PTSD after a motor vehicle collision: Empirical findings and clinical observations. Prof Psychol Res Pr. 2007;38(6):629-39.
8. Bradley R, Greene J, Russ E, Dutra L, Westen D. A multidimensional meta-analysis of psychotherapy for PTSD. Am J Psychiatry. 2005;162(2):214-27.
9. Sherin JE, Nemeroff CB. Post-traumatic stress disorder: the neurobiological impact of psychological trauma. Dialogues Clin Neurosci. 2011;13(3):263-78.
10. Prins A, Bovin MJ, Smolenski DJ, Marx BP, Kimerling R, Jenkins-Guarnieri MA, et al. The Primary Care PTSD Screen for DSM-5 (PC-PTSD-5): Development and evaluation within a veteran primary care sample. J Gen Intern Med. 2016;31(10):1206-11.
11. Weathers FW, Bovin MJ, Lee DJ, Sloan DM, Schnurr PP, Kaloupek DG, et al. The Clinician-Administered PTSD Scale for DSM-5 (CAPS-5): Development and initial psychometric evaluation in military veterans. Psychol Assess. 2018;30(3):383-95.
12. Kaminer D, Seedat S, Stein DJ. Post-traumatic stress disorder in children. World Psychiatry. 2005;4(2):121-5.
13. Treatment - PTSD. National Center for PTSD; 2009. Available from: https://www.ptsd.va.gov/public/treatment/therapy-med/index.asp.
14. Medications for PTSD. American Psychological Association; 2018. Available from: http://www.apa.org/ptsd-guideline/treatments/medications.aspx.
15. Clinical Practice Guideline for the Treatment of Posttraumatic Stress Disorder (PTSD) in Adults. American Psychological Association; 2017.
16. VA/DoD clinical practice guideline for the management of post-traumatic stress disorder and acute stress disorder: clinical summary. Department of Veterans Affairs, Department of Defense; 2017.
17. Jonas DE, Cusack K, Forneris CA, Wilkins TM, Sonis J, Middleton JC, et al. Psychological and Pharmacological Treatments for Adults With Post-traumatic Stress Disorder (PTSD). Rockville (MD): Agency for Healthcare Research and Quality; 2013. Report No.: 13-EHC011-EF.
18. Fonzo GA, Goodkind MS, Oathes DJ, Zaiko YV, Harvey M, Peng KK, et al. Selective Effects of Psychotherapy on Frontopolar Cortical Function in PTSD. Am J Psychiatry. 2017;174(12):1175-84.
19. Bryant RA, Felmingham K, Kemp A, Das P, Hughes G, Peduto A, et al. Amygdala and ventral anterior cingulate activation predicts treatment response to cognitive behaviour therapy for post-traumatic stress disorder. Psychol Med. 2008;38(4):555-61.
20. Aupperle RL, Allard CB, Simmons AN, Flagan T, Thorp SR, Norman SB, et al. Neural responses during emotional processing before and after cognitive trauma therapy for battered women. Psychiatry Res. 2013;214(1):48-55.
21. Fonzo GA, Goodkind MS, Oathes DJ, Zaiko YV, Harvey M, Peng KK, et al. PTSD Psychotherapy Outcome Predicted by Brain Activation During Emotional Reactivity and Regulation. Am J Psychiatry. 2017;174(12):1163-74.



The Neural Basis of a Bilingual Brain
Anjanie Khimraj1*

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: akhimraj@som.geisinger.edu

Abstract

Over the past decade, researchers have used several different neuroimaging techniques to investigate how language engages the brain's language centers. The primary goal of this literature review was to explain how bilinguals process language differently from monolinguals and how bilingualism reconfigures the structural network of the brain. Further, the review explored how the ages at which an individual learns a first and a second language contribute to these differences. Several articles were analyzed to provide an overview of the studies conducted on these questions. Many of the studies discussed exemplify how the left hemisphere of the brain shows the strongest neuronal activation during language use, and illustrate the concept of executive functioning in bilinguals. In the current literature, the areas of greatest functional importance to the bilingual brain are the inferior parietal cortex, inferior parietal lobe, inferior frontal gyrus, superior temporal gyrus, dorsolateral prefrontal cortex, and rostrolateral prefrontal cortex. Additionally, there is a direct correlation between age of acquisition and language proficiency. These studies are detailed in both their findings and their explanations, but continued research in these areas could lead to new discoveries indicating that additional areas in the left or right hemisphere are activated when monolinguals learn a second language.

Introduction

Language contributes to the uniqueness of human abilities and engages many different parts of our brain. People live in diverse environments and are surrounded by individuals from various cultural backgrounds; most of us have therefore encountered someone who knows more than one language, or are such a person ourselves. The Center for Immigration Studies reported that approximately 66.8 million Americans 5 years of age and older speak more than one language including English, a number that has risen by about 2.2 million since 2010 (1). This ability is called bilingualism, which researchers define as a multidimensional phenomenon in which a person regularly uses two or more spoken languages in daily life. The capability can be measured by the frequency with which individuals use their languages and by their capacity to develop this skill (2). According to Bialystok et al. (3), bilingual individuals have an advantage over monolinguals in old age because bilingualism lessens the risk of cognitive decline: switching between languages continually, if unknowingly, exercises the brain. Many studies have shown that the age of acquisition of a second language can significantly shape structural and functional connectivity in the brain, whether the language was learned "sequentially" or "simultaneously" (4). Researchers have utilized several types of neuroimaging techniques to provide quantitative data and visual evidence of how processing more than one language affects the way the brain functions and the structural changes that occur. The primary issues regarding the neural basis of bilingualism investigated here are the different aspects of language processing and the brain areas affected as a result, and the connection between age of acquisition and language proficiency.

Bilingual vs monolingual brain

There are distinct, complex structural components of the bilingual brain that are activated during language processing. Both the left and right hemispheres of the brain contribute cognitive abilities to language; however, the left hemisphere plays the primary role in language itself because it contributes to writing, speaking, hearing, and interpreting (5). Firstly, the inferior parietal lobe and cortex are responsible for processing sensory, visual, auditory, and vestibular signals, which allows us to integrate our senses when tasting or touching something. They also give us a sense of the "neural construct" of our bodies: where the different parts are located and how their movements are initiated (5). Secondly, the caudate nucleus controls how the brain learns through the storage and processing of memories, which is important throughout development and language processing; the left caudate nucleus specifically contributes to a person's communication skills. The dorsolateral prefrontal cortex plays a role in the planning and organization of behavior, whereby sensory information entering the brain is followed by a motor output. For example, if you are stuck in traffic and know you will be late for work, you think of an alternative route to arrive on time (5). According to Wong et al. (6), there was a large difference in gray matter density between bilinguals and monolinguals in both the inferior parietal cortex and the inferior parietal lobe, effects that reflect how well the individuals knew and used the language as well as their heightened control over cognition (Figure 1). Gray matter consists of the neuronal cell bodies of brain tissue, and its density in a specific region positively correlates with certain skills and cognitive abilities (5). Most of these results were obtained using voxel-based morphometry (VBM), a technique that uses statistics to reveal anatomical differences in the brain between groups (7). There were additional findings of higher gray matter volume in the left caudate nucleus in bilinguals than in monolinguals, consistent with the caudate nucleus working in concert with the dorsolateral prefrontal cortex. These areas are responsible for executive functions such as selective inhibition and working memory, and they show more significant activation in the bilingual brain because bilinguals engage these functions when they switch from one language to another, owing to their enhanced cognitive control (8).



Lastly, the inferior frontal gyrus (responsible for comprehending language), the superior temporal gyrus (responsible for encoding language content), and the dorsolateral and rostrolateral prefrontal cortices were specific areas that showed higher activation in bilinguals than in monolinguals during a reading task. The rostrolateral prefrontal cortex mainly processes abstract thought, analogical reasoning, and "episodic memory retrieval," all of which require high cognitive control (9).

Figure 1. The different parts of the brain that are activated during executive functioning and language processing in bilinguals (6).

How the monolingual brain performs a similar language task can be seen in another study (4), which examined why the human brain responds to spoken language not only by understanding and identifying words but also by deriving their meanings through "contextual information." These researchers tested how the brain maps the correlation between sound and meaning during speech processing using functional magnetic resonance imaging (fMRI), which demonstrates regional and time-varying changes in brain metabolism (9). The participants were 30 right-handed Mandarin Chinese speakers, all with normal hearing and normal or corrected-to-normal vision. No subject reported any language or mental impairment, but one female displayed large head motions during fMRI scanning and was excluded from the results (4). Three types of auditory stimuli (84 in total) were presented to the participants: time-reversed phrases (TPs), expected phrases (EPs), and unexpected phrases (UPs). The EPs were Chinese idioms with 3 to 5 characters, and the UPs were idioms in which the first 2 characters were kept and the last was replaced with an illegitimate character. TPs were made from both UPs and EPs to provide a "low-level acoustical match," so that the sound spectra and "voice identity" were preserved while the idioms seemed less logical. The fMRI results revealed stronger activation to UPs in the left anterior superior temporal gyrus (aSTG), as well as in the ventral inferior frontal gyrus (IFG), consistent with the "phonological-semantic prediction of spoken words" (4). Additionally, the study found that the superior temporal lobe is responsible for processing and organizing words and phrases of different lengths, and that regions from the left aSTG to the anterior superior temporal sulcus (STS) become activated in this process. The superior temporal sulcus can be differentiated into two functional aspects: language processing and social cognition (10). For example, a person is socially cognizant when they are able to be empathetic toward other individuals and make eye contact during a respectful conversation. Lastly, the posterior middle temporal gyrus (primarily responsible for encoding language context and motion discrimination) displayed relatively strong activation for both UPs and EPs.

Aspects of processing language

According to Wong et al. (6), the structural components of the brain serve as a visual aid in understanding these "architectural language networks," and their functions explain how the networks are aligned through personal experience or different contexts (6). Studies indicate that bilingual individuals are constantly selecting and omitting specific words from their vocabulary when switching between their first and other spoken languages, which gives them a selective advantage over monolinguals in the level of their cognitive abilities. Executive functioning is a set of nonverbal neurological functions that assist individuals in managing simple or complex daily tasks; inhibition, shift, and update are the main executive functions relevant to language.




Inhibition encompasses the ability to stop an automatic behavioral response as needed, and shift is the ability to transition freely between situations or tasks. Finally, update refers to the brain's capacity to retain information and replace it with newly acquired information (6). A study was conducted to test executive functioning in Welsh-English bilinguals. The principal investigators, Wu and Thierry (11), formulated two hypotheses: 1) if bilinguals have a universal, fixed advantage in executive functioning, then cognitive control is independent of linguistic context; or 2) if that advantage is context dependent, then cognitive control should be most prominent when bilinguals are exposed to each of their spoken languages. Eighteen Welsh-English bilinguals participated in the study; all were right-handed with normal or corrected-to-normal vision (9 males and 9 females; 20.4 +/- 2.1 years). These individuals had learned both languages as toddlers and spoke only Welsh and English. The subjects rated their proficiency in speaking, reading, listening, and writing in both languages on a scale from 1 ("very poor") to 10 ("perfectly fluent") (15); the mean ratings were 8.9 +/- 1.1 for English and 9.8 +/- 0.5 for Welsh. The stimuli were displays of 5 horizontal arrows in which the third (central) arrow either did or did not match the direction of the "flanking" arrows. The arrangements included all 5 arrows pointing right, all 5 pointing left, 4 pointing left with the third pointing right, and 4 pointing right with the third pointing left. A total of 270 high-frequency nouns in English and Welsh were displayed across 3 blocks of 90 arrow arrangements; the monolingual blocks contained words in either English or Welsh only, and the bilingual blocks contained half Welsh and half English words. Eighteen different stimulus sequences were created so that presentations were randomized across participants (11). The stimuli were presented to participants seated approximately 1 meter away, who were directed to press the right button if the arrow display pointed right and the left button if it pointed left. Intermixed with the arrow displays was a set of words that acted both as distractors and as an indicator of the subjects' cognitive abilities. Participants viewed a blank screen for 200 milliseconds (ms) and then a display with a cross at the center of the screen for 500 ms. Each stimulus then appeared on the screen for about 1,500 ms, with intervals of 2,000 ms between stimuli. At the end of the trials, participants completed a questionnaire containing the words from the displays along with approximately 270 added words and picked the ones they recalled seeing (11).
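For readers who want a concrete sense of the trial structure, the sketch below implements a single trial of the arrow-flanker procedure just described. It is a minimal illustration under stated assumptions: the paper does not specify the authors' presentation software, so PsychoPy is an assumed choice, and the word distractors, EEG recording, and block structure are omitted.

```python
# Minimal sketch of one flanker trial, assuming PsychoPy (not the authors'
# actual software). Timings follow the paper: 200 ms blank screen, 500 ms
# central fixation cross, then the arrow display with up to 1,500 ms to respond.
from psychopy import core, event, visual

win = visual.Window(color="grey", units="height")
fixation = visual.TextStim(win, text="+", height=0.08)
# A non-corresponding (incongruent) display: the central arrow opposes the flankers.
arrows = visual.TextStim(win, text="< < > < <", height=0.08)

def run_trial():
    """Run one flanker trial; return [(key, reaction_time)] or [] on timeout."""
    win.flip()                   # blank screen
    core.wait(0.2)               # 200 ms
    fixation.draw()
    win.flip()                   # fixation cross
    core.wait(0.5)               # 500 ms
    clock = core.Clock()
    arrows.draw()
    win.flip()                   # arrow display; respond within 1,500 ms
    keys = event.waitKeys(maxWait=1.5, keyList=["left", "right"],
                          timeStamped=clock)
    return keys or []

response = run_trial()
win.close()
print(response)
```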

Event-related potential (ERP) data were recorded throughout the study to measure the mean amplitudes of the non-corresponding and corresponding conditions through 63 electrodes arranged across each participant's scalp. The data indicated a slightly elevated amplitude at 400 to 800 ms when a combination of the languages was presented, and an increasingly high amplitude for non-correspondence in either Welsh or English. By contrast, the behavioral data indicated that reaction times for the corresponding condition were about the same in Welsh and English contexts, at 610 ms, and 630 ms in mixed contexts. In the non-corresponding trials, English had the highest mean reaction time (690 ms), while Welsh and mixed contexts were both slightly lower at 660 ms. According to these results, the subjects handled the non-corresponding trials better in the mixed blocks, as indicated by the decreased reaction times, whereas the blocks for the corresponding trials did not display any significant differences. The results thus support the second hypothesis, that executive function depends on language context in bilinguals; in other words, bilinguals can incorporate and utilize different inhibitory mechanisms when presented with a mixture of words from their spoken languages (11).

Age of acquisition and language proficiency

Age of acquisition (AoA) refers to the stage of development at which a piece of information is learned; earlier acquisition fosters quicker response times in adulthood. Some studies have used different imaging techniques to show how this phenomenon can affect the white or gray matter of the brain (12). Grogan et al. (13) studied multilinguals whose lexical efficiency differed in relation to the number of languages they spoke. A total of 61 right-handed individuals from the United Kingdom participated in the study; all were between the ages of 18 and 29 years and had learned English after their native language. Of the 61 participants, 31 were multilinguals who spoke English and several other languages (a range of 3 to 6 languages, one of them native), and 30 were bilinguals who spoke English and their native language (13). The AoA criteria for the English speakers in this study were when the subjects first learned English, how much English they used, and how they rated their language proficiency, from 1 (low proficiency) to 9 (high proficiency). Additionally, the researchers measured the subjects' efficiency in producing English words through a "letter fluency" task, in which the subjects wrote down as many words as they could starting with the letter "s" in 1 minute. To measure how well the subjects could recognize English words, a letter or two were changed in actual English words, and recognition was assessed through a computerized lexical decision test. The trials consisted of 60 illegitimate words and 60 legitimate words divided into 6 blocks, with the subjects pressing a button for the words they thought legitimate (13). The data were collected using magnetic resonance imaging (MRI), which measured the speed and accuracy of responses while taking high-quality images of the brain. Voxel-based morphometry (VBM) images were also acquired to identify structures in the brain (6); these images were either modulated, meaning adjusted to quantify the volume of a given area, or unmodulated, which measures tissue density. The researchers compared the brain images of the bilingual and multilingual groups together without AoA, and each group separately. Essentially, both groups were tested for correlations between gray matter and lexical efficiency (excluding AoA), between gray matter and AoA (excluding lexical efficiency), and for both together (13).
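Because voxel-based morphometry recurs throughout this review, the sketch below illustrates the voxelwise group comparison at its core. It is a conceptual toy on simulated arrays, assuming preprocessing (spatial normalization, tissue segmentation, smoothing) has already been done; real pipelines (e.g., SPM or FSL) also correct for multiple comparisons, which is only flagged here.

```python
# Conceptual toy illustration of the voxelwise group comparison underlying
# voxel-based morphometry (VBM). The "gray matter density" maps below are
# simulated arrays, not real imaging data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# 20 subjects per group, each with a flattened map of 1,000 voxels
# (toy dimensions; real maps contain hundreds of thousands of voxels).
multilinguals = rng.normal(loc=0.55, scale=0.05, size=(20, 1000))
bilinguals = rng.normal(loc=0.50, scale=0.05, size=(20, 1000))

# Independent two-sample t-test at every voxel simultaneously.
t_vals, p_vals = stats.ttest_ind(multilinguals, bilinguals, axis=0)

# Naive uncorrected threshold; a real study would control family-wise
# error or the false discovery rate before reporting clusters.
n_sig = int(np.count_nonzero(p_vals < 0.001))
print(f"{n_sig} of {p_vals.size} voxels exceed the uncorrected threshold")
```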



The unmodulated images indicated an increased density of gray matter in multilinguals in the posterior supramarginal gyrus (pSMG) of both the left and right hemispheres. The supramarginal gyrus forms part of the parietal lobe, as previously discussed, and plays a major role in emotional responses (empathy) and phonological processing (5). The lexical efficiency trial showed a direct correlation between lexical efficiency and gray matter density: higher lexical efficiency corresponded to greater gray matter density and vice versa, and was associated with a low AoA in the left inferior frontal cortex (13).

Conclusion

The phenomenon known as bilingualism and its effects on the human brain make a seemingly simple concept like language vastly interesting. We often take for granted that a second language is passed down by our parents or learned later, but we rarely consider how the neural network becomes altered by processing more than one language, or the different types and levels of that alteration. The studies discussed thoroughly explain how the capabilities of a bilingual individual can alter the brain structurally and functionally with regard to the different types of executive functioning, such as inhibition, shift, and update, as well as the correlation between language proficiency and age of acquisition. The study performed by Wong et al. (6) measured the differences in gray matter density between bilinguals and monolinguals using voxel-based morphometry. Mostly the left hemisphere of the brain was affected, since it plays a role in language, reasoning, and working memory (8). The specific areas that exhibited increased gray matter density in bilinguals compared with monolinguals were the inferior parietal cortex and lobe (sensory, visual, and auditory processing of great importance when learning a new language), the inferior frontal gyrus (understanding language), the superior temporal gyrus, and the dorsolateral and rostrolateral prefrontal cortices (planning and organizing behavior as well as the highest levels of cognitive skill and decision-making). A similar study was then discussed briefly to establish the areas activated in a monolingual brain during language processing, and they were broadly similar to those in the bilingual brain: the left anterior superior temporal gyrus, the ventral inferior frontal gyrus, the anterior superior temporal sulcus (executing social cognition by understanding social cues such as empathy and kindness), and the posterior middle temporal gyrus (language content and deciphering motion). Lastly, studies were discussed showing how the ages at which a person acquires their first and second languages can affect different cognitive abilities and language proficiency. For future applications, additional studies should investigate whether the same or new areas in the left or right hemispheres are activated when learning a second language (both writing and speaking).

Acknowledgments

References

1. Camarota SA, Zeigler K. (2014, October 3). One in Five U.S. Residents Speaks Foreign Language at Home, Record 61.8 Million. Retrieved Jan. 29, 2018, from https://cis.org/One-Five-US-Residents-Speaks-Foreign-Language-Home-Record-618-million
2. Buchweitz A, Prat C. (2013). The bilingual brain: Flexibility and control in the human cortex. Physics of Life Reviews. 10(4),428-443.
3. Bialystok E, Craik FIM, Luk G. (2012). Bilingualism: Consequences for Mind and Brain. Trends in Cognitive Sciences. 16(4),240-250.
4. Berken JA, Chai X, Chen J, Gracco VL, Klein D. (2016). Effects of Early and Late Bilingualism on Resting-State Functional Connectivity. Journal of Neuroscience. 36(4),1165-1172.
5. Purves D, Augustine GJ, Fitzpatrick D, Hall WC, LaMantia A, Mooney RD, White LE. (2018). Neuroscience, sixth edition. Sunderland, MA: Oxford University Press.
6. Wong B, Yin B, O'Brien B. (2015, November 22). Neurolinguistics: Structure, Function, and Connectivity in the Bilingual Brain. BioMed Research International. 2016,1-22.
7. Whitwell JL. (2009). Voxel-Based Morphometry: An Automated Technique for Assessing Structural Changes in the Brain. Journal of Neuroscience. 29(31),9661-9664.
8. Jalali-Moghadam N, Kormi-Nouri R. (2015). The role of executive functions in bilingual children with reading difficulties. Scandinavian Journal of Psychology. 56(3),297-305.
9. Westphal AJ, Reggente N, Ito KL, Rissman J. (2015). Shared and distinct contributions of rostrolateral prefrontal cortex to analogical reasoning and episodic memory retrieval. Human Brain Mapping. 37(3),896-912.
10. Hein G, Knight RT. (2008). Superior temporal sulcus - It's my area: or is it? Journal of Cognitive Neuroscience. 20(12),2125-2136.
11. Wu YJ, Thierry G. (2013). Fast Modulation of Executive Function by Language Context in Bilinguals. Journal of Neuroscience. 33(33),13533-13537.
12. Lake BM, Cottrell GW. (n.d.). Age of Acquisition in Facial Identification: A Connectionist Approach [Abstract].
13. Grogan A, Jones P, Ali N, Crinion J, Orabona S, Mechias M, et al. (2012, March 1). Structural correlates for lexical efficiency and number of languages in non-native speakers of English. Neuropsychologia. 50(7),1347-1352.
14. Glover GH. (2011). Overview of Functional Magnetic Resonance Imaging. Neurosurgery Clinics of North America. 22(2),133-139.
15. Lyu B, Ge J, Niu Z, Tan LH, Gao J. (2016). Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing. Journal of Neuroscience. 36(42),10813-10822.

The editor for this paper was Brian Piper, PhD.



Pharmacoepidemiology and Public Policy Regarding Opioid Retail Distribution in Washington State from 2006–2016
Jessica J. Wang1*, Stephanie D. Nichols2, Kenneth L. McCall3, and Brian J. Piper1

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2Husson University School of Pharmacy, Bangor, ME 04401
3University of New England, Biddeford, ME 04005
*Correspondence: jwang01@som.geisinger.edu

Abstract

Background: The opioid epidemic is one of the largest public health issues in the United States. According to the Centers for Disease Control and Prevention (CDC), 568,599 people in the United States died from drug overdoses between 1999 and 2015. Furthermore, 63.1% and 66.4% of the overdose deaths in 2015 and 2016, respectively, were related to opioids. By 2006, the CDC identified Washington state to be in the upper one-third of mortality from unintentional drug overdoses in the nation. Washington state has striven to find solutions through policy adaptations and collaborations between government agencies, public agencies, and physicians. Methods: This study examined the retail drug distribution data tracked by the Drug Enforcement Administration's Automation of Reports and Consolidated Orders System. Washington state data from 2006 to 2016 were obtained to evaluate trends in the most frequently distributed opioids: codeine, oxycodone, hydromorphone, morphine, oxymorphone, fentanyl, tapentadol, meperidine, buprenorphine, methadone, and hydrocodone. The retail drug weights were abstracted and trends examined. In addition, policy changes from this period that addressed access to or control of opioids were explored. Results: The largest increases in morphine milligram equivalents (MME) between 2006 and 2016 were for oxymorphone (+1,118.0%), followed by buprenorphine (+858.1%) and tapentadol (+338.7%). The largest decreases were for meperidine (-76.0%), followed by codeine (-34.4%) and morphine (-15.3%). Conclusions: Overall retail drug distribution now seems to be declining or increasing at a slower rate than in 2006, but it is difficult to relate these trends to specific policy changes that address drug distribution or opioid overdoses. A next step could be to compare trends in prescription opioid deaths with retail drug distribution and policy changes to determine relationships and evaluate efficacy.

Introduction

According to the US Centers for Disease Control and Prevention (CDC), overdoses from prescription opioids were a key contributor to the rise in opioid overdose deaths from 1999 to 2016 (1). Although the overall amount of pain reported by the US population has not changed, the amount of prescription opioids sold to distributors, such as pharmacies, hospitals, and doctors' offices, almost quadrupled from 1999 to 2010 (1). The profound increase in opioid prescriptions and sales, leading to increased opioid-related deaths and abuse, has contributed to the public health crisis in the United States that is often designated the opioid epidemic.

In the mid- to late 1980s, a case series suggested that long-term opioids taken for chronic non-cancer pain were safe and could be taken with "few severe problems" (2). The perceived lack of treatment of chronic pain led to changes in state legislation across the country that reduced the risk of sanctions on prescribers, allowing opioids to be prescribed more easily (2). Failures in the measures taken by the US Agency for Health Care Policy and Research, the American Pain Society (APS), and the World Health Organization to improve the assessment and treatment of pain were addressed in a 1990 editorial by the president of the APS, Dr. Mitchell Max. He recommended a different approach that included making pain "visible," allowing nurses to initiate and modify analgesic treatments, and developing "quality assurance guidelines" (3). The APS introduced the "Pain as the Fifth Vital Sign" campaign in 1996 to address the underassessment of pain management at the time by emphasizing that pain is just as important as the standard four vital signs. This concept was accepted by organizations such as the US Veterans Health Administration and The Joint Commission and incorporated into their standards of pain assessment and management (4).

The US Food and Drug Administration (FDA) approved the release of OxyContin®, a controlled-release version of oxycodone manufactured by Purdue Pharma (5), in December 1995. Through an extensive, multifaceted marketing campaign, Purdue Pharma quickly increased sales of the drug from $44 million in 1996 to almost $3 billion in 2001 and 2002. Purdue Pharma used prescriber profiles collected by drug companies to target high opioid-prescribing physicians across the country. The company conducted national pain management training conferences, offering all-expenses-paid trips to more than 5,000 physicians, pharmacists, and nurses; established a bonus system to incentivize its representatives to increase sales; and initiated a patient starter coupon program giving patients a free limited-time prescription for a 7- to 30-day supply of OxyContin. Purdue Pharma's marketing material claimed that the risk of iatrogenic addiction from OxyContin was less than 1%, and this claim was further propagated by its sales representatives. The increased availability of OxyContin precipitated by this effective marketing campaign resulted in OxyContin becoming the "most prevalent prescription opioid abused in the United States" by 2004 (6).



Significant increases were also seen in prescription rates of buprenorphine and tapentadol. Buprenorphine was approved by the FDA in 2002 and is used to aid in the treatment of opioid addiction and dependency; it offers a good alternative to methadone, which must be administered in a structured clinic (7). Tapentadol is an opioid initially FDA-approved in November 2008 for acute pain; the FDA subsequently approved an extended-release form of tapentadol for chronic pain in 2011 and an extended-release form for diabetic neuropathic pain in 2012 (8).

Opioid prescriptions in Washington state increased 500% between 1997 and 2006. Washington changed its laws and regulations in 1999 in accordance with the common perception of the undertreatment of chronic pain and the nationwide trend of relaxing the stigma against opioid prescriptions. These state policy changes, coupled with national policy changes and improvements in the marketing strategies of pharmaceutical companies, led to the rapid increase of opioid prescriptions in the early 2000s. "By 2006, the CDC identified Washington to be in the upper one-third of mortality from unintentional drug overdoses in the United States" (2). Recognition of the impact of this issue has instigated collaborations between practicing providers in the field of pain, public agencies, and state executive agencies to reduce and reverse the opioid epidemic in Washington (2).

In the United States, the Diversion Control Division of the Drug Enforcement Administration (DEA) tracks the transactions of controlled substances from manufacturers through distributors, including hospitals, pharmacies, practitioners, mid-level practitioners, and teaching institutions, using the Automation of Reports and Consolidated Orders System (ARCOS) (9). This research project uses ARCOS data to examine trends in retail opioid distribution within Washington state from 2006 to 2016, with a focus on the more frequently distributed opioids, including codeine, oxycodone, hydromorphone, morphine, oxymorphone, fentanyl, tapentadol, meperidine, and buprenorphine. The research compares trends in opioid retail distribution in Washington state with changes in policy and events to determine potential factors affecting the varying trends of opioid distribution.

Materials and Methods

Procedures

The Retail Drug Summary Reports (10) collected by ARCOS were accessed from the Diversion Control Division website. Data from "Retail Drug Distribution by Zip Code Within State By Grams Wt" (Report 1) (10) for Washington state were abstracted for each year from 2006 to 2016. The more frequently distributed opioids codeine, oxycodone, hydromorphone, morphine, oxymorphone, fentanyl, tapentadol, meperidine, and buprenorphine were selected for analysis and adjusted to morphine milligram equivalents (MME) using a guideline from the CDC (11). A narrative review of public policies implemented primarily between 2006 and 2016 was also completed using the criteria Washington state, opioid, opioid epidemic, and policy on databases including PubMed and Google Scholar. Web searches with the same criteria yielded background information on the policies and their implementation. The study protocol was approved by the Institutional Review Board of the University of New England.

Data analysis

CDC conversion factors were used to calculate MMEs (11) for each of the above-mentioned opioids. For methadone, data from "Statistical Summary for Retail Drug Purchases by Grams Wt" (Report 5) (10) were abstracted to distinguish purchases by pharmacies, hospitals, and practitioners from those by narcotic treatment programs, because the conversion factor for methadone increases at higher doses: a conversion factor of 8 (11) was used for pharmacies, hospitals, and practitioners, whereas a conversion factor of 12 (11) was used for narcotic treatment centers. The 2010 US Census population was used to calculate the MME per person within the state (12).
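To make the conversion concrete, the sketch below shows the MME-per-person arithmetic described in this section. It is a minimal illustration, not the authors' actual analysis code: the population constant and the methadone factors of 8 and 12 come from the text, the morphine (1.0) and oxycodone (1.5) factors are from the cited CDC table, and the gram weights in the usage example are hypothetical, not actual ARCOS values. Formulation-dependent drugs such as fentanyl and buprenorphine would need their specific CDC table entries.

```python
# Minimal sketch of the MME-per-person calculation described in Data analysis.

WA_POPULATION_2010 = 6_724_540  # 2010 US Census count used throughout the paper

def mme_per_person(grams: float, factor: float,
                   population: int = WA_POPULATION_2010) -> float:
    """Convert an annual statewide weight in grams to MME per person."""
    milligrams = grams * 1_000.0
    return milligrams * factor / population

def methadone_mme_per_person(grams_other: float, grams_ntp: float) -> float:
    """Apply the two methadone factors from the text: 8 for pharmacies,
    hospitals, and practitioners; 12 for narcotic treatment programs."""
    return mme_per_person(grams_other, 8.0) + mme_per_person(grams_ntp, 12.0)

if __name__ == "__main__":
    # Hypothetical weights, for illustration only (not ARCOS values).
    print(f"Oxycodone: {mme_per_person(2_000_000, 1.5):.1f} MME/person")
    print(f"Methadone: {methadone_mme_per_person(50_000, 150_000):.1f} MME/person")
```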

Results

Pharmacoepidemiology

The retail drug distribution data for Washington state were abstracted from 2006 to 2016. The data were initially abstracted for each quarter, and the total grams for each year are shown in Table 1. Data for tapentadol prior to 2009 are missing because tapentadol was not approved by the FDA until 2008 and not released in the United States until 2009. Figure 1A depicts the retail drug distribution of opioids with total weights under 100 kg: buprenorphine, hydromorphone, meperidine, oxymorphone, tapentadol, and fentanyl. From 2006 to 2016, the largest increase in distribution was seen in oxymorphone (+1,117.0%), followed by buprenorphine (+857.9%). Tapentadol had the next highest elevation (+337.9%) between 2009 and 2016, with a sharp escalation between 2009 and 2012 that appears to have leveled off between 2012 and 2016. There was an increase in hydromorphone (+61.6%) between 2006 and 2016. The most pronounced decrease was for meperidine (-75.9%), followed by fentanyl (-7.3%). Figure 1B depicts the retail drug distribution of opioids with total weights over 100 kg: codeine, oxycodone, hydrocodone, methadone, and morphine. Between 2006 and 2016, the largest increase was +19.7% for oxycodone, with most of that elevation occurring between 2006 and 2008. There was also a more modest increase (+9.2%) in hydrocodone from 2006 to 2016; hydrocodone peaked in 2011, at +33.6% above its 2006 level. The largest decrease between 2006 and 2016 was for codeine (-34.4%), followed by methadone (-19.3%) and morphine (-15.3%).
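The trend figures reported throughout the Results follow the standard percent-change calculation; a one-line sketch, with an illustrative (non-ARCOS) example, is shown below.

```python
# Percent change between two annual totals, as reported in the Results.
# The example weights are illustrative only, not actual ARCOS values.
def percent_change(initial: float, final: float) -> float:
    return (final - initial) / initial * 100.0

# A drug rising from 50.0 kg distributed in 2006 to 59.85 kg in 2016
# would be reported as +19.7%.
assert round(percent_change(50.0, 59.85), 1) == 19.7
```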




Table 1. Annual retail drug distribution of select opioids in Washington state in grams. Annual total retail drug distribution of the 11 selected opioids in Washington state, in grams, recorded by ARCOS from data collected by the DEA between 2006 and 2016. These include hydromorphone, meperidine, oxymorphone, fentanyl, buprenorphine, tapentadol, codeine, oxycodone, hydrocodone, methadone, and morphine.

Figure 1A. Retail drug distribution of opioids with total weights of under 100 kilograms. Retail drug distribution of drugs with total weight distributed under 100 kg between 2006 and 2016. These include hydromorphone, meperidine, oxymorphone, fentanyl, buprenorphine, and tapentadol.

Figure 1B. Retail drug distribution of opioids with total weights of over 100 kilograms. Retail drug distribution of drugs with total weight distributed over 100 kg between 2006 and 2016. These include codeine, oxycodone, hydrocodone, methadone, and morphine.




Figure 2 illustrates the MME for codeine, oxycodone, hydromorphone, morphine, oxymorphone, fentanyl, tapentadol, meperidine, and buprenorphine. The data from Table 1 were converted from grams to milligrams, and the CDC guideline for calculating the total dose of opioids (morphine milligram equivalents) was used (11) to create Table 2. The total MME for each drug was then divided by 6,724,540, the population of Washington state per the 2010 US Census (12). Figure 2 is separated into three panels because of the wide distribution of the data. The upper portion of Figure 2 ranges from 230 to 800 MME. Oxycodone dosage remained relatively steady, with an overall increase of +19.7% between 2006 and 2016. ARCOS Report 5 was used to evaluate the methadone data: the grams of methadone distributed across the state were separated into narcotic treatment centers and other buyers (hospitals, pharmacies, providers, and teaching institutions), and the corresponding multiplier was used to calculate MME. As with the other opioids, methadone totals were divided by 6,724,540, the population of Washington state per the 2010 US Census (12); these data are shown in Table 3A and Table 3B. A decrease of -9.2% was seen in methadone per capita between 2006 and 2016. The middle portion of Figure 2 depicts 10 to 120 MME. The largest increase between 2006 and 2016 was seen in buprenorphine (+858.1%), followed by hydromorphone (+61.6%). Morphine and hydrocodone followed similar trends, both increasing initially and decreasing around 2011; there was a net increase of +9.2% in hydrocodone but a net decrease of -15.3% in morphine from 2006 to 2016. The lowest portion of Figure 2 displays 0 to 10 MME. The largest spike (+1,118.0%) was in oxymorphone between 2006 and 2016, followed by tapentadol, which increased +338.7% between 2009 and 2016 with a pronounced elevation between 2009 and 2012. The greatest decrease between 2006 and 2016 was seen in meperidine (-76.0%), followed by codeine (-34.4%), with relatively modest changes for fentanyl (-7.0%). Figure 3 shows the annual sums of opioid dosages, in MME per person, for codeine, oxycodone, hydromorphone, morphine, oxymorphone, fentanyl, tapentadol, meperidine, and buprenorphine; these totals are shown in Table 4. Between 2006 and 2009, the opioid dosage per person increased from 1,143.83 MME to 1,370.45 MME. It subsequently decreased between 2009 and 2013, from 1,370.45 MME to 1,271.8 MME, with the steepest decline between 2011 and 2012 (from 1,355.1 MME to 1,301.4 MME). The trend then rose from 1,271.8 MME in 2013 to 1,321.4 MME in 2015 before falling to 1,228.6 MME in 2016. At its highest point, there was a 19.8% increase in the distribution of opioid dosage per person between 2006 and 2009, but the net increase over 2006 to 2016 was only 7.4%.

Figure 2. Morphine milligram equivalents of 11 drugs per person in Washington state from 2006 to 2016. Morphine milligram equivalents of each opioid per person in Washington state between 2006 and 2016, shown on a three-part graph to better illustrate trends. These include hydromorphone, meperidine, oxymorphone, fentanyl, buprenorphine, tapentadol, codeine, oxycodone, hydrocodone, methadone, and morphine.




Table 2. Morphine milligram equivalents per person of 11 opioids in Washington state. Morphine milligram equivalents per person of 11 opioids in Washington state between 2006 and 2016, using the population recorded in the 2010 US Census. These include hydromorphone, meperidine, oxymorphone, fentanyl, buprenorphine, tapentadol, codeine, oxycodone, hydrocodone, methadone, and morphine. Tapentadol was not recorded until 2009 because it did not obtain FDA approval until late 2008.

Table 3A. Morphine milligram equivalents of methadone – pharmacies, hospitals, practitioners. Morphine milligram equivalents of methadone distributed in pharmacies, hospitals, and by practitioners was calculated by summing the annual total weight of methadone distributed and using a multiplier of 8. Teaching institutions were omitted because none were distributed to these groups. There were no recorded data for mid-level practitioners until 2016.

Table 3B. Morphine milligram equivalents of methadone – narcotic treatment centers. Morphine milligram equivalents of methadone distributed in narcotic treatment centers was calculated using a multiplier of 12.



Figure 3. Annual sum of morphine milligram equivalents per person in Washington state. The sum of morphine milligram equivalents per person in Washington state of all 11 opioids recorded for each year between 2006 and 2016.

Public policy

Washington state health care agencies identified opioid prescribing practice as a key factor influencing the opioid epidemic, and in 2006 the Agency Medical Directors' Group (AMDG) met with the Clinical Advisory Group (comprising 15 pain management experts) to develop the first opioid prescribing guideline in the United States. The guideline was initially implemented in 2007 and aimed at patients taking at least 120 MME, a threshold selected based on the experiences of the Clinical Advisory Group. The guideline recommended that patients who reach a dose of 120 MME without any clinical improvement see a pain specialist for consultation, and that the opioid regimens of patients already taking more than 120 MME be reevaluated for possible reduction or withdrawal. A survey conducted in 2009 found that most providers did not use these guidelines (2). Updates were made to the guidelines in 2010 and again in 2015 (2).

Table 4. Annual sum of morphine milligram equivalents per person in Washington state. The sum of the morphine milligram equivalents per person of the 11 opioids, with methadone taken from Table 3A and Table 3B, divided by the population of Washington state according to the 2010 US Census.

The Engrossed Substitute House Bill (ESHB) 2876 was approved in March 2010 and was adopted in July 2011. The bill repealed earlier pain management rules and required new rules be adopted which addressed dosing criteria, guidance on the required pain management consultations, and guidance on tracking clinical progress (13). The groups of opioid prescribers who came together to develop these rules include allopathic physicians, osteopathic physicians, podiatrists, dentists, and advanced registered nurse practitioners (2).



Washington state passed the Good Samaritan Law in 2010. This law provided legal immunity for people overdosing and for those seeking aid for someone else who is overdosing, and it promoted the use of naloxone by overdose bystanders (14). It took 5 years for this law to be passed, due to the need to "keep the scope of immunity narrow…to get support of law enforcement, prosecutors, and some legislators" and the "emergence of prescription medicines as the drugs involved in a majority of drug overdoses" (14). The law was passed without funding for implementation (14).

The University of New Mexico Project Extension for Community Healthcare Outcomes (ECHO) launched in 2003. The program aimed to improve health care in underserved populations through the use of telemedicine: interdisciplinary specialist teams were connected with primary care providers through case-based learning, mentoring these clinicians in treating certain conditions in their communities. Project ECHO has been replicated in other parts of the United States (including Washington state) as well as in other countries (15). Owing to the success of Project ECHO, the University of Washington partnered with the University of New Mexico, launching UW TelePain/ECHO in 2011. Like Project ECHO, this program linked specialists with community providers in underserved areas within the state, with the additional goal of extending access to Wyoming, Alaska, Montana, and Idaho. UW TelePain is composed of didactic lectures and case presentations and provides access to an interdisciplinary panel of specialists. This resource helps providers obtain the pain management specialist consultations required for opioid prescribing in Washington state (16). The program offers a weekly lecture on chronic pain management and an additional weekly lecture on addiction and substance dependence. Within 2 years, 660 providers had participated in the program, with 230 consultations completed and over 1,600 hours of educational material made available (2).

RCW 70.225, passed in 2007, detailed the creation of the Washington State Prescription Monitoring Program (PMP), which allows providers to access a patient's prescription history before prescribing medication. The program aimed to reduce abuse of controlled substances and duplicate prescribing, and to improve quality of care and prescribing practices. The program was made available to providers in January 2012 and tracked the prescribing and dispensing of Schedule II, III, IV, and V controlled substances (17). Because registration is not required, fewer than 30% of providers with DEA registration were registered to use the PMP as of 2015; moreover, even registered providers are not required to use it (2).

In Washington state, legalization of medical marijuana for "certain terminal or debilitating conditions" (18) occurred in 1998 with the approval of Initiative 692 (I-692). At that point, qualifying patients were allowed only a 60-day supply. In 2010, an amendment to I-692 allowed more types of health care professionals to prescribe medical marijuana, including physician assistants, advanced registered nurse practitioners, and naturopathic physicians. State legalization of recreational marijuana occurred in 2012 with the approval of Initiative 502, which allowed adults to possess a limited amount of marijuana (up to 1 ounce) if acquired from state-licensed and regulated marijuana stores (18).

Discussion
There are several key findings in this report. In 2015, the president of the American Academy of Pain Medicine, Bill McCarberg, MD, described Washington as “leading the way again” (19) in pain management. Washington state has been a leader in using research and expert knowledge to develop initiatives to reduce the opioid epidemic (20). In a letter preceding the 2015 AMDG opioid-prescribing guidelines, the Washington state Secretary of Health stated that the “AMDG Guideline, along with other key statewide efforts, has resulted in a 29% decrease in the rate of prescription opioid deaths between 2008 and 2013” (21).

In April 2007, the AMDG in Washington published guidelines aimed at opioid prescribing practices to curb the growing opioid epidemic. A survey sent to 655 Washington state primary care physicians in 2009 (22) showed that most physicians were not using these guidelines. This appears to be consistent with the retail opioid drug distribution during this time (2). Focusing on 2007 to 2009, the trends in retail opioid distribution are consistent with the 2006 to 2007 trajectory. Drugs with previously increasing or decreasing trends continued to increase or decrease, respectively, with the exception of fentanyl, which did not change between 2008 and 2009, and oxycodone, which decreased by only 1.2% from 2008 to 2009 (Table 1). The slight decrease in oxycodone retail distribution reversed, increasing again the following year. The AMDG saw a need to update the guidelines, and these were revised in June 2010 (2) with ways to better implement them.

Washington passed its Good Samaritan overdose law in 2010 to reduce the prevalence of opioid deaths. An initial evaluation of the law, conducted by the Alcohol and Drug Abuse Institute at the University of Washington and published in November 2011, found that 88% of opiate users reported being more likely to call 911 in the event of an overdose. In addition, 76% of police surveyed would not have made an arrest or would be less likely to make an arrest (14). Washington also passed ESHB 2876 in 2010, which called for dosing criteria to be established and required a pain management consultation when a specific dosage was exceeded. It also included guidelines on when to seek a specialty consultation and on how to track opioid use and clinical progress (13).

In Figure 2, most of the opioids distributed between 2010 and 2011 follow the same trend as the previous year. Morphine slightly increased following a slight decrease between 2009 and 2010. The 2011-to-2012 interval would be the best period in which to see the initial impacts of the Good Samaritan Law, ESHB 2876, and the updated AMDG guidelines for opioid prescribing. As shown in Figure 2, oxymorphone decreased 12.7% between 2011 and 2012. It is unclear whether this drop in oxymorphone can be attributed to these initiatives. Laws are generally intended to have long-term effects, and the quantity of oxymorphone increased again in 2013 to nearly its 2011 level. On the other hand, 2011 is the peak for morphine and hydrocodone within the 2006 to 2016 interval. Following this peak, both drugs decreased until 2016.
Despite these mixed per-drug trends, Figure 3 shows a decrease in total opioid dosage per person in Washington state (in MME) beginning in 2009, a trend that continues through 2013.

The Prescription Monitoring Program was made available to physicians in 2012 and allowed them to see which drugs their patients had been prescribed by which other providers and whether those prescriptions were dispensed. However, registration with the program was not required of physicians in Washington, contrary to other states, and as a result Washington state has not taken full advantage of this resource. Many providers are not registered, and therefore the implementation of this law may have had no impact on the trends in opioid retail distribution.

Medical marijuana laws have existed in Washington for two decades. A study published in 2014 found that “medical cannabis laws are associated with significantly lower state-level opioid overdose mortality rates” (23). Washington’s first medical marijuana law was passed in 1998, and amendments to these laws have made medical marijuana increasingly available. The increased availability of medical marijuana has not appeared to impact opioid distribution. Studies evaluating trends in cannabis use before and after the passage of the recreational law in 2012 have not been done; therefore, it is difficult to correlate marijuana use with opioid distribution.

One limitation of this study is the date range of the data set pulled from the ARCOS database. Retail Drug Summary Reports are available for years starting at 2000, and a 2000-onward perspective may provide a broader depiction of opioid distribution trends. It may be even more beneficial to examine trends starting from the 1990s or earlier, before states began relaxing their opioid laws and there was a push to increase opioid prescribing, but these data are not available because ARCOS data were not publicly released for that interval. Another limitation is that not all of the opioids reported by ARCOS were evaluated in this study. Manufacturers and distributors are required to report to the DEA all controlled substances classified as Schedule I and II, as well as all narcotic controlled substances classified as Schedule III. In addition to codeine, buprenorphine, oxycodone, hydromorphone, hydrocodone, meperidine, methadone, morphine, oxymorphone, tapentadol, and fentanyl, the other opioids reported as distributed in Washington state include dihydrocodeine, levorphanol, opium tincture, powdered opium, noroxymorphone, alfentanil, remifentanil, sufentanil, and heroin. However, the 11 drugs with the highest total grams distributed were the ones evaluated. One potential future direction is to examine other states (Oregon, Idaho) in conjunction with their opioid use and public policies. A third limitation is that the retail drug distribution data abstracted from the ARCOS report only indicate how much of each drug was sold to each type of buyer (hospitals, pharmacies, practitioners, mid-level practitioners, academic centers, and narcotic treatment centers) or how much went to buyers in each zip code within the state. These data do not indicate how much of the drug was prescribed or even used. Additional data sources (e.g., prescription monitoring programs) would need to be used to determine whether there is any correlation relating the distribution of opioids to the amounts prescribed and to overdose deaths.
A fourth limitation is that 2010 census data were used when calculating per-person figures, although Washington’s population grew between 2006 and 2016. As a result, the weight of opioids distributed per person, as well as the MME per person, may be slightly underestimated for years after 2010 and slightly overestimated for years before 2010. Additionally, not all measures applied by Washington state were fully evaluated in this study. This report focused on state legislation, which may have had a wide impact at a macro (state) level, but it is likely that local and regional initiatives as well as national legislation have contributed to improving the opioid issue in Washington. For example, Washington had over 100 medicine drop-off sites as of 2015 (2). As another example, the Washington State Department of Health began a collaboration in 2008 to develop guidelines for opioid prescribing in emergency departments. This group included members from state departments, emergency physicians, pain physicians, and addiction physicians, and it encouraged the use of an Emergency Department Information Exchange for sharing data among participating emergency physicians when certain patients present to the emergency room (24). It is most likely a combination of factors that contributes to the trends in retail drug distribution and opioid morbidity and mortality in Washington state, and although associations may be made, causation cannot be determined.

Next steps could include statistical analysis of the trends to further evaluate the changes; linear regression could be used to determine projected values. Since ARCOS reports retail drug distribution by zip code, further analysis could identify which zip codes and corresponding counties receive the most drugs. This could be compared with opioid deaths and demographic data in these counties to look for correlations or to inform law enforcement priorities. Local initiatives could be taken into consideration to determine whether some counties have been more successful than others in reducing the opioid epidemic and how successful methods might be implemented elsewhere. A further step could be to take advantage of reports generated by the Washington state PMP to examine what was prescribed and how much was dispensed. Another topic to explore is the effect of pharmaceutical marketing on retail drug distribution. A final potential next step would be determining when drugs came onto the market; for instance, tapentadol’s FDA approval in 2009 may have contributed to the decrease in hydrocodone dispensed starting in 2011. In addition, heroin should be considered as a potential cause of declines in prescription opioid distribution: as prescription opioids become more difficult to obtain, more people are turning to heroin, a cheaper alternative (25).
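As an illustration of the projection step suggested above (not part of the original analysis), the following Python sketch converts annual statewide MME totals to per-person values using a single census denominator, as this study did, and fits an ordinary least-squares line to project later years. The yearly totals and the population figure are placeholders, not ARCOS or Census Bureau values.

```python
# Minimal sketch of the per-person MME and linear-projection steps described
# above. The yearly totals and population below are illustrative placeholders.

POPULATION_2010 = 6_700_000  # placeholder approximating the 2010 census count

mme_by_year = {  # hypothetical total MME distributed statewide per year
    2012: 4.10e9, 2013: 3.95e9, 2014: 3.81e9, 2015: 3.64e9, 2016: 3.52e9,
}

years = sorted(mme_by_year)
per_person = [mme_by_year[y] / POPULATION_2010 for y in years]

# Ordinary least-squares fit of per_person = a + b * year
n = len(years)
mean_x = sum(years) / n
mean_y = sum(per_person) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, per_person))
     / sum((x - mean_x) ** 2 for x in years))
a = mean_y - b * mean_x

for future_year in (2017, 2018):
    print(f"{future_year}: projected {a + b * future_year:.0f} MME per person")
```

Using one census-year denominator reproduces the limitation noted above; substituting yearly population estimates would remove it.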

Conclusion
The opioid crisis is a nationwide epidemic. Morbidity and mortality have risen in part because of increased prescribing, which was driven by the old beliefs that pain was going untreated and that opioids were safe for long-term use. The once accepted terminology of pain as “the fifth vital sign” is now seen as a potential contributor to the overprescription of opioids (26).

The DEA, via ARCOS, has tracked drugs distributed to each state from the manufacturer. Overall, the trends in opioid drug distribution appear to be declining, albeit slowly, with the exception of buprenorphine. It is difficult to attribute these trends to one or even a few causes, despite the potential impact these laws may have had on opioid-related deaths. It would be beneficial to examine these trends in conjunction with opioid deaths and opioids dispensed to determine whether distribution trends follow prescribing trends.

Acknowledgments
Data for this paper were collected by the DEA and reported by ARCOS. The public availability of this data source is appreciated.

Disclosures
JJW, KLM, and SDN have no disclosures. In the past 3 years, BJP has received travel support from the National Institute on Drug Abuse and is a Fahs-Beck Fellow.

References
1. Centers for Disease Control and Prevention. Understanding the epidemic [Internet]. Centers for Disease Control and Prevention; [updated 2017 Aug 30; cited 2018 Feb 5]. Available from: https://www.cdc.gov/drugoverdose/epidemic/index.html
2. Franklin G, Sabel J, Jones CM, Mai J, Baumgartner C, Banta-Green CJ, et al. A comprehensive approach to address the prescription opioid epidemic in Washington state. Am J Public Health. 2015 Mar;105(3):463-9.
3. Baker DW. The Joint Commission’s pain standards: origins and evolution [Internet]. Oakbrook Terrace, IL: The Joint Commission; 2017 [cited 2018 Feb 5]. 10 p. Available from: https://www.jointcommission.org/assets/1/6/Pain_Std_History_Web_Version_05122017.pdf
4. National Pharmaceutical Council, Inc. Pain: current understanding of assessment, management, and treatments [monograph]. Reston, VA: National Pharmaceutical Council, Inc.; 2001.
5. Information by Drug Class. Timeline of selected FDA activities and significant events addressing opioid misuse and abuse [Internet]. U.S. Food and Drug Administration; [updated 2018 Jan 26; cited 2018 Feb 8]. Available from: https://www.fda.gov/Drugs/DrugSafety/InformationbyDrugClass/ucm338566.htm
6. Zee AV. The promotion and marketing of OxyContin: commercial triumph, public safety tragedy. Am J Public Health. 2009 Feb;99(2):221-227.
7. SAMHSA. Buprenorphine [Internet]. SAMHSA; [2016 May 16; cited 2018 Feb 20]. Available from: https://www.samhsa.gov/medication-assisted-treatment/treatment/buprenorphine
8. Drugs.com. Nucynta approval history [Internet]. Drugs.com; [cited 2018 Feb 20]. Available from: https://www.drugs.com/history/nucynta.html
9. Diversion Control Division. Automation of Reports and Consolidated Orders System (ARCOS) [Internet]. US Department of Justice Drug Enforcement Administration; [cited 2017 Aug]. Available from: https://www.deadiversion.usdoj.gov/arcos/
10. Diversion Control Division. ARCOS retail drug summary reports [Internet]. US Department of Justice Drug Enforcement Administration; [cited 2017 Aug]. Available from: https://www.deadiversion.usdoj.gov/arcos/retail_drug_summary/index.html
11. U.S. Department of Health and Human Services, CDC. Calculating total daily dose of opioids for safer dosage [Brochure]. [cited 2017 Oct 16]. Available from: https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. United States Census Bureau. QuickFacts: Washington [Internet]. [No date; cited 2017 Oct]. Available from: https://www.census.gov/quickfacts/fact/table/WA/PST040217
13. Pain management: adoption of rules. HR. ESHB 2876. 61st Legis (2010).
14. Banta-Green CJ, Kuszler PC, Coffin PO, Schoeppe JA. Washington’s 911 Good Samaritan drug overdose law – initial evaluation results. Washington (USA): Alcohol & Drug Abuse Institute, University of Washington; 2011 Nov [cited 2018 Feb 11]. Available from: http://adai.uw.edu/pubs/infobriefs/ADAI-IB-2011-05.pdf
15. Project ECHO: right knowledge. right place. right time [Internet]. New Mexico (USA); 2017 Aug 17 [cited 2018 Feb 10]. Available from: https://echo.unm.edu/wp-content/uploads/2017/09/ECHO_One-Pager_08.17.2017.pdf
16. Anesthesiology & Pain Medicine. UW pain medicine patient care: UW TelePain [Brochure]. Washington (USA): UW Medicine Health System; [cited 2018 Feb 10]. Available from: https://depts.washington.edu/anesth/care/pain/telepain/
17. For Public Health and Healthcare Providers. Prescription monitoring program [Internet]. Washington (USA): Washington State Department of Health; [cited 2018 Feb 11]. Available from: https://www.doh.wa.gov/ForPublicHealthandHealthcareProviders/HealthcareProfessionsandFacilities/PrescriptionMonitoringProgramPMP
18. You and Your Family. Medical marijuana history in Washington [Internet]. Washington (USA): Washington State Department of Health; [cited 2018 Feb 9]. Available from: https://www.doh.wa.gov/YouandYourFamily/Marijuana/MedicalMarijuana/LawsandRules/HistoryinWashington
19. McCarberg B. Washington state opioid prescribing guidelines. Pain Med. 2015 Aug 19;16(8):1455-1456.
20. Howell D, Kaplan L. Statewide survey of healthcare professionals. J Addict Nurs. 2015;26(2):86-92.
21. Agency Medical Directors Group. Interagency guideline on prescribing opioids for pain. Washington (USA); 2015 Jun. 105 p.
22. Franklin GM, Fulton-Kehoe D, Turner JA, Sullivan MD, Wickizer TM. Changes in opioid prescribing for chronic pain in Washington state. J Am Board Fam Med. 2013;26(4):394-400.
23. Bachhuber MA, Saloner B, Cunningham CO, Barry CL. Medical cannabis laws and opioid analgesic overdose mortality in the United States, 1999-2010. JAMA Intern Med. 2014 Nov;174(10):1668-73.
24. Neven DE, Sabel JC, Howell DN, Carlisle RJ. The development of the Washington state emergency department opioid prescribing guidelines. J Med Toxicol. 2012 Dec;8(4):353-359.
25. Understanding the opioid epidemic [web streaming video]. Buffalo-Toronto: WNED; 2018 Jan 17 [cited 2018 Feb 10]. Available from: http://www.pbs.org/wned/opioid-epidemic/watch/
26. Fiore K. Opioid crisis: scrap pain as 5th vital sign? [Internet]. MedPage Today; 2016 Apr 13 [cited 2018 Feb 11]. Available from: https://www.medpagetoday.com/publichealthpolicy/publichealth/57336


Scholarly Research In Progress • Vol. 2, November 2018

The Gut-Brain Axis and Its Effects on Stress, Anxiety, Depression, and Memory

Sarah Eidbo1, Hunter Obeid1, and Amitha Sundaram1*

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: asundaram@som.geisinger.edu

Abstract
Bacteria have been shown to influence human physiological functions in a variety of ways, including protecting the host against pathogens, providing immune support, and helping with digestive processes. Since the early 2000s, more attention has been given to the idea that bacteria may also play a role in the early development and function of the central nervous system. The microbiome-gut-brain axis is a theory about how the bacteria in the human digestive system interact with the nervous system, through both neurochemical and endocrine means, to influence the brain. Much emphasis has been given to the vagus nerve because of strong evidence that it may play a direct role in this signaling cascade. Aside from the vagus nerve, many other models have been proposed. The purpose of this literature review is to explore how the idea of the microbiome-gut-brain axis originated, and then to discuss the possible ways that bacteria influence the central nervous system with respect to anxiety, depression, memory, and cognition.

Introduction
A human infant is born with a completely sterile, germ-free (GF) digestive tract. Shortly after birth, however, bacteria begin colonizing the gut and will coexist within the infant for the rest of his or her life. The composition of microorganisms that live within a person’s intestines is referred to as the microbiome. In humans, the number of bacterial cells living within the digestive tract and other tissues exceeds the number of somatic cells in the entire body (1). Researchers previously believed that the bacteria in the gut formed a commensal relationship with the human body, meaning that a long-term relationship existed between the host’s body and the bacteria without any benefit or harm to the host. Recent studies have instead shown that microbiota form a mutually beneficial relationship with their host; however, it is still important to note that certain bacterial species can cause pathogenic issues within the body (1). This concept has led scientists to wonder whether the composition of the human microbiome plays a major role in certain diseases. This literature review is particularly focused on the role of the microbiome in the function of the central nervous system (CNS) and how this influences disease and behavior.

The gut microbiome is already known to serve several functions, which include protecting the host by competing with pathogens for nutrients, strengthening the intestinal epithelial barrier through secretion of antibodies, and aiding the host in breaking down food products that are difficult to digest. Lately, more attention has been given to another important role: the induction and maturation of immune system functions (1). This literature review is particularly interested in a fifth function, the role of the gut microbiome in the CNS. Disruptions to the composition of the gut microbiome have been linked with far-reaching consequences in the body that directly involve the CNS.

The gut-brain axis is a commonly used term that refers to the intricate two-way communication system between the bacteria living in the gastrointestinal (GI) tract and the CNS. This signaling network encompasses multiple physiological components, including not only the CNS and GI bacteria but also the hypothalamic-pituitary-adrenal (HPA) axis, autonomic nervous system, immune system, and enteric nervous system (ENS) (1). The gut-brain axis connects the cognitive and emotional areas of the brain with intestinal functions through neuronal and hormonal products (1). Researchers often use experiments on GF animals to demonstrate the role of bacterial colonization in CNS functions by comparing subjects that have bacteria in their digestive tracts with those that do not. Interest in the topic of the microbiome-gut-brain axis began in the early 2000s, when scientists discovered differences in responses to stress between GF animals and a control group with an intact gut microbiome.

Materials and Methods
A literature review was completed of studies using several experimental models to evaluate the effects of gut bacteria on central nervous system functioning.

HPA axis
Possibly one of the most important components of the gut-brain axis signaling cascade is the HPA axis, a neuroendocrine system that is important for memory and emotion. The HPA axis is responsible for coordinating an organism’s ability to respond to stress (2). The body reacts to stress and proinflammatory cytokines by secreting corticotropin-releasing factor (CRF) from the hypothalamus, which acts on the pituitary gland to release adrenocorticotropic hormone (ACTH). Once ACTH is delivered to the adrenal gland, the gland releases cortisol, the major stress hormone in the human body. The HPA stress response system develops after birth, at the same time that bacteria begin colonizing the intestines. Studies performed on GF neonatal mice showed that the absence of commensal bacteria coincides not only with an increased responsiveness to stress, but also with an underdeveloped immune system. For example, in a 2004 experiment that compared GF mice with specific pathogen-free (SPF) mice, researchers proposed that microbiota influence the development of the neural systems that control how the body responds to stress (2). In this experiment, GF mice showed higher serum ACTH and corticosterone levels than SPF mice and also demonstrated more anxiety-like behavior. The GF mice expressed reduced levels of cortical and hippocampal brain-derived neurotrophic factor (BDNF), a protein that has been associated with neuropsychiatric disorders such as anxiety and depression. More importantly, the exaggerated stress response in GF mice was partially reversed by re-establishing bacteria in their digestive systems using feces from the SPF mice. Notably, the SPF feces had to be introduced to the GF mice at an early developmental stage for there to be noticeable changes in the HPA stress response. This suggests that microbiota are not only important for neural signaling along the endocrine stress axis, but are also crucial for the development of the HPA axis stress response. This study is significant because it proposed two possible routes by which the gut microbiome could communicate with the CNS: a neural route and a cytokine-mediated humoral route. By establishing possible mechanisms through which gut microbiota communicate with the CNS, it becomes possible to consider how signaling can occur through neural, immune, and endocrine mediators. For example, there are endocrine cells in the gut epithelium known as enterochromaffin cells that can secrete neurotransmitters in response to stimuli received in the lumen, which may hold significance for communicating with bacteria in the lumen (1). Likewise, gut bacteria are capable of producing a wide range of cytokines and neurochemicals including acetylcholine, histamine, serotonin, melatonin, γ-aminobutyric acid, and catecholamines (3). Microbiota are able to release molecules that can act on the vagus nerve, which sends signals about the intestines to the brain.

Vagal pathways
Since discovering strong evidence of an existing link between bacteria and the CNS, many experiments have sought to discover a means by which the bacteria communicate with the host. One of the most studied pathways involves the vagus nerve, which travels from the CNS to multiple organ systems, including the GI tract. The vagus nerve not only carries efferent signals from the hypothalamus, but also sends afferent signals to the brain about the status of the body, including information about the intestinal environment. Anatomical studies have shown that vagal nerve endings lie in extremely close proximity to the intestinal sites where bacteria reside, and it is possible that these nerve endings receive neurochemical signals secreted by bacteria. To illustrate this pathway, researchers have demonstrated differences in neurochemical behavior between vagotomized mice and control groups. A 2011 experiment explored the effect of ingestion of the bacterial strain Lactobacillus rhamnosus on gamma-aminobutyric acid (GABA) receptor expression in mice (4). GABA is the main inhibitory neurochemical in the body and is responsible for reducing the excitability of neurons; drugs that act on GABA receptors are commonly used for treating anxiety, mood disorders, and epilepsy. In this experiment, a series of behavioral tests related to anxiety and depression were performed, including an elevated plus maze (EPM) test. Mice that received L. rhamnosus spent more time exploring the central area of the open field compared to the control group of mice (4). L. rhamnosus was shown to have a direct impact on GABA receptor mRNA expression, but more importantly these results depended on whether the mice were vagotomized. Only the mice with an intact vagus nerve showed altered mRNA expression of GABA receptors and received the benefit of reduced corticosterone levels. Since these behavioral and neurochemical results were not seen in vagotomized mice, researchers concluded that the vagus nerve serves as a major communication pathway between the brain and bacteria in the gut (4). Other experiments have supported the theory that bacteria directly communicate with the CNS through vagal pathways. Since the microbiome-gut-brain axis is still a novel research topic, many other pathways of communication with the CNS are still being explored. Currently, most of the literature consists of experimental models showing how bacteria can influence behavior in several domains, including anxiety, depression, mood, and cognition.

Stress, anxiety, and the microbiome
Microbiome mechanisms
The gut microbiome has been shown to influence stress and anxiety through a variety of mechanisms. These mechanisms can begin early in life or occur later, during periods of stress or infection. As discussed earlier, the programming of the HPA axis, as well as stress reactivity, are both affected by the gut microbiota early in life. Since the stress response system is immature at birth, it develops well into the postnatal period, coinciding with the period during which the intestine is colonized by bacteria. These developmental periods are of the utmost importance: when rats were separated at birth from their mothers, the gut microbiota experienced long-term changes in composition as well as diversity of bacteria, both of which are vital for proper function (5). However, when the separated rats were treated concurrently with probiotics, basal corticosterone levels (a hormone involved in the HPA axis and the stress response, discussed earlier) were normalized. Some of the changes in inflammation that are induced by stress require the presence of a microbiota as well. Stress is known to increase intestinal permeability, allowing bacteria to more easily cross the intestinal mucosa and reach immune cells as well as neurons of the enteric nervous system. This has been considered a potential pathway that the microbiota could utilize to exert an influence over the central nervous system (5). The microbiota may activate stress circuits as a way to influence central nervous system function directly. One study administered the foodborne pathogens Citrobacter rodentium and Campylobacter jejuni orally and observed changes in the acute phases of infection. During C. jejuni’s acute infection phase, no systemic immune response was seen, despite expression of the neuronal activation marker c-Fos in vagal sensory neurons. After the administration of C. rodentium, c-Fos activation was also seen in central brain regions. Further studies conducted with C. rodentium and C. jejuni showed that 8 hours after infection there were no differences in cytokine levels or intestinal inflammation compared to control mice, despite increased anxiety-like behavior (5).
These results show that stress circuits can be activated through vagal pathways by the bacteria present in the gastrointestinal tract. These pathogenic bacteria can also increase anxiety-like behavior in the absence of an immune response (5).

The microbiome may also affect brain-derived neurotrophic factor levels as a means of affecting stress and anxiety. The neurotrophin BDNF influences many processes, including the survival and differentiation of neurons, the formation of synapses, and neuroplasticity throughout an organism’s lifespan. The gut-brain axis has previously been seen to exert some influence on BDNF expression, at both the mRNA and protein levels. In infection models known to alter the microbiota profile, reduced expression of both BDNF mRNA and protein was seen in the hippocampus, resulting in increased anxiety-like behaviors. However, upon administration of probiotics, the behavioral changes were reversed and BDNF expression returned to the levels seen in control animals (5). Previous research also found stress levels to be linked to reduced hippocampal BDNF expression; normal control levels of BDNF were restored after an antidepressant drug was administered (5). By altering factors involved in the gastrointestinal tract as well as the central nervous system and immune system, the gut microbiota is capable of inducing effects on stress and anxiety-like behaviors.

Microbiota absence
If the development of the gut microbiota is so intertwined with the underlying mechanisms of stress, fear, and anxiety, then what happens in its absence? To answer this question, one study tested GF and SPF rats with a social interaction experience and an open field test. Several factors were measured, including serum corticosterone levels; the expression levels of the corticotropin-releasing factor (CRF) gene in the hypothalamus and the glucocorticoid receptor (GR) gene in the hippocampus; and monoamine concentrations in the frontal cortex, hippocampus, and striatum (6). In the social interaction test, the rats were put in a transparent box for 10 minutes with a partner matched in age, sex, weight, and bacterial status (GF or SPF). Their behaviors were filmed, and their time spent in social contact or self-grooming was scored (6). In the open field test, a rectangular arena was constructed with a grid pattern and a strong light illuminating only the center; this was done to induce stress in the rats. A rat was placed in the corner of the arena and recorded for 6 minutes. Factors measured included the time taken to move from the corner, the number of squares crossed, the number of visits to the central illuminated area, the time spent in the corners, and the number of rearings, groomings, and defecations (6). The study found that in the absence of gut microbiota, anxiety-like behavior increased when the rats were exposed to new challenges. The absence of microbiota worsened both the behavioral and neuroendocrine responses to acute stress. The dopaminergic turnover rate was also affected in the higher brain regions involved in stress and anxiety regulation; these areas included the frontal cortex, hippocampus, and striatum (6).
In conclusion, this study and several others discussed here found that there are many routes by which the bacteria of the gut can gain access to the brain and affect stress and anxiety-like responses. These include the bloodstream, the immune system via cytokine release, the endocrine system via gut hormone release, and the enteric nervous system via ascending neural pathways (6).

Depression and the microbiome
Alterations in the microbiome
The gut microbiota has been shown to utilize inflammation, neurotransmission, and the HPA axis to influence the central nervous system. Though it is not completely understood, “leaky gut” could play a part in all of this. Leaky gut refers to the activation of the HPA axis and immune system in response to stress, causing the gut wall to become more permeable. In mice, its effects have been reversed by probiotic administration. In human studies of depression, increased bacterial translocation has been seen, and depression and anxiety outcome measures improved when healthy people took probiotics (7).

In a study of the fecal microbiota of control patients and patients with major depressive disorder, the effects of the composition of the microbiome were quantified (7). The study defined intestinal dysbiosis as a significant taxonomic difference in three of the major phyla in the intestine. A significant increase was seen in the Bacteroidetes and Proteobacteria phyla, but a large decrease was seen in the Firmicutes phylum (7). Multiple bacterial species contributed to the increase in Bacteroidetes; however, the genus Alistipes was largely responsible. Alistipes are believed to alter tryptophan availability. Since tryptophan is a known precursor of serotonin, the increased abundance of Alistipes is thought to alter the balance of the serotonergic system in the gut (7). Greater frequencies of abdominal pain have also been seen in irritable bowel syndrome patients with higher levels of Alistipes; because of this, Alistipes is speculated to be involved in gut inflammation (7). The Proteobacteria increase was mostly driven by the class Gammaproteobacteria, including the order Enterobacteriales and the family Enterobacteriaceae. These bacteria are Gram-negative and are normally present in the gut microbiota. However, the increased permeability of the intestine in patients with depression may allow invasive Gram-negative bacteria like these to gain access to mesenteric lymph nodes or the systemic circulation. This is supported by the finding that clinical depression is accompanied by increases in circulating plasma antibodies (IgA and IgM isotypes) directed against these bacteria. Behavioral and psychological changes can be induced in animals and humans by the presence of these pathogenic bacteria in the gastrointestinal tract (7). In contrast to the increases seen in the Proteobacteria and Bacteroidetes phyla, the Firmicutes phylum decreased in abundance, a change mainly attributable to the Lachnospiraceae and Ruminococcaceae families. The Lachnospiraceae family is known for its role in the breakdown of carbohydrates into short-chain fatty acids. With decreased amounts of these bacteria, the production of short-chain fatty acids decreases, which leads to dysfunction of the intestinal barrier (7).

Bacterial abundance is not the only thing that changes in the intestines of patients with major depressive disorder. Depression is also associated with elevated levels of inflammation biomarkers, including IL-6, TNF-α, and IL-1β (7). Inflammatory responses may be modulated by altering the gut microbiota of patients with depression. Serum BDNF levels also differed significantly among the groups in this study. This relates to the effects on anxiety discussed previously: hippocampal BDNF mRNA and protein levels are influenced by gut microbiota composition. In summary, gut microbial populations have been found to be associated with depression (7).

Gut microbiota, hormone production, and memory
Gut microbiota and hormone production
Memory and cognition are also important aspects of behavior that have been shown to be impacted by the gut microbiome. One of the ways that memory and cognition have been evaluated in gut-brain axis research is through the peptide hormone ghrelin, which is produced by ghrelinergic cells in the GI tract and has receptors localized in the brain. Within the stomach, there are important interactions between the gut and carbohydrates that lead to short-chain fatty acid formation. Non-digestible carbohydrates from a host’s diet undergo saccharolytic fermentation by enteric bacteria, specifically Bacteroides and Prevotella species (8), creating short-chain fatty acids, namely acetate (9). Increased acetate production via saccharolytic fermentation by these bacterial species activates the parasympathetic nervous system and leads to an increase in ghrelin secretion (9).

The hippocampus in memory and memory signaling
In addition to serving as part of the HPA axis as mentioned earlier, the hypothalamus serves as a control center and has the ability to coordinate activity from the gut with the limbic system. The hippocampus is the part of the limbic system that is essential for forming memory, and it is specifically important for spatial memory and memory consolidation. In order to form memories, long-term potentiation (LTP) is necessary; LTP is the process of using activity patterns to enhance synapses and strengthen long-lasting signals between neurons, and it is one of the main molecular processes that contributes to learning. LTP occurs in the cornu ammonis 1 (CA1) region of the hippocampus, found between the dentate gyrus and subiculum. The CA1 region provides the hippocampus’s main output to the cortex and has been found to be vital in memory and learning (11).

Ghrelin studies and memory
Since the endogenous peptide hormone ghrelin is found in the stomach and thought to be connected to memory and learning, researchers have investigated its role in these processes. Ghrelin receptors (GHS-R) are found in high numbers in the CA3 region of the hippocampus (10). As the memory signaling pathway runs from CA3 presynaptically to CA1 postsynaptically, it is thought that ghrelin binds to GHS-R receptors on CA3, initiating the glutamatergic memory signaling pathway.
Carlini et al. investigated ghrelin’s effect on memory retention in a study utilizing rat models. For this study, male Wistar rats were injected in the third ventricle with a ghrelin-artificial cerebrospinal fluid suspension and were compared to a control group of rats injected with only artificial cerebrospinal fluid. The experimental rats received ghrelin suspension at 3 different doses (0.3, 1.5, and 3 nmol/µl), and controls received equal volumes of artificial cerebrospinal fluid. The rats were then placed in a step-down inhibitory training apparatus: a plastic box with metal bars on the floor that can administer a shock, with a small platform on the left-hand side. The rats were placed on this platform, and researchers measured their latency to step down onto the floor; in this experiment, latency was used as a measure of memory retention. After the initial step-down test was administered as a training session, there was a 24-hour waiting period, and the rats were then placed back in the step-down box to measure latency time (11). The tests found that latency time increased as the administered ghrelin dose increased, suggesting an increase in memory retention in the rat models.

In a separate study, memory and learning in mice were investigated by comparing ghrelin knockout (GKO) mice and wild-type mice using the novel object recognition test. Both groups were presented with identical objects A and B for 5 minutes. After 24 hours, a novel object, C, was presented to the mice for 5 minutes. When presented with objects A and B, both GKO and wild-type mice spent the same amount of time with each object. Once presented with object C 24 hours later, wild-type mice spent more time with the novel object, while GKO mice spent significantly more time with the familiar object compared to the wild-type mice. The GKO mice were then injected with ghrelin via a subcutaneous pump and performed the same test; during this trial, the ghrelin-treated GKO mice performed in a similar fashion to the wild-type mice. This investigation demonstrates that impairment of ghrelin signaling can negatively impact memory in mouse models, suggesting a link between ghrelin, memory, and learning performance (12).

Ghrelin and neurogenesis
The ability of ghrelin to improve memory has been studied through neurogenesis stimulated by intraperitoneal ghrelin administration. One experiment investigated ghrelin’s ability to promote cell proliferation in the hippocampus by comparing wild-type mice and ghrelin-deficient mice. Hippocampal slides from the two groups were stained with Ki-67, a marker of cell proliferation, and cell numbers were manually counted. Ghrelin knockout mice were then administered ghrelin, and hippocampal slides were stained and counted once more. The wild-type slides showed the highest cell counts, while the ghrelin-deficient mice had significantly lower counts. However, upon intraperitoneal administration of ghrelin to the ghrelin-deficient mice, cell counts increased significantly toward those of the wild-type mice. These findings suggest that ghrelin contributes to neurogenesis in the hippocampus and could potentially be used to improve memory and cognition.

Although these findings and similar reports demonstrate a correlation between ghrelin and improvement in memory retention and learning, this research has areas of controversy. Some studies have shown ghrelin to impair memory in obese mice. The existence of opposing outcomes within this field suggests that a great deal of research is still needed to understand the mechanisms behind these connections and to find potential therapeutics for disease. The studies reviewed here have shown that ghrelin plays a role in enhancing memory, learning, and cognition in mouse models. Although this correlation has been established, further investigation is required to solidify the mechanism by which it occurs. Ghrelin could be achieving this outcome by increasing synaptic plasticity in the hippocampus through increased dendritic spine formation and long-term potentiation (12).

Conclusion
Gut-brain axis research remains controversial because it is a novel topic with many conflicting findings. Further investigation is necessary to explore the possibility of developing treatments for psychiatric and neurocognitive disorders such as depression, anxiety, and memory impairment. With a rapidly aging population in many parts of the world, the search for cures for diseases of aging such as Alzheimer’s and Parkinson’s disease may also draw on microbiome-gut-brain axis research for potential solutions. As these studies continue, the possibility of using probiotics to treat the aforementioned diseases will be investigated, and clinicians will be interested in how manipulation of the bacterial environment can have therapeutic effects throughout the CNS.

References
1. Wang Y, Kasper LH. The role of microbiome in central nervous system disorders. Brain Behav Immun. 2014;38:1-12. doi:10.1016/j.bbi.2013.12.015.
2. Sudo N, Chida Y, Aiba Y, et al. Postnatal microbial colonization programs the hypothalamic–pituitary–adrenal system for stress response in mice. J Physiol. 2004;558(Pt 1):263-275. doi:10.1113/jphysiol.2004.063388.
3. Petra AI, Panagiotidou S, Hatziagelaki E, Stewart JM, Conti P, Theoharides TC. Gut-microbiota-brain axis and effect on neuropsychiatric disorders with suspected immune dysregulation. Clin Ther. 2015;37(5):984-995. doi:10.1016/j.clinthera.2015.04.002.
4. Bravo JA, Forsythe P, Chew MV, et al. Ingestion of Lactobacillus strain regulates emotional behavior and central GABA receptor expression in a mouse via the vagus nerve. Proc Natl Acad Sci U S A. 2011;108(38):16050-16055. doi:10.1073/pnas.1102999108.
5. Foster J, McVey K. Gut-brain axis: how the microbiome influences anxiety and depression. Trends Neurosci. 2013;36(5):305-312.
6. Crumeyrolle-Arias M, Jaglin M, Bruneau A, et al. Absence of the gut microbiota enhances anxiety-like behavior and neuroendocrine response to acute stress in rats. Psychoneuroendocrinology. 2014;42:207-217.
7. Jiang H, Ling Z, Zhang Y, et al. Altered fecal microbiota composition in patients with major depressive disorder. Brain Behav Immun. 2015;48:186-194.
8. Neuman H, Debelius J, Knight R, Koren O. Microbial endocrinology: the interplay between the microbiota and the endocrine system. FEMS Microbiol Rev. 2015;39(4):509-521.
9. Morrison DJ, Preston T. Formation of short chain fatty acids by the gut microbiota and their impact on human metabolism. Gut Microbes. 2016;7(3):189-200. doi:10.1080/19490976.2015.1134082.
10. Dickson SL, Kim C, Kim S, Park S. Neurogenic effects of ghrelin on the hippocampus. Int J Mol Sci. 2017;18(3):588. doi:10.3390/ijms18030588.
11. Carlini V, Monzon M, Varas M, Cragnolini A, Schiöth HB, Scimonelli T, Barioglio S. Ghrelin increases anxiety-like behavior and memory retention in rats. Biochem Biophys Res Commun. 2002;299(5):739-743.
12. Li E, Chung H, Kim Y, Kim D, Ryu J, Sato T, Kojima M, Park S. Ghrelin directly stimulates adult hippocampal neurogenesis: implications for learning and memory. Endocr J. 2013;60(6):781-789.


Scholarly Research In Progress • Vol. 2, November 2018

Inpatient Dermatology Consultations at Guthrie Robert Packer Hospital: A 10-year Retrospective Chart Review

Eduardo Ortiz1* and John Pamula2

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2Guthrie Robert Packer Hospital, Sayre, PA 18840
*Correspondence: eortiz@som.geisinger.edu

Abstract
Robert Packer Hospital (RPH), a rural 254-bed tertiary care teaching hospital, lacks a designated inpatient dermatologic medical service. Clinical and financial benefits of dermatologic inpatient consultant care have been reported. This study retrospectively analyzed inpatient dermatology consultations at RPH over a 10-year period from January 2008 to December 2017. The primary objectives of this study were to identify the most common dermatologic diagnoses prompting inpatient consultation and patient outcomes following consultation, as measured by diagnostic, treatment, or management changes and delays following dermatology input. Results demonstrate that drug-induced etiologies were most common (24.0%), followed by cutaneous infections (17.0%), of which most were bacterial; eczema/dermatitis (17%); and vasculitis (10%). Of 88 cases reviewed, only 16 necessitated biopsy, suggesting that 81.8% were diagnosed clinically without additional laboratory testing or procedural skill. Seventy-five percent of cases resulted in a change or delay in diagnosis; with the exclusion of the 16 biopsied cases, 73.6% of cases resulted in a clinical diagnostic change. A change or delay in treatment occurred in 60.2% of cases at RPH, or 70.8% if biopsied cases were excluded. These findings suggest that patients do indeed benefit from inpatient dermatologic consultation, but provider availability is a limiting factor.

Introduction
Robert Packer Hospital (RPH) is a 254-bed tertiary care teaching hospital located in a rural region that lacks a designated inpatient dermatologic medical service. When dermatology consultation orders are placed at RPH, the consulting physician must come from an off-site location outside of typical office hours, and only two dermatology physicians are typically available on a given day. Prior studies at other medical centers have demonstrated the importance and utility of dermatologic consultation for quality of care, treatment plans, patient outcomes, and potential cost savings through prompt initiation of appropriate therapy and more rapid discharge (1,2). This study retrospectively analyzed inpatient dermatology consultations at RPH over a 10-year period from January 2008 to December 2017. We sought to identify the most common dermatologic conditions prompting inpatient dermatology consultation as well as patient outcomes following consultation, as measured by diagnostic, treatment, or management changes and delays following consultation. Given the limited resources available, a secondary objective of this study was to determine whether inpatient dermatologic consultation at RPH benefits patients, and if so, to propose future interventions that may positively impact resource allocation and residency training.

Materials and Methods
The study design for this project was an observational, retrospective chart review. An electronic medical record (EMR) report for inpatient Consult to Dermatology order #6505005 from January 2008 to December 2017 yielded 92 results. Exclusion criteria included the following: repeat orders on the same patient during the same admission and orders that were not completed or not documented in the EMR. Following exclusion, 88 patient charts were reviewed. The following data were collected:
• Date of consult order
• Medical service placing consult order
• Preliminary diagnosis (if any)
• Preliminary treatment/management (if any)
• Post-dermatology consult diagnosis
• Post-dermatology consult treatment/management
• Biopsy (if performed)
• Final diagnosis (incorporating biopsy result)

Using the above data, the following were calculated (a sketch of these calculations follows this list):
• Percent of diagnostic changes/delays post-dermatology consult
• Percent of treatment or management changes/delays post-dermatology consult
• Percent of final (post-consult) diagnoses by etiology
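The sketch referenced above uses hypothetical field names and made-up example rows, not the study’s actual EMR export:

```python
# Sketch of the outcome percentages described above. A "change" is a differing
# post-consult value; a "delay" is a missing preliminary value (nothing
# documented by the primary team before the consult was completed).

records = [  # hypothetical rows standing in for the 88 reviewed charts
    {"prelim_dx": None, "post_dx": "drug eruption",
     "prelim_tx": None, "post_tx": "withdraw offending agent", "biopsy": False},
    {"prelim_dx": "cellulitis", "post_dx": "stasis dermatitis",
     "prelim_tx": "antibiotics", "post_tx": "topical steroid", "biopsy": False},
    {"prelim_dx": "vasculitis", "post_dx": "vasculitis",
     "prelim_tx": "supportive care", "post_tx": "supportive care", "biopsy": True},
]

def percent(part: int, whole: int) -> float:
    return 100.0 * part / whole

dx_change_or_delay = sum(r["prelim_dx"] is None or r["prelim_dx"] != r["post_dx"]
                         for r in records)
tx_change_or_delay = sum(r["prelim_tx"] is None or r["prelim_tx"] != r["post_tx"]
                         for r in records)
biopsied = sum(r["biopsy"] for r in records)

print(f"Diagnostic change/delay: {percent(dx_change_or_delay, len(records)):.1f}%")
print(f"Treatment change/delay:  {percent(tx_change_or_delay, len(records)):.1f}%")
print(f"Biopsied:                {percent(biopsied, len(records)):.1f}%")
```

The same tallies, restricted to non-biopsied records, yield the biopsy-excluded percentages reported in the Results.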

Results and Discussion
Consistent with prior studies, internal medicine requested the greatest percentage of dermatology consults (35.23%), followed by the hospitalist service (26.40%), psychiatry (18.18%), family practice (12.50%), emergency medicine (3.41%), general surgery (2.27%), and cardiac surgery (2.27%) (Figure 1). Of note, the internal medicine service consists of a resident physician team supervised by attending physicians; both this teaching service and the hospitalist service belong to the internal medicine department at RPH. The greater number of consults coming from the internal medicine teaching service may be related to the composition of this medical team. Up until 2013, however, psychiatry was the leading service placing dermatology consult orders, with internal medicine placing few, if any, orders. The trends we observed over time may be attributable to a variety of factors, including physician turnover within departments or changes in the EMR consult request and documentation process.




Of 88 cases reviewed, only 16 (18.2%) necessitated biopsy, suggesting 81.8% were diagnosed clinically without additional laboratory testing or procedural skill. Sixty-six cases (75.0%) resulted in a change or delay in diagnosis (Figure 2), with a delay referring to cases in which the primary team documented no preliminary diagnosis other than a reason for consult (e.g., unspecified rash, lesion of concern).

Figure 1. Total number of dermatology consults requested by individual medical services

Figure 2. Outcomes following dermatology consultation

Figures 3 and 4 demonstrate the variety of diagnoses made following dermatologic consultation. Drug-induced etiologies were the most common (24.0%), followed by cutaneous infections (17.0%), of which most were bacterial; eczema/dermatitis (17%); other/miscellaneous (13%); vasculitic (10%); psychiatric (5%); allergic/urticarial (3%); psoriatic (3%); cutaneous neoplasia (2%); and immunobullous (1%). Inpatient medical teams may benefit from increased familiarity with drug rash presentations and from conducting a thorough medication reconciliation, review, and timeline of events as they pertain to initiation of potential offending agents and rash onset. Medical teams may also consider culturing cutaneous lesions. Initial microscopy may provide enough information to begin empiric antibiotic management while a dermatology consult is pending, as it may take several hours before the consult is completed.

Of the 16 cases that were biopsied, only 3 (18.8%) led to a change in treatment or management. This calls into question the benefit of inpatient biopsy, as results are delayed and management is infrequently changed. Consulting dermatology for purposes of biopsy alone has limited advantage; we suggest it might be most effective to manage such lesions on an outpatient basis if they persist. Forty-three percent of biopsies were diagnostic of leukocytoclastic vasculitis. Vasculitis was responsible for 10% of cases, suggesting it may be helpful to maintain a high level of suspicion for this etiology. Treatment of this etiology is largely symptomatic and should be aimed at addressing potential underlying causes such as infection.

While a study conducted at another institution demonstrated that dermatology consultations resulted in a treatment change in 81.9% of patients (2), only 60.2% of cases had a change or delay in treatment at RPH (70.8% if biopsied cases are excluded), with a delay in treatment referring to appropriate management approaches (whether pharmacologic or symptomatic) not being received by the patient until the consult was completed. The smaller sample size in our study may account for some statistical variation, as may the notion that providers may be more hesitant to place consult orders given that there is no designated inpatient dermatology team. Nonetheless, the benefit of a dermatology specialist consult is evident, but provider availability is limited. Should a case necessitate urgent dermatology consultation, a logistical barrier to patient care would be encountered. Expansion of the dermatology department with institution of a designated inpatient team may provide a solution. Implementation of a tele-dermatology service may also be of clinical utility, as could uploading photographs to patient charts and outpatient progress notes in the EMR. Despite the widespread use of this feature by outpatient dermatology providers, inpatient medical teams rarely, if ever, use it. Doing so may enhance communication and reduce the need for travel by consulting dermatologists.


Acknowledgments
We would like to thank Vicky Hickey for her guidance with obtaining IRB approval as well as with the data collection process.

References
1. Biesbroeck LK, Shinohara MM. Inpatient consultative dermatology. Med Clin North Am. 2015;99(6):1349-1364.
2. Galimberti F, Guren L, Fernandez AP, Sood A. Dermatology consultations significantly contribute quality to care of hospitalized patients: a prospective study of dermatology inpatient consults at a tertiary care center. Int J Dermatol. 2016;55(10):e551.

Figure 3. Etiologic categories of diagnoses made following dermatology consultation

Figure 4. Specific diagnoses made following dermatology consultation


Scholarly Research In Progress • Vol. 2, November 2018

A Case Report of a Salivary Duct Carcinoma

Jena Patel1*, Elise Zhao1, Kathleen Heeley2, and Mark Frattali3

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2The University of Scranton, Scranton, PA 18510
3Delta Medix Ear, Nose & Throat PC, Scranton, PA 18503
*Correspondence: jpatel01@geisinger.som.edu

Abstract
Salivary duct carcinoma (SDC) is an uncommon and highly aggressive salivary gland tumor. Salivary duct carcinomas are named for their histological resemblance to invasive intraductal carcinoma of the breast, and some cases have been shown to express immunohistochemical markers similar to those of intraductal breast carcinomas, including HER2/neu gene amplification and androgen receptor expression. We present the case of a 50-year-old male with a several-year history of a stable, painless mass in his right parotid gland that had recently begun growing rapidly. Computed tomography (CT) revealed a large right parotid tumor with surrounding enlarged lymph nodes, and upon excision the mass was identified as an androgen receptor- and HER2/neu-positive SDC with perineural invasion. Treatment of SDC poses numerous challenges due to its propensity for early metastasis and late diagnosis. Although there are no standardized treatment modalities for SDC, most authors agree on an aggressive approach consisting of neck dissection and adjuvant radiotherapy. Since salivary duct carcinoma is a rare clinical finding, we hope that reporting such cases will help raise awareness of this aggressive malignancy and aid in its early diagnosis and treatment.

Introduction
Salivary duct carcinoma (SDC) is a highly aggressive salivary gland tumor. This comparatively rare tumor accounts for just 1% to 3% of all malignant salivary gland tumors and 0.9% to 6% of all parotid gland tumors (1). Though SDC may arise in either minor or major salivary glands, most cases involve the parotid gland (2). Approximately 40% to 60% of SDC tumors arise from existing benign pleomorphic adenomas (3). This malignancy typically carries a poor prognosis, with a reduced response to treatment and high mortality due to its propensity for perineural spread; overall, the 5-year mortality rate following diagnosis approaches 50%. SDC is distinctive on histopathological findings because it mimics the appearance of ductal carcinoma of the breast, and in some cases has been shown to exhibit similar immunohistochemical markers (2). Therefore, androgen receptor expression and HER2neu gene amplification are therapeutically relevant genetic alterations. Most commonly, patients present with a rapidly enlarging facial mass with cervical lymphadenopathy, neck pain, and concurrent facial weakness due to facial nerve involvement (4). Treatment of SDC poses numerous challenges because there is no consensus on therapeutic approach, and data are limited due to the rarity of SDC. We present a rare case of SDC manifesting as a rapidly enlarging parotid mass identified on CT scan. Finally, we


will discuss possible treatment modalities with a focus on a comprehensive review of the relevant literature.

Case Presentation
A 50-year-old male presented to our office with a painless and firm mass in his right parotid gland. He reported that the mass had been present for several years; however, it had grown significantly larger in the last 11 to 12 months. The patient denied sore throat, fever, hoarseness, and facial pain. He reported a history of smoking and regular alcohol consumption. Physical exam revealed a fixed and firm right parotid mass with facial nerve function intact on further examination. Nasal endoscopy was negative and no adenopathy was noted on neck exam. The patient underwent a fine needle aspiration of the right parotid mass, which revealed a cystic lesion worrisome for squamous cell carcinoma. A CT scan without contrast revealed an enhancing right parotid gland mass measuring 3 x 2.4 cm with multiple adjacent enlarged lymph nodes. A more anteriorly located parotid mass measuring 11 mm in diameter was also seen (Figure 1). The patient refused contrast imaging on CT and an MRI due to claustrophobia. A right-sided near-total parotidectomy with modified radical neck dissection was performed 1 month after the patient's initial office visit. The facial nerve was preserved. An inferiorly located extraglandular mass adherent to the cervical and temporal branches of the facial nerve was removed along with a large cuff of parotid tissue. Histopathological evaluation of the 2.8 cm right parotid gland resection revealed areas of ductal formations with comedonecrosis characteristic of SDC (Figure 2). In this specimen, 10 out of 17 intraglandular and periglandular lymph nodes were positive for metastatic disease with foci of microscopic extracapsular extension. The specimen was negative for definitive lymphovascular invasion; however, it was positive for perineural invasion. A total of 38 lymph nodes were removed from levels II, III, and V in the right neck dissection, and only one lymph node removed from right neck level II was positive for metastatic disease. Postoperative staging of the SDC was reported as stage IVA (T2N2bM0G3). Immunohistochemistry stains of the tumor were positive for CK 5/6, p63, E-cadherin, and EMA. It was weakly positive for androgen receptor and polyclonal CEA (Figure 3). The tumor was also focally positive for p53, PR, and HER2neu gene amplification. The Ki-67 proliferation index was 50%. The patient was discharged 2 days postoperatively with plans to start radiation and chemotherapy treatment 6 weeks after the surgery. The patient continues to show no signs of disease recurrence 3 months after his procedure.



Discussion

Figure 1. Axial CT of the jaw reveals the right parotid gland mass.

Figure 2. Comedo type necrosis in salivary duct carcinoma reminiscent of breast DCIS.

Figure 3. Immunohistochemistry stain with androgen receptor marker showing nuclear positivity as seen in salivary duct carcinoma.

The 2005 World Health Organization (WHO) classification of salivary gland tumors is complex and consists of 10 benign and 23 malignant entities of epithelial origin. The three most common salivary gland tumors are pleomorphic adenoma, Warthin's tumor, and mucoepidermoid carcinoma (5). Salivary duct carcinoma (SDC) is a rare, high-grade adenocarcinoma that arises from the ductal epithelium of the salivary gland. Histologically, SDC resembles intraductal and invasive mammary duct carcinoma with solid, papillary, or cribriform growth patterns and central necrosis (6, 7). SDC is a rare tumor with an estimated reported incidence of 1% to 3% of all malignant salivary gland tumors (8). The peak incidence of SDC is in the sixth and seventh decades of life, occurring more commonly in males (8, 9). SDC mainly occurs in the parotid gland, although occurrence in the submandibular and minor salivary glands has been reported (8, 10). It has also been reported to arise from existing slow-growing pleomorphic adenomas. Gilbert et al. reviewed the cases of 75 patients and reported that 41% of patients had pathologic features suggestive of an SDC arising out of a benign pleomorphic adenoma (9). Patients generally present with a rapidly enlarging, painless parotid mass; however, facial nerve involvement can result in facial weakness, pain, or paralysis. SDC is a particularly aggressive cancer, and at the time of diagnosis the majority of patients have a T3/T4 tumor with cervical lymph node metastasis (8, 9). An analysis by Jayaprakash et al. reviewing 228 patients with SDC found that 65% of patients were diagnosed with advanced stage III/IV disease, and about 50% of patients had lymph node involvement at the time of diagnosis (6). As a result of the low incidence of SDC, there are no consistent standardized treatment guidelines for this type of cancer. However, due to the high percentage of patients presenting with late-stage disease and lymph node involvement at diagnosis, SDC is often treated aggressively with radical resection and neck dissection followed by adjuvant radiotherapy (6). Patients with SDC in the parotid gland undergo parotidectomy, with facial nerve resection occurring in up to 69% of cases in some studies (10). The extent of parotid gland removed depends on the size of the tumor, with most tumors ranging between 0.9 and 6.0 cm in dimension. In patients with SDC occurring in the submandibular glands, resection of the entire gland is performed. A retrospective analysis by Kim et al. studying 35 patients found that aggressive treatment consisting of surgery followed by postoperative adjuvant radiotherapy to the tumor bed and ipsilateral neck nodes was effective at preventing locoregional disease recurrence and metastasis. Patients with SDC have a poor prognosis due to early nodal involvement and high rates of local recurrence and distant metastasis. The rapidly progressive nature of SDC accounts for the presence of distant metastasis early in the disease course. Overall reported death rates from SDC range from 45% to as high as 77%, with the occurrence of distant metastasis being the most common cause of treatment failure (6). The incidence of distant metastasis has been reported at 30% to 70% in multiple studies and has been associated with decreased overall survival. Kim et al. found that 37.8% of 35 patients had developed distant metastasis at an average



of 13 months after treatment of their tumor, with 78.6% of those patients eventually dying from disease-related causes. Distant metastases usually occur in the lung, brain, and bone. Clinically, multiple studies have shown that tumors larger than 3.0 cm in size and evidence of perineural and lymphovascular invasion on pathology were indicative of worse outcomes. Younger patients diagnosed with stage I/II cancer and lower-grade tumors on pathology had decreased rates of disease recurrence and metastasis, and increased overall survival (6, 7, 9). Immunohistochemical staining of salivary duct carcinomas can help determine the patient's prognosis and can reveal a potential role for adjuvant immunomodulatory treatment and anti-hormone therapy. The Ki-67 index of salivary gland carcinomas has been shown to be an independent prognostic factor regardless of salivary gland tumor subtype. Larsen et al. evaluated the Ki-67 index in 175 cases of salivary gland cancers of 13 different subtypes and found that an index less than 26 was associated with more favorable outcomes, including an 87% 5-year disease-free survival rate (p < 0.0001). Tumor characteristics suggestive of poor prognosis include perineural invasion, extracapsular invasion, and HER2neu expression (11). In a retrospective analysis by Limaye et al., 5 out of 8 patients with HER2neu-positive salivary duct carcinoma treated with adjuvant trastuzumab had no evidence of disease 2 years from completion of therapy. The same study found that all 5 of their patients with metastatic disease treated with trastuzumab experienced disease remission or reached a state of disease stability (12). Additionally, salivary duct carcinomas commonly express androgen receptor; therefore, androgen deprivation therapy may be beneficial to patients experiencing disseminated or recurrent disease (13). A case series conducted by Jaspers et al. found that 5 out of 10 patients with androgen receptor-positive SDC experienced a clear clinical benefit when treated with bicalutamide (14). Due to the low-risk side effect profile and apparent efficacy of adjuvant trastuzumab and androgen deprivation therapy, the HER2neu and androgen receptor expression status of all patients with SDC should be determined to ascertain whether they could benefit from anti-hormone therapy and immunotherapy. As a result of the low incidence of SDC, there are limited data regarding the biologic behavior and presentation of this tumor type and limited knowledge of standardized treatment modalities. SDC is a rare clinical entity that continues to pose diagnostic and therapeutic challenges to physicians. Overall, surgery followed by adjuvant chemotherapy and radiation is the most commonly utilized treatment modality. However, due to the variable immunohistochemical similarities to invasive ductal carcinoma of the breast, there may be a gradually evolving role for adjuvant immunomodulatory and anti-hormone therapies.

References
1. Gregorie C. Salivary Gland Tumors: The Parotid Gland. Current Therapy in Oral and Maxillofacial Surgery. 2012;450-460.
2. Murrah VA, Batsakis JG. Salivary duct carcinoma. Annals of Otology, Rhinology & Laryngology. 1994;103(3):244-247. doi:10.1177/000348949410300315
3. Bagheri S. Head and Neck Pathology. Clinical Review of Oral and Maxillofacial Surgery. 2014;187-222.
4. Mlika M, Kourda N, Zidi Y, Aloui R, Zneidi N, Rammeh S, Jilani S. Salivary duct carcinoma of the parotid gland. Journal of Oral and Maxillofacial Pathology. 2012;16(1):134-136. doi:10.4103/0973-029x.92992
5. Rousseau A. Head and Neck: Salivary gland tumors: an overview. Atlas of Genetics and Cytogenetics in Oncology and Haematology. http://atlasgeneticsoncology.org/Tumors/SalivGlandOverviewID5328.html. Published 2010. Accessed June 2, 2018.
6. Jayaprakash V, Arshad H, Hicks W, Rigual N, Cohan D. Survival rates and prognostic factors for infiltrating salivary duct carcinoma. Head and Neck. 2014;36(5):694-701.
7. Kim JY, Lee S, Cho K, Kim SY, Nam SY, Choi S, Ahn SD. Treatment results of post-operative radiotherapy in patients with salivary duct carcinoma of the major salivary glands. The British Journal of Radiology. 2012;85(1018):947-952. doi:10.1259/bjr/21574486
8. Jaehne M. Clinical and immunohistologic typing of salivary duct carcinoma. Cancer. 2005;103(12):2526-2532. doi:10.1002/cncr.21116
9. Gilbert MR, Sharma A, Schmitt NC, Johnson JT, Ferris RL, Duvvuri U, Kim S. A 20-Year Review of 75 Cases of Salivary Duct Carcinoma. JAMA Otolaryngology–Head & Neck Surgery. 2016;142(5):489-495. doi:10.1001/jamaoto.2015.3930
10. Lewis JE, McKinney BC, Weiland LH, Ferreiro JA, Olsen KD. Salivary duct carcinoma: Clinicopathologic and immunohistochemical review of 26 cases. Cancer. 1996;77(2):223-230.
11. Larsen SR, Bjørndal K, Godballe C, Krogdahl A. Prognostic significance of Ki-67 in salivary gland carcinomas. Journal of Oral Pathology & Medicine. 2012;41(8):598-602. doi:10.1111/j.1600-0714.2012.01148.x
12. Limaye SA, Posner MR, Krane JF, Fonfria M, Lorch JH, Dillon DA, Haddad RI. Trastuzumab for the treatment of salivary duct carcinoma. The Oncologist. 2013;18(3):294-300. doi:10.1634/theoncologist.2012-0369
13. Williams L, Thompson LD, Seethala RR, Weinreb I, Assaad AM, Tuluc M, et al. Salivary duct carcinoma: The predominance of apocrine morphology, prevalence of histologic variants, and androgen receptor expression. The American Journal of Surgical Pathology. 2015;39(5):705-713. doi:10.1097/pas.0000000000000413
14. Jaspers HC, Verbist BM, Schoffelen R, Mattijssen V, Slootweg PJ, Graaf WT, Herpen CM. Androgen receptor–positive salivary duct carcinoma: A disease entity with promising new treatment options. Journal of Clinical Oncology. 2011;29(16). doi:10.1200/jco.2010.32.8351



Antibiotic Stewardship: Defining a True Penicillin Allergy

Hannah Snyder1*, John Orr1, Alexandra Lucas1, James Palma-D’Souza1, and Amber Khan2

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
2 Robert Packer Hospital, Sayre, PA 18840
*Correspondence: hsnyder@som.geisinger.edu

Abstract
Background: Patients who have penicillin allergies listed in their hospital charts are at higher risk for a number of complications. Yet up to 90% of patients who are designated as allergic to penicillin in their charts are not truly allergic, based on tolerating administration of the medication in question or a negative penicillin skin test. The combination of these factors presents an opportunity for practitioners and administrators to re-evaluate the categorization of these patients. Methods: Over the course of 3 months, 62 patients admitted to Robert Packer Hospital (RPH) with a penicillin allergy documented in their electronic medical record were administered a standard 7-question survey. Answers to the survey questions separated patients into one of three categories: Likely Penicillin Allergy, Possible Penicillin Allergy, and Unlikely Penicillin Allergy. Results: Of the 62 patients, 17 are likely to have a true allergy to penicillin, 37 have possible allergies to penicillin, and 8 are unlikely to have penicillin allergies. Of the 21 patients who received penicillin or amoxicillin after being diagnosed with their allergy, 17 reported having no reaction, 3 reported a rash, and 1 reported an anaphylactic reaction. Conclusion: Post-intervention, 1.8% of total hospitalized individuals on daily average could have their penicillin allergy labels safely removed. This represents between 3 and 4 patients at RPH. Of the total patients with listed penicillin allergies, 60% could potentially undergo skin testing to further classify their status, which represents a potential cost reduction as well. Although there is an area to report adverse effects to medications in the electronic medical record, this feature, based on our clinical observations, tends to be underused. Education for all medical personnel would not only benefit cost reduction for the institution and the patient, but also increase patient safety.

Introduction
The presence of a penicillin or other beta-lactam allergy in a patient's medical record necessitates the use of an alternate antibiotic or antibiotics for infection treatment and prophylaxis. Penicillin and other beta-lactam antibiotics (including cephalosporins, carbapenems, and monobactams) remain first-line or acceptable alternative therapy for many infections. The alternative agents, unfortunately, have a wider range of adverse effects, are less effective in the treatment of infection when beta-lactams are first-line, and may contribute to future antibiotic resistance due to their broader spectrum of activity (1–3). These effects can be exacerbated when physicians must prescribe multiple medications to achieve microbe coverage equivalent to the beta-lactam that cannot be used. The impact of these factors on

patients can be multifactorial. The risks of readmission, length of hospital stay, likelihood of admission to the ICU, general health care use, antibiotic use, morbidity, and mortality are all increased for patients who have beta-lactam allergies listed in their medical records (4, 5). In spite of the myriad negative consequences associated with a beta-lactam allergy listed in patients' hospital charts, researchers have found that many patients with listed beta-lactam allergies are not, in fact, truly allergic. Indeed, various studies find that over 90 percent of patients with alleged allergies do not have any reaction to a penicillin skin test or to being administered the beta-lactam in question. The reasons for such misidentification are various, ranging from physicians not entering reaction data thoroughly in the electronic medical record to a misunderstanding by the patient of what constitutes a true allergy (6–10). The multiple consequences of beta-lactam allergy labeling, in addition to the high proportion of patients who are incorrectly designated as allergic, provide an opportunity for practitioners to re-evaluate the criteria for categorization. Questionnaires can be used to stratify those who are at low and high risk of true allergy and thus guide who should be selected for a beta-lactam trial.

Materials and Methods
This project was a prospective observational quality improvement study that took place at Robert Packer Hospital (RPH), a 254-bed tertiary care teaching hospital that is part of the Guthrie Health System in Sayre, Pennsylvania. IRB approval was obtained before the start of data acquisition. Between November 1, 2017, and January 31, 2018, the members of the investigative team interviewed patients who were admitted to RPH with a penicillin allergy documented in their electronic medical record. These patients were asked if they would be willing to participate in a short survey that was part of a quality improvement project involving penicillin allergies at RPH. After verbally agreeing to participate in the quality improvement project, each patient was asked a standard 7-question survey (Table 1). If the patient was unable to answer the questions due to critical illness or mental incapacity for some other reason, the patient's family was asked to participate in the survey. The answers from the survey questions were analyzed by the study investigators, and patients were placed into one of three categories based on their responses: "likely penicillin allergy," "possible penicillin allergy," and "unlikely penicillin allergy." The responses and patients were divided based on the following characteristics:
1) Likely – anaphylaxis, angioedema, Stevens-Johnson syndrome, wheezing, urticaria, and itchy rash (non-urticarial);




2) Possible – minor rash (non-itchy and non-urticarial), a reaction listed as "likely" above with no reaction on a later exposure to the drug, or an unknown reaction when labeled as allergic;
3) Unlikely – a reaction that is a known side effect of the medication, or the patient was listed as allergic to penicillin on their chart but denied a penicillin allergy when the survey was conducted.
After the patients' responses were analyzed and divided into one of these three groups, the results were statistically analyzed.
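The categorization above is mechanical enough to express as a short decision function. The following is a minimal illustrative sketch, not part of the study protocol: the function name and the encoding of survey answers are our assumptions, and most of the example side-effect list is a placeholder, while the branch logic follows the three criteria just described.

```python
# Illustrative sketch of the three-way penicillin-allergy categorization.
# Field names and answer encodings are hypothetical; the criteria follow
# the Likely / Possible / Unlikely definitions in the Methods.

LIKELY_REACTIONS = {
    "anaphylaxis", "angioedema", "stevens-johnson syndrome",
    "wheezing", "urticaria", "itchy rash",
}
# Known medication side effects (nausea is the example given in the
# Discussion; the others are illustrative placeholders).
KNOWN_SIDE_EFFECTS = {"nausea", "diarrhea", "headache"}

def classify(reaction: str, tolerated_later: bool, denies_allergy: bool) -> str:
    """Place one surveyed patient into a Likely/Possible/Unlikely category."""
    reaction = reaction.strip().lower()
    if denies_allergy or reaction in KNOWN_SIDE_EFFECTS:
        return "Unlikely Penicillin Allergy"
    if reaction in LIKELY_REACTIONS:
        # A severe reaction later tolerated without incident drops to Possible.
        return ("Possible Penicillin Allergy" if tolerated_later
                else "Likely Penicillin Allergy")
    # Minor (non-itchy, non-urticarial) rash or unknown reaction.
    return "Possible Penicillin Allergy"

print(classify("anaphylaxis", tolerated_later=False, denies_allergy=False))
print(classify("nausea", tolerated_later=False, denies_allergy=False))
```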

Results

Pre-intervention
On average, 203 individuals were admitted on inpatient floors daily. Of those, 28 patients on average were listed with penicillin allergies.

Figure 1. Percentage of study participants divided into each of the 3 allergy categories: “Likely,” “Possible,” and “Unlikely.” Total number of study participants is 62.

Intervention
Sixty-two inpatient individuals with penicillin allergies listed on their electronic medical record at Guthrie RPH responded to the survey presented in Table 1. Those 62 patients were split into the following three groups based on the criteria reported in the methods: Likely Penicillin Allergy, Possible Penicillin Allergy, and Unlikely Penicillin Allergy. As shown in Figure 1, 27% (n = 17) of the patients fell into the first category; 60% (n = 37) fell into the second category; and 13% (n = 8) fell into the third category. Of the 62 patients, 21 reported having taken either penicillin or amoxicillin after already being diagnosed with a penicillin allergy. Of those 21 patients, 81% (n = 17) reported having no reaction, 14% (n = 3) reported a rash, and 5% (n = 1) reported an anaphylactic reaction (Figure 3).

Figure 2. Percentage of study participants that personally remembered the reaction to the drug. Total number of study participants is 62.

Table 1. Survey administered to the study participants during data collection

Penicillin Reaction or Not?
1. How old were you when you first had the reaction to penicillin?
2. Do you personally recall the reaction? If no, who informed you of it?
3. How long after starting the penicillin was the reaction first noticed?
4. Please describe the reaction.
5. For what reason was the penicillin prescribed?
6. What was the route of penicillin prescribed? (injection, oral)
7. Have you ever taken any of the following antibiotics after your initial penicillin reaction? If yes, did a reaction occur with it?
a. Penicillin
b. Amoxicillin
c. Amoxicillin/clavulanate
d. Cephalexin
e. Cefuroxime
f. Cefoxitin
g. Ceftriaxone
h. Cefotaxime
i. Cefepime
j. Ceftazidime

Post-intervention
On average, 203 individuals were admitted on inpatient floors daily. Of those, between 24 and 25 patients on average were listed with penicillin allergies. This is a reduction of 3 to 4 patients compared to pre-intervention.
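As a quick check, the post-intervention figures follow directly from the pre-intervention census and the 13% "Unlikely" rate reported in the intervention results; the short calculation below (Python, for concreteness) reproduces them from the paper's own numbers.

```python
# Reproducing the post-intervention estimate from the figures reported in
# the text: ~203 daily inpatients, ~28 with listed penicillin allergies,
# and 13% of labels (the "Unlikely" category) deemed safely removable.
daily_inpatients = 203
daily_listed_allergies = 28

removable = 0.13 * daily_listed_allergies           # ~3.6 labels per day
share_of_inpatients = removable / daily_inpatients  # ~0.018

print(f"{removable:.1f} labels/day ({share_of_inpatients:.1%} of inpatients)")
# -> 3.6 labels/day (1.8% of inpatients)
```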

Discussion
After reviewing the data, it is clear that many individuals were incorrectly labeled as allergic to penicillin. In fact, using the methodology presented above, almost 13% of patients in the data set could have their allergy labels safely removed from their charts. Although the data set included only 62 patients in total, many patients could be impacted if this methodology were applied broadly across hospital systems. In these patients, the reported "allergic reaction" simply represented a known medication side effect, such as nausea, or another minor side effect. In instances when penicillin-related antibiotics are first-line agents or strongly preferred, they may be used



on these patients. To this end, hospital systems should ensure that medical record systems have clearly designated places for clinicians to record medication intolerance as opposed to allergy. Accomplishing this goal might require alteration of the electronic medical record system or merely additional training for those who often update patients' records with allergy information. Additionally, clinicians should be encouraged to remove incorrect allergy designations when they are discovered. In addition to the 13% of patients who are least likely to be allergic to penicillin, an additional 60% of patients could potentially be re-designated with additional analysis. These patients reported symptoms that were more severe than known side effects but still likely did not represent true allergies. For these patients, allergy testing could prove beneficial in further clarifying their status. Of course, allergy skin testing is not the most readily available or cost-efficient tool to use on an entire population, but the notion becomes more applicable once the population is narrowed substantially by the short questionnaire described above. Primary care physicians could also take on some of these efforts as part of annual exams, even by offering patients the possibility of undergoing skin testing at a separate appointment with an allergy specialist. Furthermore, 81% of individuals who had been administered penicillin or amoxicillin subsequent to their first reaction did not have a successive reaction with the second administration of the medication (Figure 3). This finding likely indicates that those patients are not truly allergic. As discussed above, many patients are likely to be incorrectly designated as allergic to penicillin, and there are many ways to begin approaching this issue. This methodology not only presents one way to easily correct some charts, but also proposes a potential plan for working with patients who require added testing, and highlights the need to change how electronic records systems are used so that patient information on medications, allergies, and adverse reactions is displayed more clearly.

Acknowledgments
We would like to thank Amber Khan, MD, clinical assistant professor of medicine at Robert Packer Hospital in Sayre, Pennsylvania, for all of her guidance and insight.

Figure 3. Percentage of study participants that had “No Reaction,” a “Minor Reaction,” or a “Severe Reaction” to penicillin or amoxicillin when they received the drug again after already being listed as “allergic” to penicillin in their chart. “Minor Reaction” = rash. “Severe Reaction” = anaphylaxis. Total number of study participants who received penicillin or amoxicillin after already being listed as “allergic” is 21.

References
1. MacFadden DR, LaDelfa A, Leen J, Gold WL, Daneman N, Weber E, et al. Impact of Reported Beta-Lactam Allergy on Inpatient Outcomes: A Multicenter Prospective Cohort Study. Clinical Infectious Diseases. 2016 Oct 1;63(7):904-910.
2. McDanel JS, Perencevich EN, Diekema DJ, Herwaldt LA, Smith TC, Chrischilles EA, et al. Comparative Effectiveness of Beta-Lactams Versus Vancomycin for Treatment of Methicillin-Susceptible Staphylococcus aureus Bloodstream Infections Among 122 Hospitals. Clinical Infectious Diseases. 2015 Aug 1;61(3):361-367.
3. Desai SH, Kaplan MS, Chen Q, Macy M. Morbidity in Pregnant Women Associated with Unverified Penicillin Allergies, Antibiotic Use, and Group B Streptococcus Infections. The Permanente Journal. 2017;21:16-080.
4. Charneski L, Deshpande G, Smith S. Impact of an Antimicrobial Allergy Label in the Medical Record on Clinical Outcomes in Hospitalized Patients. Pharmacotherapy. 2011 Aug;31(8):742-747.
5. Su T, Broekhuizen BDL, Verheij TJM, Rockmann H. The Impact of Penicillin Allergy Labels on Antibiotic and Health Care Use in Primary Care: A Retrospective Cohort Study. Clinical and Translational Allergy. 2017 Jun 7;7:18.
6. Raja AS, Lindsell CJ, Bernstein JA, Codispoti CD, Moellman JJ. The Use of Penicillin Skin Testing to Assess the Prevalence of Penicillin Allergy in an Emergency Department Setting. Annals of Emergency Medicine. 2009 Jul;54(1):72-77.
7. Macy E, Ngor EW. Safely Diagnosing Clinically Significant Penicillin Allergy Using Only Penicilloyl-Poly-Lysine, Penicillin, and Oral Amoxicillin. Journal of Allergy and Clinical Immunology: In Practice. 2013 May-Jun;1(3):258-263.
8. Blumenthal KG, Wickner PG, Hurwitz S, Pricco BS, Nee AE, Laskowski K, et al. Tackling Inpatient Penicillin Allergies: Assessing Tools for Antimicrobial Stewardship. Journal of Allergy and Clinical Immunology. 2017 Jul;140(1):154-161.
9. Sacco KA, Bates A, Brigham TJ, Imam JS, Burton MC. Clinical Outcomes Following Inpatient Penicillin Allergy Testing: A Systematic Review and Meta-Analysis. Allergy. 2017 Sep;72(9):1288-1296.
10. Salkind AR, Cuddy PG, Foxworth JW. The Rational Clinical Examination. Is This Patient Allergic to Penicillin? An Evidence-Based Analysis of the Likelihood of Penicillin Allergy. JAMA. 2001 May 16;285(19):2498-2505.




Predicting the Progression of Anaplasmosis and Babesiosis into Pennsylvania Using Lyme Disease as a Model

Shane Warnock1*

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: swarnock@som.geisinger.edu

Abstract
Objectives: Given the substantial increase in the number of reported cases of Lyme disease in Pennsylvania, from a relative rarity in the 1980s to over 11 000 cases in 2016, analysis was performed on two additional infectious diseases transmitted by the same vector as Lyme disease, the Ixodes scapularis tick. Anaplasmosis and babesiosis are the two diseases of interest, and utilizing the historical progression of Lyme disease through New England and the mid-Atlantic states, this study aims to draw comparisons among the diseases and predict the possibility of spread into Pennsylvania in the near future. Methods: Using data from the Centers for Disease Control and Prevention and the New York Department of Health, the reported yearly incidences of Lyme disease, anaplasmosis, and babesiosis were recorded on a county-by-county basis in New York, as well as throughout New England as a whole. The results were analyzed for disease progression or regression, and incidence-density county maps of New York were generated to aid in the visualization of the yearly incidence trends. Lyme disease incidence data for Pennsylvania were recorded to highlight the historic spread of the disease through the state. Results: The disease incidences of anaplasmosis and babesiosis have steadily increased in New York and New England over the last 5 to 10 years. In New England, anaplasmosis increased from 461 cases in 2011 to 1708 cases in 2016, while babesiosis increased from 378 cases in 2011 to 1105 cases in 2016. Mapping of the progression of these diseases revealed a very similar pattern of spread as was seen with Lyme disease in the 1990s, with the diseases first spreading through the New England states before entering eastern/southeastern New York and, for Lyme disease, spreading into Pennsylvania. For Pennsylvania, babesiosis has not been a reportable disease, whereas anaplasmosis became reportable in 2005, when just 4 cases were reported. That number has steadily increased to 58 cases in 2016. Conclusion: The recent trends in the spread of anaplasmosis and babesiosis have mimicked the historic spread of Lyme disease through New England into New York, New Jersey, and Pennsylvania. With Pennsylvania now reporting more cases of Lyme disease each year than any other state, preparation for the possibility of a similar spread of anaplasmosis and babesiosis should begin with education of health care providers and the public on the signs and symptoms of disease, as well as how they differ from Lyme disease in presentation and diagnosis.

Introduction
The clinical significance of Lyme disease in the northeast

United States is well known. This is especially true in New Jersey, New York, and Pennsylvania, as the disease incidence has progressed in these 3 states from a total of 4689 reported cases in 1993 to 19 675 reported cases in 2016. Pennsylvania accounted for 11 443 of the cases reported in 2016, which was the most of any state and nearly one-third of the total cases reported in the entire United States (1). Given this progression, it is important to study two additional diseases, anaplasmosis and babesiosis, which are transmitted by the same vector as Lyme disease, the Ixodes scapularis deer tick. Although it is known that these diseases share the same vector and exist in a similar geographic distribution, the extent of progression of human cases of anaplasmosis and babesiosis within these regions is poorly documented. Each of the three diseases presents after a tick bite, but the patient may not have been aware of having a tick on their body or of being bitten, which makes knowing the signs, symptoms, and geographic distributions essential. Lyme disease, which is caused by infection with the bacterium Borrelia burgdorferi, typically presents with symptoms of fatigue, headache, myalgia, arthralgia, neck stiffness, fever, and the classic erythema migrans rash (2). Anaplasmosis, a disease caused by the bacterium Anaplasma phagocytophilum, most commonly presents with similar symptoms of fever, chills, malaise, myalgia, and headache (3). Babesiosis, caused by the malaria-like parasite Babesia microti, presents with fatigue, malaise, fever, sweating, headache, and myalgia (4). One distinguishing feature between Lyme disease and the other diseases may be that it is uncommon for a rash to develop in anaplasmosis and babesiosis (3, 4). Diagnosis of each of the diseases can be made on a clinical basis or with laboratory tests such as serological studies (3, 5). In addition, babesiosis can be identified on a peripheral blood smear (4). Whether the diagnosis is made clinically or by laboratory studies, the treatment strategies further highlight the importance of early education on the non-Lyme tick-borne illnesses. Just as the signs and symptoms of the diseases have substantial overlap, the treatment regimens also overlap with one another. Lyme disease and anaplasmosis are both treated with the first-line agent doxycycline, while babesiosis is treated with a combination of azithromycin and atovaquone (3, 6, 7). This highlights the need to increase awareness in a state such as Pennsylvania, which is endemic for Lyme disease but has yet to see a significant number of anaplasmosis or babesiosis cases. Such awareness may prevent a provider from withholding doxycycline from a mildly symptomatic patient who was bitten by a tick and has anaplasmosis but has had no rash and whose laboratory studies were negative for Lyme disease. Awareness of these diseases will keep the clinical suspicion for anaplasmosis and/or babesiosis elevated and direct treatment as appropriate. This study aims to analyze the recent trends in



the geographic distributions of anaplasmosis and babesiosis, while using the historic spread of Lyme disease as a model to help predict if and when the incidence of anaplasmosis and babesiosis may begin to rise throughout the counties of Pennsylvania.

Materials and Methods
The analysis undertaken in this study is centered on the yearly incidences of Lyme disease, anaplasmosis, and babesiosis in New York, in New England as a whole, and in Pennsylvania. Lyme disease is used as a model in this study; therefore, the first step in the process was to document the history of Lyme disease in the northeast United States. This was done through data collection from the Communicable Disease Annual Reports produced by the New York Department of Health and the Summary of Notifiable Infectious Diseases and Conditions yearly tables produced by the Centers for Disease Control and Prevention. The data collected and analyzed for New York were broken down on a county-by-county basis to help illustrate any patterns of spread of the diseases through the state. County-by-county data for New England and Pennsylvania were unavailable; therefore, the incidence of each disease in New England and Pennsylvania was trended on a regional and statewide basis, respectively. For New York, yearly incidence density maps were generated manually using Adobe Photoshop CS6. To normalize for the population in each county, the yearly incidence per 100 000 residents was used. A color scale was chosen to best represent the year-to-year increases and decreases in incidence observed in each county, as follows: green 0–5, yellow 5–10, orange 10–20, dark orange 20–50, red 50–100, dark red 100–500, and indigo 500–1000 cases per 100 000 residents. Once a historical trend in the spread of Lyme disease was developed, analysis of anaplasmosis and babesiosis was performed in the same manner; however, as of 2016, the most recent year of available data, babesiosis was not a reportable disease in Pennsylvania.
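The normalization and binning step described above lends itself to a short script. The sketch below is illustrative only: the county population and case count are hypothetical placeholders, while the bin edges reproduce the color scale given in the Methods.

```python
# Normalize a county's yearly case count to incidence per 100,000
# residents and map it onto the color scale from the Methods.
BINS = [
    (0, 5, "green"),
    (5, 10, "yellow"),
    (10, 20, "orange"),
    (20, 50, "dark orange"),
    (50, 100, "red"),
    (100, 500, "dark red"),
    (500, 1000, "indigo"),
]

def incidence_per_100k(cases: int, population: int) -> float:
    """Normalize a raw yearly case count by county population."""
    return cases / population * 100_000

def color_for(incidence: float) -> str:
    """Return the map color for a normalized incidence value."""
    for low, high, color in BINS:
        if low <= incidence < high:
            return color
    return "indigo"  # at or above the top of the scale

# Hypothetical example: 312 reported cases in a county of 214,000 residents.
inc = incidence_per_100k(312, 214_000)
print(f"{inc:.1f} per 100,000 -> {color_for(inc)}")  # 145.8 -> dark red
```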

Results and Discussion

The history of Lyme disease in the northeast United States was analyzed and can be easily visualized in Figure 1 (8). This figure from the CDC highlights the dramatic increase in the number of cases of Lyme disease from 2001 to 2016 and the spread of the disease throughout New England and across New York and Pennsylvania. The significant progression of the disease through Pennsylvania is obvious in Figure 1, but what is more striking is the rate at which the incidence continues to increase from year to year despite the population being well aware of this disease. As can be seen in Figure 2, the number of reported cases in Pennsylvania steadily increased from the early 1990s until 2003; the yearly incidence then varied between 4000 and 6000 cases until 2012. From 2013 to 2016, however, Pennsylvania has seen a rapid increase in the number of cases reported each year (1). Pennsylvania surpassed New York in 2014, and in 2016 reported nearly double the cases of New York.

Figure 1. Progression of Lyme disease from 2001 to 2016, with every reported case represented by one dot placed into the reporting county (8)

With this in mind, the next phase of this study was to focus on the counties of New York to allow comparisons to be drawn to anaplasmosis and babesiosis. The reported yearly incidences, per 100 000 population, of Lyme disease, anaplasmosis, and babesiosis were recorded from 1995 through 2016 (9). To provide a means of visualizing similarities in the geographic distribution and the patterns of spread, incidence density maps for the three diseases were generated at 5-year intervals. These maps are seen in Figure 3, and attention should be paid to where each disease first appears in New York. The diseases first appear in the southeast area of the state before spreading northward and westward. The analysis of Figures 1 and 3 indicates a general pattern of spread along the Atlantic coastal New England and mid-Atlantic states before moving inland. To further investigate whether anaplasmosis and babesiosis were mimicking this same pattern of spread, information from the New England states was also examined. To draw comparison to the dot map of Lyme disease in Figure 1, dot maps for anaplasmosis and babesiosis were obtained from the CDC for the year 2015 and can be seen in Figure 4 (10). The geographic distribution of each disease appears very similar to that of Lyme disease in 2001. To supplement these images further, the yearly incidence in the New England states as a whole (Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont) was recorded for anaplasmosis and babesiosis (1). The data revealed a steady and significant increase in the number of cases of each disease from 2011 to 2016, as can be seen in Table 1. These findings point to anaplasmosis and babesiosis becoming more prevalent in New England over the last 5 to 10 years while also beginning to enter New York over the last few years, which means Pennsylvania may start to see cases rise in the near future. The available data for Pennsylvania include anaplasmosis dating back to 2005, but as stated earlier, data on babesiosis are not available for Pennsylvania. In short, the number of cases of anaplasmosis varied between 4 and 8 cases per year until jumping to 34 in 2013 and increasing to 58 in 2016 (1). This is evidence that Pennsylvania is slowly beginning to see an increase in anaplasmosis each year, and data on this disease should be followed over the years to come.
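To put the New England trend in perspective, the 2011-to-2016 totals quoted in the Abstract (461 to 1708 anaplasmosis cases; 378 to 1105 babesiosis cases) work out to roughly 3.7-fold and 2.9-fold increases, respectively:

```python
# Fold increases in New England case counts from 2011 to 2016, computed
# from the totals reported in the Abstract.
for disease, y2011, y2016 in [("anaplasmosis", 461, 1708),
                              ("babesiosis", 378, 1105)]:
    print(f"{disease}: {y2016 / y2011:.1f}x increase from 2011 to 2016")
# -> anaplasmosis: 3.7x; babesiosis: 2.9x
```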



Figure 2. Yearly incidence of Lyme disease in New York and Pennsylvania

Figure 3. Incidence density maps of New York State, 1995–2015, for Lyme disease, anaplasmosis, and babesiosis

Figure 4. Anaplasmosis and babesiosis total reported cases in 2015, with every reported case represented by one dot placed into the reporting county (10)

Table 1. Yearly incidence of anaplasmosis and babesiosis in the New England states, 2011–2016

Conclusion
Currently, anaplasmosis and babesiosis are showing similar geographic distributions and patterns of early spread to that of Lyme disease in the 1990s and early 2000s. Given the dramatic spread of Lyme disease across the northeast United States over the last 20 years, education on anaplasmosis and babesiosis should begin now to prepare for the possibility of these diseases mimicking Lyme disease. With the rate at which the incidence of Lyme disease in Pennsylvania continues to rise, despite awareness in health care and the general population, continuing education on all three tick-borne diseases is a necessity.

References
1. Centers for Disease Control and Prevention. National Notifiable Diseases Surveillance System, 1995–2016 Annual Tables of Infectious Disease Data. Atlanta, GA: CDC Division of Health Informatics and Surveillance; 2017. Available at: https://www.cdc.gov/nndss/infectious-tables.html.
2. Hu L. Clinical Manifestations of Lyme Disease in Adults. In: UpToDate, Steere A, ed. UpToDate, Waltham, MA, 2018.
3. Sexton D, McClain M. Human Ehrlichiosis and Anaplasmosis. In: UpToDate, Calderwood S, Kaplan S, eds. UpToDate, Waltham, MA, 2018.
4. Krause P, Vannier E. Babesiosis: Clinical Manifestations and Diagnosis. In: UpToDate, Daily J, ed. UpToDate, Waltham, MA, 2018.
5. Hu L. Diagnosis of Lyme Disease in Adults. In: UpToDate, Steere A, ed. UpToDate, Waltham, MA, 2018.
6. Hu L. Treatment of Lyme Disease in Adults. In: UpToDate, Steere A, ed. UpToDate, Waltham, MA, 2018.
7. Krause P, Vannier E. Babesiosis: Treatment and Prevention. In: UpToDate, Daily J, ed. UpToDate, Waltham, MA, 2018.
8. Centers for Disease Control and Prevention. National Center for Emerging and Zoonotic Infectious Diseases, Lyme Disease Data and Statistics. Atlanta, GA: CDC Division of Vector-Borne Diseases; 2017. Available at: https://www.cdc.gov/lyme/stats/index.html.
9. New York Department of Health. 1995–2016 Communicable Disease Annual Reports. Albany, NY: Bureau of Communicable Disease Control; October 2017. Available at: https://www.health.ny.gov/statistics/diseases/communicable/.
10. Centers for Disease Control and Prevention. National Center for Emerging and Zoonotic Infectious Diseases, Geographic Distribution of Nationally Notifiable Tickborne Diseases. Atlanta, GA: CDC Division of Vector-Borne Diseases; 2015.



An Analysis of Stimulant Retail Drug Sales throughout Various Regions of the United States from 2006–2016: Is There an Association between Retail Stimulant Sales and ADHD Diagnoses Trends?

Christy Ogden1* and Brian J. Piper1

1 Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: cogden@som.geisinger.edu

Abstract
Background: Stimulant medications are utilized for a variety of purposes, and thus retail sales trends have changed over several years. Medicinal stimulants are commonly used to treat several ailments. It is important to note that there is an 11% prevalence of attention deficit hyperactivity disorder (ADHD) in the United States. This disorder relies on medicinal treatment in stimulant and non-stimulant forms as well as psychotherapeutic methods. This study analyzed the retail sales of popular stimulants, including amphetamine, methamphetamine, lisdexamfetamine, and methylphenidate, over the course of 10 years for all 50 of the United States. Methods: The retail sales reports were examined for 2006, 2011, and 2016 in order to determine the sales trends over the course of this period. The retail sales information for these 4 stimulants has been made available by the U.S. Department of Justice Drug Enforcement Administration. It is pertinent to also examine the increasing number of ADHD diagnoses in the United States as a whole over this same time period; ADHD diagnosis trends were recorded based on previously available research. After analyzing stimulant data, the states were grouped into the Northeast, South, Midwest, and West regions. Results: It is a common belief that increasing retail sales of common stimulant drugs can be linked to the increasing ADHD diagnoses in the United States. Our research showed a clear increase in overall sales of these 4 stimulants over the decade. There was an average 11 000-kilogram increase in total stimulant sales every 5 years. More stimulants were sold in the South than in the other three regions. Over the course of the decade, Southern stimulant sales were 37% greater than those of the lowest-selling Western region. Amphetamine and methylphenidate were consistently the highest-selling stimulants over all 10 years; 84% of the total stimulant sales from the years analyzed were of either amphetamine or methylphenidate. Conclusion: Stimulant sales showed pronounced increases over the decade. This increasing trend positively correlates with the increase in ADHD diagnoses in the United States during the same time frame. Further research could determine whether the relationship between the two variables is causal.

Introduction
The stimulant family of drugs is vast and advantageous due to their use in several areas of medicine. Medicinal stimulant

benefits are continually being discovered, and for that reason the need for them for medical purposes continues to expand. There was a consistent growth rate of 3.4% per year for stimulant medications between 1996 and 2008 (1). The reasons for this may vary, but it is probably related to the increase in attention deficit hyperactivity disorder (ADHD) diagnoses. Common medications for patients with these disorders include amphetamine (Adderall®), lisdexamfetamine (Vyvanse®), and methylphenidate (Ritalin®). Methamphetamine (Desoxyn®) can also be beneficial for ADHD treatment, but it is only suggested for short-term use due to its potential to cause physical dependence. Stimulants are also used for treatment of weight loss, narcolepsy, and mild cognitive impairments (2). It is important to note that not all patients with ADHD use prescription drugs as treatment. Other treatments for this disorder include psychotherapy and behavioral therapy. ADHD is one of the most common childhood neuropsychiatric disorders, but it often continues well into adulthood (3). ADHD is usually diagnosed during adolescent years, but diagnoses have begun to expand into adulthood as well. ADHD is usually diagnosed after an individual shows a consistent syndrome of inattention, hyperactivity, and impulsivity that ultimately impairs functioning in their day-to-day lives (4). This disorder has the potential to be incredibly debilitating to its patients due to its effects on the brain. People with ADHD tend to have lower levels of dopamine in their brains, which would explain their chronic desire for stimulation (5). What separates an individual with ADHD from others is that they are under-stimulated due to their lower levels or lack of dopamine. PET scans have shown dopamine dysfunction in young adults who have not been treated for ADHD (6). This finding should be viewed with caution, as other research has been unable to replicate it or has produced opposite results. A meta-analysis of 9 neuroimaging studies concluded that chronic dopamine transporter blockade with stimulants resulted in a compensatory upregulation in the striatum and that there is not an inherent dopaminergic imbalance caused by ADHD (7). As mentioned, individuals with ADHD are viewed as under-stimulated, explaining many of the symptoms of this disorder. Stimulants are often a good treatment method for this disorder because they act by increasing dopamine levels in the brain, which is exactly what is usually lacking in ADHD patients (5). Although stimulants are usually thought of as the first line of defense against ADHD, a non-stimulant drug known as atomoxetine has been given considerable attention based on its efficacy and ability to improve spatial planning (8). Regardless, stimulant sales continue to flourish.



Amphetamines are one of the most common of all stimulant medications, and they are normally found in the ADHD medication known as Adderall. Amphetamines can result in long-lasting improvements in language comprehension (9). Based on this information, amphetamines can be used as a treatment for several different ailments, including strokes, for which testing toward approval is ongoing. Lisdexamfetamine, a pro-drug in the form of Vyvanse, is used when patients are unresponsive to medications with similar properties. This second-line treatment has been noted as providing significant improvements in the core ADHD symptoms for those with severe ADHD (10). Methylphenidate was the original stimulant manufactured to aid in behavioral problems in children (11). It has been used for over 50 years and is presently found in the ADHD medication Ritalin. Methylphenidate has been noted to have a positive effect on neural emotion processing (12). This is crucial for children with this disorder because emotional processing is an important aspect of mental development. Of the 4 mentioned substances, methamphetamine is the least likely to be used due to its high potential for abuse. Even though there are significant benefits to methamphetamine-based medications, 46% of methamphetamine users misuse their ADHD medication (13). Methamphetamine-based medications will be used in specific circumstances, such as low success with other forms of treatment. Each of the mentioned medications is used for ADHD among other conditions, so their individual and combined retail sales should correlate with ADHD diagnoses. Stimulants are classified as Schedule II drugs by the Drug Enforcement Administration (DEA). Drugs under this category are defined as having a high potential for abuse and possible side effects of psychological and/or physical dependence (14). Prescriptions for controlled substance stimulants have increased considerably over the last decade. The rapid increase in prescribing substances with the potential to become addictive (Schedule II) needs to be further evaluated in relation to the rate of ADHD diagnosis over the same timeframe in order to further illustrate these trends. Although stimulant medications are frequently used for ADHD patients, non-stimulant medications and non-medicinal treatments are available as well. This study will explore data from 2006 to 2016 for the stimulants commonly used in the treatment of ADHD. The regional data for each year will be compared to the overall ADHD trends over the past decade. In doing so, it will be possible to determine if there is a substantial association between retail stimulant sales and ADHD diagnoses.

Materials and Methods
The US Department of Justice Drug Enforcement Administration utilizes the Automation of Reports and Consolidated Orders System (ARCOS) to create retail summary reports for each state and territory in a given year in the United States (15). Prior research with ARCOS has been published (16), but this is the first study to use ARCOS to report on stimulants. These summary reports break retail drug sales down by each individual drug as well as total sales for all drugs. Information was gathered for amphetamine, methamphetamine, and methylphenidate for the years 2006,


2011, and 2016. Lisdexamfetamine was approved by the FDA in February of 2007 (17). For this reason, the years 2007, 2011, and 2016 were analyzed instead for this agent. This project was approved by the Institutional Review Board of the University of New England.

Data analysis
The data from each of these reports were recorded in kilograms (kg) and transferred to Excel to visualize the differences between regions in a given year as well as to compare total sales over the 10 years recorded. Data were collected for overall total retail stimulant sales as well as for each individual stimulant in question. Retail sales totals for each individual state were noted. In order to consolidate the data, the states were broken down into the 4 major regions of the United States. The Northeast region included data from Connecticut, Delaware, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont. The Southern region encompassed Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Virginia, and West Virginia. The Midwestern region comprised Kansas, Illinois, Indiana, Iowa, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin. Finally, the Western region incorporated data from Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming. ADHD diagnosis information was gathered from a variety of resources, including the Centers for Disease Control and Prevention (CDC) and the ADHD Institute. Data analysis was completed via paired t-tests with GraphPad Prism QuickCalcs (http://www.graphpad.com/quickcalcs/ttest1.cfm), with p < 0.05 considered statistically significant.
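For readers who prefer a scriptable equivalent of the Excel-plus-QuickCalcs workflow, a minimal sketch follows. All kilogram figures are hypothetical placeholders (not ARCOS values), each region list is truncated to a single state for brevity, and SciPy's paired t-test stands in for the QuickCalcs tool, pairing each state's totals across the two years.

```python
# Sketch of the regional consolidation and paired t-test described above.
# Sales figures are hypothetical placeholders, not ARCOS data.
from scipy import stats

sales_2006 = {"PA": 900.0, "TX": 1500.0, "OH": 700.0, "CA": 1100.0}
sales_2016 = {"PA": 1700.0, "TX": 2900.0, "OH": 1300.0, "CA": 1900.0}

REGIONS = {  # truncated to one state each; the study uses the full lists
    "Northeast": ["PA"],
    "South": ["TX"],
    "Midwest": ["OH"],
    "West": ["CA"],
}

# Consolidate state totals into the four regions used in the study.
regional_2016 = {
    region: sum(sales_2016[s] for s in states)
    for region, states in REGIONS.items()
}

# Paired t-test: each state contributes one (2006, 2016) pair.
states = sorted(sales_2006)
t_stat, p_value = stats.ttest_rel(
    [sales_2006[s] for s in states],
    [sales_2016[s] for s in states],
)
print(regional_2016)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```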

Results
Comparing the total amounts of retail stimulant sales illustrated a linear trend: there was a consistent increase in total retail stimulant sales during this time (Figure 1). To reiterate, data were taken for amphetamine, methamphetamine, lisdexamfetamine, and methylphenidate individually and then combined to look at retail sales trends as a whole. In 2006 the total amount of retail sales for the 4 stimulants was 25 883 kg. In 2011, this number jumped to 38 458 kg, and in 2016 it increased again to 48 379 kg, an average jump of roughly 11 000 kg for every 5-year increment. An 86.9% increase in stimulant sales can be seen over the decade in question. The rise was statistically significant (p < 0.05).
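The percent increase quoted above follows directly from the reported totals:

```python
# Percent change over the decade, computed from the totals reported above
# (25,883 kg in 2006 and 48,379 kg in 2016).
total_2006, total_2016 = 25_883, 48_379
percent_increase = (total_2016 - total_2006) / total_2006 * 100
print(f"{percent_increase:.1f}%")  # -> 86.9%
```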



this type of drug between 2006 and 2016. These data were statistically significant (p < 0.05). Lisdexamfetamine sales totaled 831 kg, 6,837 kg, and 9,575 kg in 2007, 2011, and 2016. To reiterate, data on this substance were not available for 2006, so information from 2007 was used. The average increase in lisdexamfetamine sales between these time increments totals 4,500 kg. Finally, methylphenidate sales totaled 16 933 kg, 18 750 kg, and 18 906 kg in 2006, 2011, and 2016, respectively. This averages out to be an average 1,000-kg increase between each time increment. There was a 43.6% increase in the retail sale of methylphenidate during this decade. The data were found to be statistically significant (p < 0.05). The collected data are indicative of a significant increase in total retail sales between each increment in question.

Figure 1. Total retail stimulant sales for the 3 years examined. Each year encompasses the total amount of amphetamine, methamphetamine, lisdexamfetamine, and methylphenidate sold by retail drug stores across the United States. *p < 0.05.

For each of the 4 drugs analyzed, the retail sales rate increased between each increment of time and between each region. The amount of stimulant sold in each region varied considerably. Based on total retail stimulant sales of the 4 drugs examined, the Southern states encompassed the largest total stimulant sales over all 3 years analyzed (Figure 2). Contrarily, the Western states consistently had the smallest amount of total sales. In 2006, 5,082 kg of stimulants were sold in the Northeast and 9,799 kg were sold in the South. The amount of stimulants sold in the Midwest and West during this time was 6,894 kg and 4,108 kg, respectively. In 2011, The Northeastern retail stimulant sales for the 4 drugs discussed was 6,537 kg and the Southern total was 15 469 kg. The Midwestern stimulant sales total was 10 905 kg and the Western total was 5,546 kg. Finally, in 2016, the Northeast and South retail stimulant sales totals were 8,843 kg and 19 933 kg, respectively. The Midwest retail sales total was 12 422 kg and the West sales total was 7,179 kg. In the Northeast and South, there was an average increase of 1,500 kg and 5,000 kg, respectively, between the time increments analyzed. In the Midwest there was an average increase of 3,000 kg every 5 years. For the Western states, there was a 1,500 kg increase, similar to the Northeast, over the course of the years analyzed. As a whole, each region steadily increased between each 5-year increment. When looking at each individual stimulant, it is evident that their retail sales totals increase over the 3 years in which data were taken (Figure 3). Amphetamine sales totaled 8,103 kg, 12 866 kg, and 19 892 kg in 2006, 2011, and 2016, respectively (p < 0.05). This averages out to be a 6,500 kg increase between the time increments of 2006 to 2011 and 2011 to 2016. There was a 145.5% increase in the sales of this drug during the decade in question. These results were statistically significant. Methamphetamine sales totaled 14 kg, 5 kg, and 6 kg for 2006, 2011, and 2016, respectively. Interestingly, the retail sale of methamphetamine decreased between 2006 and 2011, but slightly increased again between 2011 and 2016. Overall, there was a 56.4% decrease in the sales of

Figure 2. The regional stimulant sales for 2006, 2011, and 2016. The total drug sales were collected for each of the 4 stimulants (amphetamine, methamphetamine, lisdexamfetamine, and methylphenidate) and compared across the 4 major regions of the United States. Graph A represents the sales totals for each region in 2006, whereas graphs B and C represent the sales totals for each region in 2011 and 2016, respectively.

Discussion
Based on the findings of recent research, an increase of 2 million ADHD cases was recorded between 2003 and 2011 (18), averaging about 250,000 new cases per year. With an increase in patients with this disorder comes a growing need for convenient treatment methods. The CDC estimates that there was an average increase of 5% in ADHD diagnoses between 2007 and 2011 (19); ADHD diagnoses have clearly increased over the past decade. In 2013, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) was published, altering many diagnostic classifications and providing new case examples. The ADHD revisions were not as dramatic as those for many other disorders, but they better defined the symptoms seen in older adolescents and adults (20). These changes are of crucial importance to expanding the general understanding of the nature of ADHD in older populations. Under the revised criteria, children are required to have at least 6 recognized symptoms, and adults at least 5, for a diagnosis (21). Identification methods that work for younger adolescents and children are not as successful in adults. By altering the ADHD diagnostic criteria and making them more inclusive, there is potential to make the diagnosis of ADHD much more reliable as well (20). According to the results, there was a steady increase in total stimulant sales for amphetamine, lisdexamfetamine, and methylphenidate between each time increment in which data were collected (Figure 1). Interestingly, methamphetamine showed a slight decrease in sales between 2006 and 2011, but rose slightly again by 2016. Researchers estimate that about 70% of children diagnosed with ADHD respond to at least one of these stimulant medications at an optimal dose (22), although the long-term efficacy of these agents has been questioned (7). The fact that a vast majority of pediatric ADHD patients respond to some form of stimulant medication supports the positive impact of this treatment option for this disorder, a trend that can also be seen in the present data.
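The symptom-count portion of these criteria can be stated as a simple rule; the toy Python sketch below encodes only the thresholds quoted above and deliberately ignores the DSM-5's other requirements (onset before age 12, duration, impairment, and presence in multiple settings), so it is illustrative rather than diagnostic.

# Toy sketch of the DSM-5 symptom-count thresholds described above: children
# need at least 6 symptoms, while older adolescents and adults need at least 5.
# Real diagnosis involves several further criteria not modeled here.
def meets_symptom_threshold(age_years: int, symptom_count: int) -> bool:
    threshold = 5 if age_years >= 17 else 6
    return symptom_count >= threshold

assert meets_symptom_threshold(10, 6)       # child at threshold
assert not meets_symptom_threshold(10, 5)   # child below threshold
assert meets_symptom_threshold(25, 5)       # adult threshold is lower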

Figure 3. Total amount of each drug sold in 2006, 2011, and 2016. A) represents amphetamine retail sales for the 3 years mentioned. B) represents methamphetamine retail sales for the same time frame. C) represents lisdexamfetamine retail sales for 2007, 2011, and 2016 (due to lack of information for this drug in 2006). D) represents methylphenidate retail sales for 2006, 2011, and 2016. *p < 0.05.


Currently, the worldwide prevalence of ADHD is 5.2%, which is lower than the prevalence in the United States alone (4). The United States ADHD prevalence is now estimated at about 11%. Among the 50 states, there is some variability in the number of diagnoses per state and region. North Carolina has the highest rate of ADHD diagnoses in children, at 16.4%, while Nevada has the lowest (6.4%) (23). The current study indicates that Southern states had the highest stimulant sales in 2006, 2011, and 2016, whereas Western states had the lowest in all 3 years (Figure 2). The Northeastern and Midwestern retail sales totals varied between years. This is consistent with research showing the Southern region of the United States to have the highest percentage of children with ADHD, at 11.8%, and the Western region to have the lowest, at 7.8% (23). It can be inferred that the percentage of ADHD patients in a region correlates with the amount of stimulant sales in that region, given that the majority of children with ADHD respond positively to some form of stimulant medication. As mentioned, the Western states had the
smallest amount of stimulant sold for medicinal purposes. This could be due to the fact that there are fewer people with the disorder in these states compared with regions such as the South; with a lower prevalence of ADHD, there is less need for stimulant treatment in the region. Among the 4 drugs analyzed, amphetamine and methylphenidate were the most prevalent (Figure 3). Between 1990 and 2000, the use of methylphenidate increased fivefold in the United States (11). Research has been done to determine the value of methylphenidate for treating this disorder, and it is possible that methylphenidate sales increased because of research supporting the theory that methylphenidate is effective for treating adult ADHD with efficacy similar to that seen in children diagnosed with ADHD (24). A drug that is effective across several age groups is beneficial for many reasons. Patients diagnosed in childhood would be able to continue using a methylphenidate-based drug into adulthood if their disorder persists, and because changing medication can cause negative consequences for the user, being able to maintain a consistent treatment method over years with the same efficacy is ideal. It is fair to deduce that an increase in ADHD diagnoses across several age categories correlates with the increasing usage of methylphenidate-based drugs. Over the last decade, drastic changes have occurred in stimulant medication use, including amphetamine usage surpassing methylphenidate usage (25). This trend can be seen in the present data: total retail sales of methylphenidate were consistently the highest among the 4 substances for the first half of the decade, but amphetamine overtook it in the second half. Studies have shown that the positive behavioral effects of amphetamine persist longer after individual doses than those of methylphenidate (26). It is important to note that while each case is individual, a broad conclusion can be made about treatment methods for children and adults with ADHD. Because of its longer half-life, amphetamine may be the more beneficial stimulant when sustained effects are needed from a single dose. One study indicates that using amphetamine as a treatment for ADHD can have consistently positive effects over long durations (27). Drugs to which the body does not readily develop tolerance are preferable, and amphetamine-based drugs have been used for long periods without the need for increased dosage or medication changes. This would help explain the high retail sales of amphetamine through the decade examined in this study. While lisdexamfetamine and methamphetamine are also sold at retail, they are not used nearly as commonly as methylphenidate and amphetamine. The diagnosis of ADHD is currently changing. It used to be necessary for adults with ADHD to have been diagnosed during childhood (28). This was not always possible for high-achieving individuals who performed consistently well in grade school, and other individuals' impairments may not have been significant enough to diagnose earlier in life. For a large population of individuals, ADHD symptoms are not apparent until later in life, and the need to diagnose at this point meant that the guidelines for diagnosis needed
to be changed. Scales have since been established to better diagnose adults regardless of previous childhood ADHD diagnoses. The DSM-5 added new examples to its classification of ADHD in adults in order to make the diagnosis more inclusive (20). These changes have also added to the increasing trend of ADHD diagnoses in the United States, as well as the rest of the world. In 2011, two-thirds of those diagnosed with ADHD were administered medication as treatment (18). The majority of patients with ADHD are treated medicinally, and every frequently used medication has a stimulant base ingredient, because stimulants are the most effective medications for ADHD. Stimulants have response rates in the 70% to 80% range, meaning they produce positive effects the majority of the time (28). Long-acting stimulant medications are also recommended because they have shown better patient compliance and longer-lasting, smoother improvement of symptoms, giving patients a better overall outlook. The relationship between the increase in retail stimulant sales and ADHD diagnoses from 2006 to 2016 is evident from the collected data. This study has several limitations. Exact rates of ADHD diagnoses were not readily available for analysis, so basic trends were noted instead. Lisdexamfetamine data were not available until the drug's approval in 2007, and 2007 figures were therefore substituted for 2006. A further caveat stems from the regional division of the United States: the Southern region encompasses 15 states, whereas the Northeastern region encompasses only 10. Beyond state-to-state differences in population density, the four regions are simply unequal in size.

Conclusion
The results above indicate that retail stimulant drug sales are continuously increasing. The Southern states consistently purchase more stimulant medications than the other three regions, and the Western states consistently purchase fewer. Amphetamine- and methylphenidate-based medications were sold in the highest amounts in every year examined. These data align with the overall increase in ADHD diagnoses in the United States. ADHD diagnoses have continually increased over the last decade, and the overall prevalence in the United States is approximately 11%. Southern states have a higher percentage of residents with an ADHD diagnosis than Western states. Amphetamine and methylphenidate are used for the longevity of their effects and their consistent efficacy across a variety of age groups, respectively. It is estimated that about 60% of children with ADHD are treated with prescription stimulants, meaning approximately 3 million children in the United States are being treated with some form of stimulant for ADHD (29). This further supports the relationship between increased ADHD diagnoses and increased prescription stimulant sales in this country. Stimulant medication is the superior treatment for the vast majority of the population diagnosed with this disorder. It is estimated that the "cost of illness" for an individual with ADHD is $14,576 each year. This would equate to about
$42.5 billion a year for all Americans with ADHD (30). The disorder is becoming more prevalent with each passing year, partially because of the changes in the classification of ADHD. It is fair to say that ADHD has a large role in present society and that its cost burden is rising rapidly. It is now necessary to ensure the general public is as educated as possible about this disorder and the treatment options available. ADHD was once believed to be limited to childhood because externalizing behaviors diminish with age; longitudinal studies have since disputed this belief, showing that symptoms persist into adulthood for the majority of patients (28). This helps explain the increases in stimulant sales and in ADHD prevalence as a whole. Patients are generally diagnosed in childhood or adolescence, but many continue to experience the disorder in adulthood. These patients, along with adults who are newly diagnosed, create a larger portion of the population with ADHD. With the prevalence of ADHD and the cost of this illness being as significant as they are, more studies should be done to better understand the longitudinal effects of the disorder, the treatment methods administered, and the misuse of these agents.
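A back-of-the-envelope division shows how the two cost figures relate; the Python sketch below simply restates the values quoted above and makes no claim about the population actually covered by the cited estimate.

# Implied number of affected individuals, from the cost figures quoted above.
per_person_cost_usd = 14_576      # annual "cost of illness" per individual
total_burden_usd = 42.5e9         # estimated aggregate annual burden

implied_individuals = total_burden_usd / per_person_cost_usd
print(f"Implied individuals: {implied_individuals / 1e6:.1f} million")  # ~2.9 million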

Acknowledgments
An earlier version of this manuscript was completed as part of the Readings in Basic Sciences course and was presented as part of the Geisinger Shares seminar series with Darina Lazarova, PhD. Thank you to the Drug Enforcement Administration for making these data publicly available.

References
1. Zuvekas SH, Vitiello B. Stimulant medication use in children: A 12-year perspective. Am J Psychiatry. 2012;169(2):160-166.
2. McDonald DC, Jalbert SK. Geographic variation and disparity in stimulant treatment of adults and children in the United States in 2008. Psychiatr Serv. 2013;64(11):1079-1086.
3. Konrad K, Eickhoff SB. Is the ADHD brain wired differently? A review on structural and functional connectivity in attention deficit hyperactivity disorder. Hum Brain Mapp. 2010 Jun;31(6):904-916.
4. Moffitt TE, Melchior M. Why does the worldwide prevalence of childhood attention deficit hyperactivity disorder matter? Am J Psychiatry. 2007 Oct;164(6):856-858.
5. Armstrong T. Neurodiversity: Discovering the Extraordinary Gifts of Autism, ADHD, Dyslexia, and Other Brain Differences. Boston, MA: Da Capo Lifelong Books; 2010.
6. Ludolph AG, Kassubek J, Schmeck K, Glaser C, Wunderlich A, Buck AK, et al. Dopaminergic dysfunction in attention deficit hyperactivity disorder (ADHD), differences between pharmacologically treated and never treated young adults: A 3,4-dihydroxy-6-[18F]fluorophenyl-L-alanine PET study. Neuroimage. 2008 Jul;41(3):718-727.
7. Fusar-Poli P, Rubia K, Rossi G, Sartori G, Balottin U. Striatal dopamine transporter alterations in ADHD: Pathophysiology or adaptation to psychostimulants? Am J Psychiatry. 2012 Mar;169(3):264-272.
8. Ravishankar V, Vedaveni Chowdappa S, Benegal V, Muralidharan K. The efficacy of atomoxetine in treating adult attention deficit hyperactivity disorder (ADHD): A meta-analysis of controlled trials. Asian J Psychiatr. 2016 Dec;23:53-58.
9. Sinita E, Coghill D. The use of stimulant medications for non-core aspects of ADHD and in other disorders. Neuropharmacology. 2014 Dec;87:161-171.
10. Mundada V, Bem A. PP10.3-2259: Lisdexamfetamine in the children with ADHD. Eur J Paediatr Neurol. 2015;19(1):S70.
11. Sussman S, Pentz M, Spruijt-Metz D, Miller T. Misuse of "study drugs": Prevalence, consequences, and implications for policy. Subst Abuse Treat Prev Policy. 2006 Mar;1:15.
12. Schlochtermeier L, Stoy M, Schlagenhauf F, Wrase J, Park SQ, Friedel E, et al. Childhood methylphenidate treatment of ADHD and response to affective stimuli. Eur Neuropsychopharmacol. 2011 Aug;21(8):646-654.
13. Novak SP, Kroutil LA, Williams RL, Van Brunt DL. The nonmedical use of prescription ADHD medications: Results from a national internet panel. Subst Abuse Treat Prev Policy. 2007 Oct;2:32.
14. Drug Enforcement Administration (DEA). Drug scheduling; 2018 [online]. Available at https://www.dea.gov/druginfo/ds.shtml
15. U.S. Department of Justice Drug Enforcement Administration. ARCOS retail drug summary reports; 2018 [online]. Available at https://www.deadiversion.usdoj.gov/arcos/retail_drug_summary/index.html
16. Piper BJ, Shah DT, Simoyan OM, McCall KL, Nichols SD. Trends in medical use of opioids in the U.S., 2006-2016. Am J Prev Med. 2018 May;54(5):652-660.
17. U.S. Food and Drug Administration (FDA). Drug approval package: Vyvanse (lisdexamfetamine dimesylate); 2018 [online]. Available at https://www.accessdata.fda.gov/drugsatfda_docs/nda/2007/021977s000TOC.cfm
18. Visser SN, Danielson ML, Bitsko RH, Holbrook JR, Kogan MD, Ghandour RM, et al. Trends in the parent-report of health care provider-diagnosed and medicated attention-deficit/hyperactivity disorder: United States, 2003-2011. J Am Acad Child Adolesc Psychiatry. 2014 Jan;53(1):34-46.
19. Centers for Disease Control and Prevention (CDC). ADHD; 2018 [online]. Available at https://www.cdc.gov/ncbddd/adhd/index.html
20. Epstein JN, Loren REA. Changes in the definition of ADHD in DSM-5: Subtle but important. Neuropsychiatry. 2013;3(5):455-458.
21. ADHD Institute. Overview of the DSM-5 medical classification system for ADHD; 2018 [online]. Available at http://adhd-institute.com/assessment-diagnosis/diagnosis/dsm-5/
22. American Academy of Pediatrics (AAP). ADHD: Clinical practice guideline for the diagnosis, evaluation, and treatment of attention-deficit/hyperactivity disorder in children and adolescents; 2018 [online]. Available at http://pediatrics.aappublications.org/content/pediatrics/early/2011/10/14/peds.2011-2654.full.pdf
23. Frazee S. How geography drives ADHD diagnosis; 2012 [online]. Available at http://lab.express-scripts.com/lab/insights/specialized-care/how-geography-drives-adhd-diagnosis
24. Faraone SV, Spencer T, Aleardi M, Pagano C, Biederman J. Meta-analysis of the efficacy of methylphenidate in treating adult attention-deficit/hyperactivity disorder. J Clin Psychopharmacol. 2004 Feb;24(1):24-29.
25. Safer DJ. Recent trends in stimulant usage. J Atten Disord. 2015 Oct;20(6):471-477.
26. Pliszka SR, Browne RG, Olvera RL, Wynne SK. A double-blind, placebo-controlled study of Adderall and methylphenidate in the treatment of attention-deficit/hyperactivity disorder. J Am Acad Child Adolesc Psychiatry. 2000;39(5):619-626.
27. Gillberg C, Melander H, von Knorring AL, Janols LO, Thernlund G, Hagglof B, et al. Long-term treatment of children with attention-deficit disorder symptoms: A randomized, double-blind, placebo-controlled trial. Arch Gen Psychiatry. 1997 Sep;54(9):857-864.
28. Kolar D, Keller A, Golfinopoulos M, Cumyn L, Syer C, Hechtman L. Treatment of adults with attention-deficit/hyperactivity disorder. Neuropsychiatr Dis Treat. 2008 Apr;4(2):389-403.
29. Lakhan SE, Kirchgessner A. Prescription stimulants in individuals with and without attention deficit hyperactivity disorder: Misuse, cognitive impact, and adverse effects. Brain Behav. 2012 Sep;2(5):661-677.
30. Holland K, Riley E. ADHD by the numbers: Facts, statistics, and you; 2014 [online]. Available at https://www.healthline.com/health/adhd/facts-statistics-infographic



Scholarly Research In Progress • Vol. 2, November 2018

Epigenetics: The Impact of Gene Expression on Neuropathic Pain, A Review
Matthew A. Jones Jr.1*

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: mjones01@som.geisinger.edu

Abstract
Peripheral neuropathic pain refers to pain associated with damage to or degeneration of peripheral nerves within the human body. These nerves can become damaged in many ways, including through infection, metabolic disease, cancer, chemotherapy, and nutritional deficiencies, among other causes. Damaged neurons can become increasingly sensitive to usually non-painful stimuli and can produce abnormal feelings and sensations in the affected areas of the body, making day-to-day life uncomfortable and sometimes unbearable for people suffering from peripheral neuropathy. Current treatment includes the use of certain antidepressants, anticonvulsants, and topical ointments. However, only 40% to 60% of patients are reported to achieve even partial relief, suggesting that favored methods of treatment are often ineffective in alleviating the symptoms associated with peripheral neuropathy. Despite the limited success of traditional treatment, there have been promising results in the realm of precision medicine, in which genetic and epigenetic approaches have contributed to the relief of pain and abnormal sensations associated with peripheral nerve damage. This review explores potential epigenetic mechanisms that impact gene expression, and ultimately peripheral neuropathic pain, in order to better understand the cellular mechanisms associated with the degeneration of neurons as well as the pain and discomfort that accompany this type of damage.

Introduction
The human body is a living, breathing, finely tuned machine that allows us to function in our day-to-day lives and to accomplish all sorts of tasks. The body's overall control and guidance come from the brain and nervous system, a network of neurons and supporting cells. In order for the brain to fulfill its role as the body's coordinator, it needs the help of structures known as nerves, which allow for the transmission of signals between the brain and various parts of the body. Nerves come in many different forms, but this paper will focus on peripheral nerves—more specifically, sensory peripheral nerves. Peripheral nerves work via the peripheral nervous system (PNS), the portion of the nervous system that exists outside of the brain and spinal cord. These nerves can be subdivided into 3 categories: motor neurons, autonomic neurons, and sensory neurons (1). Motor neurons are responsible for sending signals from the brain to the rest of the body. These neurons innervate muscles and control the movement of our bodies; if these nerves are damaged or lost, symptoms such as muscle weakness, spasms, and cramps can occur (1). Autonomic neurons control
involuntary or partially involuntary processes. Heart rate, blood pressure, and the digestion of food are all controlled by autonomic nerves; damage to these nerves can lead to irregularities of heart rate or blood pressure, among other effects (1). The final class, sensory neurons, are responsible for sending messages from the muscles and limbs back to the brain for processing. These nerves help people sense what an object feels like (e.g., rough, smooth, hot, cold). When they are damaged, numbness, tingling, and pain often result, making daily life uncomfortable for those with this type of damage (1). Peripheral neuropathy is abnormal or absent function of the peripheral nervous system. It is a blanket term covering a wide range of ailments, but it mainly involves damage to nerves in some way, whether from metabolic disorders such as diabetes or from toxins and external causes such as radiation therapy and chemotherapy (1). Such damage impairs signal transduction along the nerve, which can produce the abnormal feelings and sensations associated with peripheral neuropathy. The disease can be further classified into mononeuropathies, which affect a single nerve, and polyneuropathies, which affect multiple nerves. Mononeuropathy is exemplified by disorders such as carpal tunnel syndrome, in which a single nerve is impinged; most people suffering from peripheral neuropathy, however, have a form of polyneuropathy (1). Current first-line treatments for neuropathy include tricyclic antidepressants, anticonvulsive medications, serotonin/norepinephrine reuptake inhibitors, and opioids. Many of these medications are used off-label, meaning the drug was not approved by the FDA for the treatment of peripheral neuropathy but rather for some other disease or ailment (2). It has been noted that only between 40% and 60% of patients receiving first-line treatment gain even partial relief of their pain (3, 4). Aside from the off-label use of drugs to treat symptoms, prescribed opioids are addictive and can cause lifelong problems in addition to neuropathy if their use is not carefully monitored and maintained appropriately.

Effect of gene expression on neuropathic pain
Epigenetic modification has proven to be a useful tool in the treatment of various diseases, including several forms of cancer. Epigenetic modifications do not alter the genetic code itself but do alter the expression patterns of associated genes. This type of treatment could prove useful, as it would address the underlying cause of neuropathy (failed signal transduction along the nerve) rather than the symptoms, which is what first-line treatments aim to attenuate. This paper will focus on potential epigenetic treatments that can help relieve the pain associated with peripheral neuropathy and ultimately improve the quality of life of those suffering from this affliction.



Various targets and pathways are of interest for treating the pain associated with peripheral neuropathy, many of which focus on metabotropic glutamate receptor 2 (mGlu2). The mGlu receptors, including the mGlu2 subtype, are widely distributed along the neural axis of pain, and it is thought that upregulation of these receptors can produce analgesic effects (5). mGlu2 receptors negatively regulate neurotransmitter (NT) release from primary afferent fibers, thus preventing the transmission of the pain signal back to the brain for processing (6). The mGlu2 receptors are associated with Gi proteins and are transcriptionally regulated by the NF-κB pathway (5, 6). mGlu2 receptors are thought to "depress pain transmission at synapses between primary afferent fibers and second order sensory neurons in the dorsal horn of the spinal cord" (5). Zammataro et al. asked whether mGlu2 receptor agonists could produce this analgesic effect in mGlu2+ and mGlu2− mice. LY379268, an mGlu2 receptor agonist, was administered at a dose of 3 mg/kg via an intraperitoneal route (5). Analgesic effects were measured via a formalin test, in which mice were injected in a hind paw with formalin to elicit a painful response; murine responses to paw pain include licking and lifting or shaking of the paw. LY379268 produced analgesia in mice, but the effect was short-lived, and tolerance to the analgesic effect formed within 5 days of continuous treatment (5). These drugs bind to the receptor to elicit a response (hence, agonists). However, what if the receptors themselves were more abundant—could that make a difference? In a study conducted by Chiechio et al., the role of mGlu2 receptors in the dorsal horn of the spinal cord and dorsal root ganglion (DRG) was analyzed, and the researchers asked whether histone deacetylases (HDACs) contribute to the pain associated with peripheral neuropathy (6). Two compounds were examined: MS-275 (dosed at 3 mg/kg) and SAHA (dosed at 5 mg/kg or 50 mg/kg), both histone deacetylase inhibitors (HDACi) administered subcutaneously. Since histone acetylation is generally associated with gene transcription, the idea was that inhibiting deacetylation would increase transcription of mGlu2 receptors, leading to an analgesic effect (6). HDACs can be broken down into several classes, but this paper is mainly concerned with Class I HDACs, specifically HDAC1 and HDAC2, which are known to deacetylate the p65/NF-κB complex and are both found in the dorsal horn and DRG of mice; acetylation of this complex at lysine 310 is needed for full transcriptional activity of its target gene(s) (6, 7). In order to examine the potential analgesic effects of these drugs, a formalin test similar to the one mentioned above was used. Mice were subjected to this test 30 minutes or 24 hours after a single injection of the drug, or 24 hours
after a 5-day administration of the drug. The test was broken into two phases: phase I, which measured acute pain and was done immediately after formalin injection, and phase II, which measured chronic pain and was done 20 to 45 minutes post-injection (6). A single dose of MS-275 (3 mg/kg) or SAHA (50 mg/kg) given 30 minutes or 24 hours before formalin testing did not affect the pain response seen in the mice (Figure 1A and 1B) (6). However, administration of these same doses over a 5-day regimen before formalin testing reduced pain in phase II of the formalin test, indicating that these compounds affect chronic pain and that prolonged inhibition of HDACs is needed to produce the desired analgesic response (6). SAHA was equally effective at the lower dose of 5 mg/kg (Figure 1). Administration of these drugs did indeed increase the expression of mGlu2 receptors in the dorsal horn, but did not affect other mGlu receptors (Figure 2) (6). These findings suggest that mGlu2 receptor expression in the dorsal horn and DRG may be responsible for analgesic effects in models of neuropathic pain. Supporting this hypothesis, an mGlu2 receptor antagonist, LY341495, reversed the analgesic effects provided by the HDACis MS-275 and SAHA (6).
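To put the weight-based doses quoted in these studies into perspective, they can be converted to absolute per-animal amounts; the Python sketch below assumes a typical 25 g laboratory mouse, a body weight chosen purely for illustration rather than taken from the papers.

# Convert the mg/kg doses quoted above into absolute per-animal amounts.
# The 25 g body weight is an assumed, typical value for a laboratory mouse.
MOUSE_WEIGHT_KG = 0.025

doses_mg_per_kg = {
    "LY379268 (mGlu2 agonist)": 3,
    "MS-275 (HDAC inhibitor)": 3,
    "SAHA (low dose)": 5,
    "SAHA (high dose)": 50,
}

for drug, dose in doses_mg_per_kg.items():
    print(f"{drug}: {dose} mg/kg -> {dose * MOUSE_WEIGHT_KG:.3f} mg per mouse")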

Figure 1. (A) Analysis of formalin testing from a single dose of MS-275 (3 mg/kg). (B) Analysis of formalin testing from a single dose of SAHA (50 mg/kg). (C) Analysis of formalin testing from a single dose of MS-275 (3 mg/kg). (D) Analysis of formalin testing for a single dose of SAHA (5 mg/kg and 50 mg/kg). Figure adapted from Chiechio et al. (6)


Figure 2. (B) Western blot analysis of mGlu class receptors in the dorsal horn of the lumbar spine of mice in response to MS-275 (3 mg/kg). (C) Analysis of receptors in the dorsal horn in response to both doses of SAHA (5 mg/kg or 50 mg/kg). Note that only expression of mGlu2 was affected. Figure adapted from Chiechio et al. (6)

A study conducted by Notartomaso et al. found that L-acetylcarnitine (LAC) elicited a similar response, also via increased expression of mGlu2 receptors. As in the study described above, LAC increased mGlu2 receptor expression in the DRG and dorsal horn of the spinal cord via increased acetylation of the p65/Rel complex (8). LAC was effective at doses of 100 mg/kg once a day for 7 days, but not when administered as a single dose or over a 3-day period (Figure 3A). LAC also produced an analgesic effect that persisted 7 days after treatment with the drug ended (Figure 3B) (8). Expression of mGlu2 receptors in the DRG and dorsal horn was likewise seen to persist 7 days post-drug withdrawal (Figure 4), supporting the hypothesis that increased expression of the mGlu2 receptor is involved in the analgesia produced by drugs like MS-275, SAHA, and LAC (6, 8). Metabotropic glutamate receptors are not the only target of interest when it comes to epigenetic modification as a treatment for neuropathic pain. Melatonin (MLT), the all-too-familiar over-the-counter sleep aid, is another drug of
interest (9). Injury that results in neuropathic pain leads to a decrease in protein phosphatase 2A (PP2A) in the spinal nerves. PP2A is a serine/threonine-specific phosphatase that normally regulates the protein HDAC4 by dephosphorylating critical residues, maintaining HDAC4's activity inside the nucleus. As levels of PP2A decline, HDAC4 is phosphorylated and subsequently transferred to the cytosol. HMGB1 is a gene normally silenced by HDAC4; departure of HDAC4 from the nucleus allows expression of HMGB1 to occur, resulting in the allodynia (pain from normally non-painful stimuli) and hyperalgesia (increased sensitivity to pain) associated with neuropathy (9). When MLT is administered directly to spinal nerves, the drug acts on MT2 receptors located on the cell surface of these neurons. MLT suppresses the expression of the HMGB1 gene, which is thought to underlie the analgesic effect associated with MLT (Figure 5) (9). HMGB1 encodes a chromatin protein thought to help control nuclear structure and, ultimately, gene expression. HMGB1 is also thought to be linked to the inflammatory response in tissues and may contribute to the clinical phenotypes discussed above.
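The regulatory chain just described reduces to a small piece of boolean logic; the Python sketch below is a conceptual summary of the reported relationships, not a biochemical simulation, and the function name is ours.

# Conceptual sketch of the pathway described above: active PP2A keeps HDAC4
# dephosphorylated and nuclear, where it silences HMGB1; nerve injury lowers
# PP2A, HDAC4 exits the nucleus, and HMGB1 is expressed (allodynia and
# hyperalgesia). Melatonin is reported to restore HMGB1 suppression.
def hmgb1_expressed(pp2a_active: bool, melatonin_given: bool) -> bool:
    hdac4_in_nucleus = pp2a_active          # dephosphorylated HDAC4 stays nuclear
    silenced = hdac4_in_nucleus or melatonin_given
    return not silenced

assert not hmgb1_expressed(pp2a_active=True, melatonin_given=False)   # healthy nerve
assert hmgb1_expressed(pp2a_active=False, melatonin_given=False)      # injury: pain phenotype
assert not hmgb1_expressed(pp2a_active=False, melatonin_given=True)   # MLT restores silencing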


Figure 3. (A) Mice injected with CFA and saline showed no difference in pain, whereas mice treated with CFA and LAC showed a marked difference when given a 7-day treatment. (B) LAC produced analgesic effects that persisted both 1 hour and 7 days post-drug withdrawal (only when administered with CFA). Mechanical threshold (y-axis) refers to the amount of pain the mice can withstand; it was measured in grams (g) based on bending of a wire filament that elicited at least 3 pain responses. Figure adapted from Notartomaso et al. (8)


Figure 4. (A) Protein analysis of mGlu2 receptors showed an increase in expression after both the 1-hour and 7-day treatments with LAC (100 mg/kg). "CFA i.pl." indicates that mice were also injected intraplantarly with CFA. Figure adapted from Notartomaso et al. (8)

Figure 5. Administration of MLT (SNL7D-MLT) was shown to decrease HMGB1 expression and cause analgesic effects in models of neuropathic pain. Figure adapted from Lin et al. (9)

Conclusion
Overall, we have seen the role that epigenetics can play in the treatment of neuropathic and inflammatory pain. By targeting specific mechanisms that control gene expression, researchers produced analgesia in mouse models, highlighting the promise of precision medicine. The roles of HDACs are far-reaching, and epigenetic modulation of HDACs has produced results in models of neuropathic and inflammatory pain, as seen in this review, as well as in treatments of various cancers (5, 6, 8-12). These studies serve as a benchmark for the future application of precision medicine in human patients and show promising results in the treatment of human disease. Future application of epigenetic and genetic approaches can allow for targeted, personalized treatment for patients based on the profile of the disease or ailment with which they are affected. Further studies in the realm of epigenetics may provide a plethora of treatments—treatments designed for specific individuals or disease profiles that could eliminate the need for some current first-line treatments, which can be toxic and carry a wide range of side effects. Though this is a relatively young area of science, genetic and epigenetic research have deepened our understanding of how our bodies function. By manipulating the processes and pathways that operate our bodies, we may find better ways to treat disease and injury, and perhaps even eradicate some altogether.

Acknowledgments
To Greg Shanower, PhD: Thank you for your help and guidance!

References
1. Types of Peripheral Neuropathies [Internet]. Types of Peripheral Neuropathy - Inflammatory - Guillain-Barré Syndrome / Acute Inflammatory Demyelinating Polyneuropathy (AIDP). [cited 2017 Dec]. Available from: http://peripheralneuropathycenter.uchicago.edu/learnaboutpn/typesofpn/index.shtml
2. Taylor DC. What is neuropathic pain? Treatment, medication, definition [Internet]. MedicineNet. [cited 2017 Dec]. Available from: https://www.medicinenet.com/neuropathic_pain_nerve_pain/article.htm
3. Dworkin RH, O'Connor AB, Backonja M, Farrar JT, Finnerup NB, Jensen TS, et al. Pharmacologic management of neuropathic pain: Evidence-based recommendations. Pain. 2007;132:237-251. Available from: https://www.ncbi.nlm.nih.gov/pubmed/17920770
4. Neuropathic pain [Internet]. Wikipedia. Wikimedia Foundation; [cited 2017 Dec 16]. Available from: https://en.wikipedia.org/wiki/Neuropathic_pain
5. Zammataro M, Chiechio S, Montana MC, Traficante A, Copani A, Nicoletti F, et al. mGlu2 metabotropic glutamate receptors restrain inflammatory pain and mediate the analgesic activity of dual mGlu2/mGlu3 receptor agonists. Molecular Pain. 2011;7. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3030510/
6. Chiechio S, Zammataro M, Morales ME, Busceti CL, Drago F, Gereau RW, et al. Epigenetic modulation of mGlu2 receptors by histone deacetylase inhibitors in the treatment of inflammatory pain. Molecular Pharmacology. 2009 Feb;75:1014-1020. Available from: https://www.ncbi.nlm.nih.gov/pubmed/19255242
7. Ashburner BP, Westerheide SD, Baldwin AS. The p65 (RelA) subunit of NF-κB interacts with the histone deacetylase (HDAC) corepressors HDAC1 and HDAC2 to negatively regulate gene expression. Molecular and Cellular Biology. 2001;21:7065-7077. Available from: https://www.ncbi.nlm.nih.gov/pubmed/11564889
8. Notartomaso S, Mascio G, Bernabucci M, Zappulla C, Scarselli P, Cannella M, et al. Analgesia induced by the epigenetic drug, L-acetylcarnitine, outlasts the end of treatment in mouse models of chronic inflammatory and neuropathic pain. Molecular Pain. 2017;13:174480691769700. Available from: https://www.ncbi.nlm.nih.gov/pubmed/28326943
9. Lin T-B, Hsieh M-C, Lai C-Y, Cheng J-K, Wang H-H, Chau Y-P, et al. Melatonin relieves neuropathic allodynia through spinal MT2-enhanced PP2Ac and downstream HDAC4 shuttling-dependent epigenetic modification of HMGB1 transcription. Journal of Pineal Research. 2016;60:263-276. Available from: https://www.ncbi.nlm.nih.gov/pubmed/26732138
10. Ropero S, Esteller M. The role of histone deacetylases (HDACs) in human cancer. Molecular Oncology. 2007 Jul;1:19-25. Available from: https://www.ncbi.nlm.nih.gov/pubmed/19383284
11. Tate CR, Rhodes LV, Segar HC, Driver JL, Pounder FN, Burow ME, et al. Targeting triple-negative breast cancer cells with the histone deacetylase inhibitor panobinostat. Breast Cancer Research. 2012;14. Available from: https://www.ncbi.nlm.nih.gov/pubmed/22613095
12. Descalzi G, Ikegami D, Ushijima T, Nestler EJ, Zachariou V, Narita M. Epigenetic mechanisms of chronic pain. Trends in Neurosciences. 2015;38(4):237-246. Available from: https://www.ncbi.nlm.nih.gov/pubmed/25765319


Scholarly Research In Progress • Vol. 2, November 2018

Past, Present, and Future of Lie Detection
Mystie Chen1, Robert Gordon1, Katherine Shoemaker1*, and Jessica Wang1

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: kshoemaker01@som.geisinger.edu

Abstract
Lie detection methods seem to be as old as civilization, and the pursuit has been aided in recent years by technological advancements. An accurate method to distinguish a lie from the truth could play a central role in legal and business proceedings as well as national security. This paper reviews the efficacy, limitations, and future directions of previous and current methods. Existing forms of lie detection rely on autonomic responses, but these simply detect a stressed state not exclusive to deception. Despite extensive neuroimaging studies to map the neural correlates of deception, particularly those using fMRI, results have been inconsistent and accompanied by extensive limitations. Although advancements in technology and research methods may hold an exciting future for the field of lie detection, current methods are not yet accurate enough to be utilized as standalone techniques to determine deception.

Introduction
History of lie detection
Despite their negative connotations, lies are, and have long been, a part of human society. In its most innocuous form, deception can even be beneficial, smoothing over the wrinkles of interpersonal relationships or adding flavor to life. However, certain lies can be more insidious and potentially dangerous to society. Thus, it is no surprise that the study of deception, and attempts to detect it, are no stranger to mankind. The first documented lie detection took place around 1000 BC in China (1). The accused was put on trial by being forced to hold a handful of dry rice in his or her mouth while listening to the circumstances of the case. The accused then had to spit out the rice, and an examiner declared the moisture state of the rice. If the rice was wet, the accused was deemed innocent; if the rice was dry, the suspect was guilty, on the sole "evidence" that a mouth without saliva is associated with a nervous, and thus guilty, person (2). The next milestone occurred around 300 BC, when the Greek physician and anatomist Erasistratus of Ceos linked the pulse with different mental states as a meter for veracity (3). The next development came between the 11th and 15th centuries with trial by ordeal, a technique based on the belief that the innocent are protected by God while the guilty are left to suffer. The accused were subjected to grueling tasks such as holding a hand in a boiling cauldron, walking on hot coals, being thrown into cold water while inside a tied sack, or carrying a burning object (2). As testimony of their innocence, suspects were expected to emerge from these tasks harm-free; if unable to do so, they were deemed guilty. In 1796, the field of phrenology was popularized by Franz Gall, who believed that skull morphologies are responsible
for a person's abilities and character (4). The more concave and convex areas of the skull were thought to indicate stronger personality traits associated with those locations, so individuals with skull grooves in areas thought to be linked with "the tendency to lie…and engage in criminal behavior" were prosecuted as guilty (2). Later, in 1869, J.H. Michon introduced graphology as an alternative to phrenology for understanding the mind. Michon held that handwriting irregularities and the physical flow of writers' hands could reveal aspects of their character (5). Although both graphology and phrenology turned out to be unsubstantiated by science, they played an important role in the evolution of lie detection by disseminating the idea that the anatomy and physiology of a criminal's brain could be responsible for his or her behaviors and that these abnormalities should be studied scientifically (2).

Autonomic nervous system-based indicators of lie detection
The physiological responses that occur with activation of the autonomic nervous system have long been used as indicators of deception. When the sympathetic nervous system is activated, as during the stress of lying, there is increased sweating, elevated heart rate, pupil dilation, and increased blood pressure (6). The techniques built on this concept incorporate measures of electrodermal activity, heart rate, respiration, and blood pressure (7).

Polygraph
The polygraph was the first instrument designed to detect the body's changes during the process of lying. The first version of the polygraph measured blood pressure, respiratory rate, and conductance of the skin (2). Through many revisions, the modern detector picks up both somatic activity (motion sensors to detect skeletal muscle movement) and activity influenced by the autonomic nervous system (pulse, blood pressure, respiration, and skin resistance) (8). Respiration is assessed using a thoracic-to-diaphragmatic ratio, which has been shown to be a good measure of changes in emotion, particularly stress (9).

Voice stress analysis
Voice stress analysis (VSA) aims to detect fluctuations in the microtremors of speech that occur when someone is lying (2). The parameters of voice tension examined include frequency, pitch, intensity, and sampling rate (10). The concept behind this technique is that the psychological stress of lying has physiological consequences: activation of the sympathetic nervous system during stress causes vasoconstriction of the vasculature surrounding the vocal cords, which may affect the characteristics of a person's voice (11). VSA has become very commercialized, and anyone is able to purchase voice analysis software. However, according to PoliceOne.com, studies have shown
that VSA is not effective in detecting deception, and it is even banned in some states (12).

Eye tracking
Eye tracking is based on the theory that people who are lying may take longer to read questions and/or may have dilated pupils. A person is asked a series of questions while looking into a camera connected to a computer. An infrared camera is used to track eye movement, pupil dilation, blinks, reading time, and responses (10). Sympathetic activation during the stress incurred while telling a lie causes pupillary dilation. A study conducted in 2012 found that reading behaviors—in particular, focus on oculomotor measures—may be helpful in detecting deception and could provide an alternative to the polygraph (13).

Thermal imaging
Thermal imaging for lie detection involves using infrared cameras to detect changes in skin temperature, which may change at a micro-level when a person is lying. The camera focuses on the periorbital area because of the concentrated vasculature in this region (14). Thermal imaging has been investigated for detecting deception in airport screening; a study conducted in 2011 found that interviewers' judgments of whether someone was lying were 10% more accurate without the use of thermal imaging (15). Further research on this method is limited. Despite the common use of these autonomic nervous system-based indicators, both in the media and in real life, these forms of lie detection are not free of controversy. For instance, people attempting to deceive are characterized by different types of autonomic changes, and those who do experience similar changes may present them to varying degrees (7). Furthermore, many of these physiological changes are also seen in people undergoing physical activity rather than telling a lie (7). Most noteworthy, scientists have voiced concerns that these lie detectors pick up changes in the body that are only thought to be linked with lying; there has been no scientific evidence to unequivocally support this belief (8). Thus, while these instruments are still in use today, the data they output should not be freely interpreted without caution.

Brain-based techniques of lie detection
Brain-based techniques of lie detection allow researchers to measure the central nervous system rather than the peripheral nervous system (16). An advantage they have over techniques that detect stress-induced changes in the autonomic nervous system is that they measure neural activity, which is not as easily manipulated as the responses previously discussed. This reduces the possibility that emotional and mental stress will elicit a false-positive result.

Electroencephalogram (EEG)
Brain fingerprinting measures brain wave data collected by EEG to detect deception. In this technique, sensors are placed on a person's scalp as with a standard EEG, and stimuli are presented. The stimuli include words, phrases, or pictures and can be either relevant or irrelevant to the situation in question (17). Recognition of a stimulus can lead to a change
in the brainwave pattern of the subject; this specific brain wave pattern is called the P300-MERMER. Brain fingerprinting evaluates only whether the information being tested is present in the subject's brain, via analysis of the brain waves; the computer also provides a statistical confidence value regarding whether the subject has seen the information presented before. A study by Farwell in 2012 found that brain fingerprinting is highly resistant to countermeasures and has been ruled admissible in court (17).

Functional transcranial Doppler sonography
Functional transcranial Doppler sonography (fTCD) uses ultrasound waves to measure the velocity of blood flow through blood vessels in the brain and is typically used in conjunction with fMRI. Blood flow velocity is associated with changes in neural activity and therefore should vary when a person is telling a lie. The technique has been used to study cerebral lateralization of other brain functions—for instance, language, face or color processing, and intelligence, where blood flow is more active on one side of the brain. fTCD measures blood flow velocity while people answer cognitively complex questions. A 1999 study by Schmidt et al. found that fTCD was able to detect blood flow lateralization to the right hemisphere in right-handed individuals who were subjected to "complex cognitive visuospatial tasks" (18).

Functional near-infrared spectroscopy
Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique that detects changes in oxygenation and brain activity, similar to fMRI. According to Scarapicchia et al., a limitation of fNIRS is that it requires complex mathematical and computational analysis of the collected signals, making the results difficult for some people to read (19). Ding et al. further point out that fNIRS provides lower spatial resolution than fMRI, but it has the advantage of being a more portable technique. A study (20) using fNIRS as a deception detection technique found results similar to those of Kozel et al. (21), in which the prefrontal cortex was activated during deception.

Functional magnetic resonance imaging
Traditional MRI elucidates structural imaging of the brain, but functional magnetic resonance imaging (fMRI) can also identify areas of the brain that are active, using BOLD (blood oxygen level-dependent) imaging (19). A regular MRI machine is used in this test, but instead of simply lying still, the subject is asked to perform a task that causes specific areas of the brain to light up (22). As in other tissues, oxygen is transported to neurons via hemoglobin; increased neuronal activity increases the demand for oxygen, and the subsequent result is an increase of blood flow to these regions. The differing magnetic properties of oxygenated and deoxygenated hemoglobin produce variations in the MR signal that depend on the oxygenation level of blood (22). The use of fMRI for the detection of deception relies on this association between blood flow in the brain and neuronal activation. An fMRI is first recorded to be used as a baseline and subtracted from a subsequent fMRI recording taken while someone is lying. This is called cognitive subtraction (16) and allows the researcher to identify
the areas of the brain that are activated during deception. Studies using fMRI for lie detection have shown activation of the prefrontal cortex and anterior cingulate cortex (21); it is thought that these regions are activated to suppress the response to give a truthful answer. There are numerous limitations to using neuroimaging for lie detection at this time. First, it requires a willing participant (23). If the subject refuses to answer the question, gives responses that do not answer the question, or moves around during scanning, the collected data could be rendered unusable. This type of countermeasure can also activate other areas of the brain, confounding the results (24). This limitation calls into question how practical neuroimaging would be in a real-life situation. Another limitation is that in published studies on the use of fMRI for lie detection, the subjects are always healthy volunteers screened for neurological or psychiatric disorders, and therefore are not representative of the general population. A third limitation is that a delusion may not appear as a deception (23): in cases of psychosis and dementia, people may believe a delusion that is not true, and it is unclear how this affects neuroimaging. Similarly, the effect of rehearsed versus spontaneous lies on neuroimaging is also unclear at this time (23). Frontal lobe activation observed in lie detection studies may be related to the suppression of competing responses, one of the components of lying; more research is needed to examine how this suppression affects brain activity. In addition, the "lie center" of the brain has not yet been identified. There is research showing that certain areas of the brain activate during deception, but it has not been elucidated whether this activation is, in fact, specific to deception (24).

Empirical evidence for fMRI as a lie-detection technology
Despite the limitations, fMRI is the most studied of the autonomic- and brain-based methods previously discussed for its applicability to lie detection. For an in-depth example of the technique and how its output translates to the identification of a lie, we have referenced the paper "Detecting deception using functional magnetic resonance imaging," authored by Kozel, Johnson, Mu, Grenesko, Laken, and George (21). This study was chosen to illustrate fMRI methodology because it sought to identify deception in individual fMRI readings, as opposed to a combined group of outputs. To do so, Kozel's group increased the signal-to-noise ratio—a hallmark issue of MRI and a particular challenge of fMRI—by filtering out neural activity not associated with the target action (25). They formed a model-building group (MBG), used to identify which brain areas should be focused on, and a model-testing group (MTG), in which they specifically examined those areas of interest to determine when a participant was lying.

Methodology
The study used a variation of the guilty knowledge test (also referred to as the concealed information test), the core concept of which is that participants react differentially when asked about information they are attempting to conceal.
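This differential-response idea can be illustrated with a toy scoring rule in Python; the item names, response values, and threshold below are invented for illustration and are not taken from the study.

# Toy sketch of the guilty knowledge test logic: a concealed ("probe") item
# should evoke a stronger response than irrelevant items. Numbers are invented.
responses = {"ring": 2.8, "watch": 0.9, "wallet": 1.1, "keys": 1.0}  # arbitrary units

baseline = sorted(responses.values())[len(responses) // 2]   # median as baseline
flagged = [item for item, r in responses.items() if r > 2 * baseline]
print("items likely concealed:", flagged)                    # ['ring']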

In both the MBG and MTG, participants were instructed to "steal" a ring or a watch from a drawer and to conceal that item with their possessions before moving to the fMRI apparatus. While in the fMRI machine, each participant was asked 4 sets of 20 questions: neutrals (such as "Are you awake?"), controls ("Have you ever been arrested?"), watch ("Did you take the watch from the drawer?"), and ring ("Did you take the ring from the drawer?"). Each was required to lie about stealing the item he or she took and to answer the other 3 question sets truthfully. Participants were promised a monetary award for engaging in countermeasures—deceptive behaviors utilized when lying, such as a quizzical look to feign confusion.

Pinpointing a lie
The authors used fMRI outputs from each question group to set up contrast-inverse t maps of brain activity. They then combined these individual maps to create group maps of the MBG, to see which areas of the brain are activated when lying, telling the truth, and answering neutral and control questions. Cognitive subtraction was employed to differentiate the outputs of the 4 question groups; these results showed brain areas activated only when lying, only when telling the truth, and at baseline. The false discovery rate (FDR) for activation was set at 5% or less, and the minimum activation volume was set at 25 voxels (3 mm³). Even within these limitations, the researchers identified 7 areas of activity across all participants, with the clusters identified as 1, 2, and 4 activated in 93% of MBG participants when lying. After collection and analysis of the MTG data, activation of the same 3 clusters identified a deceptive response in 90% of participants. Clusters with positive activation in the Lie-minus-Truth t maps in over 90% of participants represented areas in the right anterior cingulate cortex (cluster 1), right middle frontal/dorsolateral prefrontal cortex (DLPFC) (cluster 2), and right orbitofrontal/inferior frontal cortex (cluster 4). A challenge of fMRI as a lie detection technology is determining which areas of the brain correlate with deception. The anterior cingulate finds its way into discussions of many cognitive processes requiring conflict management, with evidence of activation during Stroop (26) and continuous performance (27) tests. Kozel et al. similarly identify the anterior cingulate as active in monitoring a deceptive response throughout the prefrontal cortex. The authors assign the anterior cingulate cortex, orbitofrontal cortex, and DLPFC collectively to higher-order processing, response inhibition, and decision-making tasks. Their application to the process of deception may involve suppressing the truth, concealing any telling emotional response, and fabricating plausible, supportable context for the lie.

Further considerations
Taking into account the high percentage of activation in these clusters and the agreement within the neuroscience community regarding the function of those areas, along with their 90% success rate at identifying the item participants stole, the authors were able to confirm their hypothesis that lie detection is possible at the individual level using fMRI.
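As a rough sketch of the two thresholds described under "Pinpointing a lie" (an FDR of 5% and a 25-voxel minimum cluster volume), the Python snippet below applies a Benjamini-Hochberg screen and a cluster-size filter to synthetic p-values; none of this code or data comes from the study itself.

# FDR screen at q = 0.05 followed by a 25-voxel minimum cluster size,
# applied to synthetic voxelwise p-values (illustrative only).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
pvals = rng.uniform(size=(16, 16, 16))   # placeholder p-values
pvals[4:8, 4:8, 4:8] = 1e-6              # one strongly "active" region

flat = np.sort(pvals.ravel())
m = flat.size
passing = flat <= 0.05 * np.arange(1, m + 1) / m   # Benjamini-Hochberg step-up
cutoff = flat[passing].max() if passing.any() else 0.0
survivors = pvals <= cutoff

labels, n_clusters = ndimage.label(survivors)      # contiguous clusters
sizes = np.bincount(labels.ravel())[1:]            # voxel count per cluster
kept = int((sizes >= 25).sum())
print(f"{n_clusters} cluster(s) survive FDR; {kept} also pass the 25-voxel minimum")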



Implicated in the affirmation of their hypothesis is the usefulness of fMRI as a lie detection device, although relevant and more current literature has pointed out quite a few issues with that conclusion. Some assert that pure insertion (the assumption that there is one isolated brain area associated with a cognitive function, such as Broca's area being attributed to speech production) has no credence in modern neural correlate studies (28). Others maintain that measures necessary in fMRI, such as keeping still and answering questions only with a nonverbal yes/no (29), may activate confounding pathways in the brain. Inherent flaws in fMRI, such as low percent signal changes, may further contribute to false-positive data. One last and somewhat less empirical objection concerns the definition of truth as a neural correlate: if it is not possible to pinpoint one pathway always involved in telling a truth of any level of complexity, how can the reverse be true of deception (30)? Added to other, more generalized concerns about individual (gender, race, age) brain differences, these objections seriously detract from what may nonetheless be the best method available for lie detection. Also of note for the Kozel et al. paper: funding was provided in part through the Department of Defense Polygraph Institute.

An accurate lie-detection technology is most applicable in legal matters or those of state; despite the objections to fMRI's validity in identifying deception, there is a market for products that claim to map the neural occurrence of a lie. Accordingly, recent research has continued to explore the use of fMRI as a plausible alternative to the more traditionally used polygraph (31), or for use alongside the polygraph (32), though the field is also looking beyond these two "mainstream" techniques at concepts such as micro expressions.

Micro expressions

Paul Ekman, PhD, the inspiration for the show "Lie to Me," defines a micro expression as an involuntary facial expression, lasting around 1/25 of a second, that displays the subject's true emotions (33, 34). Micro expressions were initially identified by Haggard and Isaacs as they reviewed psychotherapeutic interviews in 1967 (35). Ekman and Friesen made a similar observation in 1967 as they reviewed tapes of patients who reported they were not depressed and later attempted suicide (33). It appears that when someone attempts to repress an emotional response in order to conceal the truth under high-stakes conditions, they involuntarily flash a facial expression revealing their true emotions (36).

Potential physiological etiology

This is thought to occur as two separate pathways for facial expression, voluntary and spontaneous, compete for control. These two pathways became evident when some stroke patients displayed only emotional facial paralysis while others displayed only voluntary facial paralysis (37, 38). Stroke victims with damage to the primary motor and premotor areas may be unable to form a symmetrical smile voluntarily, yet smile symmetrically in response to emotional stimuli (38). Patients with strokes damaging the midcingulate area are capable of voluntary facial movements but cannot display emotional expressions spontaneously (38).


Five cortical motor regions have been identified that directly innervate the facial nuclei: the primary motor cortex, the premotor cortex, the supplementary cortex, and the anterior and caudal regions of the midcingulate. The anterior and caudal midcingulate regions appear to be responsible for the production of spontaneous/emotional facial expressions. The cingulate cortex is a vital component of the limbic system and receives projections from the amygdala pertinent to emotion (37, 38). The concept behind micro expressions, as it relates to lie detection, is that as someone attempts to voluntarily suppress a strong emotional response, this separate, involuntary pathway may cause his or her true emotion to appear for a fraction of a second.

Application, strengths, and limitations

The Paul Ekman Group provides training which it claims improves a person's ability to spot and recognize micro expressions. Micro expression spotting and recognition can be applied in law enforcement or national security to aid interrogations. There is also potential application in psychotherapy, to recognize when a patient is lying about an emotional state or to recognize emotions a patient may be repressing (35). One benefit of micro expression detection and recognition as a form of lie detection, as opposed to the tests previously discussed, is that it does not require subject participation beyond the bounds of a typical interrogation. Micro expression-based lie detection is also potentially cheaper than neuroimaging such as fMRI.

However, there are several limitations of micro expression-based lie detection. While a micro expression may contradict a subject's portrayed emotion, this does not directly prove deceit. The micro expression may indicate which line of questioning to probe more deeply, but determining whether the subject is outright lying relies on conclusions the interviewer draws in connecting the subject's emotional reaction to the preceding question. This is far too subjective to be used as evidence in court, but could still be useful in guiding the efforts of law enforcement. Another limitation is that these expressions happen so rapidly that they are difficult to detect in real time (39). A potential solution is to slow down recorded interviews, but this is extremely time consuming.

Future direction

Several teams have attempted to overcome this problem by using databases of posed micro expressions to design software that automatically spots and recognizes micro expressions. The resulting software was not very effective, as posed micro expressions differ from spontaneous ones. Recently, several databases of spontaneous micro expressions have been formed. To elicit a spontaneous micro expression, participants were recorded with a high-speed camera as they watched emotion-eliciting videos. The participants were instructed to maintain a neutral facial expression or they would have to fill out a lengthy survey before receiving their compensation. Afterwards, the participants were interviewed about the emotions they experienced in order to match each emotion to the micro expression captured by the camera.



The team of Li et al. utilized their own spontaneous micro expression database, SMIC, as well as CASME II to develop their automatic Micro Expression Spotting and Recognition (MESR) software. SMIC contains 164 micro expressions collected from 16 participants, and CASME II, collected by Yan et al., contains 247 micro expressions from 26 participants. The automatic MESR was more effective at recognizing micro expressions than untrained humans, with an accuracy of 81.69% compared to 72.11% ± 7.22%. However, it was less effective at the combined task of spotting and recognizing micro expressions, with an accuracy of 42.42% and a false spot rate of 22.98%, compared to the humans' 49.74% ± 8.04% accuracy and 7.31% ± 8.04% false spot rate (35). While automatic MESR software is not yet ready for the market, this is a promising start that could reduce the man-hours required to spot micro expressions and remove the subjectivity of human recognition.
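As one illustration of how the combined spotting-and-recognition task might be scored, the sketch below counts a predicted frame interval as a hit when it overlaps a ground-truth micro expression and as a false spot otherwise; these metric definitions and intervals are assumptions for demonstration and are not taken from Li et al.

    def overlaps(pred, truth):
        """True if a predicted frame interval overlaps a ground-truth interval."""
        return pred[0] <= truth[1] and truth[0] <= pred[1]

    def spot_metrics(predicted, ground_truth):
        """Return (hit rate, false spot rate) for interval predictions.
        A prediction matching no true micro expression counts as a false spot."""
        hits = sum(any(overlaps(p, t) for p in predicted) for t in ground_truth)
        false_spots = sum(not any(overlaps(p, t) for t in ground_truth)
                          for p in predicted)
        return hits / len(ground_truth), false_spots / len(predicted)

    # Frame intervals (start, end) on a hypothetical high-speed recording.
    truth = [(120, 125), (480, 484), (900, 903)]
    pred = [(119, 126), (700, 704), (901, 904)]
    hit_rate, false_rate = spot_metrics(pred, truth)
    print(f"hit rate {hit_rate:.2f}, false spot rate {false_rate:.2f}")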

Conclusion

A method to accurately detect a lie has long been sought, but some current techniques seem to have as little validity as those from the years BC. Methods that rely on an autonomic response, such as the polygraph, detect physiological stress but are inherently flawed, as the subject's mental and emotional state can alter the test results. Among the many autonomic- and brain-based lie detection technologies available, functional MRI has received the most academic attention, but it is currently viewed as an inexact science despite the identification of neural substrates associated with lying. Using fMRI and the polygraph in combination may present a more complete solution. Outside the imaging realm, micro expressions may be useful for detecting lies but would require extensive man-hours; software advancements may make this a more efficient option in the future. While the accuracy of lie detection methods may be grossly overestimated in the media, advancements in neurological research and technology may eventually make this perception a reality.

Acknowledgments

A previous version of this paper was written for the Foundations of Neuroscience class led by Brian Piper, PhD, at Geisinger Commonwealth School of Medicine. We would like to thank Dr. Piper for his direction and assistance.

References

1. Ford EB. Lie detection: historical, neuropsychiatric and legal dimensions. Int J Law Psychiatry. 2006;29(3):159-77.
2. Vicianova M. Historical techniques of lie detection. Eur J Psychol. 2015;11(3):522-34.
3. Trovillo PV. A history of lie detection. J Crim Law Criminol. 1939;29(6):848-81.
4. Clarke E. The history of the neurological sciences. Proc R Soc Med. 1970;63(1):21-3.
5. Blinkhorn SF. The writing is on the wall. Nature. 1993;366(6452):208.
6. Autonomic nervous system introduction. Dantest Medeia Inc [Internet]. [cited 2018 Apr 22]. Available from: http://www.dantest.com/dtr_ans_overview.htm
7. Meijer EH, Verschuere B, Gamer M, Merckelbach H, Ben-Shakhar G. Deception detection with behavioral, autonomic, and neural measures: conceptual and methodological considerations that warrant modesty. Psychophysiology. 2016 May;53:593-604.
8. Dahlager E, Darby T, Lee C, Madsen M. The use of the polygraph test for Brendan Dassey in Making a Murderer [Internet]. Cornell University Interactions of Social Science and the Law. 2010 [cited 2018 Apr 22]. Available from: https://courses2.cit.cornell.edu/sociallaw/MakingAMurderer/PolygraphtestDassey.html
9. Lewis JA, Cuppari M. The polygraph: the truth lies within. J Am Acad Psychiatry Law. 2009;37(1):85-92.
10. Bhuvaneswari P, Kumar JS. A note on methods used for deception analysis and influence of thinking stimulus in deception detection. Int J Eng Technol Sci Innov. 2015;7(1):109-16.
11. McShane J. Voice stress analysis challenges. The Truth About Forensic Science [Internet]. 2013 [cited 2018 Apr 14]. Available from: https://www.thetruthaboutforensicscience.com/voice-stress-analysis-challenges/
12. Police using voice stress analysis to detect lies. PoliceOne.com [Internet]. 2002 Feb 10 [cited 2018 Apr 4]. Available from: https://www.policeone.com/investigations/articles/48102-Police-Using-Voice-Stress-Analysis-to-Detect-Lies/
13. Cook AE, Hacker DJ, Webb AK, Osher D, Kristjansson S, Woltz DJ, et al. Lyin' eyes: oculo-motor measures of reading reveal deception. J Exp Psychol Appl. 2012 Sep;18(3):301-13.
14. Thermal imaging and lie detection – a task for computer vision. Zbigatron [Internet]. Available from: http://zbigatron.com/thermal-imaging-and-lie-detection-a-task-for-computer-vision/
15. Warmelink L, Vrij A, Mann S, Leal S, Forrester D, Fisher RP. Thermal imaging as a lie detection tool at airports. Law Hum Behav. 2011;35(1):40-8.
16. Langleben D, Campbell Moriarty J. Using brain imaging for lie detection: where science, law, and research policy collide. Psychol Public Policy Law. 2012;19(2):222-34.
17. Farwell LA. Brain fingerprinting: a comprehensive tutorial review of detection of concealed information with event-related brain potentials. Cogn Neurodyn. 2012 Apr;6(2):115-54.
18. Schmidt P, Krings T, Willmes K, Roessler F, Reul J, Thron A. Determination of cognitive hemispheric lateralization by "functional" transcranial Doppler cross-validated by functional MRI. Stroke. 1999 May;30(5):939-45.
19. Scarapicchia V, Brown C, Mayo C, Gawryluk JR. Functional magnetic resonance imaging and functional near-infrared spectroscopy: insights from combined recording studies. Front Hum Neurosci. 2017 Aug 18;11:419.
20. Ding XP, Sai L, Fu G, Liu J, Lee K. Neural correlates of second-order verbal deception: a functional near-infrared spectroscopy (fNIRS) study. Neuroimage. 2014;87:505-14.
21. Kozel FA, Johnson KA, Mu Q, Grenesko EL, Laken SJ, George MS. Detecting deception using functional magnetic resonance imaging. Biol Psychiatry. 2005;58(8):605-13.
22. Introduction to fMRI. Nuffield Department of Clinical Neurosciences [Internet]. 2018 [cited 2018 Apr 5]. Available from: https://www.ndcn.ox.ac.uk/divisions/fmrib/what-is-fmri/introduction-to-fmri
23. Simpson JR. Functional MRI lie detection: too good to be true? J Am Acad Psychiatry Law. 2008;36:491-98.
24. Detecting lies with fMRI. Neuroscientifically Challenged [Internet]. 2014 Dec 12 [cited 2018 Apr 4]. Available from: http://www.neuroscientificallychallenged.com/blog/detecting-lies-with-fmri
25. Welvaert M, Rosseel Y. On the definition of signal-to-noise ratio and contrast-to-noise ratio for fMRI data. PLoS One. 2013;8(11):e77089. https://doi.org/10.1371/journal.pone.0077089
26. Carter CS, van Veen V. Anterior cingulate cortex and conflict detection: an update of theory and data. Cogn Affect Behav Neurosci. 2007;7(4):367-79.
27. Carter CS, Braver TS, Barch DM, Botvinick MM, Noll D, Cohen JD. Anterior cingulate cortex, error detection, and the online monitoring of performance. Science. 1998;280(5364):747-749.
28. Friston KJ, Price CJ, Fletcher P, Moore C, Frackowiak RS, Dolan RJ. The trouble with cognitive subtraction. Neuroimage. 1996;4(2):97-104.
29. Vrij A, Fisher RP, Blank H. A cognitive approach to lie detection: a meta-analysis. Leg Crim Psychol. 2017;22(1):1-21.
30. Littlefield M, Fitzgerald D, Knudsen K, Tonks J, Dietz M. Contextualizing neuro-collaborations: reflections on a transdisciplinary fMRI lie detection experiment. Front Hum Neurosci. 2014;8:149. https://doi.org/10.3389/fnhum.2014.00149
31. Langleben DD, et al. Polygraphy and functional magnetic resonance imaging in lie detection: a controlled blind comparison using the concealed information test. J Clin Psychiatry. 2016;77(10):1372-1380.
32. Bhutta MR, Hong MJ, Kim YH, Hong KS. Single-trial lie detection using a combined fNIRS-polygraph system. Front Psychol. 2015;6:709. https://doi.org/10.3389/fpsyg.2015.00709
33. Ekman P. Darwin, deception, and facial expression. Ann N Y Acad Sci. 2003;1000(1):205-21.
34. Ekman P, Friesen WV. Nonverbal leakage and clues to deception. Psychiatry. 1969;32(1):88-106.
35. Li X, Hong X, Moilanen A, Huang X, Pfister T, Zhao G, et al. Towards reading hidden emotions: a comparative study of spontaneous micro-expression spotting and recognition methods. IEEE Trans Affect Comput. 2017;1-14.
36. Frank M, Ekman P. The ability to detect deceit generalizes across different types of high-stake lies. J Pers Soc Psychol. 1997;72(6):1429-1439.
37. Korb S, Grandjean D, Scherer KR. Investigating the production of emotional facial expressions: a combined electroencephalographic (EEG) and electromyographic (EMG) approach. IEEE Int Conf Autom Face Gesture Recognit Workshops. 2008;1-6.
38. Gothard KM. The amygdalo-motor pathways and the control of facial expressions. Front Neurosci. 2014;8:43.
39. Ekman P, Sullivan M, Frank M. A few can catch a liar. Psychol Sci. 1999;10(3):263-266.




The Association of Acculturation and Health Care Utilization

Marc Incitti1*, Ashanti Littlejohn1, Kimberly Saint Jean1, Rebhi Rabah1, Brian Sacks1, Julia Callavini1, Elizabeth Kuchinski1, and Mushfiq Tarafder1

1Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: mincitti@som.geisinger.edu

Abstract

Background: The aim of this study is to investigate the interplay between acculturation, the type of health care facility utilized, and the frequency with which these services are utilized. Despite the vast amount of research on barriers to care, there remains a disparity in certain populations between the services offered and navigating the health system. Method: In this study, acculturation was measured by assessing individuals' primary language as the degree of assimilation into the dominant culture. A secondary data analysis of information provided by the National Health and Nutrition Examination Survey (NHANES) revealed that the level of an individual's acculturation was not correlated with the frequency of health care utilization. Results: As determined by our univariate analysis, the numbers of mainly English and mainly non-English participants in our study were 3,041 and 1,141, respectively. The majority of our mainly non-English participants identified themselves as Mexican-American, other Hispanic, and non-Hispanic Asian; the frequencies of these racial categories were 69%, 72%, and 61%, respectively. Comparatively, the mainly English participants were confined to non-Hispanic white and non-Hispanic black, at 97.8% and 94.3%, respectively. Eighty-two percent of mainly non-English participants had an education level below the ninth grade, whereas 82% of mainly English participants identified as having some college or an associate's degree. On average, 64% of the participants received health care 1 to 3 times over the span of a year, and about 55% had followed up with a doctor within the last 1 to 3 years. Of all the participants, roughly 78% utilized health care, and 93% specifically reported using clinics, doctor's offices or HMOs, or other health care places as their primary place for medical attention. Through a chi-squared analysis, frequency of care was found to be associated with language; however, "last seen" was found to have no association with language spoken. Additional analysis of the frequency of care variable yielded strong evidence against our null hypothesis of independence, indicating that these variables are associated. Analysis revealed that race was a confounding variable, whereas education was not. Conclusion: Regardless of whether or not an individual was assimilated, they sought health care treatment at a clinic, a doctor's office, or an HMO site rather than going directly to a hospital. These findings refuted prevailing theories and popular beliefs that newly assimilated citizens utilize health care less often and more inappropriately than individuals of the dominant culture in the United States. Further investigation into the effects that socioeconomic status, geographic location, and insurance plans can have on appropriate health care utilization is warranted. Increasing health literacy is proposed in this paper as a method to combat the social issue of ineffective health care practices directly caused by communication barriers.

Introduction

The growing diversity of this nation is a driving force to continually assess and evaluate the goods and services it provides. Health care as a national industry has seen an overhaul in the past decade. The Patient Protection and Affordable Care Act, passed in 2010, was implemented in the hope of decreasing costs, achieving better health outcomes, and improving access to care for all Americans. Unfortunately, there remains a large subset of the US population with specific barriers to health care access, whether through affordability or through navigating the health care system. The association between the primary language spoken (acculturation) and the type of health care facility most frequented may be responsible for these barriers.

The ability to attain the correct health services at the right time can be a complicated process for English-speaking individuals, let alone non-English-speaking individuals (1). The health care system in place is not made for newly acclimated citizens, especially those who are not fully acculturated into the US (2). On average, roughly 66% of Hispanic and Asian immigrants reported not utilizing a health care facility within their first year as citizens, and there was a positive correlation between years of citizenship and health care utilization (2). One proposed reason for this trend in health care access could be the language barrier between individuals who have not yet mastered their new country's native tongue and the health care workforce. Health disparities therefore exist in the US in health care access between individuals who do and do not speak English as their primary language (3). This disparity between mainly English and non-English speakers may be attributed to levels of acculturation.

Acculturation, the anthropologic measure of integration into a foreign majority culture, is considered a determinant of health and lifespan. The primary language spoken at home is a suitable way to measure acculturation; language has been commonly and widely assessed across acculturation instruments (4). Wallace's group compiled and organized acculturation research from the past 40 years. Not only did they conclude that the assessment of language is a powerful way to measure acculturation, but it is also a predictor of differences between the language environments of ethnic minorities and the dominant culture to which they are exposed. These differences reflect a patient's acculturation level and predict the degree to which they participate effectively in the dominant culture.




An anomaly has been identified in health literacy among English- and non-English-speaking individuals. Zun et al. explored whether Hispanic citizens who speak primarily Spanish could navigate a hospital environment; even though participants could speak English, the majority did not comprehend the medical terminology spoken to them (5). In light of this information, the language barrier that deters people from seeking health care may be due to fear of being misunderstood and the ensuing miscommunication between parties. Communication breakdown can have negative consequences for these patients and their families. Health literacy has been suggested as an area of interest for beginning to tackle the miscommunication between patients and their health care teams.

Another reason to examine the association between the primary language spoken (acculturation) and the type of health care facility most frequented is to explore the need for better promotional materials to increase health literacy. Health literacy is defined by the US Agency for Healthcare Research and Quality as the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions. Health literacy is taken for granted by most Americans. Research on health literacy is important in order to prevent negligence as well as intangible forms of distress, including perceived pain and allostatic load, the culmination of the physiological effects of stressors. The capacity of individuals' physiological systems to adapt to external challenges and stressors is known as allostasis and is a necessary part of healthy functioning.

In our study, the primary goal is to determine whether there is an association between the primary language spoken (acculturation) by an adult, aged 18 to 60 years old, and the type of health care facility most frequented. If patients' primary language is not English, then hospitals may be more frequented for care by these individuals. Additionally, we aim to determine whether an association exists between the primary language spoken (acculturation) among adults aged 18 to 60 and the number of times a person received health care over the past year (frequency of care). If the primary language spoken at home is not mainly English, then we hypothesize that the number of times a doctor was consulted regarding health care will be correspondingly lower. Increasing health literacy is proposed in this paper as a method to combat the social issue of ineffective health care practices directly caused by communication barriers.

Materials and Methods

Study design and description of human subjects

The study population included participants' data collected in the National Health and Nutrition Examination Survey (NHANES), a one-time survey administered to each participant. It is important to note that the reference population for the parent study was the noninstitutionalized resident civilian population of the US: individuals older than 16 who are not members of an institution (such as criminal, mental, or other facilities) and not active-duty military personnel.


Once the intricate probability design used by NHANES to sample populations in all 50 states and the District of Columbia had selected eligible participants, data about the household, the family, and the individual were collected through various questionnaires. Of the 13,431 people who were selected, 9,756 completed the interview and 9,338 were examined (6).

Data analysis

All data used in our CHRP study were collected as secondary data from the parent study, NHANES (7). Data were stored in Microsoft Excel for efficient analysis of all variables and potential confounding variables relevant to our study. Once the data were categorized and coded in Microsoft Excel, our group used Epi Info 7 to run measures-of-association tests, including odds ratio, chi-squared, and multiple logistic regression analyses, with α = 0.05. To reiterate, we examined whether an association existed between the language spoken by adults and their choice of health care utilization. Epi Info is a software program intended for the public health community. It allows simple data entry and database compilation as well as customizable data entry, and analyses in Epi Info may consist of epidemiologic statistics, maps, and graphs. Uses of Epi Info include outbreak investigations and disease surveillance systems (6).

Variables

The two overarching variables analyzed were acculturation (measured by primary language spoken at home) and health care utilization. Health care utilization encompasses whether or not a participant utilized a health care facility and the type of health care facility most frequented. The variable "Healthcare_Utilization" is distinct from the concept of health care utilization discussed further on under the variable "Healthcare_Place." In addition, the number of times a person received health care over the past year and the duration since their last health care visit were analyzed for further investigation of health care utilization in regard to language spoken.

Our exposure variable was "language," determined from the one-time survey. The exposures were split into two groups: those who speak mainly English at home and those who speak a mainly non-English language at home. The mainly English group included all individuals who responded that they spoke only English, more English than non-English, or both equally (English and non-English) at home. The mainly non-English group included all individuals who responded that they spoke only non-English or non-English more than English at home. The positive exposure is mainly English.

Our outcome variable health care place encompasses "hospitals" and "clinic, doctor's office or HMO, and other." "Hospitals" is a combination of the "hospital emergency room" and "hospital outpatient department" responses; the reasons for this decision are discussed later in this paper. The "Healthcare_Place" variable was based on the individual's response to the question of which type of place was most frequented for health care. Our outcome variable "Frequency_of_Care" was categorized into "1 to 3 times," "4 to 9 times," or "10 or more times" within the past year that a participant received health care.
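The collapsing of the five language responses into the two exposure groups can be sketched as follows; the column name and response labels below are placeholders for illustration, not the actual NHANES variable codes.

    import pandas as pd

    # Hypothetical extract of the language-at-home item; real NHANES files use
    # coded variable names, which are not reproduced here.
    df = pd.DataFrame({"home_language": [
        "Only English", "More English than non-English", "Both equally",
        "More non-English than English", "Only non-English"]})

    MAINLY_ENGLISH = {"Only English", "More English than non-English",
                      "Both equally"}

    df["language"] = df["home_language"].map(
        lambda r: "mainly English" if r in MAINLY_ENGLISH
        else "mainly non-English")
    print(df["language"].value_counts())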



The "Last_Seen" outcome variable was categorized based on the individual's response to "About how long since you last saw or talked to your doctor?" The categories, measuring the duration since the last health care visit, are "< 1 year," "1–3 years," and "> 3 years."

Our exposure variable is language (mainly English and mainly non-English); mainly non-English is our negative exposure because we are expecting an inverse relationship. Our outcome variables consist of frequency of health care, time since last health care visit, health care utilization, and the type of health care place utilized. Health care place and frequency of care are the variables most important to testing our hypothesis. The potential confounders our study controlled for were the influence that education level and race/ethnicity could have on the exposure and outcome variables.

Statistical tests/measures of association

“Race” and “Level_of_Education” were the exposures when examining their measures of association with the outcome variables of health care utilization. In order to identify the possible confounding variables and effect modifiers in our study, multivariate analyses were conducted. The associations between “Language” and “Healthcare_Place” were analyzed via multiple logistic regression, and we examined possible interactions with the variables “Race” and “Level_of_Education.”

Results

Univariate analyses

Utilizing the surveys provided by NHANES, we identified 4 possible variables with probable associations to health care utilization and our measure of acculturation, language. Figure 1 shows the distribution of our exposure variable (language), outcome variables (health care utilization, health care place, frequency of care, and last seen), and possible confounders (race and education) within our study population aged 18 to 60 years. As determined by our univariate analysis, the numbers of mainly English and mainly non-English participants in our study were 3,041 and 1,141, respectively.

Bivariate analyses for our study required two tests for measures of association: the odds ratio (OR) and the chi-squared test. A chi-squared analysis, with an alpha value of 0.05, was used once the data had been categorized and coded to determine whether there was an association between the language spoken by adults and the time since they last talked to or visited a doctor. In addition to the chi-squared analysis, we used the OR to measure the strength of association between our outcome and exposure variables, with a 95% confidence interval. The OR was used for exposure and outcome variables that are dichotomous, and the 95% confidence interval was used in conjunction with the OR to measure the level of uncertainty around the measure of effect. The chi-squared test was used to test whether an association existed between categorical variables of interest. The following variables were chosen for examination to give a complete picture of the association between acculturation and health care place utilized: "Language" and "Healthcare_Place," and "Language" and "Healthcare_Utilization," were each examined by separate OR with 95% CI. Relationships between "Language" and "Frequency_of_Care," and "Language" and "Last_Seen," were examined via chi-squared tests; both "Frequency_of_Care" and "Last_Seen" are categorical data, necessitating the use of chi-squared over OR.

Figure 1. Distribution of Language, Healthcare Place, Frequency of Care, Last Seen, Race, and Level of Education among Participants Aged 18–60 Years Old. (A) displays the primary language distribution among the study population. (B) displays the distribution of health care types. (C) displays the frequency of care within the last year at the time the survey was given. (D) displays the distribution of times a doctor was last seen by the participant. (E) displays the distribution of whether or not there is a regularly attended health care site. (F) and (G) display the distribution of race and level of education among participants, respectively.
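A minimal sketch of both measures on invented 2-by-k tables is shown below; scipy stands in here for Epi Info, and all counts are illustrative rather than our study data.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: mainly English, mainly non-English. Columns: health care place
    # (clinic/office/HMO/other vs hospital). Counts are illustrative only.
    place = np.array([[2650, 202],
                      [ 980,  85]])

    # Odds ratio with a 95% CI for the dichotomous exposure/outcome pair.
    a, b = place[0]
    c, d = place[1]
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")

    # Chi-squared test for the categorical outcome (frequency of care).
    freq = np.array([[1900, 750, 200],    # 1-3, 4-9, 10+ times per year
                     [ 760, 250,  60]])
    chi2, p, dof, _ = chi2_contingency(freq)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}  (alpha = 0.05)")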



Bivariate descriptive analyses

The racial distribution and the education level of our participants were analyzed. The majority of our mainly non-English participants identified themselves as Mexican-American, other Hispanic, and non-Hispanic Asian; the frequencies of these racial categories were 69%, 72%, and 61%, respectively. Comparatively, the mainly English participants were confined largely to non-Hispanic white and non-Hispanic black, at 97.8% and 94.3%, respectively. Eighty-two percent of mainly non-English participants had an education level below the ninth grade, whereas 82% of mainly English participants identified as having some college or an associate's degree.

Even though the two language groups differed in racial identities and education levels, there were similarities in their frequency of care, health care utilization, and health care place when closely examining the stratification of outcome variables with language. On average, 64% of the participants received health care 1 to 3 times over the span of a year, and about 55% had followed up with a doctor within the last 1 to 3 years. Of all the participants, roughly 78% utilized health care, and 93% specifically reported using clinics, doctor's offices or HMOs, or other health care places as their primary place for medical attention (Figure 2).

Bivariate specific tests for measures of association

Table 1 includes our chi-squared analysis and odds ratio data analyzed with a 95% confidence interval. With health care place and health care utilization being our dichotomous variables, an OR analysis was used to determine the association with language. The subset of participants who indicated "Yes, there is a place" (that is, those who reported utilizing a health care facility) was further examined for the type of health care place they utilize. The data indicated a positive association between health care place and language. Through a chi-squared analysis, frequency of care was found to be associated with language. However, "last seen" was found to have no association with language spoken. Additional analysis of the frequency of care variable yielded strong evidence against our null hypothesis of independence, indicating that these variables are associated.

Multivariate analyses

Multivariate analysis was performed to examine race and education as potential confounders of the association between language spoken at home and the type of health care place most frequented. Multiple logistic regression analysis was performed to ascertain adjusted odds ratios. The crude and adjusted odds ratios are listed in Table 2; the two were compared closely, and a difference greater than 10% indicated confounding. This analysis revealed that race was a confounding variable, whereas education was not.
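The confounding check described above (comparing crude and adjusted odds ratios, with a greater-than-10% difference flagging confounding) can be sketched with statsmodels; the simulated data, variable names, and choice of denominator for the percent change are illustrative assumptions, not the NHANES data or Epi Info output.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 2000
    race = rng.integers(0, 2, n)                          # simulated binary race flag
    language = rng.binomial(1, np.where(race, 0.6, 0.2))  # exposure depends on race
    hospital = rng.binomial(1, 0.05 + 0.05 * race)        # outcome also depends on race
    df = pd.DataFrame({"language": language, "race": race, "hospital": hospital})

    crude = smf.logit("hospital ~ language", data=df).fit(disp=0)
    adjusted = smf.logit("hospital ~ language + race", data=df).fit(disp=0)

    or_crude = np.exp(crude.params["language"])
    or_adj = np.exp(adjusted.params["language"])
    # Percent change relative to the adjusted OR; the denominator convention
    # varies across texts and is an assumption here.
    pct_change = abs(or_crude - or_adj) / or_adj * 100
    print(f"crude OR {or_crude:.2f}, adjusted OR {or_adj:.2f}, "
          f"change {pct_change:.1f}%  -> confounding if > 10%")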

Figure 2. Distribution of Health Care Place by Language. The bar graph displays a comparison of health care delivery locations between the 2 primary language groups. The left column displays that 92.91% of mainly English speakers received care at a clinic, vs the right column, which displays that 92.07% of mainly non-English speakers also received care at a clinic. This is in contrast to only 7.09% and 7.93% of mainly English and mainly non-English speakers, respectively, relying on hospital services.



Table 1. Results from Chi-Squared Analysis. The table shows the distribution of how the mainly English speakers and mainly non-English speakers (the columns) answered when asked about the health care place utilized, the frequency of requiring care in a year, and when a doctor was last seen (the rows).

Table 2. Results from Multiple Logistic Regression. This table displays the crude and adjusted odds ratios for the health care place utilized and multiple factors, including language, race, and education. These are the odds that a health care place will be utilized in the presence (or absence) of each of these factors.



Discussion

From our data analysis, we identified several associations related to health care utilization and primary language spoken. With regard to health care utilization, our research revealed that the distributions of the two primary language groups (mainly English and mainly non-English speakers) were similar to one another: both groups frequented health care places in the same proportions. Though it has been reported that mainly non-English-speaking patients visit hospital settings more frequently (2), our study refuted this prevailing theory. Mainly non-English-speaking participants utilized clinics, doctor's offices, or HMO groups rather than hospitals. Our study also revealed that mainly English- and non-English-speaking patients sought health care at similar frequencies, which refuted our initial hypothesis. The number of times a patient consulted their doctor was not correlated with the primary language spoken by the participants.

We initially hypothesized that mainly non-English individuals would visit hospitals more than clinics, doctor's offices, or HMO groups. Given ease of health care access, it would be expected that individuals facing language barriers frequent hospitals more than other health care places. Research conducted by Chen et al. revealed the presence of health disparities among individuals who do not speak English as their primary language (3). This may result from a lack of comprehension of medical terminology, which would limit their understanding of the role of other health care places (5). Our analysis of health care utilization revealed that the proportions of mainly English-speaking and mainly non-English-speaking individuals were similar, but there was a difference in the type of health care place most frequented. We found that regardless of the language spoken at home, patients visited clinics, doctor's offices, or HMO groups more than hospitals. This refutes the claims of previous studies (2) as well as our own hypothesis. Socioeconomic status and/or the geographic locations of the participants may account for the parity in health care utilization frequency. In addition, insurance policies may have influenced health care place accessibility through limitations on coverage. Follow-up studies addressing these possible confounders should be conducted.

Additionally, we hypothesized that the frequency of health care utilization would be lower among mainly non-English-speaking individuals, based on previous research revealing that the majority of assimilating Asian and Hispanic immigrants in the US did not utilize a health care facility within their first year as citizens (2). However, our results revealed that mainly English- and mainly non-English-speaking participants sought health care at similar frequencies. It is also important to note that the time since an individual last saw their physician was not correlated with the primary language spoken, which could indicate that follow-up behavior is independent of language. Again, these findings may reflect socioeconomic status as well as the years of citizenship of non-English-speaking individuals. Nonetheless, no study is without limitations.

The data were self-reported, which could have led to inaccuracies. The NHANES survey neglected to incorporate specific geographic locations and the types of insurance policies used by the individuals within the study, which could be a possible reason for our results. Our study may also have been susceptible to interviewer bias arising during the initial NHANES survey process. Ambiguity could have existed during the initial surveillance of health care follow-up among participants, which may have resulted in discrepancies in the association between language and the time since a patient had last seen his or her doctor. Additionally, our study did not incorporate information on the socioeconomic status of individuals; as previously mentioned, this could have affected the results for frequency of health care utilization and the health care facility most frequented.

Conclusion

Regardless of whether or not an individual was assimilated, they sought health care treatment at a clinic, a doctor's office, or an HMO site rather than going directly to a hospital. These findings refuted prevailing theories and popular beliefs that newly assimilated citizens utilize health care less often and more inappropriately than individuals of the dominant culture in the United States. Further investigation into the effects that socioeconomic status, geographic location, and insurance plans can have on appropriate health care utilization is warranted.

Based on the findings of our study, there appears to be a gap between acculturation and health care utilization. It is important to implement programs, whether educational, institutional, or interventional, to close this gap and improve access to health care for acculturating populations. Because our study found an association between frequency of care and language, health literacy promotions are suggested. One public health program that helps address the issue of access for acculturating populations is TeamSTEPPS, which aims to help hospitals reduce the frequency and severity of patient safety events for individuals who have limited English proficiency (LEP) or who are culturally diverse. TeamSTEPPS also functions to prevent miscommunication within a health care team in order to improve the care of LEP patients.

A limitation of our study was that socioeconomic status was not considered in our analysis; future studies should include this factor when looking into limitations on health care utilization. Our research also revealed that the majority of non-English participants had an education level below the ninth grade, indicating that education level may serve as a potential target for determining whether there is a positive interaction between language and health care utilization. If so, interventions directed at the subset of mainly non-English participants with lower education levels would be beneficial.

References

1. Schyve PM. Language differences as a barrier to quality and safety in health care: the Joint Commission perspective. J Gen Intern Med. 2007 Nov;22(Suppl 2):360-361.
2. Akresh IR. Health service utilization among immigrants to the United States. Popul Res Policy Rev. 2009;28:795-815.
3. Chen J, Bustamante AV, Tom SE. Health care spending and utilization by race/ethnicity under the Affordable Care Act's dependent coverage expansion. Am J Public Health. 2015;105:S499-S507.
4. Wallace PM, Pomery EA, Latimer AE, Martinez JL, Salovey P. A review of acculturation measures and their utility in studies promoting Latino health. Hisp J Behav Sci. 2010;32(1):37-54.
5. Zun LS, Sadoun T, Downey L. English-language competency of self-declared English-speaking Hispanic patients using written tests of health literacy. J Natl Med Assoc. 2006;98(6):912-917.
6. Centers for Disease Control and Prevention. Epi Info. 2017 Apr 7. Available from: https://www.cdc.gov/epiinfo/index.html
7. Centers for Disease Control and Prevention, National Center for Health Statistics. NHANES 2015–2016. 2016. Available from: https://www.cdc.gov/nchs/nhanes/continuousnhanes/default.aspx?BeginYear=2015




Marginalized Populations in Northeastern Pennsylvania: A Targeted Literature Review

Charles O.A. Bay1* and Ida Castro2

1Howard University College of Medicine, Washington, DC 20059
2Geisinger Commonwealth School of Medicine, Scranton, PA 18509
*Correspondence: charles.bay@bison.howard.edu

Abstract

Objectives: To analyze current health needs data from county hospitals and state and local departments to understand current data gaps; to quantify the current needs of vulnerable populations within the northeast Pennsylvania (NEPA) area, including Lackawanna, Luzerne, Monroe, and Pike counties; and to begin forming action plans identifying ways to improve the mental, physical, and social health of vulnerable populations within the NEPA area. Methods: A targeted literature review focused on the vulnerable populations within the NEPA area was completed. We completed a comprehensive community health needs assessment (CHNA) review to understand current data related to county health profiles. Acute and chronic factors related to social determinants of health (SDOHs) and health outcomes were of interest. Additionally, available federal-, state-, and county-level data related to health and health resources were reviewed. Results: A total of 10 primary data sources were analyzed. There was no consistency in the data reported for vulnerable populations. Five CHNAs were analyzed, with only "Hispanic," "Black," and "Other" mentioned for race/ethnicity in all CHNAs, and some mention of Asian/Pacific Islander groups. Economically disadvantaged population data were reported in various places with varying levels of detail across CHNAs. Conclusion: The presence of readily available and easily accessible health data is lacking, and the presentation of associated statistics for vulnerable populations in CHNAs is underwhelming. The same level of analysis provided for the majority populations of NEPA is more clearly and thoroughly reported, allowing stakeholders to create actionable hospital and community intervention plans to address concerns. The ability to create similar intervention plans tackling problematic health outcomes for vulnerable populations in NEPA may be improved with increased data availability.

Introduction

There are a number of ways researchers and public health practitioners have utilized the concept of vulnerable and/or marginalized populations to better understand disparities in health outcomes (1, 2). While varying definitions exist, marginalization can be summarized as factors and circumstances that make it difficult to obtain a certain level of health status due to existing on the fringe of mainstream society (2, 3). Once marginalized, people are exposed to inequities in health care access and resources, making them vulnerable to conditions that may lead to poorer health outcomes (4). Through health policies, nationally representative studies, and targeted approaches, health organizations in the United States have attempted to quantify and lessen the negative impact health disparities have on those who are marginalized (5–8). Disparities in social determinants of health (SDOHs) and health care can be expected to lead to increased morbidity and, at times, increased mortality. Despite these efforts, from 2000 through 2015 "most disparities have not changed significantly for any racial and ethnic groups, with only 20% of measures showing decreased disparities for Blacks and Hispanics," according to the Agency for Healthcare Research and Quality (AHRQ) (9).

The persistence of disparities in health care for the marginalized impacts Pennsylvania in particular. When considering racial and ethnic disparities and their impact on quality of care, Pennsylvania was rated in the fourth (worst) quartile for average differences in disparity scores for black, Hispanic, and Asian metrics when compared to whites, as reported in the 2016 National Healthcare Quality and Disparities Report (NHQR) (9). This study aims to better understand how well health data on vulnerable populations within our communities are being compiled and reported (Figure 1).

Figure 1. Motivations of this study

Various organizations report Pennsylvania's health ranking outcomes as very poor: it falls into the 10 worst states for cancer, cardiovascular disease, and drug-related death rates. Based on results from Pennsylvania's Department of Health (PADOH), Pennsylvania has the fourth-highest drug overdose mortality rate, jumping from 2,489 drug-related deaths in 2014 to 4,884 in 2016 (10, 11). Additionally, the Kaiser Family Foundation reported Pennsylvania's age-adjusted heart disease mortality rate in 2016 to be greater than that of 30 other states (179 per 100,000, compared to the US rate of 168.9 per 100,000) (12). Further, Pennsylvania has the seventh-highest all-cancer age-adjusted incidence rate (472.7 per 100,000) in the United States (12, 13).

The desire to change these and similarly poor rates for the top 5 causes of death in Pennsylvania (cardiovascular disease, cancer, accidents, cerebrovascular disease, and chronic lower respiratory disease, respectively (14)) provided the impetus for PADOH's 2015–2020 state health improvement plan. By bringing together stakeholders with state, regional, and local expertise, Pennsylvania whittled down 48 health-related issues and community themes into three focused priority health areas: 1) enhancing primary care and preventative screening methods, 2) improving behavioral and mental health in adults and children, with a particular focus on drug/alcohol abuse in adults, and 3) limiting the rippling effects of obesity by curtailing physical inactivity and improving nutrition (15). This is a comprehensive plan positioned to bring all residents of Pennsylvania closer to and above average national health outcomes. It mentions improving health equity, increasing health literacy, and re-envisioning primary care integration grounded in SDOHs. Additionally, the state health improvement plan offers promising potential, as some of its goals align with decreasing disparities in health care delivery related to the racial and socioeconomic factors that continue to impact marginalized populations (9). The populations included are the low-income, children, and the elderly.

In 2016, PADOH and the Robert Wood Johnson Foundation reported Lackawanna County to be in the bottom 10% of all counties (60th of 67) on health outcomes and quality-of-life rankings (16, 17). Luzerne, Monroe, and Pike were ranked 64th, 48th, and 14th of 67 counties, respectively, on the same health outcomes. Additionally, when researchers performed disparity assessments and split the counties into census blocks for further granularity, 7 Lackawanna blocks, 6 Luzerne blocks, and 6 Monroe blocks scored in the highest or second-highest need index. This need index is defined by 5 key barriers to care: income, cultural, education, insurance, and housing scores, each composed of socioeconomic statistics associated with the analyzed block. Pike County, as the healthiest of the four, had blocks in low need, with only four blocks qualifying for mild need (17, 18). Why are these index scores important? Residents of communities with the highest need index "were shown to be twice as likely to experience preventable hospitalization for manageable conditions, such as ear infections, pneumonia or congestive heart failure, as communities with the lowest index scores," as explained by researchers Roth and Barsi (19). Data describing unnecessary and additional disease burden based on location and associated socioeconomic factors likely indicate that marginalized populations in these areas will be disproportionately affected as well.

There have been recent studies that aimed to quantify the impact poor health outcomes are having on marginalized populations living within northeast Pennsylvania (NEPA).
Studies focused on understanding how and why the homeless population uses the Emergency Department for care (20); detailing the inability for refugees to obtain adequate care (21); highlighting increased levels of risk for mental health disease in lesbian,

gay, bisexual, and queer (LGBQ) youth (22); evaluating models for preventing lung cancer and smoking (23); and examining associations of community-based socioeconomic deprivation with child/adolescent obesity (24) have all focused on better understanding disease burden on marginalized populations in NEPA. While these and similar studies (25, 26) evaluate differing outcomes by population classification, to our knowledge no study has assessed the level of detail provided by regional and local entities. More specifically, the goal of this study is to describe the depth with which hospital and Pennsylvania-sponsored reports categorize and produce health data focused on marginalized populations.

This study builds upon previous reports completed by hospitals, community organizations, and associated partners within NEPA. Geisinger Commonwealth School of Medicine (GCSOM) and the larger Geisinger organization aim to play a substantial role in improving the health of residents in northeast Pennsylvania. The Geisinger system serves an estimated 3 million patients throughout its 45-county footprint in Pennsylvania. The impact of these institutions will be first felt by the communities they serve, particularly via hospital and community interactions in Lackawanna, Luzerne, Monroe, and Pike counties (Figure 2). Data reported in community health needs assessments (CHNAs) provide strategic insight into areas in which hospitals and partnering community organizations can improve the health of the patients served. To the extent that overall community analyses have implications for better patient outcomes, understanding potential patterns for the most vulnerable patients is valuable.

At the time of its founding, GCSOM (then The Commonwealth Medical College, or TCMC) conducted a regional health assessment to engage community partners across the 16 counties it serves. In keeping with the mission to provide applicable community-based and patient-centered care, this foundational CHNA aimed to better define the role the medical college would serve in bridging the health gaps of the communities served. Vulnerable populations included in this 2009 assessment were rural, impoverished/working poor, minority communities, elderly, and youth (27). Thus, this study builds on the foundation of past CHNAs by focusing on current, and perhaps underreported, data related to patient populations defined as vulnerable. It aims to detail the current needs of communities and identify areas where Geisinger and GCSOM can further improve the mental, physical, and social health of NEPA.

Materials and Methods

A targeted literature search was completed to acknowledge some of the issues affecting populations we define as vulnerable within the northeast Pennsylvania area. The terms marginalized, underserved, and vulnerable were all utilized in the search and will be referred to collectively as vulnerable populations. Vulnerable populations are communities of people who are at an increased risk of suffering an undue burden of disease, with poorer health outcomes that stem from inequities in SDOHs, health care access, and resources (1–5). In alignment with previous studies, the Pennsylvania health disparities report (28), and investigator expertise, this study defines vulnerable populations to include



those who are economically disadvantaged, rural, women, black/African/African-American, Latino/Hispanic, indigenous, immigrant, refugee, LGBTQIA+, disabled, undocumented, and language-isolated/English as Second Language (ESL). At the time of this search, conducted first Q2 2017 and again Q1 2018, investigators reviewed literature including federal data, state data, and city/county data that pertained to populations of interest. Additionally, literature pertaining to populations existing in four NEPA counties (Lackawanna, Luzerne, Monroe, and Pike) were of particular interest. Federal literature was reviewed if it detailed information pertaining to Pennsylvania’s vulnerable populations. A comprehensive review of CHNA conducted by hospitals in northeast Pennsylvania was done to understand the breadth of existing data the CHNA collected related to hospital patient population. CHNAs, depending on the information reported, provide acute, chronic, and modulating factors related to social determinants of health and leading to specific health outcomes reported within a hospital’s community footprint. Hospitals conduct CHNAs every 3 years, and adopt an implementation strategy to meet the needs identified in the assessment. This review collected data from the 2012 to 2016 nonprofit, hospitalbased CHNAs.

Results

This study included data from 10 total sources: 1 nonprofit organization report specific to vulnerable populations, 4 reports from governmental bodies, and 5 reports from hospital CHNAs completed in counties of interest. A summary of all reviewed reports and information pertaining to identified vulnerable populations is presented in Table 1. In reviewing the CHNAs, few reached the level of informative analysis provided by the 2012 community-based health and health care research report produced by a partnership among a number of NEPA universities, colleges, and organizations under The Institute for Public Policy and Economic Development. While the institute's analysis included dedicated sections for Spanish-speaking and African-American survey participants, and multiple focus groups targeted at other marginalized participant groupings, only one CHNA had qualitative efforts specific to data collection within multiple vulnerable populations. Vulnerable populations included in all reports were: aging/senior, children, impoverished, and women. Vulnerable populations not included in any reports were: immigrant, indigenous, LGBTQIA+, and undocumented. Refugee data could be found only in the PA Refugee Resettlement Program database, which provided placement statistics but no associated health or social data. Important statistics from secondary data analyses that report a disproportionate burden of illness are presented in Table 2. Qualitative study results from the CHNAs, including themes and key commentary from surveys, focus groups, or patient/physician interviews, are presented in Table 3. Eight state minority health reports, while applicable by population and data provided, were not specific to our counties of interest and thus were not included in this section. These reports can be found in Table 4.

Table 1. Summary of included literature sources by organization



Table 2. Summary of key study results on vulnerable populations within non-CHNA reports



Table 3. Qualitative study results of vulnerable populations reported within Community Health Needs Assessments




Table 4. Summary of PADOH Minority Health Reports as compared to white PA residents, 2013–2015




Hospital community health needs assessments

The Patient Protection and Affordable Care Act (PPACA) introduced the requirement for all 501(c)(3) (nonprofit) hospitals to complete CHNAs (29). Applicable hospitals must conduct a comprehensive health needs assessment every 3 years and adopt an associated implementation strategy to meet the needs identified by the assessment. The inclusion of vulnerable population data within these assessments was of particular interest to this review. Table 3 highlights each CHNA, the hospital system that conducted the assessment, the counties served, which vulnerable populations were present in the analysis, and the key themes revealed. This study reviewed five CHNAs: the Geisinger system reporting for Lackawanna and Luzerne counties; St. Luke's University Health Network reporting for Monroe County; Wayne Memorial Hospital reporting for Lackawanna and Pike counties; and Lehigh Valley Health Network reporting for Lehigh County. An adaptation of the Geisinger service area and Pennsylvania county layout by Boscarino et al. (30) is provided in Figure 2.

Figure 2. Geisinger coverage map spanning across northeast Pennsylvania, by Boscarino et al.

Overall, CHNAs conducted in NEPA during 2013 to 2016 did not readily define vulnerable populations and reported data with varying levels of detail. When included, vulnerable populations suffered significantly higher rates of morbidity and negative health outcomes. In terms of race/ethnicity, all analyzed CHNAs mentioned only “Hispanic” and “Black” populations; two mentioned Asian and Pacific Islander, and all captured remaining ethnic groups in a catchall “all others/others” categorization. Outside of race/ethnicity categorizations, low-income/impoverished/economically disadvantaged and rural populations were highlighted in demographic or socioeconomic sections of CHNAs. Such mentions were generally accompanied by descriptive percentages indicating demographic quantity and per capita income statistics. All CHNAs provided data on women. Limited data were provided on children and ESL populations. Two CHNAs had sections devoted to the reporting of vulnerable populations.

Specific to Lackawanna County (Geisinger Community Medical Center, or GCMC), the CHNA reported that 15 of the 21 census blocks had unchanged or increased barriers to health care. This was compounded by trends in CDC health statistics data suggesting that Lackawanna County consistently shows poorer health outcomes for lifestyle-related mortality (e.g., diabetes, heart disease) when compared to Luzerne County. Commentary on oral health as a top priority for the county included only a single mention of children and no mention of pediatric oral care. Children were also noted to have potential undue health burdens due to parents lacking health insurance. Similarly, the higher prevalence of disease among lower-income families was only briefly included in stakeholder discussion. Issues affecting the vulnerable populations in this CHNA were assessed via 266 surveys in the GCMC catchment area and reported in a standalone section. Of these respondents, 23.9% indicated they had at some time been diagnosed with mental health issues. When asked to identify 5 of their top community health concerns, respondents chose cancer (43.7%), drug and alcohol use (39.7%), diabetes (34.9%), heart disease (30.2%), and high blood pressure (29.4%) most often.

Data specific to Luzerne County (Geisinger Wyoming Valley Medical Center, or GWV) were reported in two CHNAs: the aforementioned GCMC/Lackawanna assessment and a GWV/Luzerne assessment. The GWV CHNA analysis contained exactly the same wording as the GCMC CHNA, with only the numbers changed. The same vulnerable population section was present toward the end of the GWV CHNA and provided survey responses to the top 5 community health concerns. Respondents chose cancer (55.7%), drug and alcohol use (54.5%), diabetes (44.3%), high blood pressure (37.5%), and heart disease (36.4%) most often. Of respondents, 21.7% indicated they had at some time been diagnosed with mental health issues.

St. Luke's University Health Network reported data specific to Monroe County. No distinct vulnerable population section was included in this CHNA; however, the assessment did refer to children, adolescents, and the elderly as vulnerable populations. Of 3,000 surveys conducted, results included data on women, Hispanic/Latino, children, black, impoverished, and ESL populations. St. Luke's organized its data report by detailing outcomes associated with socioeconomic, cultural, and environmental factors that may influence the health and well-being of its respondents. Potential health impacts of poverty status (10.67%) and language barriers (3.96%) among Monroe County respondents were discussed. St. Luke's reporting on children was particularly in depth, as it connected the demographic to SDOHs like education, risky health behavior, and mental health.

Lehigh Valley Health Network (LVHN) reported data specific to racial/ethnic, disabled, ESL, children, and impoverished populations in Hazleton, Luzerne County. Notably, LVHN's report was the only CHNA to include a section about disabled patients and their wide-ranging health needs beyond their specific disabilities. In summation, the LVHN investigators stated that Luzerne County is struggling. Their surveyed population showed a rate of disability slightly higher than national averages. Population change over the past decade has brought a significant Hispanic/Latino population to the city of Hazleton, particularly in comparison to other NEPA areas, and care providers have not mirrored this population change. Within this population, LVHN found themes of distrust and lack of comfort with any provider who does not speak Spanish. Economic and age demographics may influence health outcomes, particularly for children living in poverty: LVHN-Hazleton reported that 49% of children age 18 and younger were living at or below the federal poverty level, drastically higher than the US (22%) and Pennsylvania (19%) averages, and this may be tied to a disproportionate burden of disease. That burden is further suggested by the disproportionate share of residents who are uninsured (36%, compared to 18% for Pennsylvania) and as such rely on emergency care. LVHN has thus initiated efforts to reduce preventable emergency presentations and hospital stays among this population by improving the identification and prevention of existing health problems that can lead to hospitalization, thereby improving healthy living and reducing health care costs.

The Wayne Memorial Hospital (WMH) service area included Pike County. The CHNA notes immediately that, because of “the size and considerable differences in community demographics in the service area…community level analyses were critical in achieving significant need selection which recognized significant differences in the seven communities.” In an effort to represent underserved communities, WMH included commentary from its federally qualified health center staff and patients in addition to 1,100 respondents to a community survey. Significant need was reported for impoverished respondents, particularly single-parent homes and homes in which children's primary guardians were 65+ years old; women 15 to 44 years old; non-white populations; and single mothers with no prenatal visits in their first trimester. Concerning vulnerable populations, WMH reported general themes and priority areas instead of specific details from the conducted patient surveys. Within this data presentation, the needs for increased primary care capacity and decreased childhood substance abuse were stated.

Nonprofit organizations

The Institute for Public Policy and Economic Development was the single representative of non-hospital, nonprofit institutions. As a coalition of universities and organizations within Lackawanna and Luzerne counties, it produces community-based analyses such as health indicator reports. Its 2012 report utilized 12,000+ surveys supported by qualitative analyses via stakeholder interviews and patient focus groups. The stakeholder interviews represented regional employers, health centers and clinics, researchers, policy advisors, and various community organizers. Focus group interviews intended to assess vulnerable populations, and their results are included in Table 2. An interesting but concerning finding of this report was the repeated perspective among vulnerable populations that northeast Pennsylvania was not embracing diversity and lacked cultural understanding, awareness, and respect. Given the reported lack of cultural competency in an area with surging Hispanic/Latino, African-American, Bhutanese, Hindu, and Russian populations, surveyed participants expressed frustration with the lack of understanding of the impact of such barriers. Examples of barriers that perpetuate decreased health included fragmented resources and disrespect. Furthermore, stakeholders consistently indicated that their largest concerns were the plight of poverty, a lack of mental health providers, difficulty with patient compliance, and lacking awareness among providers.

Government organizations

Four governmental sources reported on disparities impacting black, LGBTQ, cancer patient, and refugee populations. The data reported came primarily from PADOH's Minority Health Division, while the rest came from other divisions within PADOH, including the Office of Health Equity and Community Epidemiology. Additionally, the Pennsylvania Refugee Resettlement Program provided data on incoming refugees resettling in NEPA. Table 2 includes these reports.

As previously mentioned, racial disparities in mortality among blacks are a particularly important metric that Pennsylvania tracks with the aim of improving the differences seen in its black population. PADOH reported that disparities in mortality rates persisted among blacks as compared to whites from 1999 to 2015, despite differences in all-cause mortality rates declining over that time. While mortality rate differences declined for many leading causes of death from 1999–2003 to 2011–2015, the report showed widened gaps for heart disease, accident, and suicide mortality rates over the same period. Factors contributing to mortality disparities may be associated with a higher prevalence of unhealthy behaviors as compared with whites, including smoking, limited physical activity, and higher risk of obesity. For black residents, these lifestyle factors may be compounded by barriers to accessing care. Specifically within our counties of interest, Lackawanna had a significantly elevated all-cause standardized mortality rate for blacks, at 1.15, while Luzerne (0.87), Monroe (0.87), and Pike (0.90) counties were all decreased. Investigators concluded there is a need for advanced and refined policy and action for the prevention and control of major causes of death at the county level.

The American Cancer Society's 2013–2014 publication “Cancer Facts and Figures for African-Americans” inspired PADOH to improve its understanding of cancer's impact on vulnerable populations (31). The subsequent PADOH cancer report on the black population highlights the suspected disparities that exist for the state's overall second leading cause of death. At the time of the report, age-adjusted mortality rates among black residents were twice as high as those among white residents for 6 cancer sites. Furthermore, blacks compared to whites had a lower percentage of cancers diagnosed at early stages (47.1% vs 51.4%) and a higher percentage diagnosed at late stages (45.4% vs 41.5%). Investigators commented that this discrepancy between groups might imply less access to appropriate care or underuse of screening methods and health care services in general. The previously mentioned behavioral tendencies within black populations were also included within this report as possible contributors to this cancer stage discrepancy.

The remaining reports pertaining to northeast Pennsylvania were conducted in the LGBTQ and refugee populations. Conducted in 2015, a survey study of 1,450 Lehigh Valley participants revealed troubling conclusions from its 600+ LGBTQ respondents. There was a lack of comfort with NEPA providers, which contributed to negative health behaviors. Allen et al. reported that an estimated 25% of the LGBTQ respondents surveyed had providers who “reacted negatively to their LGBTQ status.” Additionally, LGBTQ participants rated their medical providers and their place of residence as the least LGBTQ-friendly areas. Mental health, suicide, and HIV/STD prevention were the top concerns among LGBTQ individuals in Monroe County.
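As a reading aid for the county figures above, the following is a minimal sketch of the two metrics these PADOH reports use, stated in their conventional epidemiologic forms; the exact standard populations and age strata are defined in the PADOH reports themselves, so this should be read as an illustration of the general definitions rather than a restatement of PADOH's methods. A standardized mortality rate or ratio compares observed deaths in a group with the deaths expected if standard age-specific rates applied to that group, so Lackawanna's value of 1.15 corresponds to roughly 15% more deaths among black residents than expected, while values below 1.0 (as in Luzerne, Monroe, and Pike) correspond to fewer than expected:

$$ \mathrm{SMR} = \frac{O}{E}, \qquad E = \sum_{i} n_i \, r_i^{\mathrm{std}} $$

where $O$ is the observed number of deaths in the group, $n_i$ is the group's person-years in age stratum $i$, and $r_i^{\mathrm{std}}$ is the standard population's death rate in that stratum. The age-adjusted rates cited in the cancer report are instead directly standardized: each stratum-specific rate $r_i$ is weighted by a standard population's age distribution $w_i$ (with $\sum_i w_i = 1$), which allows populations with different age structures to be compared on a common scale:

$$ \text{age-adjusted rate} = \sum_{i} w_i \, r_i. $$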


Discussion

As mentioned, the goal of this study was to describe the current needs of communities and highlight areas where vulnerable populations may be disproportionately impacted. We aimed to review currently available reports from varying levels of analysis by government, nonprofit, and hospital organizations. The findings of this review suggest that there is a lack of publicly available and easily understood data specifically analyzing the health of vulnerable populations within northeast Pennsylvania.

A discrepancy appears to exist between the data collected and reported by non-CHNA reports versus CHNA reports. The majority of non-CHNA reports highlighted in Table 2 were released prior to the CHNAs reported in this review. All non-CHNAs provided details on key themes and problematic areas impacting vulnerable populations, including LGBTQ, black and African-American, and Spanish-speaking residents. These reports demonstrated potential areas of focus that could be further explored and highlighted by more granular CHNAs. However, avenues available for further exploration by CHNAs were ignored or barely touched upon.

Some reports provided interesting data that future studies may seek to understand. That Lackawanna County had a significantly higher all-cause standardized mortality rate (SMR) for its black population, while other NEPA counties showed significant decreases in their all-cause SMRs for their black populations, demands attention. Also notable, 21.7% of participants in the GWV CHNA indicated they had at some time been diagnosed with mental health issues. For context, prevalence figures for diagnosis of “any mental illness” in the past year (2016) among US adults, according to the National Institute of Mental Health, are as follows: overall, 18.3%; women, 21.7%; men, 14.5%; multi-race, 26.5%; American Indian/Alaskan Native, 22.8%; Native Hawaiian/Pacific Islander, 16.7%; non-Hispanic white, 19.9%; Hispanic/Latino/Latina, 15.7%; black, 14.5%; Asian, 12.1% (32). While the GWV CHNA did not provide details of the rates of mental illness diagnoses, its overall percentage is slightly higher than the national figure. This aligns with the frequent appearance of mental health as a key theme and area of need within NEPA. Additionally, reports on separate counties that provided identical analysis outcomes and conclusions based on community-specific stakeholder interviews are alarming and identify potentially high-yield, actionable issues.

Throughout multiple CHNAs, provider commentary cited underuse of health care services as a problematic patient habit. It is important to note that underutilization should not be automatically equated with noncompliance. Providers and researchers alike should consider lack of access to screening, inadequate education, and cultural barriers to use. The findings of this review suggest there may be opportunities to begin addressing causes and consequences, particularly for themes repeated across county lines and populations. Improving the lack of comfort with NEPA providers, and thus decreasing its negative effect on health outcomes, is one such opportunity. An individual's resistance to seeking health care because other demands require their focus exacerbates issues that may otherwise have been small.



Meeting communities where they are, demonstrating the desire to work around the plight of the impoverished or health illiterate, is another opportunity.

This study may be influenced by several limitations. First, not all potential categorizations of vulnerable populations were included within this review; reports pertaining to those populations may therefore have been missed, preventing this study from highlighting their contributions. Second, this study considered only publicly available reports. There may be county-based health data pertaining to vulnerable populations that are proprietary and were not accessible to our investigation.

The applicable CHNAs reported details and communicated potential health issues that impact a wide range of patients, many of whom could be included within a vulnerable population specified by this study or others. We note that issues reflected in summary themes from 2012 reports are still present and at times further entrenched and apparent in recent 2016–2017 reports. Subsequent implementation plans have undoubtedly impacted various individuals within the vulnerable populations this and other studies are particularly interested in better understanding. While it was encouraging to see many hospital systems recognizing the impact social determinants can have on health, additional detailed attention to factors affecting the health of vulnerable populations, both in CHNA aims and methodology and in resultant implementation plans, may yield greater improvements in those populations' health outcomes. Gathering the perspectives of key community stakeholders and patients who are members of vulnerable populations for use in creating implementation plans has been shown to be impactful for priority health themes. Future studies, CHNAs, or other population-based studies may consider identifying and focusing specifically on a vulnerable population to provide more detailed insights based on input from stakeholders in the identified communities. This approach would benefit vulnerable populations and the local organizations positioned to serve them.

References

1. Hall JM, Stevens PE, Meleis AI. Marginalization: a guiding concept for valuing diversity in nursing knowledge development. Advances in Nursing Science. 1994;16(4):23-41.

2. Alexander GL, Kinman EL, Miller LC, Patrick TB. Marginalization and health geomatics. Journal of Biomedical Informatics. 2003;36(4):400-7.
3. Wrigley A, Dawson A. Vulnerability and marginalized populations. In: Barrett DH, Ortmann LW, Dawson A, Saenz C, Reis A, Bolan G, editors. Public Health Ethics: Cases Spanning the Globe. 3: Springer; 2016.
4. Stevens PE. Marginalized women's access to health care: a feminist narrative analysis. Advances in Nursing Science. 1993;16(2):39-56.
5. Institute of Medicine (US) Committee on Eliminating Racial and Ethnic Disparities in Health Care. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Washington (DC): National Academies Press (US); 2003.

6. Alegria M, Alvarez K, Ishikawa RZ, DiMarzio K, McPeck S. Removing obstacles to eliminating racial and ethnic disparities in behavioral health care. Health Affairs (Project Hope). 2016;35(6):991-9.

7. Healthy People 2020: Midcourse Review. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention; 2017.

8. A systematic approach to health improvement. In: Healthy People 2010: Understanding and Improving Health. U.S. Department of Health and Human Services; 2010.
9. National Healthcare Quality and Disparities Report. Rockville, MD: Agency for Healthcare Research and Quality; October 2016.
10. Report on Overdose Statistics: 2016. Bedford, PA: Pennsylvania State Coroners Association; 2017.
11. Stats of States: Pennsylvania [Internet]. Centers for Disease Control and Prevention / National Center for Health Statistics. 2018 [cited February 15, 2018]. Available from: https://www.cdc.gov/nchs/pressroom/sosmap/drug_poisoning_mortality/drug_poisoning.htm.
12. The Pennsylvania Health Care Landscape. Kaiser Family Foundation; 2016.
13. Pennsylvania Cancer Incidence [Internet]. Pennsylvania Department of Health. 2013–2015 [cited February 12, 2018]. Available from: http://www.statistics.health.pa.gov/StatisticalResources/EDDIE/Pages/EDDIE.aspx#.Wrbbb5PwY1I.
14. Pennsylvania Vital Statistics 2016. Harrisburg, PA: Pennsylvania Department of Health, Division of Health Informatics; February 2018.
15. Pennsylvania State Health Improvement Plan 2015–2020. Pennsylvania Department of Health; 2016.
16. Pennsylvania and County Health Profiles. Pennsylvania Department of Health; 2016.
17. County Health Rankings: Pennsylvania. Robert Wood Johnson Foundation; 2016.
18. Community Need Index: Methodology. Dignity Health; 2016.
19. Roth R, Barsi E. The community need index. A new tool pinpoints health care disparities in communities throughout the nation. Health Progress (Saint Louis, Mo). 2005;86(4):32-8.
20. Feldman BJ, Calogero CG, Elsayed KS, Abbasi OZ, Enyart J, Friel TJ, et al. Prevalence of homelessness in the emergency department setting. Western Journal of Emergency Medicine. 2017;18(3):366-72.
21. Kim A, Witt K, Burch B, Jenson A. Forced acculturation & the crushed American dream. Middle East Review of Public Administration. 2017;3(2).
22. Shearer A, Herres J, Kodish T, Squitieri H, James K, Russon J, et al. Differences in mental health symptoms across lesbian, gay, bisexual, and questioning youth in primary care settings. The Journal of Adolescent Health. 2016;59(1):38-43.




23. Tramontano AC, Sheehan DF, McMahon PM, Dowling EC, Holford TR, Ryczak K, et al. Evaluating the impacts of screening and smoking cessation programmes on lung cancer in a high-burden region of the USA: a simulation modelling study. BMJ Open. 2016;6(2).
24. Nau C, Schwartz BS, Bandeen-Roche K, Liu A, Pollak J, Hirsch A, et al. Community socioeconomic deprivation and obesity trajectories in children using electronic health records. Obesity. 2015;23(1):207-12.
25. Kioumourtzoglou M-A, Schwartz JD, Weisskopf MG, Melly SJ, Wang Y, Dominici F, et al. Long-term PM2.5 exposure and neurological hospital admissions in the northeastern United States. Environmental Health Perspectives. 2016;124(1):23-9.
26. Maeng DD, Han JJ, Fitzpatrick MH, Boscarino JA. Patterns of health care utilization and cost before and after opioid overdose: findings from 10-year longitudinal health plan claims data. Subst Abuse Rehabil. 2017;8:57-67. Available from: http://europepmc.org/abstract/MED/28860892.
27. Garrettson M, Walline V, Heisler J, Townsend J. New medical school engages rural communities to conduct regional health assessments. Family Medicine. 2010;42(10):693-701.
28. Pennsylvania Health Disparities Report. Harrisburg, PA: Pennsylvania Department of Health; 2012.
29. New Requirements for 501(c)(3) Hospitals Under the Affordable Care Act. Washington, D.C.: Internal Revenue Service; 2012. Available from: https://www.irs.gov/charities-non-profits/charitable-organizations/new-requirements-for-501c3-hospitals-under-the-affordable-care-act.
30. Boscarino JA, Kirchner HL, Pitcavage JM, Nadipelli VR, Ronquest NA, Fitzpatrick MH, et al. Factors associated with opioid overdose: a 10-year retrospective study of patients in a large integrated health care system. Subst Abuse Rehabil. 2016;7:131-41.
31. DeSantis C, Naishadham D, Jemal A. Cancer statistics for African Americans, 2013. CA: A Cancer Journal for Clinicians. 2013;63(3):151-66.
32. Mental Illness [Internet]. Bethesda, MD: National Institute of Mental Health [cited July 20, 2018]. Available from: https://www.nimh.nih.gov/health/statistics/mental-illness.shtml.




2019 Summer Research Immersion Program

Each summer, the Geisinger Commonwealth School of Medicine Summer Research Immersion Program (SRIP) brings together first-year medical students for an opportunity to gain research experience in basic science, clinical science, public/community health, behavioral health, or medical education under the guidance of a research mentor. The summer research experience includes a $2,500 educational stipend. At the end of the program, students present their research in a poster session.

In addition to research, SRIP students participate in a variety of complementary enrichment activities:
• GCSOM and Geisinger faculty research seminars
• GCSOM Grand Rounds and clinical seminars at our hospital partners
• Special events or conferences related to your research topic
• Clinical exposure
• Scientific writing & communication workshops

SRIP program goals:
• To provide GCSOM medical students with an in-depth experience in research
• To enhance participants' knowledge of the scope and types of research relevant to improving health in the region, nationally and globally
• To offer opportunities to participate in research across the translational continuum, from laboratory-based biomedical studies to clinical and public health research conducted with community partners
• To learn from colleagues about their research experiences
• To enhance GCSOM students' skills in oral and written scholarship

Program dates: SRIP 2019 is an eight-week program held from June 3 through July 26, 2019.

Program deadlines:
Application release date: Dec. 3, 2018
Application submission deadline: Feb. 1, 2019

For more information, contact the SRIP director, Elizabeth Kuchinski, MPH (ekuchinski@som.geisinger.edu), or Sonia Lobo, PhD, Associate Dean for Research & Scholarship (slobo01@som.geisinger.edu).





Finding Your Way: Opportunities for Student Funding

You can find assistance in looking for funding opportunities specifically designed for students from Sponsored Programs in the Office of Research and Scholarship. Funding opportunities can include fellowships, internships, research, programming, and collaboration. Sponsored Programs can help you locate and qualify funding opportunities, as well as assist in application preparation, budgeting, and editing. We are here to help you every step of the way! School policy requires that applications be submitted by OSP, so be sure to call or stop by early so that we can meet your deadline.

Sponsored Programs' subscription to GrantScoop provides access to all faculty, staff, and students using a Geisinger Commonwealth computer. This searchable database will provide you with details on a wide range of funding opportunities and can be filtered to show you only student-eligible entries. Visit GrantScoop.com and create your account. You can then access it from home and save your searches.

Stay tuned for a new publication designed just for students featuring new and upcoming funding opportunities from both government and private foundations. The newsletter will be broadcast to students monthly. We'd love to hear your feedback, so stay connected and let us know how we can improve to meet your needs.

Contact information:
Kathryn Pasqualichio, Grants Specialist
Office of Research and Scholarship, Sponsored Programs
Phone: 570-558-3955
Internal extension: 5335
Email: kpasqualichio@som.geisinger.edu

Geisinger Commonwealth School of Medicine is committed to non-discrimination in all employment and educational opportunities.


525 Pine St. Scranton, PA 18509

570-504-7000

geisinger.edu/gcsom StudentResearch@som.geisinger.edu

246-857-11/18-HDAV/BF

