Volume 18 Issue 1: Special Issue in Educational Research and Practice from CDEM and CORD


Western Journal of Emergency Medicine

Volume XVIII, Number 1, January 2017


Open Access at www.westjem.com

ISSN 1936-900X

Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health Special Issue from CDEM and CORD Indexed in MEDLINE

VOLUME XVIII, NUMBER 1, JANUARY 2017

Special Issue in Educational Research and Practice

PAGES 1-81

A Peer-Reviewed, International Professional Journal


Mark your calendar for ACOEP’s Upcoming Spring Seminar!

April 18-22, 2017 – NEW LOCATION!
Bonita Springs, FL (Ft. Myers Coast)

HIGHLIGHTS INCLUDE:
• COLA Review
• Over 30 hours of CME credit
• A mix of CME activity and time to relax on your own
• FOEM 5K & 1 Mile DO Dash
• Expanded breakout lectures
• New tracks and events

Visit www.acoep.org for more details!


Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health JOURNAL FOCUS Emergency medicine is a specialty which closely reflects societal challenges and consequences of public policy decisions. The emergency department specifically deals with social injustice, health and economic disparities, violence, substance abuse, and disaster preparedness and response. This journal focuses on how emergency care affects the health of the community and population, and conversely, how these societal challenges affect the composition of the patient population who seek care in the emergency department. The development of better systems to provide emergency care, including technology solutions, is critical to enhancing population health.

Table of Contents

EDITORIALS
1 The Evolving Definition of Education Scholarship: What the Clinician Educator Needs to Know
DS Ander, JN Love
4 Clinical Reasoning: Defining It, Teaching It, Assessing It, Studying It
LD Gruppen

EDUCATION SCHOLARSHIP INSIGHTS
8 Proper Applications for Surveys as a Study Methodology
AW Phillips

BRIEF EDUCATIONAL ADVANCES (INNOVATIONS)
12 52 Procedures in 52 Weeks: An Innovative Curriculum for Emergency Medicine Residents
R Walsh, J Bass, C Gorbatkin, J Bothwell

EDUCATIONAL ADVANCES
14 A Novel Curriculum to Optimize Emergency Medicine Residents’ Exposure to Pediatrics
C Merritt, SA Gaines, J Smith, SA Santen
20 Promoting Achievement of Level 1 Milestones for Medical Students Going into Emergency Medicine
CG Leung, L Thompson, JW McCallister, DP Way, NE Kman
26 The Council of Emergency Medicine Residency Directors’ (CORD) Academy for Scholarship in Education in Emergency Medicine: A Five-Year Update
J LaMantia, LM Yarris, ML Dorfsman, NM Deiorio, S Wolf

BRIEF RESEARCH REPORT
31 A Needs Assessment for a Longitudinal Emergency Medicine Intern Curriculum
E Shappell, J Ahn

ORIGINAL RESEARCH
35 Development of an Assessment for Entrustable Professional Activity (EPA) 10: Emergent Patient Management
LR Thompson, CG Leung, B Green, J Lipps, T Schaffernocker, C Ledford, J Davis, DP Way, NE Kman

Policies for peer review, author instructions, conflicts of interest and human and animal subjects protections can be found online at www.westjem.com.


Table of Contents continued

43 “Let Me Tell You About My…” Provider Self-Disclosure in the Emergency Department Builds Patient Rapport
KL Zink, M Perry, K London, O Floto, B Bassin, J Burkhardt, SA Santen
50 Who to Interview? Low Adherence by U.S. Medical Schools to Medical Student Performance Evaluation Format Makes Resident Selection Difficult
MB Osborn, J Yanuck, J Mattson, S Toohey, A Wray, W Wiechmann, S Lahham, MI Langdorf
56 Teaching the Emergency Department Patient Experience: Needs Assessment from the CORD EM Task Force
KS London, J Druck, M Silver, D Finefrock
60 An Analysis of the Top-cited Articles in Emergency Medicine Education Literature
BW Munzer, J Love, BL Shipman, B Byrne, SJ Cico, R Furlong, S Khandelwal, SA Santen
69 Supplemental Milestones for Emergency Medicine Residency Programs: A Validation Study
AR Ketterer, DH Salzman, JB Branzetti, MA Gisondi
76 Are All Competencies Equal in the Eyes of Residents? A Multicenter Study of Emergency Medicine Residents’ Interest in Feedback
S Bentley, K Hu, A Messman, T Moadel, S Khandelwal, H Streich, J Noelker

Online Manuscripts
(Full text manuscripts available open access at http://escholarship.org/uc/uciem_westjem)

82 Clinical Assessment of Medical Students in the Emergency Department, a National Consensus Conference
KM Hiller, D Franzen, L Lawson, D Manthey, J Fisher, M Haughey, M Tews, N Dubosh, J House, A Trainor, D Wald, J Jung
84 Incorporation of a Graduate Student Writer into a Productive Research Team
JP Fischer, JB House, LR Hopson, MA Perry, N Theyyuni, MS Wolff, CB Sozener, SA Santen
86 Transition of Care Practices from Emergency Department to Inpatient: Survey Data and Development of Algorithm
S Lee, J Jordan, HG Hern, C Kessler, S Promes, S Krzyzaniak, F Gallahue, T Stettner, J Druck
93 Student-Advising Recommendations from the Council of Residency Directors Student Advising Task Force
E Hillman, L Lutfy-Clayton, S Desai, A Kellogg, XC Zhang, K Hu, J Hess

Policies for peer review, author instructions, conflicts of interest and human and animal subjects protections can be found online at www.westjem.com.


Table of Contents continued

97 Emergency Medicine Resident Orientation: How Training Programs Get Their Residents Started
J McGrath, M Barrie, DP Way
105 Trends in NRMP Data from 2007-2014 for U.S. Seniors Matching into Emergency Medicine
DE Manthey, ND Hartman, A Newmyer, JC Gunalda, BC Hiestand, KL Askew
110 Defining Uniform Processes for Remediation, Probation and Termination in Residency Training
JL Smith, M Lypson, M Silverberg, M Weizberg, T Murano, M Lukela, SA Santen
114 Adapting Gel Wax into an Ultrasound-Guided Pericardiocentesis Model at Low Cost
R Daly, JH Planas, MA Edens
117 Novel Airway Training Tool that Simulates Vomiting: Suction-Assisted Laryngoscopy Assisted Decontamination (SALAD) System
J DuCanto, KD Serrano, RJ Thompson
121 Creation of a Realistic Model for Removal of a Metallic Corneal Foreign Body for Less than $75
JS Sayegh, S Lahham, L Woodhouse, J Seong, CE McCoy
126 What Do They Want from Us? A Survey of EM Program Directors on EM Application Criteria
K King, D Kass
129 Demographics and Fellowship Training of Residency Leadership in EM: A Descriptive Analysis
J Greenstein, R Hardy, J Chacko, A Husain
133 Medical Student Documentation in the Electronic Medical Record: Patterns of Use and Barriers
K Wittels, J Wallenstein, R Patwari, S Patel
137 Efficient and Effective Use of Peer Teaching for Medical Student Simulation
JB House, CH Choe, HL Wourman, KM Berg, JP Fischer, SA Santen
142 Interview Day Environment May Influence Applicant Selection of Emergency Medicine Residency Programs
J Lewis, N Dubosh, C Rosen, D Schoenfeld, J Fisher, E Ullman
146 Characteristics of Real-Time, Non-Critical Incident Debriefing Practices in the Emergency Department
N Nadir, S Bentley, D Papanagnou, K Bajaj, S Rinnert, R Sinert
152 Continuing Medical Education Speakers with High Evaluation Scores Use More Image-based Slides
I Ferguson, AW Phillips, M Lin

Policies for peer review, author instructions, conflicts of interest and human and animal subjects protections can be found online at www.westjem.com.


Table of Contents continued

159 Can Emergency Medicine Residents Predict Cost of Diagnostic Testing?
CR Tainter, JA Gentges, SH Thomas, BD Burns
163 Exploring Scholarship and the Emergency Medicine Educator: A Workforce Study
J Jordan, WC Coates, S Clarke, DP Runde, E Fowlkes, J Kurth, LM Yarris
169 The Cost and Burden of the Residency Match in Emergency Medicine
AM Blackshaw, SC Watson, JS Bush
174 Differences in Self-expression Reflect Formal Evaluation in a Fourth-year Emergency Medicine Clerkship
M Chary, A Leuthauser, K Hu, B Hexom

Policies for peer review, author instructions, conflicts of interest and human and animal subjects protections can be found online at www.westjem.com.


Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health
Indexed in MEDLINE

Jeffrey N. Love, MD, Guest Editor – Georgetown University Medical Center
Douglas S. Ander, MD, Guest Editor – Emory University School of Medicine
Sorabh Khandelwal, MD, Associate Guest Editor – Ohio State University Wexner Medical Center
Jonathan Fisher, MD, Associate Guest Editor – University of Arizona Maricopa Medical Center
Andrew W. Phillips, MD, Associate Guest Editor – Stanford Hospitals and Clinics
Sally A. Santen, MD, PhD, Associate Guest Editor – University of Michigan Health System
David P. Way, MEd, Associate Guest Editor – Ohio State University
David A. Wald, DO, Associate Guest Editor – Lewis Katz Temple University School of Medicine
Edward Michelson, MD, Associate Editor – Texas Tech University
Mark I. Langdorf, MD, MHPE, Editor-in-Chief – University of California, Irvine School of Medicine
Shahram Lotfipour, MD, MPH, Managing Associate Editor – University of California, Irvine School of Medicine
Niels K. Rathlev, MD, Associate Editor – University of Massachusetts Baystate School of Medicine
Rick A. McPheeters, DO, Associate Editor – Kern Medical
Joel M. Schofer, MD, MBA, Associate Editor – Naval Medical Center Portsmouth

Section Editors AAEM/RSA

ED Administration

Cortlyn Brown, MD Gary Johnson, MD University of California, San Francisco Upstate Medical University Medical Center James Langabeer II, MBA, PhD Behavioral Emergencies University of Texas Medical School Michael P. Wilson, MD, PhD University of California, San Diego Emergency Cardiac Care Michael Kurz, MD Leslie Zun, MD, MBA University of Alabama at Birmingham Chicago Medical School Amal Mattu, MD Clinical Practice University of Maryland Michael Abraham, MD University of Maryland Eric Snoey, MD Alameda County Medical Center

Emergency Medical Services

Christopher Kahn, MD, MPH University of California, San Diego

David Thompson, MD Derek Cooney, MD University of California, San Francisco State University of New York Upstate Medical University, New York Kenneth S. Whitlow, DO Kaweah Delta Medical Center Joshua B. Gaither, MD University of Arizona, Tucson

Critical Care

Joseph Shiber, MD Shira A. Schlesinger, MD, MPH University of Florida - College of Medicine University of California, Irvine Todd Slesinger, MD Aventura Hospital and Medical Center Geriatrics Teresita M. Hogan, MD University of Chicago Christopher “Kit” Tainter, MD University of California, San Diego Kathleen Walsh, DO, MS University of Wisconsin Disaster Medicine Christopher Kang, MD Infectious Disease Madigan Army Medical Center Ioannis Koutroulis, MD, MBA, PhD Drexel University College of Medicine Gentry Wilkerson, MD University of Maryland Stephen Liang, MD, MPHS Washington University School of Education Andrew Phillips, MD Medicine Stanford Hospitals and Clinics Kevin Lunney, MD, MHS, PhD Jeffrey Druck, MD University of Maryland School of University of Colorado Medicine Michael Epter, DO Maricopa Medical Center

Elissa Schechter-Perkins, MD, MPH Boston University School of Medicine

Michael Gottlieb, MD Rush Medical Center

Injury Prevention

Public Health

International Medicine

Chadd Kraus, DO, DrPH, MPH University of Missouri - Columbia

Wirachin Hoonpongsimanont, MD University of California, Irvine Chris Mills, MD, MPH Santa Clara Valley Medical Center Rolando Valenzuela, MD University of Southern California

Trevor Mills, MD, MPH Northern California VA Health Care

Resident/Student/Fellow Forum John Ashurst, DO Lehigh Valley Health Network

Legal Medicine

Greg P. Moore, MD, JD Madigan Army Medical Center

Methodology and Biostatistics Craig Anderson, MPH, PhD University of California, Irvine

Cecylia Kelley, DO Inspira Health Network

Technology in Emergency Medicine Sanjay Arora, MD University of Southern California

Elizabeth Burner, MD, MPH University of Southern California

Robert L. Rogers, MD University of Kentucky

Shu B. Chan MD, MS Resurrection Medical Center

Trauma

Christian McClung, MD MPhil University of Southern California

William Paolo, MD SUNY Upstate

Stormy M. Morales Monks, PhD, MPH Texas Tech Health Science University

Musculoskeletal

Juan F. Acosta DO, MS Pacific Northwest University

David Peak, MD Massachusetts General Hospital/Harvard Medical School

Toxicology

Jeffrey R. Suchard, MD University of California, Irvine

Judith Knoll, DO Adena Health System

Brandon Wills, DO, MS Virginia Commonwealth University

Neurosciences

Edward P. Sloan, MD, MPH University of Illinois at Chicago

Ultrasound

William D. Whetstone, MD University of California, San Francisco

Pediatric Emergency Medicine

Judith Klein, MD University of California, San Francisco Paul Walsh, MD, MSc University of California, Davis

Jeremy Hess, MD, MPH University of Washington Medical Center

Gavin Budhram, MD Tufts University Laleh Gharahbaghian, MD Stanford University Shane Summers, MD Brooke Army Medical Center

J. Matthew Fields, MD Thomas Jefferson University Muhammad Waseem, MD Lincoln Medical & Mental Health Center

Official Journal of the California Chapter of the American College of Emergency Physicians, the American College of Osteopathic Emergency Physicians, and the California Chapter of the American Academy of Emergency Medicine

Available in MEDLINE, PubMed, PubMed Central, CINAHL, SCOPUS, Google Scholar, eScholarship, Melvyl, DOAJ, EBSCO, EMBASE, Medscape, HINARI, and MDLinx Emergency Med. Members of OASPA. Editorial and Publishing Office: WestJEM/Department of Emergency Medicine, UC Irvine Health, 333 City Blvd. West, Rt 128-01, Orange, CA 92868, USA Office: 1-714-456-6389; Email: Editor@westjem.org



Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health Indexed in MEDLINE

Editorial Board

Advisory Board

Erik D. Barton, MD, MBA University of California, Irvine

Jonathan Olshaker, MD Boston University

Peter A. Bell, DO, MBA Ohio University, Heritage College of Osteopathic Medicine

Edward Panacek, MD, MPH University of South Alabama

Peter A. Bell, DO, MBA American College of Osteopathic Emergency Physicians Ohio University, Heritage College of Osteopathic Medicine

Barry E. Brenner, MD, MPH Case Western Reserve University

Niels K. Rathlev, MD University of Massachusetts Baystate School of Medicine

John B. Christensen, MD California Chapter Division of AAEM

David F.M. Brown, MD Massachusetts General Hospital/ Harvard Medical School

Robert M. Rodriguez, MD University of California, San Francisco

Amal Khalil, MBA UC Irvine Health SOM

Francis Counselman, MD Eastern Virginia Medical School

Scott Rudkin, MD, MBA University of California, Irvine

Robert W. Derlet, MD University of California, Davis

Peter Sokolove, MD University of California, San Francisco

Elena Lopez-Gusman California ACEP American College of Emergency Physicians

Samuel J. Stratton, MD, MPH Orange County, CA, EMS Agency

Shahram Lotfipour, MD, MPH UC Irvine Health SOM

Steven Gabaeff, MD Clinical Forensic Medicine

Robert Suter, DO, MHA UT Southwestern Medical Center

Brent King, MD, MMM University of Texas, Houston

Scott Zeller, MD University of California, Riverside

Edward Michelson, MD Texas Tech University

Leslie Zun, MD, MBA Chicago Medical School

William Mallon, MD California ACEP American College of Emergency Physicians University of Southern California Keck School of Medicine

Daniel J. Dire, MD University of Texas Health Sciences Center San Antonio

Mark I. Langdorf, MD, MHPE UC Irvine Health SOM

Trevor Mills, MD, MPH California Chapter Division of AAEM LSU Medical Center Aimee Moulin, MD California ACEP American College of Emergency Physicians University of California, Davis Robert Suter, DO, MHA American College of Osteopathic Emergency Physicians UT Southwestern Medical Center Jan Wachtler American College of Osteopathic Emergency Physicians

Linda S. Murphy, MLIS University of California, Irvine School of Medicine Librarian

Editorial Staff

International Editorial Board Pablo Aguilera, MD Pontificia Universidad Catolica de Chile, Santiago, Chile Anwar Al-Awadhi, MD Mubarak Al-Kabeer Hospital, Jabriya, Kuwait Arif A. Cevik, MD Eskişehir Osmangazi University Medical Center, Eskisehir, Turkey Francesco Dellacorte, MD Azienda Ospedaliera Universitaria “Maggiore della Carità,” Novara, Italy Abhinandan A.Desai, MD University of Bombay Grant Medical College, Bombay, India Gayle Galleta, MD Sørlandet Sykehus HF, Akershus Universitetssykehus, Lorenskog, Norway Vijay Gautam, MBBS University of London, London, England Wirachin Hoonpongsimanont, MD, MSBATS

Joyce Y. Kim, BS Editorial Director

Siriraj Hospital, Mahidol University, Bangkok, Thailand Rosidah Ibrahim, MD Hospital Serdang, Selangor, Malaysia

Usman Alam, BS Marketing Director

Katsuhiro Kanemaru, MD University of Miyazaki Hospital, Miyazaki, Japan

June Casey, BA Copy Editor

Amin A. Kazzi, MAAEM, MD The American University of Beirut, Beirut, Lebanon

Maryam Farooqui, BS Associate Editorial Director

Steven H. Lim, MD Changi General Hospital, Simei, Singapore

Alissa Fiorentino WestJEM Staff Liaison

Khrongwong Musikatavorn, MD King Chulalongkorn Memorial Hospital, Chulalongkorn University, Bangkok, Thailand

Nancy Hernandez, BA Associate Publishing Director

Bandr Mzahim, MD King Fahad Medical City, Riyadh, Saudi Arabia

Vincent Lam, BS Associate Publishing Director Samantha Shwe, BS Publishing Director

Jacob (Kobi) Peleg, PhD, MPH Tel-Aviv University, Tel-Aviv, Israel

Official Journal of the California Chapter of the American College of Emergency Physicians, the American College of Osteopathic Emergency Physicians, and the California Chapter of the American Academy of Emergency Medicine

Available in MEDLINE, PubMed, PubMed Central, CINAHL, SCOPUS, Google Scholar, eScholarship, Melvyl, DOAJ, EBSCO, EMBASE, Medscape, HINARI, and MDLinx Emergency Med. Members of OASPA. Editorial and Publishing Office: WestJEM/Department of Emergency Medicine, UC Irvine Health, 333 City Blvd. West, Rt 128-01, Orange, CA 92868, USA Office: 1-714-456-6389; Email: Editor@westjem.org



Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health Indexed in MEDLINE

This open access publication would not be possible without the generous and continual financial support of our society sponsors, department subscribers, and chapter subscribers. Professional Society Sponsors American College of Osteopathic Emergency Physicians California Chapter Division of American Academy of Emergency Medicine

California ACEP

Academic Department of Emergency Medicine Subscriber Allegheny Health Network Pittsburgh, PA American University of Beirut Beirut, Lebanon Arrowhead Regional Medical Center Colton, CA Baylor College of Medicine Houston, TX Baystate Medical Center/Tufts University Springfield, MA Boston Medical Center Boston, MA Brigham and Women’s Hospital Department of Emergency Medicine Boston, MA Brown University Providence, RI California State University Fresno Fresno, CA Carl R. Darnall Army Medical Center Fort Hood, TX Conemaugh Memorial Medical Center Johnstown, PA Eastern Virginia Medical School Norfolk, VA Emory University Atlanta, GA Florida Hospital Medical Center Orlando, FL Georgia Regents University Emergency Medicine Residency and Fellowship Augusta, GA Good Samaritan Hospital Medical Center West Islip, NY Henry Ford Medical Center Detroit, MI Highland Hospital Oakland, CA

INTEGRIS Health Oklahoma City, OK Kaweah Delta Health Care District Visalia, CA Kennedy University Hospitals Turnersville, NJ Kern Medical Center Bakersfield, CA Lakeland HealthCare St. Joseph, MI Lehigh Valley Hospital and Health Network Allentown, PA Louisiana State University Health Sciences Center New Orleans, LA Madigan Army Medical Center Tacoma, WA Maimonides Medical Center Brooklyn, NY Maricopa Medical Center Phoenix, AZ Massachusetts General Hospital Boston, MA Mayo Clinic College of Medicine Rochester, MN Medical College of Wisconsin Milwaukee, WI Mt. Sinai Medical Center Miami Beach, FL National University Hospital Singapore, Singapore New York Methodist Hospital Brooklyn, NY North Shore University Hospital Manhasset, NY Northwestern Medical Group Chicago, IL Ohio State University Medical Center Columbus, OH Oklahoma University Norman, OK

Penn State Milton S. Hershey Medical Center Hershey, PA Presence Resurrection Medical Center Chicago, IL Regions Hospital/ Health Partners Institute for Education and Research St. Paul, MN Robert Wood Johnson Hospital New Brunswick, NJ Southern Illinois University Carbondale, IL Stanford University Palo Alto, CA SUNY Upstate Medical Center Syracuse, NY Temple University Philadelphia, PA University Hospitals Case Medical Center Cleveland, OH University of Alabama, Birmingham Birmingham, AL University of Arizona Tucson, AZ University of California, Davis Medical Center Sacramento, CA University of California, San Francisco San Francisco, CA University of California, San Francisco, Fresno Fresno, CA University of California Irvine Orange, CA University of California, Los Angeles Los Angeles, CA University of California San Diego La Jolla, CA University of Colorado & Denver Health Denver, CO University of Florida Jacksonville, FL

State Chapter Subscriber

Arizona Chapter Division of the American Academy of Emergency Medicine California Chapter Division of the American Academy of Emergency Medicine Florida Chapter Division of the American Academy of Emergency Medicine Great Lakes Chapter Division of the American Academy of Emergency Medicine

University of Illinois at Chicago Chicago, IL University of Iowa Iowa City, IA University of Kansas Hospital Kansas City, KS University of Louisville Louisville, KY University of Maryland Baltimore, MD University of Michigan Ann Arbor, MI University of Missouri Columbia, MO University of Nebraska Medical Center Omaha, NE University of Nevada Las Vegas, NV University of Oklahoma Norman, OK University of Southern California/Keck School of Medicine Los Angeles, CA University of Tennessee, Memphis Memphis, TN University of Texas Houston, TX University of Washington Seattle, WA University of Wisconsin Hospitals and Clinics Madison, WI Virginia Commonwealth University Medical Center Richmond, VA York Hospital York, ME Wake Forest University Winston-Salem, NC Wright State University Dayton, OH

Tennessee Chapter Division of the American Academy of Emergency Medicine Uniformed Services Chapter Division of the American Academy of Emergency Medicine Virginia Chapter Division of the American Academy of Emergency Medicine

International Society Partners

Sociedad Argentina de Emergencias Thai Association for Emergency Medicine Norwegian Society for Emergency Medicine Sociedad Chileno Medicina Urgencia Emergency Medicine Association of Turkey Lebanese Academy of Emergency Medicine Mediterranean Society of Emergency Medicine To become a WestJEM departmental sponsor, waive article processing fee, receive print and copies for all faculty and electronic for faculty/residents, and free CME and faculty/fellow position advertisement space, please go to http://westjem.com/subscribe or contact: Alissa Fiorentino WestJEM Staff Liaison Phone: 1-800-884-2236 Email: sales@westjem.org



Editorial

The Evolving Definition of Education Scholarship: What the Clinician Educator Needs to Know Douglas S. Ander, MD* Jeffrey N. Love, MD†

* Emory University School of Medicine, Department of Emergency Medicine, Atlanta, Georgia † Georgetown University School of Medicine, Department of Emergency Medicine, Washington, D.C.

Section Editor: Mark I. Langdorf, MD, MHPE Submission history: Submitted December 6, 2016; Revision received December 7, 2016; Accepted December 7, 2016 Electronically published December 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.12.33326

[West J Emerg Med. 2017;18(1):1-3.]

Medical education is faced with a growing number of challenges. The playing field that most of us know and recognize has been evolving over the past decade. Many of the truths we knew as educators are no longer accurate and we are faced with educating our learners in this new environment. Accreditation standards through national organizations are more rigorous and based on attainment of competency; therefore, outcome-based education has developed as a key factor. The Accreditation Council for Graduate Medical Education (ACGME) introduced the six domains of clinical competency to the profession, and in 2009 it began a multiyear process of restructuring its accreditation system to be based on educational outcomes in these competencies.1 The Liaison Committee on Medical Education in standard 6.1 of its Functions and Structure of a Medical School states that “the faculty of a medical school define its medical education program objectives in outcome-based terms that allow the assessment of medical students’ progress in developing the competencies that the profession and the public expect of a physician.”2 Both undergraduate and graduate medical education accreditation agencies are focusing on educational outcomes. It is no longer good enough to demonstrate that your learners performed the skills; now you must document achievement of those competencies. Our clinical environment is less conducive to concentrating on education due to documentation, billing requirements, and the sheer volume

in our emergency departments.3-4 Evolving educational pedagogy focuses more on small groups and simulation and less on large-group formats. These challenges are opportunities for educators, but they require new strategies, and research is needed to determine the best approaches. The Western Journal of Emergency Medicine (WestJEM) dedicated itself two years ago to being a forum for educational scholarship. Partnering with the Council of Emergency Medicine Residency Directors (CORD) and the Clerkship Directors in Emergency Medicine (CDEM), WestJEM has developed an education supplement whose goal is to promote quality educational scholarship. All quality teaching is based on a scholarly approach that will naturally lead to educational scholarship. The definition of scholarship has evolved over the past several decades. In 1990, the release of Boyer’s report for the Carnegie Foundation provided a clearer definition of scholarship.5 Boyer described four types of scholarship: discovery, integration, application, and teaching (Table). Discovery is what we typically consider to be traditional research, using the scientific method to objectively investigate the phenomenon under study. Integration interprets the use of knowledge across disciplines: an educator reports whether their experiences are useful beyond their own discipline. The third element, Application, focuses on using the educator’s findings to aid society. Examples include organizing and publishing the results of

Table. Boyer’s classification of the four types of scholarship.
Scholarship of discovery – Original research
Scholarship of integration – Making connections across disciplines
Scholarship of application – Use of research, experience and expertise to provide a service to the greater community
Scholarship of teaching – Study of teaching and learning processes in a systematic method to optimize learning


a consensus conference, or developing simulation cases that are used to train residents, thereby improving patient care. Development of blogs and podcasts that enhance patient care would also be considered Scholarship of Application. The final type of scholarship is Teaching, when a scholarly approach is used as the basis for teaching. This means studying various teaching models and practices to optimize learning.

Glassick in 2000 expanded on Boyer’s work by defining how we should measure quality in scholarship.6 He stated that scholarship must be characterized by clear goals, adequate preparation, appropriate methods, outstanding results, effective communication, and a reflective critique. It is vitally important that the distinction between teaching and scholarly teaching be clear. That distinction was clarified by Shulman, who stated that scholarly work must meet these criteria:
• The work must be made public.
• The work must be available for peer review and critique according to accepted standards.
• The work must be able to be reproduced and built on by other scholars.7

Educators must keep in mind that any teaching can be considered scholarship if the endeavor is approached systematically and proactively in a scholarly manner. The teaching that is done on a regular basis or new curricula may be studied as research (Scholarship of Discovery) or approached as educational scholarship (Scholarship of Teaching). The key is applying Glassick’s standards for scholarship and determining whether the test of scholarship proposed by Shulman is demonstrated in the work you are pursuing.

Most recently, a Consensus Conference on Educational Scholarship was convened by the AAMC-GEA in 2006.8 One aim of the conference was to reaffirm a previously identified group of five educational activity categories commonly identified as scholarship within educators’ portfolios, beyond education research. Through an iterative process, conference participants developed standards for these five educational activities consistent with principles of scholarship, with the goal of helping promotion committees evaluate the value of educators’ contributions. The five educational activities include the following:
• Teaching
• Curriculum development
• Advising and mentoring
• Education leadership and administration
• Learner assessment

The documentation standard for each category consists of two components: (1) educational excellence in terms of quantity and quality; and (2) engagement with the education community, documented by how the educator’s work was informed by current knowledge in the field.

The goal for educators is turning their educational programming and responsibilities into educational


scholarship. The first step is to develop a working understanding of the subject at hand by reviewing the available literature. This provides a conceptual framework for the work that follows.9 One particularly useful approach is to consider the following three-phase model. Phase 1: Clearly describe what you want to do. What is the educational activity? Phase 2: Collect data to improve what you do or to prove your hypothesis; this is the scholarly approach. Finally, Phase 3: Share your findings to improve what the rest of the community does; this is scholarship. We propose that all educators apply these principles to all their teaching endeavors. Our challenges provide us with the opportunity to be innovative and to apply a scholarly approach as we tackle our new educational environment. When your efforts are approached using Glassick’s definitions of scholarship and meet the test of true scholarship, you will be adding to the greater body of literature that will improve our learners’ educational experience. We encourage everyone to approach all their educational programs using a scholarly approach. Every time you are teaching or developing a curriculum, be scholarly; not only will the result be pedagogically sound, it will also be a basis for educational scholarship.

Address for Correspondence: Douglas S. Ander, MD, 49 Jesse Hill Jr. Dr., Atlanta, GA 30303. Email: dander@emory.edu. Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none. Copyright: © 2016 Ander et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/ licenses/by/4.0/

REFERENCES
1. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system – rationale and benefits. NEJM. 2012;366:1051-6.
2. American Association of Medical Colleges (AAMC) Liaison Committee on Medical Education (LCME). Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree. April 2015 (Effective July 2016).
3. Wei G, Arya R, Ritz ZT, et al. How does emergency department crowding affect medical student test scores and clerkship evaluations? West J Emerg Med. 2015;16:913-8.
4. Mahler SA, McCartney JR, Swoboda TK, et al. The impact of emergency department overcrowding on resident education. J Emerg Med. 2012;42:69-73.
5. Boyer EL. Scholarship reconsidered – priorities of the professoriate. The Carnegie Foundation for the Advancement of Teaching. 1990.
6. Glassick CE. Boyer’s expanded definitions of scholarship, the standards for assessing scholarship, and the elusiveness of the scholarship of teaching. Acad Med. 2000;75:877-80.
7. Hutchings P, Shulman LS. The scholarship of teaching: new elaborations, new developments. Change. 1999;31(5):10-15.
8. Simpson D, Fincher RM, Hafler JP, et al. Advancing educators and education by defining the components and evidence associated with educational scholarship. Med Educ. 2007;41:1002-9.
9. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43:312-9.


Editorial

Clinical Reasoning: Defining It, Teaching It, Assessing It, Studying It Larry D. Gruppen, PhD

University of Michigan Medical School, Department of Learning Health Sciences, Ann Arbor, Michigan

Section Editor: Jeffrey Love, MD Submission history: Submitted November 18, 2016; Accepted November 18, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.11.33191

[West J Emerg Med. 2017;18(1):4-7.]

Clinical reasoning is a perennial focus of medical education, performance assessment, and study. It might be argued to be the defining characteristic of the profession. It is, however, a very complex and multi-faceted phenomenon that can create considerable confusion and cross-communication. Its importance makes it worthwhile to consider some of those complexities.

Defining it
Like the fable of the blind men and the elephant, each of whom, feeling a different part of the elephant, described it in very different ways, clinical reasoning is a vast, complex construct that is described and used in different ways by different people. There is no generally accepted definition of clinical reasoning and, indeed, many articles about clinical reasoning never define it explicitly; it is often assumed as a universally understood construct. For the present commentary, we can describe the clinical reasoning process as including the physician’s integration of her own (biomedical and clinical) knowledge with initial patient information to form a case representation of the problem. The physician uses this problem representation to guide the acquisition of additional information and then, on the basis of this information, revises the problem representation. She repeats the information gathering – representation revision cycle until she reaches a threshold of confidence in that representation to support a final diagnosis and/or management actions.1 This very broad description subsumes numerous additional phenomena and questions: how is knowledge organized and accessed, how does expertise manifest itself in clinical reasoning, how are alternative representations evaluated, and so forth.

It is readily apparent to anyone reading the literature that “clinical reasoning” is used for a considerable variety of activities. Indeed, a skeptic may well ask “what is NOT clinical reasoning?” If the term comes to encompass any physician thinking about clinical problems, the concept becomes so expansive as to risk becoming useless as a guide to teaching, assessment and study. It is unlikely that we will achieve a clean taxonomy of clinical reasoning activities anytime soon, so in the meantime, it is important for anyone trying to teach, assess, or study clinical reasoning to recognize the complexity of the terms and be explicit about their operational definition.

In spite of this conceptual sprawl, there are still significant aspects of clinical reasoning that are largely ignored in the literature. Because it is often defined in terms of cognition, such things as context, affect, and institutional factors have rarely been examined for relevance to clinical reasoning. There is, however, a growing awareness of the importance of context and the larger system in which clinical reasoning takes place.2 Thinking about clinical reasoning as if it were isolated in the physician’s head is no longer viable. Another aspect of clinical reasoning that has suffered significant neglect is management – attention is primarily devoted to diagnostic reasoning, not therapeutic reasoning. The preoccupation with diagnostic tasks is understandable. There is the prospect of a “correct” diagnosis, and the attraction of being able to classify reasoning as successful or unsuccessful is undeniable. If one can be “scored” as right or wrong, all the reasoning steps that led up to that answer can be examined in the same right-wrong light. In contrast, therapy is much more difficult to classify as “right” and “wrong.” It depends on many variables that can be combined in numerous ways and it is often proven right or wrong only in hindsight. Individual physicians can make plausible arguments for very different management alternatives. It is much more a “matter of opinion” or judgment than a universally correct solution.

Teaching it
Considerable effort goes into teaching clinical reasoning. Sometimes this is the focus of specific courses, but it is a key goal of almost any course, clerkship, or clinical rotation.


Numerous innovations have been developed for teaching various aspects of clinical reasoning using carefully designed and selected cases, mnemonics for gathering information, identification of critical information to discriminate among diagnostic alternatives, appropriate methods for judging and managing uncertainty, de-biasing methods, and the like. These interventions are often designed to address common problems that learners demonstrate in clinical reasoning: inadequate knowledge, faulty data gathering, faulty data processing, or faulty metacognition.3 A risk in all of these efforts is that we come to believe we are teaching “clinical reasoning” as a generalizable skill that can be applied to any clinical problem. Unfortunately, this fond hope has little empirical support. From the earliest studies of medical problem solving4,5 to the present, the most reproducible result is that clinical reasoning performance is highly content (and context) specific. Solving a clinical problem in one discipline holds little predictive value for how one will do with a problem in another area. Even in problems with the same diagnosis, there is little consistency in performance. It is apparent that “reasoning skills” or “critical thinking” do not go far in helping develop clinical reasoning. Instead of general processes, it is knowledge that is key to performance. Indeed, most educational interventions that focus on clinical reasoning are also (perhaps implicitly) conveying knowledge in critical areas of medicine and it is this knowledge acquisition that fosters better performance. At the extreme, this can be seen in the development of pattern recognition, in which knowledge of common patterns and relationships among information lead to recognition of disease possibilities WITHOUT conscious reasoning. Indeed, some do not consider “mere” pattern recognition as a manifestation of clinical reasoning simply because it bypasses the conscious, effortful thought processes and relies on automated cognitive processes.6 Clinical reasoning extends well into non-conscious as well as conscious processes.

Assessing it
Numerous methods have been developed to assess clinical reasoning – or some part of it. A few examples are provided in the table. Each method addresses a component of the larger clinical reasoning process, often in the form of focusing on a particular sub-task, such as information gathering, adjusting diagnostic hypotheses for new information, using basic science knowledge to reason through an electrolyte problem, or prioritizing diagnostic alternatives. Each assessment method makes assumptions about the underlying construct (clinical reasoning) that must be considered before making general conclusions about an examinee’s competence. Like teaching clinical reasoning, assessing it confronts the vexing phenomenon of content specificity. Even more challenging is the growing recognition that, even within the same content domain, the context of the task influences performance. Context includes psychological variables, such as fatigue and stress or immediately preceding patient experiences; social variables, such as team relationships and support; and institutional/environmental factors, such as inpatient vs. outpatient setting.7

Studying it
As might be predicted from the centrality of clinical reasoning, there is a substantial body of research associated with it. This research can be divided into two broad perspectives – a descriptive perspective that focuses on the actual cognitive activities and actions of physicians while engaged in clinical reasoning, and a prescriptive perspective that defines optimal, rational models for reasoning and investigates how and to what extent physicians deviate from these normative models. The descriptive perspective has its roots in cognitive psychology and began as a special case of general problem-solving studies. It focuses on clinical reasoning as a domain in which the problems are complex and there is a clear role for expertise. The critical role of knowledge distinguishes medicine from many other domains of problem-solving research, such as games, mathematics or logic, in which a relatively small number of rules were adequate for correct solutions. Descriptive studies often highlight four research themes: knowledge organization, cognitive processes, problem structure, and expertise characteristics. Knowledge organization is a lynchpin of research on cognition generally and this interest extends to medicine as well. Theories of knowledge organization posit a wide range of explanatory constructs (prototypes, schemas, scripts, mental models, networks, etc.) and address questions about knowledge acquisition, retrieval and transfer. Many of these cognitive theories have concentrated on the use of knowledge rather than its acquisition, but educational theories of how knowledge is best acquired are also common in medical education. A great deal of the research on clinical reasoning addresses the various cognitive processes involved. For example, foundational processes such as perception turn out to be essential to expertise. Experts “see” the world differently from novices by virtue of sophisticated “pattern recognition” capabilities that effectively move some of their knowledge to the unconscious, rapid, and automated process of perception. Attention is another cognitive process in which clinical expertise has an advantage in focusing on relevant information and not getting distracted by irrelevancies. Information gathering and evaluation are other critical cognitive processes that drive many studies. Comprehending and building a cognitive representation of a clinical problem are more advanced cognitive processes that are also heavily influenced by underlying knowledge. There are other cognitive processes and numerous theories that inform and



Table. Methods of assessing clinical reasoning.
Chart stimulated recall – Using the patient chart generated by the clinician, probe for recall of the reasoning process in connection with key elements of the chart.
Concept map – Graphic representation of knowledge constructs and relationships among them (organization). Used for both teaching and assessment.
Direct observation – Observation in a clinical reasoning task and judgment of performance against specified criteria.
Extended matching, multiple choice questions – Select the best response from a restricted number of alternative answers. Most commonly used to assess knowledge, but amenable to more sophisticated tasks.
Patient management problems – A structured patient case that allows flexible selection of clinical information and the development of a dynamic diagnostic or management decision.
Post-encounter note – Written summary of the patient case, relevant information, diagnosis, and treatment plan.
Script concordance tests – Assesses the impact of new information on a diagnostic hypothesis or the probability of pursuing a specified action.
Simulation, standardized patients – Structured patient case with a trained actor that requires the learner to do a history and physical examination and generate a diagnostic solution.
Think aloud, oral exam – Verbalize one’s reasoning process as one works through a clinical case or specified problem, with or without prompts and probes from an examiner.

stimulate a wealth of research questions. The prescriptive perspective on clinical reasoning has its roots in computer science, economics, and probability theory. These disciplines provide the normative models for dealing with uncertainty, modeling complex decision alternatives, and balancing competing values. In comparison to these normative models, people (including physicians) are often irrational, illogical, and badly flawed reasoners. They regularly violate many of these normative principles and make predictable errors (biases) because they use simple shortcuts (i.e., heuristics). The flawed (from the prescriptive perspective) nature of clinical reasoning leads to two kinds of research. One is the investigation of the conditions under which physician reasoning is more or less problematic and understanding how these errors and biases emerge. Often, the objective is to improve reasoning through educational interventions (e.g., de-biasing techniques). The second is to improve reasoning through decision support tools or computer-based programs that relieve physicians of many of the components of reasoning that produce errors. Decision support tools and reasoning models may be diagnostic or therapeutic in focus and are promoted as ways to reduce the undesirable variability in physician decisions that arise from faulty and inconsistent reasoning. In summary, clinical reasoning is something of a “god term,” which supersedes and dominates many subordinate terms and concepts.8 Its “power” leads to rather indiscriminate and unthinking use which, in turn, contributes to confusion

and conflicting discussions of the nature and function of clinical reasoning. If nothing else, I hope this commentary contributes to recognizing that we need to be careful about what we mean when we talk about clinical reasoning. We need to be more precise in defining what aspect of clinical reasoning we are interested in. We also need to use theory to help frame our thinking about this complex construct. Arguments about which is the “right” theory are moot – there is no one right way to think about clinical reasoning, but all will benefit from complementary perspectives that each contribute a piece to the greater puzzle.

Address for Correspondence: Larry D. Gruppen, PhD, University of Michigan Medical School, Department of Learning Health Sciences, 500 S State St, Ann Arbor, MI 48109. Email: lgruppen@ med.umich.edu. Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none. Copyright: © 2016 Gruppen. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/ licenses/by/4.0/


REFERENCES
1. Gruppen LD, Frohna AZ. Clinical reasoning. In: Norman GR, van der Vleuten CPM, Newble DI, eds. International Handbook of Research in Medical Education. Dordrecht, The Netherlands: Kluwer Academic Publishers; 2002:205-30.
2. Durning SJ, Artino ARJ. Situativity theory: a perspective on how participants and the environment can interact: AMEE Guide no. 52. Med Teach. 2011;33:188-99.
3. Cutrer WB, Sullivan WM, Fleming AE. Educational strategies for improving clinical reasoning. Curr Probl Pediatr Adolesc Health Care. 2013;43(9):248-57.
4. Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: a ten-year retrospective. Eval Health Prof. 1990;13:5-36.
5. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
6. Kahneman D. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux; 2011.
7. Blanchard K, Blanchard M, Carew D, et al. Situational Leadership II: the integrating concept. In: Ken Blanchard Associates, ed. Leading at a Higher Level. Upper Saddle River, NJ: Prentice Hall; 2007.
8. Lingard LA. What we see and don’t see when we look at “competence”: notes on a god term. Adv Health Sci Educ Theory Pract. 2009;14(5):625-8.


Editorial

Proper Applications for Surveys as a Study Methodology Andrew W. Phillips, MD, MEd

Stanford University, Department of Anesthesia, Division of Critical Care, Stanford, California

Section Editor: Mark I. Langdorf, MD, MHPE Submission history: Submitted August 7, 2016; Revision received August 11, 2016; Accepted November 22, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.11.32000

[West J Emerg Med. 2017;18(1):8-11.]

A survey instrument is any series of pre-defined questions intended to collect information from people, whether in person, by Internet, or any other media.1,2 Surveys are ubiquitous in health professions education research, used in approximately half of recently published articles,3 likely because of their low cost, relative speed, and (often misguided) perception that they are simple to use. A survey instrument is merely the tool used for survey methodology, which encompasses the entire application of the survey instrument, such as selecting a sampling frame, maximizing the response rate, and accounting for nonresponse bias.4 The distinction is important because survey methodology is a research method like any of the various other methodology options (e.g. observational cohorts and randomized controlled trials), and there are specific situations for which a particular method is indicated or contraindicated. The goal of this article is to provide guidance to researchers about when a survey is the appropriate methodology for a given research question. The importance of methodology choice is second only to choosing the primary research question itself. For comprehensive survey methodology reviews, readers are encouraged to review dedicated references.1,2,5 The rest of this article will address the fundamental question: When should I use a new survey?

WHEN TO USE A NEW SURVEY (INDICATIONS)
The best use of survey methodology is to investigate human phenomena, such as emotions and opinions.2 These are data that are neither directly observable nor available in documents. Moreover, a new survey instrument is only indicated when a prior instrument does not exist or is determined empirically to have insufficient validity and reliability evidence for the sampling frame of interest.1,2 When properly constructed, a survey—regardless of topic and whether exploring an emotion or opinion—has the equivalent rigor of a psychometric instrument.5,6 A psychometric instrument can even be used as a survey to explore emotion. For example, the Maslach Burnout Inventory (MBI) was created to address the novel (at the time) construct of burnout.7 As a construct, burnout is a cohesive idea, explained by supportive ideas (subscales that represent domains), but not fully explained by observable data. Burnout is a human quality and so must be addressed by a survey. Similarly, an opinion is a human quality and must be addressed by a survey, such as a preference for a product or teaching method. It is worth stressing that opinion surveys also require the same rigor as psychometric instruments.

WHEN NOT TO USE A SURVEY (CONTRAINDICATIONS)
(Relative) Contraindication #1: Observable or Recorded Data Already Exist.
Using a survey when observable or recorded data exist is a relative contraindication because—although direct observation or a primary source is the most accurate method—sometimes a survey is the only practical way to obtain the data. A survey, however, should be the last resort because it is subject to interpretation and recall bias. For example, daily activity (e.g. amount of time spent with patients versus a computer) is more accurately recorded by a third-party observer than by self-reporting on surveys.8 If direct measurement is not a reasonable possibility, then frequent journal entries, which could be considered a repeated-measures survey method, are the next best option. Circulation has a good decision tree for researchers studying physical activity, and the principles can be applied to any difficult-to-measure activity.9 Another example of observable data is how much students learned. Actual learning gains (i.e. learning something new) are not equivalent to learners’ opinions of their learning gains.10-12 Learners’ opinions are a real entity and sometimes important for a study question. However, researchers should not substitute a survey of learners’ opinions for tangibly measurable learning gains (e.g. test score improvements or patient outcomes) if the study question is about actual learning gains.


Survey methodology can also be used when it is unreasonable to obtain the primary records themselves. For example, a researcher may ask an office of medical education to complete a survey with data such as the total number of residents, how their elective time is used, and how many residents required remediation. Although obtaining the primary documents for each of these questions would be best, it would likely be impractical to obtain the information from all of the different specialties. Thus, the graduate medical education office can complete the survey instrument for the researcher. However, it is important that the survey is completed using the records, not an individual's recollection. It bears repeating that a survey should be the last resort for observable and recorded data. One of the most common misuses of survey methodology is to obtain observable and recorded data.

Alternative Approach: Use Direct Observation or Records When Possible.
Researchers should carefully evaluate the most accurate way to measure the variable(s) of interest. Offices of medical education or the Association of American Medical Colleges, for example, can be primary sources for population data. Using the most accurate source for different questions within a study may require combining data from an external source and data from a survey.

Example: Straus CM et al. Medical student radiology education: summary and recommendations from a national survey of medical school and radiology department leadership. J Am Coll Radiol. 2014;11(6):606-610.13

Note how Straus and colleagues surveyed radiology department chairs for opinions but requested numerical information (e.g. number of students matching in radiology each year) from records held by the offices of medical education.13

Contraindication #2: A Pre-Existing Survey Exists.
Often a similar, if not exactly the same, concept has been surveyed by other researchers. Although the primary research question may warrant a survey methodology, a suitable existing survey is a contraindication to creating and applying a new one.* When we as researchers do not use pre-existing surveys that are available, we limit greater understanding of the concept because findings cannot be combined, such as in a meta-analysis.14 The Figure contains a list of resources for finding pre-existing survey instruments.

Alternative Approach
An early search for pre-existing surveys is essential if a researcher plans to use survey methodology. Use the exact same survey, word for word, if possible, and investigate reliability and validity evidence in the new cohort of interest even when the survey is unchanged.2,15

Example: Galán F et al. Burnout risk in medical students in Spain using the Maslach Burnout Inventory-Student Survey. Int Arch Occup Environ Health. 2011;84(4):453-9.16

Galán and colleagues defend their need to alter individual words for what they believed to be a unique cohort and successfully re-demonstrated reliability and validity evidence before using the survey.

Contraindication #3: The Concept Is Ill-Defined.
Survey methods range from a researcher personally asking respondents each question, with great ability to further explore respondent answers, to third-party questionnaires, without any ability to explore or clarify respondent answers. It is important to recognize the differences in data obtained from each survey format and to apply the methodology appropriately. An ill-defined concept is a contraindication to using a survey, and qualitative grounded-theory interviews or ethnography should be strongly considered instead. This especially applies to designing potential responses for survey questions.2 Researchers who use a questionnaire for a poorly defined concept run the risk of omitting options that respondents would have selected had they been available, because a questionnaire limits response options.† The results become artificially narrow and do not adequately represent the sampling frame.

Alternative Approach
A questionnaire limits response options and should only be used when a concept is understood well enough to supply a full range of response options. Researchers should start with qualitative interviews or focus groups17 to explore a wide range of concept interpretations and opinions.2

Example: McLeod PJ et al. Using focus groups to design a valid questionnaire. Acad Med. 2000;75(6):671.18

* A more in-depth explanation of using pre-existing surveys is beyond the scope of this article, but researchers should be aware that even pre-existing surveys must be re-evaluated for validity and reliability evidence. Lack of validity and reliability evidence for a pre-existing survey when applied to a new sampling frame is an indication to edit the pre-existing survey or create a new one.
† Including "other" with a narrative response as a response option is not the best way, and usually still inadequate, to capture poorly understood concepts.



Mental Measurements Yearbook (MMY). Publisher: The Buros Institute for Mental Measurements, Lincoln, NE.
Health & Psychosocial Instruments (HaPI). Publisher: Behavioral Measurement Database Services, Pittsburgh, PA.
Directory of Unpublished Experimental Measures. Publisher: American Psychological Association, Dubuque, IA. Editors: Bert A. Goldman & David F. Mitchell. Most current volume: 9 (2007).
Figure. Databases of previously established survey instruments.

The authors in this example set out to explore a concept that had been previously overlooked. Since no prior data existed, they started with focus groups to first define the construct, then built a questionnaire to explore the construct in the cohort of interest.18

Contraindication #4: The Sampling Frame Is Not Qualified.
The accuracy of a survey is only as strong as the accuracy that each respondent can provide. Although a survey method may be indicated, it may be contraindicated in a certain sampling frame. For example, the meaning of learner evaluations of faculty has long been questioned. Are learners qualified to judge instructors? Are instructor evaluations by learners meaningful?19,20 Researchers who assert that learners are not qualified to evaluate instructors would also assert that a class survey about an instructor's abilities would be inappropriate (although this practice is ubiquitous). Another example of an unqualified sampling frame is when speculative questions are asked, such as, "What do your peers think?" Although the context is different, the underlying principle remains the same: respondents are unqualified to present data on what others may think.

Alternative Approach
Consider the qualifications of a given sampling frame for the particular question of interest. If the primary research question requires the respondents to have expertise, consider a sampling frame with that specific expertise or use a different study methodology, such as observation or testing.

Example: Grover PL. Evaluation of instructional skills of medical teachers: the participant observer in the medical school. Med Educ. 1980;14:12-15.21

Grover introduces the idea of a trained third-party observer to evaluate medical student instructors. Depending on the primary research question (opinion of lecturing abilities versus learning outcomes), student examinations may be more accurate as well.

CONCLUSION
Survey methodology is an important medical education research tool but should mainly be used to characterize unobservable human phenomena such as emotions and opinions. Researchers should use methods other than surveys to gather observable data whenever possible. Moreover, many research questions are well suited to mixed methods that include a survey in addition to other data collection methods.

Address for Correspondence: Andrew W. Phillips, MD, MEd, Stanford University, Department of Anesthesia, Division of Critical Care, 450 Serra Mall, Stanford, CA 94305. Email: warejko1@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Phillips. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Dillman DA, ed. Mail and Internet Surveys. 2nd ed. New York: John Wiley and Sons, Inc; 2000.
2. Artino AR, La Rochelle JS, Dezee KJ, et al. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36:463-74.
3. Phillips AW, Friedman B, Utrankar A, et al. Surveys of health professions trainees: prevalence, response rates and predictive factors to guide researchers. Acad Med. 2016. Epub ahead of print.
4. Fink AG. How to Conduct Surveys: A Step-by-Step Guide. Thousand Oaks, CA: Sage; 2009.
5. Streiner DL, Norman GR, Cairney J. Health Measurement Scales. Oxford: OUP; 2014.
6. Litwin MS. How to Measure Survey Reliability and Validity. Thousand Oaks, CA: SAGE; 1995.
7. Maslach C, Jackson SE, Leiter MP. MBI Maslach Burnout Inventory. 3rd ed. Palo Alto, CA: Consulting Psychologists Press; 1996.
8. Burke TA, McKee JR, Wilson HC, et al. A comparison of time-and-motion and self-reporting methods of work measurement. J Nurs Adm. 2000;30(3):118-25.
9. Strath SJ, Kaminsky LA, Ainsworth BE, et al. Guide to the assessment of physical activity: clinical and research applications. Circulation. 2013;128(20):2259-79.
10. Benbunan-Fich R. Is self-reported learning a proxy metric for learning? Perspectives from the information systems literature. Acad Manag Learn Edu. 2010;9(2):321-8.
11. Porter SR. Self-reported learning gains: a theory and test of college student survey response. Res High Educ. 2013;54:201.
12. Fitzgerald JT, White CB, Gruppen LD. A longitudinal study of self-assessment accuracy. Med Educ. 2003;37:645-9.
13. Straus CM, Webb EM, Kondo KL, et al. Medical student radiology education: summary and recommendations from a national survey of medical school and radiology department leadership. J Am Coll Radiol. 2014;11(6):606-10.
14. Rao SR, Graubard BI, Schmid CH, et al. Meta-analysis of survey data: application to health services research. Health Serv Outcomes Res Method. 2008;8(2):98-114.
15. Sullivan GM. A primer on the validity of assessment instruments. J Grad Med Educ. 2011;3(2):119.
16. Galán F, Sanmartín A, Polo J, et al. Burnout risk in medical students in Spain using the Maslach Burnout Inventory-Student Survey. Int Arch Occup Environ Health. 2011;84(4):453-9.
17. Stalmeijer RE, McNaughton N, Van Mook WNKA. Using focus groups in medical education research: AMEE Guide No. 91. Med Teach. 2014;36(11):923-39.
18. McLeod PJ, Meagher TW, Steinert Y, et al. Using focus groups to design a valid questionnaire. Acad Med. 2000;75(6):671.
19. Rodin M, Rodin B. Student evaluations of teachers. Science. 1972;177(4055):1164-6.
20. Guest AR, Roubidoux MA, Blane CE, et al. Limitations of student evaluations of curriculum. Acad Radiol. 1999;6(4):229-35.
21. Grover PL. Evaluation of instructional skills of medical teachers: the participant observer in the medical school. Med Educ. 1980;14(1):12-15.



Brief Educational Advances

52 Procedures in 52 Weeks: An Innovative Curriculum for Emergency Medicine Residents
Ryan Walsh, MD; John Bass, MD; Chad Gorbatkin, MD; Jason Bothwell, MD

Madigan Army Medical Center, Department of Emergency Medicine, Tacoma, Washington

Section Editor: Sally A. Santen, MD, PhD Submission history: Submitted June 14, 2016; Revision received August 16, 2016; Accepted September 30, 2016 Electronically published November 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.9.31254

[West J Emerg Med. 2017;18(1):12-13.]

BACKGROUND
The specialty of emergency medicine (EM) requires that providers are as competent in rare procedures as they are in common ones.1 Emergency physicians (EPs) need to be able to perform an array of procedures, many of them life-saving, often on very short notice. The Accreditation Council for Graduate Medical Education (ACGME) has identified 18 "key procedures" as training requirements for EM residency programs.2,3 However, the list of requisite procedures does not even approach the number of procedures that are encountered in the scope of EM practice. For example, lateral canthotomy, escharotomy, and resuscitative thoracotomy are potentially eye-, limb-, and life-saving procedures. All are infrequently encountered, within the scope of an EP's practice, and yet not required by the ACGME. This creates a challenge for EM educators, who remain charged with graduating competent physicians who have the mental and technical expertise to perform such procedures.4,5

Another challenge faced when teaching and, more importantly, performing rare procedures is logistics. The required equipment is often expensive, stored in small quantities, and sometimes difficult to find. Equipment setup and application is a perishable skill, and rehearsal is essential to ensure the success of the procedure when it is finally needed.

OBJECTIVES
Our objective was to implement a high-yield weekly training session that effectively teaches important emergency department procedures. We wanted our residents not only to become competent in the mental and technical aspects of performing each procedure, but also in the logistics of finding and assembling the required materials within our own department. Additionally, for prepackaged and sterile surgical sets, we wanted our residents to learn what was contained within the sets so that in an emergent scenario they would already be familiar with the contents. Lastly, we wanted to use a method of instruction that best used our available resources outside of the traditional classroom.

CURRICULAR DESIGN
After polling all the residents and attendings at our program, we compiled a list of 52 EM procedures. These ranged from emergent life-saving procedures such as cricothyroidotomy and pediatric jet ventilation to ring removal and nerve blocks. Once the list was compiled, we searched the Internet and located high-quality instructional videos for each procedure (Supplement 1). We then published this list of 52 procedural videos to our residency website and implemented it into our morning report curriculum, which historically consisted of an oral boards case or a simulation case during morning shift change. For this "procedure morning report," a second-year EM resident is paired with an attending to teach the assigned procedure every Friday. The instructors guide learners through the assigned procedure by first showing the instructional video, and then pairing it with hands-on training using a variety of simulated and tissue models, home-made training devices, or other necessary equipment. By design, the training takes place in our ED rather than in the classroom. Instructors demonstrate the location of all equipment, and actually open the kits to show the learners all of the component parts and how to use them. Participants are the off-going night shift and the oncoming day shift, for a total of eight EM residents, plus any rotating residents and students. Nurses also participate when the procedure may require their assistance. The training typically lasts approximately 30 minutes, and always includes hands-on participation by the learners.


IMPACT/EFFECTIVENESS
After two years of implementation, we surveyed the current 36 EM residents regarding their experiences with the curriculum. Seventy-five percent of our residents (27/36) responded to the survey. Eighty-five percent (23/27) of respondents found the published instructional videos easily accessible and "very helpful." All respondents reported increased competence and confidence following the instruction. Among second- and third-year participants, all of whom had been both instructors and learners, 80% found the role of instructor to be "very helpful in skill mastery." While not specifically included in our survey, other noted benefits include a teaching role for second-year residents and the integration of nursing into our curriculum. With over two years of experience now, we have found this to be a worthwhile addition to our curriculum that can be easily implemented in other EM residencies. One limitation of this curriculum is the cost of supplies. To mitigate this in our department, we try to use expired products whenever possible and save training sets for future use. Another limitation is that the effectiveness of the curriculum is based on residents' subjective feedback. Future research could evaluate procedural performance before and after implementation.

Address for Correspondence: Ryan Walsh, MD, Madigan Army Medical Center, Department of Emergency Medicine, Bldg 9040 Fitzsimmons Ave, Tacoma, WA 98431. Email: rdub88@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Walsh et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Wang E, Quinones J, Fitch MT, et al. Developing technical expertise in emergency medicine: the role of simulation in procedural skill acquisition. Acad Emerg Med. 2008;15:1046-57.
2. Review Committee for Emergency Medicine, Accreditation Council for Graduate Medical Education. Frequently Asked Questions: Emergency Medicine. 2015;7-8.
3. Accreditation Council for Graduate Medical Education. ACGME Program Requirements for Graduate Medical Education in Emergency Medicine. 2015;16.
4. Druck J, Valley M, Lowenstein S. Procedural skills training during emergency medicine residency: are we teaching the right things? West J Emerg Med. 2009;10(3):152-6.
5. Grall K, Stoneking L, et al. An innovative longitudinal curriculum to increase emergency medicine residents' exposure to rarely encountered and technically challenging procedures. Adv Med Educ Pract. 2014;5:229-36.


Educational Advances

A Novel Curriculum to Optimize Emergency Medicine Residents' Exposure to Pediatrics
Chris Merritt, MD, MPH*; Sarah A. Gaines, MD†; Jessica Smith, MD†; Sally A. Santen, MD, PhD‡

*Alpert Medical School of Brown University, Rhode Island Hospital/Hasbro Children’s Hospital, Department of Emergency Medicine, Section of Pediatric Emergency Medicine, Providence, Rhode Island † Alpert Medical School of Brown University, Rhode Island Hospital, Department of Emergency Medicine, Providence, Rhode Island ‡ University of Michigan Medical School, Department of Emergency Medicine, Ann Arbor, Michigan

Section Editor: David P. Way, MEd Submission history: Submitted June 14, 2016; Revision received September 8, 2016; Accepted October 10, 2016 Electronically published November 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31248

[West J Emerg Med. 2017;18(1):14-19.]

BACKGROUND
Emergency medicine (EM) residency graduates have a profound impact on the quality of pediatric emergency care. Consequently, residency training programs must provide broad clinical training in both adult and pediatric emergency medicine (PEM).1-6 The teaching of pediatrics to EM residents has historically included an inpatient "ward" rotation.7 Inpatient rotations are intended to help EM residents understand the experience of the hospitalized child, yet the educational benefit for EM trainees is not valued as highly as educational experiences in the setting of the emergency department (ED). Designing a high-yield pediatric experience that allows EM residents to understand the progression of common illnesses and anticipate the medical and psychosocial needs of hospitalized children remains a challenge. Awareness of this need led us to develop a novel curriculum in pediatrics for EM residents: Pediatric Emergency Medicine with Follow-Up (PEMFU). The goal of this curriculum is to prepare EM residents to provide pediatric emergency care via a situated experience in pediatric medicine.

OBJECTIVE
Our goal is to describe the development of a novel curriculum for teaching and learning pediatric medicine in an EM residency program, based on an assessment of need and structured around the conceptual framework of situated learning. We also describe the implementation of this curriculum within a single EM residency and report early outcomes.

CURRICULAR DESIGN
Needs Assessment - The Case for a New Approach
After feedback from educators, residents, and medical directorship suggested that educational needs were unmet under the traditional pediatric ward rotation format, the ward rotation was withdrawn from our residency curriculum. In its stead, PEMFU was designed using the six-step curriculum development framework developed by Kern, beginning with general and targeted needs assessments.8 Through focus groups, anonymous surveys of current and former residents, and numerous discussions, we identified five key aspects of the inpatient pediatric rotation important to the development of pediatric competence within EM residency that may be unavailable in other venues.
• Assessed Need 1: Inpatient rotations foster interactions with patients and families, and experience with the pediatric physical examination. The new curriculum (PEMFU) would need to foster developmentally-appropriate and family-centered practice via experience caring for children and families.
• Assessed Need 2: EM residents must understand and anticipate the needs of hospitalized pediatric patients. PEMFU would need to allow trainees to understand the continuum of pediatric illness/injury and the evidence-based management and disposition of hospitalized children.
• Assessed Need 3: EM residents risk losing the collegial relationships formed by working side by side with inpatient providers and consultants. At a time when more hospitalized patients make their way through the ED, PEMFU would need to address this unintended social-professional consequence and continue to allow EM residents to develop working relationships with other healthcare professionals.
• Assessed Need 4: There should be an emphasis on the critical importance of lifelong, self-directed learning to the practice of medicine. PEMFU would need to allow EM residents to reinforce the skills critical to lifelong self-directed learning in EM.
• Assessed Need 5: While an understanding of inpatient pediatrics remains important to the educational development of EM residents, adult learners learn better by participating in a social and professional setting that more closely mirrors their future practice. This concept of situated learning9 was vital to the development of the PEMFU curriculum. PEMFU would need to allow EM residents to develop skill and expertise in pediatric care in the context of the practice of emergency medicine.

Learning objectives, mapped to the needs assessment, were developed to guide curriculum development under the aegis of a single broad goal of learning to provide excellent pediatric emergency care (Table 1). A logic model was developed to guide the education strategies employed and to direct program evaluation (Appendix).10 PEMFU was implemented in the 2014-2015 academic year; 12 second-year resident learners rotated in the pilot season. Eight components comprise the curriculum (Table 2). The bulk of the clinical experience takes place in caring for patients in the ED, the social-professional environment most meaningful to future emergency physicians.11-14 Core to this innovation is that residents continue to follow the course of all patients they admit to the hospital. Residents regularly visit their patients and families in the hospital, read daily progress and consultation notes, follow up on test results, and interact with the inpatient teams to discuss the ongoing care of these admitted patients. Free of the administrative burden of the ward teams, EM residents observe the longitudinal course of illness and the experiences of hospitalized children and families. Additionally, residents round once weekly on the wards with a PEM subspecialist experienced in inpatient medicine, visiting the bedside and engaging in discussion of diagnosis and treatment as well as the psychosocial experiences of hospitalization. Residents follow up by phone with a smaller subset of discharged patients. Dedicated time has been built into residents' schedules to accommodate these follow-up activities.

IMPACT AND EFFECTIVENESS
As part of programmatic assessment,15 we employed a series of assessment components from a variety of raters to provide feedback to learners and guide continuous quality improvement for the experience.

Clinical Assessment: Clinical faculty who supervise EM residents in the ED assess resident performance and provide formative feedback, recording this assessment in an online form. These assessments collectively contribute to the residents' summative assessment.

Table 1. Pediatric emergency medicine with follow-up (PEMFU) goal & objectives.
Goal: Emergency medicine (EM) residents, through further supervised pediatric experience, reflection, knowledge development, and understanding of the acute presentation and longitudinal course of pediatric illness and injury, will be able and competent to provide evidence-based, excellent care to ill and injured children.

Objective: EM residents will demonstrate and apply developmentally appropriate practice in pediatrics, developing skills in the approach to children of a variety of ages and developmental stages.
Curriculum components: ED-based clinical care; direct observation; reflection.
ACGME core competencies: PC, PBLI, ICS, P.

Objective: EM residents will demonstrate application of patient- and family-centered practice, recognizing and integrating the importance of social and family factors in pediatric care.
Curriculum components: ED-based clinical care; direct observation.
ACGME core competencies: PC, PBLI, ICS, P.

Objective: EM residents will continue to develop sound clinical reasoning, and discuss and provide support for appropriate evidence-based management and disposition of acutely ill and injured children.
Curriculum components: ED-based clinical care; patient follow-up; ward rounds; educational conference; core content reading.
ACGME core competencies: PC, MK, PBLI, SBP, ICS, P.

Objective: EM residents will reflect upon their professional identities as emergency physicians, able to collaborate with colleagues from many disciplines, and secure in their roles in the continuum of medical care of patients and families.
Curriculum components: ED-based clinical care; direct observation; reflection.
ACGME core competencies: PBLI, ICS, P.

Objective: EM residents will appraise and critique, through patient outcomes, reading, discussion, and written analysis, an array of approaches to pediatric complaints and conditions.
Curriculum components: patient follow-up; core content reading; case report/literature review.
ACGME core competencies: MK, PBLI, SBP.

PC, patient care; MK, medical knowledge; PBLI, practice-based learning/improvement; SBP, systems-based practice; ICS, interpersonal/communication skills; P, professionalism; ACGME, Accreditation Council for Graduate Medical Education.




Table 2. Pediatric emergency medicine with follow-up: eight specific educational interventions.
1. Emergency department (ED)-based clinical care (5 weekly shifts in the pediatric ED), supervised by pediatric emergency medicine (PEM) subspecialists. This component comprises the core clinical experience, and is the basis on which the situated learning curriculum rests. Based in the ED, it characterizes situated learning for emergency physicians (EPs) in training.
2. Follow-up on all admitted patients (visiting patients at the bedside, reading daily progress notes, follow-up on test results, interaction with inpatient teams). Following the course of admitted patients, EM residents learn to anticipate the progression of pediatric illness and the rationale for therapies utilized. An important skill, practicing EPs frequently perform patient follow-up as a form of self-directed education, continuous quality improvement and professional satisfaction.
3. Telephone follow-up on patients discharged from the ED (minimum 2 patients/week). A log is kept, including pertinent follow-up details. Through telephone follow-up with discharged patients, EM residents identify opportunities for improvement, and incorporate this feedback into future practice.
4. Weekly "ward rounds" with PEM faculty, seeing inpatients at the bedside, discussing the presentation, clinical or psychosocial findings, diagnoses, treatment and/or outcomes. EM residents identify the effects of illness or injury on patients/families, incorporating feedback into future practice. Faculty use modeling, coaching and scaffolding techniques to externalize thought processes, encouraging discussion and reflection.
5. Attendance at a weekly educational conference. This conference focuses on clinical and systems issues, moderated by pediatric hospitalist faculty. EM residents participate to understand issues that affect patients and families whose illness experience includes the ED.
6. Core content reading list and completion of 10 online modules. Each EM resident works through a core set of literature and asynchronous online modules23 (used with authors' permission), accompanied by guiding objectives and serving as the basis for teaching discussions and further self-directed learning.
7. Direct observation session: Once per month, for 2-3 hours, the EM resident is directly observed in their ED interactions with patients, families, and other providers. EM faculty performing this observation focus on coaching, providing formative feedback using a standardized tool based on entrustable professional activities. A self-reflection exercise is built into this feedback, encouraging the learner to reflect on his or her ongoing professional identity formation.
8. Case-based written report: The resident identifies a clinical question or case encounter, and writes a brief review of the literature to illustrate important PEM concepts. This exercise reinforces the importance of lifelong, self-directed learning. Faculty provide feedback, and cases and discussions are shared via a moderated online blog with an associated discussion forum for post-publication peer review.

Direct Observation: One faculty member performs a monthly 2-3 hour direct observation session during an ED shift for each resident. Structured formative feedback is provided covering important skills of history-taking, age-appropriate pediatric physical examination, diagnostic decision-making, and a developmentally-appropriate approach to children. A self-reflection exercise is built into the feedback process, encouraging reflection for ongoing professional identity formation.

Summary Assessment: A summary assessment is completed by the course director, assessing the core elements of the curriculum, including the resident's written work product, interactions during ward rounds and teaching sessions, and group-sourced feedback from faculty, nursing, and other staff. PEMFU provides important insight into the progression of EM residents through the EM Milestones,16 particularly Milestone 7 (Disposition) and Milestone 17, in which residents must demonstrate "awareness of and responsiveness to the larger context and system of health care."

Evaluation and Feedback from Stakeholders
Reactions to PEMFU have been favorable from faculty, residents, and families. Parents, in particular, appreciate that emergency providers are committed to following up with each patient, even when they are no longer specifically responsible for their care. Specific strengths cited by residents include developing close working relationships with the clinical faculty mentors, as well as increased access to pediatric colleagues, and increased individualized interactive instruction. Following Kirkpatrick's evaluation model,17 reactions to the program are regularly solicited from residents completing the rotation, using face-to-face debriefing and online evaluation forms. A retrospective post-then-pre survey was performed to gauge the curriculum's impact on residents' attitudes and self-reported behaviors (Figure 1).


Figure 1. Residents' reported attitudes and behaviors before and after participation in the PEMFU curriculum. Twelve residents responded to this retrospective post-then-pre survey, asking them to consider their current (post-PEMFU) attitudes and behaviors, and then to consider their attitudes and behaviors before participating in the curriculum (pre-PEMFU). Attitudes and behaviors were measured by assessing agreement with a series of statements on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree). Surveyed statements: "Comfortable with common pediatric complaints"; "Comfortable with critically ill pediatric patients"; "Routinely follow up on admitted patients"; "I apply developmentally-appropriate, family-centered ED care"; "Care of children is an important part of EM practice." PEMFU, pediatric emergency medicine with follow-up; ED, emergency department; EM, emergency medicine.

In addition, 11 of 12 residents responded with "agree" or "strongly agree" to the statement: "The PEM-FU rotation has led to improved patient care and outcomes for my ED patients." This evaluation was performed in the course of educational quality improvement, and was exempted from review by the medical center's institutional review board.

Recommendations for improvement have related to specific clinical content: interpreting pediatric radiographic studies, exposure to pediatric critical care, the psychomotor aspects of interacting with children (especially infants and toddlers), and concerns regarding the seasonal nature of disease processes seen in the pediatric ED.

A number of constraints make higher-level evaluation of educational outcomes challenging. In-training examinations, based on the American Board of Emergency Medicine Core Content, do not distinguish pediatrics from other components of EM practice; pediatric-specific knowledge measurements are difficult to ascertain. Whether this novel curriculum will have effects on important clinical outcomes in the longer term remains to be seen. Further, the literature on pediatric training within EM residency consists mainly of content recommendations; specific curricular recommendations are lacking, making comparisons between curricular approaches challenging.2 Teaching the PEMFU program is effort-intensive and requires administrative support to allow time for weekly attending rounds, observation sessions, and ad hoc teaching sessions. In our model, a single PEM specialist has responsibility for supervision and planning, though this could conceivably be shared among several faculty.

DISCUSSION
This curriculum represents an example of situated learning, in which residents learn by working clinically in the environment where their knowledge and skills will be put to use in their future professional practice (i.e. in the ED caring for ill and injured children), rather than extrapolating from a foreign learning environment. This is thought to contribute to deeper and more meaningful learning.18 Through the use of mentored learning, PEMFU employs the methods described within a cognitive apprenticeship; supervising faculty provide contextualized support while supervising clinical care through role modeling, coaching, and articulation techniques.19,20 We know that knowledge and skills, if not used regularly, do not last. Pusic and colleagues have described "experience curves" chronicling the process of knowledge and skill accrual, followed by decay, with return to competence via interval training or experience.21,22 The PEMFU experience fits into a longitudinal PEM framework within our EM residency program intended to minimize this decay.


Figure 2. The “Bolus and Drip” Model - A framework for understanding pediatric emergency medicine (PEM) experience curves as experienced by EM residents in a four-year EM residency program. There are month-long PEM experiences (boluses) in each of the first two years of residency (dashed arrows), of which the described curriculum constitutes the second “bolus” – the longer dashed arrow. This is followed by a series of PEM shifts, interspersed among the remainder of the four-year residency experience (the “drip”, solid arrows), meant to provide PEM refreshers, and to simulate the frequency with which a typical general EM provider can expect to manage pediatric patients.

Likened to a "bolus and drip," there are two intensive pediatric experiences ("boluses") in the first two years (PEMFU comprises the second "bolus"), followed in the third and fourth years by a "drip" of PEM shifts interspersed into the longitudinal clinical experience. We believe that this PEM curriculum, augmented with refresher experiences over the subsequent two years (Figure 2), will help learners achieve and maintain competence.

CONCLUSION
We have developed a novel educational method, with dedicated pediatric ED time, deliberate patient follow-up, ward rounds, discussion of focused pediatric topics, and direct observation. This model could be customized to fit a variety of educational settings in pediatric or adult medicine training for emergency physicians. We believe that this novel curriculum represents one model for integrating knowledge of pediatric illness and injury, acute and longitudinal, into the emergency medicine residency education paradigm.

Address for Correspondence: Chris Merritt, MD, MPH, Alpert Medical School of Brown University, Hasbro Children's Hospital, Department of Emergency Medicine, Section of Pediatric Emergency Medicine, 593 Eddy St, Providence, RI 02903. Email: cmerritt@brown.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Merritt et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Christopher N. Pediatric emergency medicine education in emergency medicine training programs. Acad Emerg Med. 2000;7(7):797-9.
2. Cloutier RL, Walthall JDH, Mull CC, et al. Best educational practices in pediatric emergency medicine during emergency medicine residency training: guiding principles and expert recommendations. Acad Emerg Med. 2010;17(2):104-13.
3. Counselman FL, Borenstein MA, Chisholm CD, et al. The 2013 Model of the Clinical Practice of Emergency Medicine. Acad Emerg Med. 2014;21(5):574-98.
4. Ros SP, Cetta F, Ludwig S. Pediatric education in emergency medicine residency programs--10 years later. Pediatr Emerg Care. 1993;9(3):143-5.
5. Langhan M, Keshavarz R, Richardson LD. How comfortable are emergency physicians with pediatric patients? J Emerg Med. 2004;26(4):465-9.
6. Bourgeois FT, Shannon MW. Emergency care for children in pediatric and general emergency departments. Pediatr Emerg Care. 2007;23(2):94-102.
7. Singer JI, Hamilton GC. Objectives to direct the training of emergency medicine residents in pediatric emergency medicine. J Emerg Med. 1993;11(2):211-8.
8. Kern DE, Thomas PA, Hughes MT, eds. Curriculum Development for Medical Education: A Six-Step Approach. 2nd ed. Baltimore, MD: Johns Hopkins University Press; 2009.
9. Wenger E, Trayner-Wenger B. Communities of practice: a brief introduction. April 2015:1-8.
10. Kellogg Foundation. Using Logic Models to Bring Together Planning, Evaluation, and Action: Logic Model Development Guide. 2004:62.
11. Lave J, Wenger E. Situated Learning: Legitimate Peripheral Participation. Cambridge University Press; 1991.
12. Dennen VP, Burner KJ. The cognitive apprenticeship model in educational practice. In: Handbook of Research on Educational Communications and Technology: A Project of the Association for Educational Communications and Technology. Routledge; 2008:1296.
13. Woolley NN, Jarvis Y. Situated cognition and cognitive apprenticeship: a model for teaching and learning clinical skills in a technologically rich and authentic learning environment. Nurse Educ Today. 2007;27(1):73-9.
14. Stalmeijer RE, Dolmans DHJM, Wolfhagen IHAP, et al. Cognitive apprenticeship in clinical practice: can it stimulate learning in the opinion of students? Adv Health Sci Educ Theory Pract. 2009;14(4):535-46.
15. Van der Vleuten CPM, Schuwirth LWT, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205-14.
16. The Emergency Medicine Milestones Project, A Joint Initiative of The Accreditation Council for Graduate Medical Education and The American Board of Emergency Medicine. 2015. Available at: https://www.acgme.org/Portals/0/PDFs/Milestones/EmergencyMedicineMilestones.pdf. Accessed April 11, 2016.
17. Watkins R, Leigh D, Foshay R, et al. Kirkpatrick plus: evaluation and continuous improvement with a community focus. Educ Technol Res Dev. 1998;46(4):90-6.
18. Hara N, Schwen T. Communities of practice in workplaces; learning as a naturally occurring event. Perform Improv Q. 2006;19(2):93-114.
19. Collins A, Brown JS, Newman SE. Cognitive apprenticeship: teaching the crafts of reading, writing, and mathematics. In: Resnick L, ed. Knowing, Learning, and Instruction: Essays in Honor of Robert Glaser. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.; 1989:453-94.
20. Stalmeijer RE, Dolmans DHJM, Snellen-Balendong HAM, et al. Clinical teaching based on principles of cognitive apprenticeship. Acad Med. 2013;88(6):861-5.
21. Pusic MV, Kessler D, Szyld D, et al. Experience curves as an organizing framework for deliberate practice in emergency medicine learning. Acad Emerg Med. 2012;19(12):1476-80.
22. Pusic MV, Boutis K, Hatala R, et al. Learning curves in health professions education. Acad Med. 2015;90(8):1034-42.
23. Chang TP, Pham PK, Sobolewski B, et al. Pediatric emergency medicine asynchronous e-learning: a multicenter randomized controlled Solomon four-group study. Acad Emerg Med. 2014;21(8):912-9.



Educational Advances

Promoting Achievement of Level 1 Milestones for Medical Students Going into Emergency Medicine
Cynthia G. Leung, MD, PhD*; Laura Thompson, MD*; Jennifer W. McCallister, MD†; David P. Way, MEd*; Nicholas E. Kman, MD*

*Ohio State University College of Medicine, Department of Emergency Medicine, Columbus, Ohio † Ohio State University College of Medicine, Department of Internal Medicine, Columbus, Ohio

Section Editor: David A. Wald, DO Submission history: Submitted June 14, 2016; Revision received November 17, 2016; Accepted October 27, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31247

[West J Emerg Med. 2017;18(1):20-25.]

BACKGROUND
Over the past decade, U.S. medical schools have begun reformulating their fourth-year curricula, moving from an open format of career exploration and audition electives to a more structured program designed to prepare students for patient-care responsibilities upon entering residency.1-4 This trend is attributable to recommendations handed down from several key organizations. In 2011, the Alliance for Clinical Education (ACE) endorsed the use of the Accreditation Council for Graduate Medical Education (ACGME) Core Competencies and the Association of American Medical Colleges (AAMC) Entrustable Professional Activities (EPA) to guide medical educators in redesigning the fourth-year curriculum. Additionally, ACE published four specific guidelines. First, they recommended that all students demonstrate progress towards mastery of the six ACGME Core Competencies. Second, they stated that all students should complete a capstone course specifically designed to prepare them for residency. Third, they said that medical school curricula should provide specialty-specific objectives to prepare students for residency in their intended specialty. Finally, they endorsed a system for helping students identify and correct gaps in their knowledge and skills during the fourth year.1

The authors began their efforts to respond to the call for fourth-year curriculum revision with a review of the literature, specifically looking for what medical graduates entering emergency medicine (EM) were lacking upon entry into residency. A study by Lyss-Lerman found that program directors believed that interns' primary shortcomings were in the areas of medical knowledge, professionalism, organizational skills, and self-reflection.4 More recently, the development of Level 1 ACGME Milestones has helped to more clearly articulate expectations of graduating medical students upon entering residency.5 Weizberg et al. conducted a multi-institution study in which EM interns were assessed on eight Level 1 milestones within the first month of residency and found that fewer than 75% met Level 1 for any of the eight milestones assessed.6 These studies emphasize the need to revise curricula to better prepare students for the transition from undergraduate to graduate medical education.

To bridge the gap between traditional third-year core clerkships and the internship year, our institution introduced courses of study customized for our students' intended specialties. The courses of study, called "Clinical Tracks," are longitudinal across the fourth year of medical school and are designed to prepare students for the next stage of training by offering a framework for entry-level, specialty-specific learning milestones. The EM Clinical Track, with associated learning objectives and assessments, was based on the EM milestones developed jointly by the ACGME and the American Board of Emergency Medicine (ABEM).7

OBJECTIVES
The objective of this innovation was to transform a traditional fourth-year curriculum into a program designed to better prepare medical students for a residency in EM. The result was the Clinical Track in EM, a competency-based curriculum that offered medical students the opportunity to achieve and demonstrate competency in all 23 of the Level 1 EM milestones.

CURRICULUM DESIGN
The Clinical Track in EM was conceived as a comprehensive longitudinal curriculum comprising a series of required fourth-year clerkships supplemented with a menu of recommended electives.



With guidance from a faculty advisor, the students designed a clinical track that provided them with the best opportunities to develop knowledge and skills deemed essential for starting an internship in EM. A critical feature of the EM clinical track was a series of competency-based assessments designed to provide students with feedback on their progress toward achieving Level 1 EM milestones. Pre-existing learning objectives from the required fourth-year clerkships, including clerkships in EM, ambulatory medicine, chronic care, and intensive care, were mapped to the Level 1 EM milestones. Many of the pre-existing assessments within these clerkships were determined to provide the information needed to assign student performance levels for most of the 23 EM milestones. The pre-existing assessments included the following:
1. The EM clerkship Clinical Performance Assessment (CPA) provided a global assessment of patient care milestones based on end-of-shift performance evaluations gathered from numerous faculty over the course of the clerkship.
2. The EM clerkship EPA 10 simulation assessment provided a standardized measure of a student's management of the emergent patient in a realistic emergency department (ED) setting.8
3. EM clerkship procedure-lab assessments measured Level 1 milestones for airway management, ultrasound, wound care, and vascular access.
4. EM clerkship quizzes measured core medical knowledge and the application of knowledge to clinical problems.
5. The ambulatory medicine clerkship Critical Appraisal of Topic (CAT) assignment was used to measure how well a student used evidence-based medicine to appraise a clinical question.
6. The Health Systems, Informatics and Quality (HSIQ) project, a longitudinal experience in which students identified a system failure in care delivery and wrote a proposal for a viable quality improvement intervention, assessed understanding of healthcare delivery systems.9

To fill the gaps, new assessments specifically for the Clinical Track in EM were developed and incorporated into a clinical elective called Advanced Topics in Emergency Medicine (ATEM).10 The new assessments included the following:
1. Assessment shifts, in which students were evaluated on specific EM milestones through direct observation of a patient encounter by core education faculty. A key feature of the assessment shift is the observation instrument, which contains behavioral anchors taken directly from the Level 1 and 2 EM milestones. This facilitated the assignment of milestone levels. A copy of the instrument used for assessment shifts is included as an appendix; faculty completed this form on a tablet device, using the MyProgress Software Platform.11 ATEM students were required to complete three assessment shifts.
2. A capstone simulation assessment based on the EM oral boards triple case12 was designed to assess EM milestones that are more difficult to evaluate in the clinical setting, such as emergency stabilization (PC1) and multi-tasking (PC8).
3. A procedure log for logging procedures performed in the clinical environment, and checklist assessments for evaluating procedures performed in simulation.
4. A patient follow-up log required students to review cases seen in the ED, identify members of the care team, and delineate the resources involved in the patient's care.
5. Additional knowledge quizzes specific to Level 1 EM milestones were also added to the ATEM course.

A clinical competency committee (CCC) consisting of the clinical track director, the EM clerkship director, and the Part 3 (fourth-year) director reviewed the relevant assessment data for each student. Using the ACGME-ABEM scoring rubric,7 each student was assigned a level for each of the 23 EM milestones. Level assignments were based on a student's consistent performance at not only the assigned level, but lower levels as well. For instance, assignment to Level 2 required performance of both Level 1 and Level 2 criteria for any given milestone. An intermediate level (i.e. 1.5) was assigned if a student demonstrated only some of the higher-level behaviors. Multiple sources of assessment data were used to assign levels for each milestone, with the exception of a few of the procedural and systems-based milestones, as shown in Table 1. In cases that contained conflicting assessment data, the most recent evidence was used, particularly if the student showed improvement over time.

IMPACT/EFFECTIVENESS
Outcomes
Seventeen students from a class of 185 enrolled in and completed the Clinical Track in EM during the inaugural year. The assessment data gathered throughout the clinical track year were sufficient for the CCC to assign a milestone level for students on 21 of the 23 EM milestones. Most students attained Level 1 or higher for 17 of the 23 EM milestones (see Table 2, and Figure). Notable exceptions include PC5-Pharmacotherapy (most students failed to consistently ask about allergies to medications); PC14-Vascular Access (a little more than 35% failed to perform arterial puncture); and PROF2-Accountability (more than half of the students failed to turn in their patient follow-up logs). The figure shows the median scores of the 17 students (boxes) and the range of scores (whiskers) for each of the 23 milestones. The Ohio State University Institutional Review Board determined this evaluation to be exempt from review.


Table 1. Methods of assessment in a milestone-based Clinical Track in Emergency Medicine, as part of a fourth-year medical school curriculum. The table maps each milestone to the EM clerkship assessments (clinical performance assessment [CPA], quiz, simulation, procedure checklist) and clinical track assessments (assessment shifts, procedure logs, end-of-year simulation, quiz, other required experiences) used to assess it.
Milestones assessed: emergency stabilization; diagnostic studies; differential diagnosis; pharmacotherapy; reassessment; disposition; multi-tasking; history and physical; procedures; airway; anesthesia/pain; ultrasound; wound care; vascular access; medical knowledge; patient safety; systems management; technology; PBLI; professional values; accountability; patient communication; team management.
PBLI, practice-based learning and improvement; CPA, clinical performance assessment. *Passing score on USMLE and EM Advanced Clinical Exam. †Computer-based learning modules. ‡Health Systems, Informatics and Quality assignment (HSIQ); patient follow-up log. §Critical Appraisal of Topic assignment; patient follow-up log.

formally assessed on contraindications and complications of local anesthesia, a critical part of this milestone. Like PROF2, SBP-2 was affected by the failure of students to complete the required patient follow-up log assignment, which was also used to assess this milestone. DISCUSSION The Clinical Track in EM was designed to transition the traditional fourth-year medical school curriculum to one based on competencies defined by the ACGME and the EM milestones. This transition required more structure and more formal assessments than existed in the traditional course of study. The Clinical Track in EM relied on assessment data gathered longitudinally throughout the fourth year of medical school. However, the data provided integrated and comprehensive information sufficient for assigning EM milestone performance levels. Western Journal of Emergency Medicine

One of the strengths of this program involved numerous observations of student performance by multiple faculty evaluators. The information gathered in this manner helped to capture some of the contextual variability inherent in the more complex patient care-based milestones.13 Additionally, due to the longitudinal nature of the track, faculty were able to document student growth over time. Another strength was that most evaluations were performed in realistic settings, either the actual or simulated ED. These assessment settings lend increased authenticity to the clinical performance assessments. Finally, milestone-level determinations were made by consensus of an undergraduate medical education CCC made up of faculty who were thoroughly involved with the clinical track students throughout the fourth year. Their familiarity with the students contributed to confident decisions about the students' milestone-level assignments.


Table 2. Number and percentage (in parentheses) of 17 emergency medicine clinical track students by milestone level attained prior to graduation from medical school. Students in the missing category had incomplete information from assessments during the fourth year.
[Table: rows list the 23 EM milestones (PC1 emergency stabilization, PC2 focused history & physical, PC3 diagnostic studies, PC4 diagnosis, PC5 pharmacotherapy, PC6 observation & reassessment, PC7 disposition, PC8 multi-tasking, PC9 procedures, PC10 airway management, PC11 anesthesia/acute pain management, PC12 ultrasound, PC13 wound care, PC14 vascular access, MK, SBP1 patient safety, SBP2 systems-based management, SBP3 technology, PBLI, PROF1 professional values, PROF2 accountability, ICS1 patient communication, and ICS2 team management); columns give the number (%) of students at milestone levels <1, 1, 1.5, and 2, plus those with missing data.]
PC, patient care; SBP, systems-based practice; PROF, professionalism; MK, medical knowledge; PBLI, practice-based performance improvement; ICS, patient-centered communication.

LIMITATIONS
The authors experienced several challenges in implementing the Clinical Track in EM. First, due to a technical problem with the electronic recording system adopted for entering assessment shift observations,11 32% of the assessment shift data were lost, affecting 10 of 17 students. Fortunately, because multiple observations and multiple methods of assessment were used, milestone levels could still be assigned despite the missing data. In the future, however, the authors recommend a rigorous trial period for any software program used to gather high-stakes evaluation information. Additionally, some patient care-based milestones, such as emergency stabilization (PC1) and multi-tasking (PC8), as well as many of the procedural milestones, could not be evaluated in the ED due to a shortage of appropriate patient encounters.


Medical students are lowest on the hierarchy for such opportunities, so these competencies had to be assessed solely through simulation. Finally, the authors found it challenging to assess the systems-based practice and practice-based performance improvement competencies in either an actual or simulated ED. As a result, they relied on information from other fourth-year curriculum projects and assessments. Although these activities were not carried out in the ED, their goals and objectives were well aligned with the goal of providing competency-based assessment of EM milestones.

Much of the assessment of medical students relies on the direct observation that occurs during assessment shifts. The program leaders at our institution were able to accomplish this because of preexisting required teaching shifts by core faculty of our residency program. Assessment shifts might be difficult to achieve for medical schools where teaching shifts are not feasible.


Figure. Median plot of milestone levels attained by 17 medical students who participated in a longitudinal emergency medicine clinical track (a series of required clerkships and electives) during their fourth year of medical school.

In preparation for the next academic year, the authors have already incorporated changes to the clinical track curriculum to improve our ability to assess students and assign them levels on the EM milestones. Changes include improvements to the pharmacotherapy curriculum to incorporate formal assessment of the student's competency in applying knowledge of contraindications and complications in cases in which local anesthesia is required. A fully functional electronic assessment system for recording and saving direct observation data has been tested and implemented. Finally, the program leaders are implementing a "feedforward" process to residency directors, modeled after that described by Sozener et al.,14 so that residency programs can make practical use of our efforts to document student performance during medical school.

CONCLUSION
The goal of the Clinical Track in EM was to contribute to a continuity of education, bridging the continuum of medical education from medical school through residency and on into early practice.


Communication of progress and achievement through the milestone structure can contribute to establishing this continuity of education. Compared with the observations of Weizberg et al. (who found that fewer than 75% of EM interns had achieved Level 1 on the eight patient care-based milestones assessed upon entry into residency), almost all of our graduates achieved at least a Level 1 designation for 20 of 23 milestones. The creation of a specialty-specific EM clinical track provided the structure necessary to prepare medical students for their intended specialty. Key to this program were efforts to customize assessments to measure the ACGME EM milestones. Eventually, these assessment data will certify the graduating medical student's preparation to begin an EM residency program.

ACKNOWLEDGMENTS
The authors would like to thank Ms. Sharon Pfeil for her dedication to the Clinical Track Program and for making it run like clockwork.


Address for Correspondence: Cynthia G. Leung, MD, PhD; The Ohio State University College of Medicine, Department of Emergency Medicine, 760 Prior Hall, 376 W 10th Ave., Columbus, OH 43210. Email: Cynthia.Leung@osumc.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Leung et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Reddy ST, Chao J, Carter JL, et al. Alliance for Clinical Education (ACE). Education perspective paper: recommendations for redesigning the "final year" of medical school. Teach Learn Med. 2014;26(4):420-7.
2. Elnicki DM, Gallagher S, Willett L, et al. Course Offerings in the Fourth Year of Medical School: How U.S. Medical Schools Are Preparing Students for Internship. Acad Med. 2015;90(10):1324-30.
3. Coates WC, Crooks K, Slavin SJ, et al. Medical school curricular reform: fourth-year colleges improve access to career mentoring and overall satisfaction. Acad Med. 2008;83(8):754-60.
4. Lyss-Lerman P, Teherani A, Aagaard E, et al. What training is needed in the fourth year of medical school? Views of residency program directors. Acad Med. 2009;84(7):823-9.
5. Santen SA, Rademacher N, Heron SL, et al. How competent are emergency medicine interns for level 1 milestones: who is responsible? Acad Emerg Med. 2013;20(7):736-9.
6. Weizberg M, Bond MC, Cassara M, et al. Have First-Year Emergency Medicine Residents Achieved Level 1 on Care-Based Milestones? J Grad Med Educ. 2015;7(4):589-94.
7. Emergency Medicine Milestone Project. Chicago, IL: Accreditation Council for Graduate Medical Education, & East Lansing, MI: The American Board of Emergency Medicine. 2012. Available at: https://www.abem.org/public/docs/default-source/migrated-documents-andfiles/em-milestones.pdf?sfvrsn=6. Accessed April 29, 2014.
8. Thompson L, Leung C, Green B, et al. Assessment of Entrustable Professional Activity (EPA) 10 in a Mandatory Fourth Year Emergency Medicine (EM) Clerkship. Presented at the AAMC Medical Education Meeting (Baltimore, MD, 11/11/15). Available at: https://www.mededportal.org/icollaborative/resource/4199. Accessed January 28, 2016.
9. Gonsenhauser I, Clevenger A, Heacock A, et al. Last But Not Least: EPA 13 - Entrusting the Clinicians of Tomorrow To Improve Healthcare Today. Presented at the AAMC Medical Education Meeting (Baltimore, MD, 11/11/15). Available at: https://www.mededportal.org/icollaborative/resource/4186. Accessed June 14, 2016.
10. Kman NE, Bernard AW, Martin DR, et al. Advanced topics in emergency medicine: curriculum development and initial evaluation. West J Emerg Med. 2011;12(4):543-50.
11. MyKnowledgeMap. Myprogress (version 5.3). Web application and mobile app. York, UK: MyKnowledgeMap Ltd.; 2015.
12. Council of Emergency Medicine Residency Directors. 2013 Model of the Clinical Practice of Emergency Medicine. Available at: http://www.cordem.org/files/DOCUMENTLIBRARY/2013%20EM%20Model%20-%20Website%20Document(1).pdf. Accessed April 30, 2011.
13. van der Vleuten CP and Schuwirth LW. Assessing professional competence: from methods to programmes. Med Educ. 2005;39(3):309-17.
14. Sozener CB, Lypson ML, House JB, et al. Reporting Achievement of Medical Student Milestones to Residency Program Directors: An Educational Handover. Acad Med. 2016;91(5):676-84.


Educational Advances

The Council of Emergency Medicine Residency Directors’ (CORD) Academy for Scholarship in Education in Emergency Medicine: A Five-Year Update Joseph LaMantia, MD* Lalena M. Yarris, MD, MCR† Michele L. Dorfsman, MD‡ Nicole M. Deiorio, MD† Stephen Wolf, MD§

*North Shore University Hospital, Department of Emergency Medicine, Manhasset, New York † Oregon Health & Science University, Department of Emergency Medicine, Portland, Oregon ‡ University of Pittsburgh School of Medicine, Department of Emergency Medicine, Pittsburgh, Pennsylvania § University of Virginia School of Medicine, Department of Emergency Medicine, Charlottesville, Virginia

Section Editor: David P. Way, MEd Submission history: Submitted June 14, 2016; Revision received September 19, 2016; Accepted October 10, 2016 Electronically published November 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31232

[West J Emerg Med. 2017;18(1)26-30.]

BACKGROUND
In 1990, Ernest Boyer called on academic medicine to affirm its central role in education by expanding the scope of scholarship to include the domain of teaching.1 A decade later, Charles Glassick built on Boyer's work by putting forth six criteria for the "scholarship of teaching": clarity of goals, adequacy of preparation, appropriateness of methods, significance of results, effectiveness of presentation, and reflectiveness of critique.2 Largely in response to their work, institutionally based academic communities known as "academies of medical educators" emerged in academic medicine as a means of supporting educators and promoting this newly described domain of scholarship.3,4 Over the past 15 years, these academies have grown in number, with varying structures and focus, yet they uniformly maintain an emphasis on the scholarship of teaching as defined by Boyer and Glassick.1-4 Currently, over 60 academies of medical educators exist in the United States, based largely at schools of medicine, with some having primary affiliation with national organizations.4 In 2010, the Council of Emergency Medicine Residency Directors (CORD) established one of the first such specialty-based academies, the Academy for Scholarship in Education in Emergency Medicine (EM). Details about the original inception of this academy have been described previously.5 This educational advance provides an update on the structure and activities of the CORD Academy for Scholarship in Education, reflecting on its first five years of service.


OBJECTIVE
We aim to describe the activities of the CORD Academy for Scholarship in Education in Emergency Medicine over its first five years of existence. In doing so, we highlight its revised organizational structure and the evolution of its application processes, and we report on the accomplishments and outcomes of the Academy's three pillars: membership and recognition; faculty development and structured programs; and education research and scholarship.

ACADEMY DESIGN AND STRUCTURE
The CORD Academy for Scholarship in Education in Emergency Medicine was founded in 2010 as the first academy of medical educators developed within a medical education specialty society in the U.S.5 Since then, this model has been applied to other specialties.6-8 The mission of the CORD Academy is to promote and support excellence in education through the process of scholarship, as defined and described by Boyer and Glassick above.1,2,5 With this emphasis, the Academy strives to enhance, support, and complement the mission of CORD to advance education in EM.

At its beginning, the Academy was a service organization with a loose organizational structure. Members were solely "distinguished educators" (DEs), chosen annually through a criterion-based, peer-reviewed selection process in one or more of four categories based on the scholarship of teaching: teaching and evaluation, enduring educational materials, educational leadership, and educational research.2




By way of service, DEs would contribute their expertise and skill through a variety of programs and initiatives related to faculty development, mentoring, consultation, liaison work, and advocacy, striving to advance education through scholarship among both the Academy and the entire CORD membership.

In 2014, as the Academy grew in membership, a systematic organizational review process was undertaken, yielding a new organizational structure (Figure). This process reaffirmed the three pillars of the Academy's mission (i.e., membership and recognition; faculty development and structured programs; and educational research and scholarship) and laid the groundwork for tiered membership (i.e., DE, Academy scholar, and Academy member). All members are encouraged to align with a pillar, in part to meet their service obligation and to assure progress on yearly pillar objectives. Furthermore, under this new structure, termed leadership positions were established. Currently, an academy director, the immediate past director, and three pillar leaders oversee the business of the Academy, determine and execute its annual strategic objectives, and act as liaisons to the CORD Board of Directors regarding the activities of the Academy. An Academy Advisory Committee provides long-term direction and vision, as well as advice and counsel to the Academy director on administration and annual strategic objectives. This Advisory Committee is composed of the CORD Board of Directors president-elect, two at-large CORD members, the chair of CORD's Academic Assembly Advisory Committee, and Academy leadership.

The Academy members, Academy leadership, and the Advisory Committee each meet regularly to assure adherence to the Academy's goal and mission5 and progression of annual strategic objectives.

PILLAR ROLES AND EFFECTIVENESS
The CORD Academy's effectiveness can be measured in terms of the accomplishments and success of each of its three pillars. Less quantifiable is its success as determined by its value to its members and the general CORD membership as a community of practice and network for educators seeking to promote the scholarship of teaching. This latter success will be borne out over time as the Academy grows and matures.

Membership & Recognition
The Academy's Membership & Recognition pillar is tasked with reviewing new applications to the Academy and tracking recurrent eligibility for existing members. Membership is criterion-based, and applicants must demonstrate that they have met standards as embodied by a set of example applications. Example applications exist for various types of potential applicants, such as those with undergraduate medical education (UME)- or GME-focused careers. The application preparation process is rigorous but intended to objectively help applicants demonstrate excellence in their selected area of focus (i.e., teaching and evaluation; enduring educational materials; educational leadership; and education research).

Figure. Council of Emergency Medicine Residency Directors' (CORD) Academy for Scholarship organizational structure. An Academy Director and an Academy Advisory Committee oversee three pillars: the Faculty Development & Structured Programs pillar (CORD Coaching Program, Annotated Journal Club, Faculty Development Workshops); the Education Research & Scholarship pillar (Faculty Development in Education Research, Grants & Scholarships Program, Education Research Consortium, Center for Program Evaluation & Learner Assessment); and the Membership, Awards & Recognition pillar (Distinguished Educator Award, Academy Scholars Award, Academy Membership, Legacy Mentor).


A complete application consists of eight parts: a letter of submission, a match to standard-setting examples, a structured summary, a personal statement, a structured abstract, a table of appendices, supporting documents, and a curriculum vitae.9 Potential applicants may ask for an Academy mentor to assist with the application process, and periodic consultation sessions are offered at the CORD Academic Assembly. Applications are called for annually, and the pillar members use a systematic, criterion-based group review and scoring process to determine candidacy.

Until recently, and similar to academies of medical educators across the country, membership in the Academy was conferred through a DE award process. However, in 2014, in order to reach a broader potential membership and create a more vibrant group of faculty, membership categories were expanded to include three criterion-based tiers: distinguished educator, Academy scholar, and Academy member. Currently, DEs are selected as members who have demonstrated the highest level of commitment to and excellence in medical education, as evidenced by significant quality, quantity, and breadth within their chosen area of focus. Selection of scholars is based upon their attaining recognition as remarkable educators who have demonstrated a significant level of commitment and excellence within a chosen area of focus. Finally, Academy members are selected on the basis of their stated commitment to education and scholarship and their willingness to actively participate in the Academy as a service organization. A Legacy Mentor category was also introduced to include senior faculty, advanced in their academic careers, who might not otherwise submit a comprehensive application. Those wishing to apply for this designation submit an abbreviated, yet still criterion-based, application. Once part of the Academy, members are expected to affiliate with a pillar and contribute by participating in service to that pillar. Currently, members must renew their application every 3-5 years, depending on their level of membership. The effectiveness of the Membership & Recognition pillar is marked by the continued submission of high-quality applications, resulting in 30 current members.

Faculty Development and Structured Programs
Faculty development is the heart of the Academy's mission, with the hope of creating scholarship from each endeavor in collaboration with the Education Research and Scholarship pillar. The most well-known program under this pillar is the CORD Coaching Program.9 This structured, three-step, peer-to-peer mentoring program is designed to assist national speakers at all levels to improve their introspection, confidence, and teaching innovation. Following structured self-reflection and observation sessions, assigned Academy DEs provide feedback and recommendations aimed at helping speakers overcome identified challenges, improve their skills, and develop their careers.

To date, the CORD Coaching Program has conducted over 30 coaching sessions for CORD members, uniformly generating positive feedback from participants. Future efforts will need to focus on formally assessing the impact of the program on participants' teaching skills.

An additional Academy initiative currently underway is a collaboration between the Faculty Development and Structured Programs and the Education Research and Scholarship pillars. Both groups are working to develop a critical appraisal and annotated journal club series. This series aims to identify key medical education topics that are relevant to EM educators, perform a critical appraisal of the literature to identify leading papers on each topic, and summarize current understanding. Examples of potential topics for this program include fundamental teaching skills, feedback, student remediation, and producing educational scholarship.

The Faculty Development and Structured Programs pillar also strives to enrich the educational offerings of the CORD Academic Assembly and other national EM meetings. Sessions on educational portfolio development, coaching, mentoring, promotion and tenure, educational consults, promotion of scholarship at home institutions, and various teaching skills are presented annually by Academy members, scholars, and DEs. To date, Academy members have sponsored over 20 faculty development sessions and 10 one-on-one education portfolio development sessions for CORD members and other EM educators at national meetings. While these sessions and the coaching activities described above have been well received, future efforts of the Academy will focus on explicitly measuring their effectiveness. Lastly, this pillar is investigating ways to share educational materials, such as lectures and small group sessions, with a plan to allow credit to be given to authors. This is similar to the MedEdPORTAL model as applied to national didactics made portable.

Education Research and Scholarship
The Education Research and Scholarship pillar seeks to support educators in their scholarship endeavors, promote EM education research, and move the science of medical education forward. The pillar aims to serve as a virtual "community of practice" for EM education researchers and has envisioned four main components:

a. Grants & Scholarships – This pillar has worked with the CORD Board of Directors and the Emergency Medicine Foundation to develop the joint EMF/CORD Emergency Medicine Education Research Grant, providing up to $25,000 in funding to qualifying proposals to study medical education topics with direct relevance to the specialty of EM.10

b. Research Faculty Development Programs – In 2015-2016, the position of Director for Professional Development in Education Research was created.



This leader will collaborate with the Faculty Development and Structured Programs pillar to specifically address CORD members' faculty development needs in education research and scholarship. This pillar also supports the Medical Education Research Certificate (MERC) at CORD Scholars Program (MCSP), as well as Academy sessions at national meetings that focus on education research skills. The MCSP has experienced significant success, developing over 150 clinician-educators in education research methodology and skills.11

c. Education Research Consortium – The Academy aims to build upon prior work done to establish guidelines and a structure for a collaborative education research consortium, previously called the Emergency Medicine Education Research Group (EMERGe).12 The pillar envisions that the consortium would serve as both a central resource for multicenter study administration and a structure to encourage collaboration.

d. CORD Center for Program Evaluation and Learner Assessment – A future direction includes a proposed entity that would serve as a resource for design, methods, and statistical expertise to assist with program evaluation and learner assessment efforts that serve the CORD community. The center, for example, could contribute to national needs assessments and CORD educational program evaluation, and could serve as a consult resource for EMERGe studies that aim to develop and test assessment tools. Existing CORD task forces and committees with aligned missions, such as the Joint Milestone Task Force and the Systematic Evaluation Methods committee, could fall under the Center for Program Evaluation and Learner Assessment.

The Education Research and Scholarship pillar is in the process of completing a national workforce study of EM educators that intends to describe the current landscape of educational program administration and staffing, as well as a needs assessment to identify strategies that would most help EM educators reach their education research aspirations.13

CHALLENGES
The CORD Academy for Scholarship in Education in Emergency Medicine has worked to overcome multiple structural and functional challenges. Structurally, as an organization based within a medical education specialty society, the CORD Academy has had an opportunity to promote educational excellence on a national level, using the resources of its parent organization. However, its organizational affiliation may serve as a barrier to broad integration across all EM educators, some of whom may identify with other national EM organizations. In the planning and inception of the Academy, efforts were made to secure the strong support and commitment of CORD as one of the lead organizations in EM education.

As envisioned, the CORD leadership has been able to advocate for the Academy, promoting its unique benefits to the CORD membership and other EM educators.

Another challenge the Academy has faced is assuring that the development of our educational innovations, services, and offerings does not outpace the available resources of our members' service commitment or our available financial and administrative support. Academy members, scholars, and DEs are accomplished and committed individuals. The Academy continuously works to balance member service obligations and resource utilization to provide added worth to its membership and the CORD organization as a whole. It is our hope that this usefulness will continue to support the Academy's credibility and purpose both within CORD and among EM educators. Next, striking the ideal balance between inclusivity and selectivity in membership criteria has proven to be a challenge to the Academy's growth. The Academy was rigorously selective in its early cycles of membership; however, feedback led to significant efforts to broaden membership and foster inclusivity so as to provide greater value to the whole of the CORD membership. Finally, it is important for the success and viability of the Academy to have continuity of purpose and process, and sound organizational memory. It is our hope that in future years, as the Academy continues to grow, the prescribed roles of the Academy Advisory Committee, the Academy leadership positions, and the overall organizational structure of the Academy will provide this historical perspective and memory.

SUMMARY
After five years, the CORD Academy for Scholarship in Education in Emergency Medicine is gaining traction as a national academy for EM health professions educators. The Academy has worked to overcome structural and functional challenges in order to operate as a service organization and community of practice within CORD. Future efforts will continue to focus on providing value to educators by recognizing excellence, promoting career development, and fostering the scholarship of teaching in emergency medicine.

Address for Correspondence: Stephen J. Wolf, University of Virginia School of Medicine, Department of Emergency Medicine, 1215 Lee St, Charlottesville, VA 22908. Email: sjw2v@virginia.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 LaMantia et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

Address for Correspondence: Stephen J. Wolf, University of Virginia School of Medicine, Department of Emergency Medicine, 1215 Lee St, Charlottesville, VA 22908. Email: sjw2v@virginia.edu. Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none. Copyright: © 2016 LaMantia et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/ licenses/by/4.0/




REFERENCES
1. Boyer EL. Scholarship Reconsidered: Priorities of the Professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching, 1990.
2. Glassick CE. Boyer's Expanded Definition of Scholarship, the Standards of Assessing Scholarship, and the Elusiveness of the Scholarship of Teaching. Acad Med. 2000;75(9):877.
3. Searle NS, Thompson BM, Friedland JA, et al. The Prevalence and Practice of Academies of Medical Educators: A Survey of U.S. Medical Schools. Acad Med. 2010;85:48.
4. Academies Collaborative. Member Organizations. Available at: http://www.academiescollaborative.com. Accessed June 10, 2016.
5. LaMantia J, Kuhn GJ, Searle NS. The CORD Academy for Scholarship in Education in Emergency Medicine. Acad Emerg Med. 2010;17:S13-5.
6. Berman JR, Aizer J, Bass AR, et al. Creating an academy of medical educators: how and where to start. HSS J. 2012;8:165-8.
7. Collins JP and Gough IR. An academy of surgical educators: sustaining education - enhancing innovation and scholarship. ANZ J Surg. 2010;80:13-7.
8. Pfeil SA, Shields HM, Marks LN, et al. The American Gastroenterological Association Academy of Educators: Creating a Home for Medical Educators in a National Specialty Organization. Gastroenterology. 2015;149(6):1309.
9. CORD Academy for Scholarship in Education in Emergency Medicine. Available at: http://www.cordem.org/i4a/pages/index.cfm?pageid=3888. Accessed June 10, 2016.
10. CORD-EMF Education Research Grant. Available at: http://www.cordem.org/i4a/pages/index.cfm?pageid=3356. Accessed June 7, 2016.
11. Love JN, Yarris LM, Santen SA, et al. Specialty-Specific, Collaborative Faculty Development Opportunity in Education Research: The MERC at CORD Scholars' Program at 5 years. Acad Med. 2016;91:548-55.
12. Newgard CD, Beeson MS, Kessler CS, et al. 2012 Consensus Conference: Establishing an Emergency Medicine Education Research Consortium. Acad Emerg Med. 2012;19:1468-85.
13. Jordan J, Yarris L, Runde D, et al. Scholarship and the Emergency Medicine Educator: A Workforce Study. Ann Emerg Med. 2015;66(4):S41-2.



Brief Research Report

A Needs Assessment for a Longitudinal Emergency Medicine Intern Curriculum Eric Shappell, MD James Ahn, MD

University of Chicago, Department of Medicine, Section of Emergency Medicine, Chicago, Illinois

Section Editor: Sally A. Santen, MD, PhD Submission history: Submitted July 12, 2016; Revision received September 16, 2016; Accepted September 18, 2016 Electronically published November 8, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.9.31493

Introduction: A key task of emergency medicine (EM) training programs is to develop a consistent knowledge of core content in recruits with heterogeneous training backgrounds. The traditional model for delivering core content is lecture-based weekly conference; however, a growing body of literature finds this format less effective and less appealing than alternatives. We sought to address this challenge by conducting a needs assessment for a longitudinal intern curriculum for millennial learners.

Methods: We surveyed all residents from the six EM programs in the greater Chicago area regarding the concept, format, and scope of a longitudinal intern curriculum.

Results: We received 153 responses from the 300 residents surveyed (51% response rate). The majority of respondents (80%; 82% of interns) agreed or strongly agreed that a dedicated intern curriculum would add value to residency education. The most positively rated teaching method was simulation sessions (91% positive responses), followed by dedicated weekly conference time (75% positive responses) and dedicated asynchronous resources (71% positive responses). Less than half of respondents (47%; 26% of interns) supported use of textbook readings in the curriculum.

Conclusion: There is strong learner interest in a longitudinal intern curriculum. This needs assessment can serve to inform the development of a universal intern curriculum targeting the millennial generation. [West J Emerg Med. 2017;18(1)31-34.]

INTRODUCTION
A key task of emergency medicine (EM) training programs is to develop a consistent knowledge of core content in recruits with heterogeneous training backgrounds and variable gaps in education.1 The traditional model for delivering core content is lecture-based weekly conference; however, a growing body of literature finds this format less effective and less appealing than alternatives.2-6 As a result, some training programs have introduced new teaching methods such as shorter and more interactive lectures, small group sessions, and web-based asynchronous components.7,8 These advances herald the adaptation of conference design to meet the challenges of educating today's millennial learners, who have "little desire to read long texts,"9 value appropriate usage of technology, and seek interactive learning opportunities.10,11

Compounding the challenge of engaging millennials, the traditional model of delivering the same content to all training levels has limited educational return.12 Topics appropriate for interns are unlikely to be high yield for senior residents, whereas advanced topics can be inappropriate for novice learners. Finally, the traditional model may suffer from a limited audience during weekly conference; it is not uncommon for residents to miss conference due to clinical or other obligations.13 We sought to address these challenges by developing a novel longitudinal intern curriculum for millennial learners using the framework of Kern's six-step model for curriculum development.14 The first steps in this model are problem identification and a targeted needs assessment. Within this framework, the problem was the absence of a longitudinal curriculum tailored for interns that was consistently available.




We conducted a literature review that revealed two examples of intern boot camp development.15,16 However, we were unable to identify a longitudinal curriculum for intern-level learners, and there are no published reports of a needs assessment for an EM intern-level curriculum. We aim to fill this gap in the literature by conducting a targeted needs assessment of EM residents on the concept, format, and scope of a longitudinal intern curriculum.

METHODS
We surveyed all residents from the six EM programs in the greater Chicago area during the 2015-2016 academic year. These programs include two four-year training programs and four three-year training programs. Three are university-affiliated programs, two are community programs, and one is a county program. The survey questions were developed iteratively by a working group of EM education experts with the goal of assessing resident attitudes toward the concept, format, and topics covered in a longitudinal intern curriculum (Appendix A). Example topics were chosen from core content representing three categories: emergent conditions, common complaints, and procedures/skills. We piloted the survey among a representative audience (17 residents at one institution) and established response-process validity by reviewing feedback from the pilot, which resulted in the addition of six example topics.

The survey contained both multiple-choice and free-text items. Surveys were distributed to residents via email by their respective program leadership, and participation was voluntary. The survey was open from November 2015 to April 2016, and two follow-up emails were sent (in February and April 2016) prior to closing the survey. In order to characterize potential differences in opinion between intern respondents and the study group as a whole, we analyzed intern responses separately for questions regarding the perceived value of an intern curriculum and preferred teaching methods. Positive responses were defined as responses of "Agree" or "Strongly Agree." Unanswered questions were treated as null. We compiled and analyzed data using Microsoft Excel (Microsoft, Redmond, WA).

RESULTS
We received 153 responses from the 300 residents surveyed (51% response rate). Of these respondents, 58 (38%) were interns, 40 (26%) were second-year residents, 43 (28%) were third-year residents, and 12 (8%) were fourth-year residents. The average number of residents responding from each program was 26 ± 5.9. Resident impressions of the educational value of and preferred teaching methods for a longitudinal intern curriculum are depicted in Figures A and B, respectively.
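To make the percent-positive scoring described in the Methods concrete, here is a minimal sketch in Python. The study's actual analysis was done in Microsoft Excel; the function, the sample responses, and the handling of unanswered items shown here are illustrative assumptions, not the authors' procedure.

    # Illustrative only: the survey analysis was performed in Microsoft Excel.
    # One reading of "unanswered questions were treated as null" is shown here:
    # None entries stay in the denominator but never count as positive.
    POSITIVE = {"Agree", "Strongly Agree"}

    def percent_positive(responses):
        """Percentage of responses that are 'Agree' or 'Strongly Agree'."""
        if not responses:
            return 0.0
        positive = sum(1 for r in responses if r in POSITIVE)
        return 100.0 * positive / len(responses)

    # Hypothetical responses from ten residents to one teaching-method item.
    simulation_item = ["Strongly Agree", "Agree", "Agree", "Neutral", None,
                       "Agree", "Strongly Agree", "Agree", "Disagree", "Agree"]
    print(f"{percent_positive(simulation_item):.0f}% positive")  # prints "70% positive"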

Figure (panels A-C). Emergency medicine resident opinions on intern curriculum value and design. ECG, electrocardiogram; ACS, acute coronary syndromes; CXR, chest radiograph; GI, gastrointestinal.



The majority of respondents (80%; 82% of interns) agreed or strongly agreed that a dedicated intern curriculum would add value to residency education. The most positively rated teaching method was simulation sessions (positive responses: 91% of all residents; 91% of interns), followed by dedicated weekly conference time (positive responses: 75% of all residents; 84% of interns) and dedicated asynchronous resources (positive responses: 71% of all residents; 69% of interns). Less than half of respondents (47%; 26% of interns) supported the use of textbook readings in the curriculum. When asked how many hours of weekly conference time should be dedicated to an intern curriculum, the majority responded one hour (n = 100, 65%), followed by two hours (n = 39, 25%), no time (n = 8, 5%), and three or more hours (n = 6, 4%).

Resident opinions on suggested topics to include in a longitudinal intern curriculum are illustrated in Figure C. All suggested topics received over 80% positive responses except arterial line placement (63%) and thoracotomy (43%). Topics in the open-ended portion of the survey that were submitted by more than one resident included the following: dermatology and ultrasound (three responses each), and documentation, orthopedics, and toxicology (two responses each).

DISCUSSION
This needs assessment illustrates strong learner interest in a dedicated longitudinal intern curriculum, as more than 80% of respondents believed this type of curriculum would add value to their education. Learners primarily desire dedicated conference time that offers a "hands-on" experience. The desires of this learner group are, unfortunately, faculty- and infrastructure-intensive: both dedicated conference time and simulation sessions require significant effort on the part of the program leadership responsible for organizing the curriculum and the educators running individual sessions. However, this dataset offers an objective measure of the perceived value of these investments and can be used to focus the efforts of residency leaders interested in developing an intern curriculum.

Asynchronous resources were also favorably reviewed for inclusion in the curriculum. This finding coincides with the growing number and popularity of free open-access medical education (FOAM) resources now available online.17,18 In addition, the Accreditation Council for Graduate Medical Education now allows up to 20% of conference time to be allotted to asynchronous learning, which underscores the legitimacy of this teaching modality. Resources such as ALiEM AIR,19 EM Fundamentals,20 and Emergency Medicine Foundations21 have been created with this concept explicitly in mind and are ripe for inclusion as asynchronous resources in this type of curriculum. Textbook readings were unpopular with this audience; less than half of all residents and only 26% of interns responded favorably to their inclusion in an intern curriculum. This tepid response to textbook reading is likely due to the educational needs of millennial learners.

In fact, the call for increased asynchronous resources and decreased textbook usage is not surprising given recommendations for teaching this group of learners.9

Using these data as a guide, we can draw three conclusions: 1) learners desire a level-specific curriculum; 2) learners desire a strong simulation experience in their intern curriculum; and 3) learners have eschewed the textbook in favor of asynchronous resources. An ideal curriculum that maximizes learner interest could include dedicating one hour of weekly conference time to training level-specific topics. The topics covered should include those most highly recommended by learners in this study. A simulation experience (ranging from high-fidelity arrangements to oral boards-style cases) should play a significant role in these weekly intern conferences. Finally, requiring learners to review asynchronous resources prior to conference would allow for a "flipped classroom" design in which more time could be dedicated to simulation and discussion, an approach associated with higher learner satisfaction and knowledge acquisition.22,23 Using the framework of Kern, the next steps in building this curriculum would be the development of goals and objectives, educational methods, curricular implementation, learner assessment, and curriculum evaluation. Before widespread deployment of such a curriculum can be justified, pilot programs will at the very least need to establish evidence of outcome non-inferiority when compared with traditional methods.

LIMITATIONS
This survey was conducted across a small, heterogeneous group of residency programs, including both three- and four-year programs, as well as a mix of university, community, and county settings. While this variety may speak to the generalizability of the findings, the pooled results may also wash out specific program-level attitudes and perceived strengths or weaknesses. Additionally, given our response rate, there is potential for nonresponse error, as those who chose not to respond may have different curricular needs than those who did.24 However, since this is a needs assessment, we felt that capturing 50% of the learners yielded important information about the educational needs of this generation's EM interns. It is also important to note that these data represent the opinions of one group of stakeholders (i.e., learners), and such information should not be used in isolation to make curricular decisions. Additionally, while the initial problem was identified by program leadership and education experts at one institution and the survey design was informed by their assessment of learner needs, we also face coverage error, as additional stakeholders (e.g., outside program leadership) were not asked about perceived needs. Future study should include this group to obtain this important perspective. Also, we did not address the preferred curricular structure for the minority of residents who indicated neutrality or negative opinions regarding this type of longitudinal intern curriculum. Further study to determine how best to approach these learners is warranted.




Finally, we acknowledge that not all institutions may have the necessary resources to develop a longitudinal EM intern curriculum as described by this study. The development of a universal open-access curriculum would be beneficial to programs that lack the infrastructure to create an intern curriculum locally.

CONCLUSION
This study shows strong learner interest in a longitudinal intern curriculum. The preferred educational methods include dedicated conference and simulation time with corresponding asynchronous resources. This needs assessment can serve to inform the development of a universal longitudinal intern curriculum targeting the millennial generation.

ACKNOWLEDGMENT
The authors would like to acknowledge Margaret Sheey, Michael Gottlieb, Michael Gisondi, Marc Dorfman, and Elise Lovell for their assistance with this project.

Address for Correspondence: Eric Shappell, MD, University of Chicago, Department of Medicine, Section of Emergency Medicine, 5841 S. Maryland Ave, MC 5068, Chicago, IL 60637. Email: shappell.eric@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. Eric Shappell is the creator and editor-in-chief of EM Fundamentals and has contributed work to the Academic Life in Emergency Medicine blog and Emergency Medicine Foundations. James Ahn is a senior advisor for EM Fundamentals and has contributed work to the Academic Life in Emergency Medicine blog.

Copyright: © 2016 Shappell et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Santen SA, Rademacher N, Heron SL, et al. How competent are emergency medicine interns for level 1 milestones: who is responsible? Acad Emerg Med. 2013;20(7):736-9.
2. Wolff M, Wagner MJ, Poznanski S, et al. Not another boring lecture: engaging learners with active learning techniques. J Emerg Med. 2015;48(1):85-93.
3. Hart D and Joing S. The Millennial Generation and "the lecture". Acad Emerg Med. 2011;18(11):1186-7.
4. Maddry JK, Varney SM, Sessions D, et al. A comparison of simulation-based education versus lecture-based instruction for toxicology training in emergency medicine residents. J Med Toxicol. 2014;10(4):364-8.
5. Stephens MB, McKenna M, Carrington K. Adult learning models for large-group continuing medical education activities. Fam Med. 2011;43(5):334-7.
6. De Lorenzo RA and Abbott CA. Effectiveness of an adult-learning, self-directed model compared with traditional lecture-based teaching methods in out-of-hospital training. Acad Emerg Med. 2004;11(1):33-7.
7. Gottlieb M, Riddell J, Crager SE. Alternatives to the Conference Status Quo: Addressing the Learning Needs of Emergency Medicine Residents. Ann Emerg Med. 2016.
8. Kornegay JG, Leone KA, Wallner C, et al. Development and implementation of an asynchronous emergency medicine residency curriculum using a web-based platform. Intern Emerg Med. 2016.
9. Twenge JM. Generational changes and their impact in the classroom: teaching Generation Me. Med Educ. 2009;43(5):398-405.
10. Roberts DH, Newman LR, Schwartzstein RM. Twelve tips for facilitating Millennials' learning. Med Teach. 2012;34(4):274-8.
11. Oblinger D and Oblinger J. Educating the Net Generation. Boulder, CO: EDUCAUSE; 2005.
12. Farrohki ET, Jensen AR, Brock DM, et al. Expanding resident conferences while tailoring them to level of training: a longitudinal study. J Surg Educ. 2008;65(2):84-90.
13. Austin PE, Brown CK, Dunn KA. Conference attendance: do we meet the new residency review committee requirements? Ann Emerg Med. 1995;25(3):325-7.
14. Kern DE, Thomas PA, Hughes MT. Curriculum Development for Medical Education: A Six-Step Approach. Baltimore, MD: Johns Hopkins University Press; 2009.
15. Lucas R, Choudhri T, Roche C, et al. Developing a curriculum for emergency medicine residency orientation programs. J Emerg Med. 2014;46(5):701-5.
16. Ataya R, Dasgupta R, Blanda R, et al. Emergency medicine residency boot camp curriculum: a pilot study. West J Emerg Med. 2015;16(2):356-61.
17. Cadogan M, Thoma B, Chan TM, et al. Free Open Access Meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002-2013). Emerg Med J. 2014;31(e1):e76-7.
18. Mallin M, Schlein S, Doctor S, et al. A survey of the current utilization of asynchronous education among emergency medicine residents in the United States. Acad Med. 2014;89(4):598-601.
19. Academic Life in Emergency Medicine Approved Instructional Resources Series. Available at: https://www.aliem.com/aliem-approved-instructional-resources-air-series/. Accessed July 7, 2016.
20. EM Fundamentals. Available at: http://emfundamentals.com/. Accessed July 7, 2016.
21. Emergency Medicine Foundations. Available at: https://emergencymedicinefoundations.com/. Accessed July 7, 2016.
22. Liebert CA, Mazer L, Bereknyei Merrell S, et al. Student perceptions of a simulation-based flipped classroom for the surgery clerkship: A mixed-methods study. Surgery. 2016.
23. Liebert CA, Lin DT, Mazer LM, et al. Effectiveness of the Surgery Core Clerkship Flipped Classroom: a prospective cohort trial. Am J Surg. 2016;211(2):451-7e451.
24. Yarris LM and Deiorio NM. Education research: a primer for educators in emergency medicine. Acad Emerg Med. 2011;18 Suppl 2:S27-35.


Original Research

Development of an Assessment for Entrustable Professional Activity (EPA) 10: Emergent Patient Management Laura R. Thompson, MD* Cynthia G. Leung, MD, PhD* Brad Green, MD* Jonathan Lipps, MD‡ Troy Schaffernocker, MD† Cynthia Ledford, MD† John Davis, MD, PhD† David P. Way, MEd* Nicholas E. Kman, MD*

*The Ohio State University College of Medicine, Department of Emergency Medicine, Columbus, Ohio † The Ohio State University College of Medicine, Department of Internal Medicine, Columbus, Ohio ‡ The Ohio State University College of Medicine, Department of Anesthesiology, Columbus, Ohio

Section Editor: Jonathan Fisher, MD Submission history: Submitted July 7, 2016; Revision received September 14, 2016; Accepted October 27, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31479

Introduction: Medical schools in the United States are encouraged to prepare and certify the entrustment of medical students to perform 13 core entrustable professional activities (EPAs) prior to graduation. Entrustment is defined as the informed belief that the learner is qualified to autonomously perform specific patient-care activities. Core EPA-10 is the entrustment of a graduate to care for the emergent patient. The purpose of this project was to design a realistic performance assessment method for evaluating fourth-year medical students on EPA-10.

Methods: First, we wrote five emergent patient case scenarios that a medical trainee would likely confront in an acute care setting. Next, we developed high-fidelity simulations to realistically portray these patient case scenarios. Finally, we designed a performance assessment instrument to evaluate the medical student's performance in executing critical actions related to EPA-10 competencies. Critical actions included the following: triage skills, mustering the medical team, identifying causes of patient decompensation, and initiating care. Up to four students were involved with each case scenario; however, only the team leader was evaluated using the assessment instruments developed for each case.

Results: A total of 114 students participated in the EPA-10 assessment during their final year of medical school. Most students demonstrated competence in recognizing unstable vital signs (97%), engaging the team (93%), and making appropriate dispositions (92%). Almost 87% of the students (99 of 114) were rated as having reached entrustment to manage the care of an emergent patient. Inter-rater reliability varied by case scenario, ranging from moderate to near-perfect agreement. Three of the five case-scenario assessment instruments contained items that were internally consistent at measuring student performance. Additionally, the individual item scores for these case scenarios were highly correlated with the global entrustment decision.

Conclusion: High-fidelity simulation showed good potential for effective assessment of medical student entrustment for caring for the emergent patient. Preliminary evidence from this pilot project suggests content validity of most cases and associated checklist items. The assessments also demonstrated moderately strong faculty inter-rater reliability. [West J Emerg Med. 2017;18(1)35-42.]



INTRODUCTION
In 2014, the Association of American Medical Colleges (AAMC) published 13 Core Entrustable Professional Activities, or EPAs, considered essential competencies that medical school graduates are expected to be able to perform prior to entry into residency.1 EPAs are considered foundational for all practicing physicians, regardless of specialty choice, and describe sets of integrated competencies required for the care of specific patient types in specific patient settings. EPAs are multifaceted and integrated, making them more appropriate to assess holistically.2 As originally conceived, entrustment was a discrete standard that literally meant a graduated medical student was prepared to perform a patient care activity without direct supervision. This has been debated as too ambitious for some patient types and some care settings. Chen et al. have proposed an alternative framework of entrustment for undergraduate medical education (UME) that includes a continuum of UME entrustment and supervision.3

If conceived as a discrete standard, EPA-10 poses substantial challenges for undergraduate medical educators, both logistically and ethically. This is the type of EPA Chen addresses when suggesting that entrustment should be considered a continuum rather than discrete. EPA-10 requires medical students to "recognize a patient requiring urgent or emergent care and initiate evaluation and management."1 To earn entrustment, a student must recognize a patient's clinical decompensation or abnormal vital signs, gather information to determine possible causes, begin initial stabilization, and call for assistance. Challenges include the lack of opportunities students have to manage patients requiring emergent care. Even when a student does encounter an emergent patient, concern for patient safety often precludes the student's involvement in the patient's evaluation and management. Consequently, alternative methods for assessing EPA-10, and perhaps the adoption of a continuum of entrustment and supervision, are needed.3,4

The purpose of this project was to develop an assessment method and associated instrumentation for evaluating medical students on EPA-10. The project involved the design of case scenarios representing patients in need of emergent care, the design of high-fidelity simulations to evaluate the student's performance on these cases, and the design of an assessment instrument for faculty to document the student's performance.

METHODS
Educational Program (Setting)
Our population was fourth-year medical students (M4s) at The Ohio State University College of Medicine. Our class sizes average about 190 students per year. M4s at our institution must complete several required rotations, one of which is a one-month emergency medicine (EM) clerkship. We defined our study period as June-December 2015, which provided sufficient numbers of EM clerkship students to evaluate the EPA-10 assessment method.

Thompson et al.

this time period, we were also assured that we captured performance data for medical students who were most likely going into EM. The goals of the EM clerkship are to attain knowledge about the practice of emergency medicine and to build skills in the assessment and management of the undifferentiated patient. The EM clerkship enrolls an average of 20 students per month. During the clerkship students work 120 clinical hours at one of seven regional emergency departments. They participate in three hours of didactics; workshops on airway, suturing, IV placement, and ultrasound; and complete 10 online learning modules. Students prepare for the EPA-10 assessment through clinical work and the completion of study modules from the National (U.S.) EM Clerkship Curriculum.5 Case Development A team of experts in both emergent care and medical student education wrote scripts for five case scenarios involving an unstable patient requiring resuscitation. Cases were derived using the criteria set forth in the AAMC’s Core EPAs for Entering Residency: Curriculum Developers Guide.1 The five case scenarios were developed on the basis of their general prevalence, unstable presentations, and easily observed critical actions required for establishing a medical and/or surgical plan. Each scenario was designed to address two or more of the medical conditions recommended by the AAMC Core EPA document.1 The cases were written using classical illness scripts so diagnosis should have been relatively clear to a fourth-year medical student, leading to a diagnostic and therapeutic plan with which they are familiar. The cases are listed here and in Table 1: 1. Chest pain: unstable atrial fibrillation (Afib) 2. Abdominal pain: ruptured ectopic pregnancy (REP) 3. Confusion and fever: sepsis (SEP) 4. Headache: subarachnoid hemorrhage (SAH) 5. Trauma and shortness of breath: trauma-related tension pneumothorax (PTX) Simulation Development High-fidelity simulation was chosen as the modality for the EPA-10 assessment because it provided a balance between the realistic portrayal of an unstable patient and standardization across cases and assessment sessions. Cases were forwarded to local simulation experts: an emergency physician who was fellowship trained in simulation and three simulation technicians from our Clinical Skills Education and Assessment Center (CSEAC). These individuals crafted a simulation for each case. Simulations were designed to take place in replicas of ED resuscitation bays in our CSEAC. Each bay was equipped with a programmable simulation manikin, and staffed by a faculty facilitator/evaluator, a simulation technician, and a nurse confederate. Voices of the patient, family and other healthcare team members were provided through telephone or speakers. 36


Table 1. Summary of five case scenarios used for assessment of entrustable professional activity (EPA) 10 in medical students.

Case 1. Chest pain: atrial fibrillation (Afib)
  Patient conditions: Arrhythmia, chest pain, hypotension
  Critical actions: 1. Obtain a 12-lead EKG; 2. Initiate medical management (beta-blocker or CCB); 3. Cardiovert the unstable patient

Case 2. Abdominal pain: ruptured ectopic pregnancy (REP)
  Patient conditions: Hypotension, tachycardia, mental status change
  Critical actions: 1. Start IV fluid bolus; 2. Transfuse O-negative blood; 3. Perform pelvic ultrasound or FAST exam; 4. Consult OB/Gyn

Case 3. Confusion and fever: sepsis (SEP)
  Patient conditions: Hypotension, fever, mental status change
  Critical actions: 1. Order IV fluid bolus; 2. Order antibiotics; 3. Establish central line access; 4. Start pressors

Case 4. Headache: subarachnoid hemorrhage (SAH)
  Patient conditions: Mental status change, hypertension
  Critical actions: 1. Order head CT; 2. Perform lumbar puncture; 3. Consult neurosurgery; 4. Administer IV anti-hypertensive medication

Case 5. Trauma and shortness of breath: trauma-related tension pneumothorax (PTX)
  Patient conditions: Chest pain, shortness of breath, hypotension, tachycardia
  Critical actions: 1. Perform primary survey (ABCs); 2. Perform needle thoracostomy; 3. Order CXR; 4. Reassess the patient

EKG, electrocardiogram; CCB, calcium channel blocker; IV, intravenous; CT, computed tomography; CXR, chest x-ray

Evaluation Instrument Development
An expert panel of education faculty (three EM, one Anesthesia) was tasked with developing the assessment instrument for evaluating student performance on each simulated case. The AAMC EPA Curriculum Developers Guide was again used to identify expected performance tasks for a learner who would be entrusted to recognize a patient requiring emergent care; initiate evaluation and management; and seek help within the clinical contexts assessed.1 The performance tasks were designed to be highly observable and low inference. Accordingly, they were converted to three types of checklist items. First, a set of three universal critical actions was identified and applied to all cases: recognizes unstable vital signs; asks for help; and determines patient's disposition. These were supplemented with case-specific critical actions related to 1) identification of underlying etiologies of the patient's decompensation, 2) initiation of care plans, and 3) application of basic and advanced life support. Finally, each case included a global entrustment item that asked whether the evaluator would "entrust" the student to manage a similar case unsupervised. Checklist items were reviewed and revised by experts in critical care, simulation design, and assessment (Table 2). The instruments were formatted for use in a web-based electronic assessment platform called Myprogress™ and were delivered for use by faculty through wireless tablet computers. Performance data were collected and stored until needed on the Myprogress™ cloud-based servers.7


Assessment Method
Prior to the simulation, students were provided an orientation to the trauma bay. For each case, they were provided a chief complaint and instructed to treat the manikin as a real patient. Simulations were designed for teams of four students. Each team completed four 30-minute cases. Each student was designated as team leader for one case. As team leader, they were responsible for making all medical decisions, recognizing critical actions, and assigning tasks to the other team members. Student performance was evaluated only during their turn as team leader. Faculty facilitators completed the EPA-10 evaluation checklists in real time during the simulation.

All decisions made during the case, including medications administered or procedures performed, altered the course of the case based on pre-programmed simulator responses to each action. If a team leader failed to perform a critical action during the simulation, the nurse confederate provided prompts to move the case along. For example, if the student failed to initiate IV fluids in a hypotensive patient, the nurse might say "I am worried about this patient. His blood pressure seems very low." The nurse would give additional prompts as needed until the critical action was performed. When the team leader gave an unanticipated order, the controllers improvised or altered the simulation to follow that directive. A log of improvised alterations was kept so that consistent responses could be programmed into the simulation for future assessments.


Table 2. Inter-rater reliability (Krippendorff's alpha), tetrachoric correlations (Rtet, n = 41), and frequencies (percentages) of judge ratings.

Columns: Item | K-alpha | Rtet | Judge 1 No | Judge 1 Yes | Judge 2 No | Judge 2 Yes

Case 1: Chest pain: atrial fibrillation (Afib) (30 subjects, 4 judges with 8 overlapping)
1. Obtains and recognizes patient status - unstable vital signs | NA | - | 0 | 30 (100) | 0 | 8 (100)
2. Asks for help when needed | -.083 | .75† | 3 (10) | 27 (90) | 0 | 8 (100)
3. Determines patient disposition* | 1.00 | .29† | 3 (10) | 26 (87) | 1 (13) | 7 (88)
4. Provides stabilizing treatment: obtain 12-lead EKG | 1.00 | .40† | 5 (17) | 25 (83) | 3 (38) | 5 (63)
5. Provides stabilizing treatment: beta blocker or CCB | .762 | .51† | 15 (50) | 15 (50) | 6 (75) | 2 (25)
6. Provides stabilizing treatment: when vitals change, cardioversion w/o consent | .458 | .81† | 7 (23) | 23 (77) | 4 (50) | 4 (50)
Global EPA-Afib: meets entrustment | .531 | - | 6 (20) | 24 (80) | 5 (63) | 3 (38)

Case 3: Confusion and fever: sepsis (SEP) (26 subjects, 4 judges with 8 overlapping)
1. Obtains and recognizes patient status - unstable vital signs | 0.00 | .80† | 0 | 26 (100) | 1 (14) | 6 (86)
2. Asks for help when needed* | -.182 | .19† | 2 (8) | 22 (92) | 2 (29) | 5 (71)
3. Determines patient disposition* | .571 | .63† | 1 (4) | 22 (96) | 2 (29) | 4 (57)
4. Provides stabilizing treatment: IVF bolus | NA | .00† | 0 | 26 (100) | 0 | 7 (100)
5. Provides stabilizing treatment: ABTCS | 1.00 | .87† | 1 (4) | 25 (96) | 1 (14) | 6 (86)
6. Provides stabilizing treatment: central line | -.222 | .56† | 9 (35) | 17 (65) | 5 (71) | 2 (28)
7. Provides stabilizing treatment: pressor | .313 | .51† | 3 (12) | 23 (89) | 3 (43) | 4 (57)
Global EPA-sepsis: meets entrustment | 1.00 | - | 3 (12) | 23 (89) | 2 (29) | 5 (71)

Case 4: Headache: subarachnoid hemorrhage (SAH) (11 subjects, 4 judges with 7 overlapping)
1. Obtains and recognizes patient status - unstable vital signs | 0.00 | - | 1 (9) | 10 (91) | 1 (14) | 6 (86)
2. Asks for help when needed | 1.00 | - | 2 (18) | 9 (82) | 1 (14) | 6 (86)
3. Determines patient disposition | .606 | - | 3 (27) | 8 (73) | 1 (14) | 6 (86)
4. Provides stabilizing treatment: pain control | 1.00 | - | 5 (46) | 6 (55) | - | 5 (71)
5. Provides stabilizing treatment: CT head* | NA | - | 0 | 11 (100) | 0 | 7 (100)
6. Provides stabilizing treatment: lumbar puncture | -.083 | - | 0 | 11 (100) | 2 (29) | 5 (71)
7. Provides stabilizing treatment: consult neurosurgeon* | -.167 | - | 1 (9) | 8 (73) | 1 (14) | 3 (43)
8. Provides stabilizing treatment: admin IV antihypertensive* | NA | - | 0 | 6 (55) | 0 | 1 (14)
Global EPA-SAH: meets entrustment | NA | - | 0 | 11 (100) | 0 | 7 (100)

Case 5: Trauma and shortness of breath: trauma/tension pneumothorax (PTX) (28 subjects, 4 judges with 8 overlapping)
1. Obtains and recognizes patient status - unstable vital signs* | NA | .68† | 0 | 28 (100) | 0 | 8 (100)
2. Asks for help when needed | .350 | .71† | 10 (38) | 18 (64) | 3 (38) | 5 (63)
3. Determines patient disposition* | -.083 | .49† | 9 (32) | 19 (68) | 5 (63) | 3 (38)
4. Provides stabilizing treatment: airway and vitals | -.083 | .87† | 2 (7) | 26 (93) | 1 (13) | 7 (88)
5. Provides stabilizing treatment: needle thoracostomy | 1.00 | .70† | 8 (29) | 20 (71) | 2 (25) | 6 (75)
6. Provides stabilizing treatment: x-ray and reassess | 1.00 | .52† | 2 (7) | 26 (93) | 2 (25) | 6 (75)
Global EPA-PTX: meets entrustment | .606 | - | 5 (18) | 23 (82) | 1 (13) | 7 (88)

*Values were missing from this variable due to software problems. †Statistically significant. NA = judges have perfect agreement, using the same rating for all subjects (Krippendorff's alpha is indeterminate when all judges rate all subjects with the same score). EKG, electrocardiogram; CCB, calcium channel blocker; EPA, entrustable professional activity; IVF, intravenous fluid; CT, computed tomography; IV, intravenous


During the study period, the EPA-10 assessment was considered a formative evaluation. Students were only required to participate and receive formative feedback on their performance. Students who performed poorly (did not attain the rating of "global entrustment") were offered a coaching session during which they were provided a chance to perform additional cases. To evaluate inter-rater reliability for the assessments, we scheduled two faculty facilitator/evaluators for each trauma bay during the first two months of the project. After that, scheduling two faculty per trauma bay became cost prohibitive.

Scoring
Performance data were downloaded from Myprogress™ and scored. We scored each critical action item as "YES", "NO", or "With prompting from the nurse confederate." For analysis purposes, the "With prompting" rating was rescored as a "NO" since the performance did not meet the threshold of being executed autonomously. Global entrustment was assessed as "YES" or "NO."

Analysis
Besides descriptive statistics, we conducted three primary analyses to investigate the psychometric properties of the EPA-10 instruments. We used the Krippendorff's alpha (K-alpha) statistic to evaluate inter-rater reliability among the faculty evaluators.8,9 The K-alpha provides stable estimates of inter-rater reliability under the conditions of partially-crossed designs. (Partially-crossed designs occur when all subjects are not evaluated by all judges.)8 It has become the most recommended measure of inter-rater reliability for nominal-level data like yes-no checklists.10-13 We calculated K-alphas for each checklist item, including the global entrustment rating. We also calculated tetrachoric correlations (Rtet) between each checklist item and its corresponding global entrustment item. The Rtet provides an indicator of internal consistency within the checklist. A high Rtet also implies that the item contributes to the global entrustment decision.
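For readers who want to reproduce this style of scoring and reliability analysis outside of SPSS, a minimal Python sketch is shown below. It is an illustration on hypothetical data, not the study's actual workflow: the third-party krippendorff package and the cosine approximation to the tetrachoric correlation are assumptions introduced here, whereas the published results were computed with the SPSS syntax modules cited in the references.

```python
# Minimal sketch (not the study's SPSS workflow): recode checklist ratings,
# compute Krippendorff's alpha across raters, and approximate a tetrachoric
# correlation between one item and the global entrustment rating.
# Assumes the third-party `krippendorff` package (pip install krippendorff).
import numpy as np
import krippendorff

# Hypothetical ratings for one checklist item: rows = faculty raters, columns = students.
# None means the student was not seen by that rater (partially-crossed design).
ratings = [
    ["YES", "YES", "PROMPTED", "NO", None],
    ["YES", "NO",  "NO",       None, "YES"],
]

def recode(value):
    """Recode to 1/0; 'With prompting' counts as NO because the action was not autonomous."""
    if value is None:
        return np.nan
    return 1.0 if value == "YES" else 0.0

reliability_data = np.array([[recode(v) for v in rater] for rater in ratings])
k_alpha = krippendorff.alpha(reliability_data=reliability_data,
                             level_of_measurement="nominal")

def tetrachoric_approx(x, y):
    """Cosine approximation to the tetrachoric correlation for two dichotomous vectors."""
    x, y = np.asarray(x), np.asarray(y)
    a = np.sum((x == 1) & (y == 1))   # yes/yes
    b = np.sum((x == 1) & (y == 0))   # yes/no
    c = np.sum((x == 0) & (y == 1))   # no/yes
    d = np.sum((x == 0) & (y == 0))   # no/no
    if b * c == 0:                    # degenerate 2x2 table
        return 1.0 if a * d > 0 else np.nan
    return np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c))))

# Hypothetical per-student scores for one item versus the global entrustment decision.
item_scores = [1, 1, 0, 1, 0, 1, 1, 0]
entrustment = [1, 1, 0, 1, 1, 1, 1, 0]
r_tet = tetrachoric_approx(item_scores, entrustment)

print(f"Krippendorff's alpha = {k_alpha:.2f}, approximate Rtet = {r_tet:.2f}")
```

This sketch only illustrates the shape of the computation; the values reported in Table 2 come from the study's own data and software.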

To evaluate inter-rater reliability of faculty pairs on their global entrustment ratings, we calculated the percent agreement and Cohen's kappa coefficients across all subjects. In situations involving dichotomous data and pairs of raters, Uebersax recommends using the p-values from calculating Cohen's kappa coefficients to assess whether agreement exceeds that which might be expected by chance.10,13 The results of this test informed us about which pairs of raters had the best agreement and which required additional feedback or training.

We did most computations using SPSS for Windows, V. 22.14 The Krippendorff's alpha measures were calculated using an SPSS syntax module written by Hayes.15 We calculated the tetrachoric correlations using an SPSS syntax module called Tetra-Com.16 This project was determined to be exempt from human subjects review by our institutional review board.

RESULTS
One hundred fourteen medical students, or 62% of the total class (114 of 185), participated in the EPA-10 assessment between June and December of 2015. Table 3 summarizes the number of students by rotation, case scenario, and number of evaluators. Three cases were used for every rotation: Afib (30, 26%), SEP (26, 23%), and PTX (28, 25%). A fourth case (SAH) was determined to be too easy and was subsequently replaced with the ruptured ectopic pregnancy case (REP). Due to a technical problem with the web-based assessment platform used for data collection, the data for the REP case were incomplete. Twenty-eight percent of students were evaluated by more than one faculty member (32 of 114). Almost 87% of the students were rated as having reached ad hoc entrustment as defined by the EPA-10 criteria (86.8%; 99 of 114). Cohen's kappa coefficients across the four pairs of judges who jointly assessed students on global entrustment ranged from 0.46-1.0, with three of the four pairs' agreement being significantly better than chance. Two of the kappa coefficients show substantial agreement, while the other two show moderate agreement (Table 4).

Table 3. Number of medical student participants and faculty evaluators by rotation, along with the number and percentage of students who attained entrustment (i.e., met EPA).

Rotation | Students evaluated with each case (Afib / REP / SEP / SAH / PTX / Total) | Met EPA | Evaluated by one / two / three faculty
1 | 3 / 0 / 3 / 3 / 3 / 12 | 9 (75%) | 0 / 0 / 12
2 | 4 / 0 / 3 / 4 / 4 / 15 | 11 (73%) | 0 / 15 / 0
3 | 5 / 0 / 4 / 4 / 4 / 17 | 17 (100%) | 17 / 0 / 0
4 | 5 / 6 / 5 / 0 / 6 / 22 | 20 (90%) | 22 / 0 / 0
5 | 6 / 6 / 6 / 0 / 5 / 23 | 21 (92%) | 18 / 3 / 0
6 | 4 / 4 / 3 / 0 / 3 / 14 | 10 (71%) | 14 / 0 / 0
7 | 3 / 3 / 2 / 0 / 3 / 11 | 11 (100%) | 11 / 0 / 0
TOTAL | 30 / 19 / 26 / 11 / 28 / 114 | 99 (87%) | 82 / 18 / 12

Afib, atrial fibrillation; REP, ruptured ectopic pregnancy; SEP, sepsis; SAH, subarachnoid hemorrhage; PTX, pneumothorax; EPA, entrustable professional activity.



Table 4. Pairwise percentage agreement (upper diagonal) and Cohen's kappa coefficients (lower diagonal) for judgments on 31 subjects on entrustment, or the student's ability to manage an acutely decompensating/acutely ill patient with a life-threatening illness. The number of students rated jointly by each judge pair is in parentheses in the upper diagonal.

Faculty raters | A      | B       | C       | D       | Summary
A              |        | 100 (8) | -       | -       | 100 (8)
B              | 1.00** |         | 81 (16) | 85 (13) | 83 (29)
C              | -      | .46     |         | 90 (20) | 90 (20)
D              | -      | .57*    | .73***  |         |

*Significant at p<0.05; **significant at p<0.01; ***significant at p<0.001.
Key for Cohen's kappa coefficients: <0.0 = poor, less than chance agreement; 0.01 to 0.20 = slight agreement; 0.21 to 0.40 = fair agreement; 0.41 to 0.60 = moderate agreement; 0.61 to 0.80 = substantial agreement; 0.81 to 0.99 = almost perfect agreement.

The K-alpha inter-rater reliabilities allowed us to look at faculty agreement on global entrustment for each case. The K-alpha values were 0.53 for the Afib case, 0.61 for the PTX case, and 1.00 for the SEP case. We were unable to calculate a K-alpha value for the SAH case since all evaluators selected the same response; however, this implies perfect inter-rater reliability.

Summary of Common Critical Action Items
Three critical action items were common to all four cases: "Obtains and recognizes patient status - unstable vital signs," "Asks for help when needed," and "Determines patient disposition." All students were rated as entrusted by all raters for the SAH case. Consequently, we were unable to calculate the Rtet coefficients for these items. All but two students achieved entrustment on the first item, "Obtains and recognizes patient status - unstable vital signs," across all cases. For the students who were evaluated by two faculty, inter-rater agreement was near perfect. Accordingly, there was little information gleaned from the statistical analyses for this critical action item. We did observe, however, high positive and significant Rtet correlations with the global entrustment outcome for both the SEP and PTX cases.

The "Asks for help" item suffered from poor inter-rater agreement on the Afib and SEP cases. Raters demonstrated better agreement on the other two cases, SAH and PTX. Rtet correlations with the outcome can be considered strong for the Afib and PTX cases, and low but positive for the SEP case.

Faculty raters generally agreed on whether students "Determined patient disposition" for three of the four cases. The exception was the PTX case, which suffered a negative K-alpha value (-.083). Rtet correlations for this item were positive across three cases: 0.29 for Afib, 0.49 for PTX, and 0.63 for the SEP case.


Summary of Stabilizing Treatment Items
The case instruments contained between three and five case-specific "stabilizing treatment" items. With a few exceptions, these items generally performed well, meaning there was positive and substantial inter-rater agreement and strong, positive Rtet correlations with the EPA-10 outcome rating of each case. Poor inter-rater agreement was observed on two of the items within the SAH case: lumbar puncture (-.083) and calling for a neurosurgery consult (-.167). For the SEP case, poor inter-rater agreement was observed for placement of a central line (-.222). Finally, there was also a lack of inter-rater agreement on the PTX case for establishing an airway and rechecking vital signs (-.083).

Summary of Cases
Missing data posed a minor problem for this study. A complete evaluation of the REP case was not possible due to a technical problem. Evaluator ratings of the items on the SAH case lacked variability, so that statistics were impossible to calculate, leaving it difficult to interpret item performance. The other three cases suffered some missing data but were still able to be evaluated. For the Afib and PTX cases, all items were observed to have positive Rtet correlations with global entrustment. The SEP case, however, contained two items that did not have strong correlations with global entrustment. One was due to lack of variability in the ratings (every subject was scored as having achieved that critical action). The other had a positive but low Rtet correlation (.19) with global entrustment.

DISCUSSION
Entrustable professional activities represent an important addition to the framework of modern medical training. Measurement of these essential activities contributes to certifying a trainee's ability to perform to accepted standards of care. Medical schools and residency programs have a responsibility to the public to assure that their graduates have been assessed for entrustment of these activities prior to unsupervised practice. To meet this responsibility, medical educators must integrate high-quality, formal EPA assessments into their training programs.

EPA-10 is particularly important because it requires the medical student to recognize an unstable patient who requires life-saving, emergent care. Assessing a medical student's ability to perform EPA-10 activities is difficult in the clinical setting. High-fidelity simulation (HFS) offers the opportunity to train and assess medical students on EPA-10-related competencies. Literature on the use of HFS for assessing EPA-10 is limited; however, residents at some Canadian institutions have been effectively assessed with checklists and HFS.17,18

Three critical actions were common across all of the


cases: recognizing abnormal vital signs, asking for help when needed, and determining patient disposition. Reassuringly, only two students failed to recognize abnormal vital signs. Disconcerting, however, is that 20% of students (23 of 114) failed to ask for help when needed, and 22% (25 of 114) failed to accurately determine the patient disposition. The first common item achieved strong inter-rater reliability, probably because of well-established parameters and clearly defined values for vital signs. The other two common items had inconsistent inter-rater reliability across cases. Raters had good agreement on determining the disposition plan in three of the four scenarios, excluding PTX. We believe that the low inter-rater reliability on the disposition item of the PTX case was primarily due to variable approaches to airway management across evaluators from two different specialties. For asking for help, inter-rater reliability was inconsistent in two of the four cases (Afib and SEP). This was likely due to inconsistency in how faculty interpreted the student behaviors.

For the checklist rating scale instruments, we attempted to maximize inter-rater reliability by selecting performance tasks that were 1) highly observable (a rater would know "it" when they see "it") and 2) low inference (easily interpreted). The prompting from the nurse confederate was needed to complete the simulation in the allotted time. However, for measurement purposes, the rating of "with prompting" became a source of unreliability. For all statistical tests, we recoded this value to a "NO" response, indicating that the student had not reached a measurement threshold of entrustment. We believe that the use of this rating scale option was a source of inconsistency among our raters (i.e., some raters used this rating frequently, and others did not use it at all). In the future, this rating will have to be more clearly defined or eliminated from the instrument. A good example of inter-rater reliability measures affected by this problem occurred in the SAH case (items 6 and 7) and the SEP case (item 6).

Overall, we found that nearly 87% of students met our global assessment of ad hoc entrustment. Additionally, we observed good inter-rater reliability among the four pairs of established faculty raters on this global entrustment item. We did not specifically measure the impact of team support on the team leader's entrustment; however, this most certainly affected determination of global entrustment for some students. For the Afib and PTX cases, all items were observed to have high, positive Rtet correlations with global entrustment. We interpret this to mean that these items contribute significantly to the entrustment decision and are important components of the measurement instrument. The SEP case, however, contained two items that did not have strong correlations with global entrustment, one due to lack of variability in this outcome (every subject was scored as having achieved that critical action). The other had a positive but low correlation with global entrustment. We believe that these two items need to be revised or replaced to improve their ability to discriminate between high- and low-performing students.

LIMITATIONS
We confronted several limitations. First, a complete evaluation of the REP case was not possible due to significant missing data points caused by a technical glitch in the electronic data collection platform. This case will have to be re-evaluated in the future. Second, we were unable to completely isolate an individual student's performance from the performance of the team. Conversely, there was no way to recognize an underperforming team leader who performed well in a support role during another scenario. A third limitation derives from the logistics of our assessment methods. Since students participated in more than one case but were only evaluated on the case they led, there could have been a cumulative practice benefit for the students who were last to lead. In the future we would like to measure the practice effect obtained by repeated participation in simulated case scenarios such as those used for this project.

Limitations on generalizability to other medical schools include equipment availability and the time investment of faculty and support staff. HFS equipment and qualified technical support staff require a significant institutional monetary investment. For each student assessment we used one to two trained physician faculty raters, a trained simulator specialist, and a trained actor for the resuscitation bay nurse role. Each assessment lasted up to 30 minutes per student. Substantial cost savings might be realized by the use of trained non-physician evaluators. Future research is needed to establish how well ad hoc entrustment based on a single simulation case can predict entrustment in the care of actual patients.

CONCLUSION
We have designed an evaluation for EPA-10 that includes universal critical actions, case-specific critical actions, and a global rating of ad hoc entrustment. The preliminary evidence suggests that inter-rater reliability and content validity were achieved for three of four case simulations and checklist instruments. Future studies are needed to establish generalizability across other patient cases and other institutions.

Address for Correspondence: Laura Thompson, MD, The Ohio State University Wexner Medical Center, Department of Emergency Medicine, 781 Prior Hall, 376 W. 10th Ave., Columbus, OH 43210; email: laura.thompson@osumc.edu. Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.


Copyright: © 2016 Thompson et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Englander R, Flynn T, Call S, et al. Core entrustable professional activities for entering residency: Curriculum developers guide. Washington, DC: Association of American Medical Colleges MedEdPORTAL iCollaborative. 2014; Resource ID 887. Available at: https://www.mededportal.org/icollaborative/resource/887. Accessed January 28, 2016.
2. Ten Cate O, Chen HC, Hoff RG, et al. Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE Guide No. 99. Med Teach. 2015;37(11):983-1002.
3. Chen et al. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90(4):431-6.
4. McEvoy MD, Dewaay DJ, Vanderbilt A, et al. Are fourth-year medical students as prepared to manage unstable patients as they are to manage stable patients? Acad Med. 2014;89(4):618-24.
5. Manthey D, Ander D, Gordon D, et al. Emergency medicine clerkship curriculum: an update and revision. Acad Emerg Med. 2010;17(6):638-43.
6. MyKnowledgeMap. Myprogress (version 5.3). Web application and mobile app. York, UK: MyKnowledgeMap Ltd.; 2015.
7. Krippendorff K. Agreement and information in the reliability of coding. Commun Methods Meas. 2011;5(2):93-112.
8. Krippendorff K. Content Analysis: An Introduction to its Methodology (2nd Ed). Thousand Oaks, CA: Sage Publications; 2004:241-3.
9. Viera AJ and Garrett JM. Understanding inter-observer agreement: the kappa statistic. Fam Med. 2005;37(5):360-3.
10. Joyce M. Picking the best intercoder reliability statistic for your digital activism content analysis. Blog entry posted on the Digital Activism Research Project website, Dept. of Communication, University of Washington, Seattle. 2013. Available at: http://digital-activism.org/2013/05/picking-the-best-intercoder-reliability-statistic-for-your-digital-activism-content-analysis/. Accessed Jan 14, 2016.
11. Geertzen J. Inter-rater agreement with multiple raters and variables. 2012. Available at: https://nlp-ml.io/jg/software/ira/. Accessed Jan 14, 2016.
12. Uebersax JS. Statistical methods for diagnostic agreement: recommended methods. John Uebersax Enterprises LLC; 2015. Available at: http://www.john-uebersax.com/stat/agree.htm. Accessed Jan 14, 2016.
13. IBM Corp. IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY: IBM Corp; 2013.
14. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1:77-89.
15. Lorenzo-Seva U and Ferrando PJ. Tetra-Com, Version 1.00: a comprehensive program for computing tetrachoric correlation matrices. Rovira i Virgili University, Tarragona, Spain; 2011.
16. Hall AK, Pickett W, Dagnone JD. Development and evaluation of a simulation-based resuscitation scenario assessment tool for emergency medicine residents. Can J Emerg Med Care. 2012;14(3):139-46.
17. Hall AK, Dagnone JD, Lacroix L, et al. Queen's Simulation Assessment Tool: development and validation of an assessment tool for resuscitation objective structured clinical examination stations in emergency medicine. Simul Healthc. 2015;10(2):98-105.


Original Research

"Let Me Tell You About My…" Provider Self-Disclosure in the Emergency Department Builds Patient Rapport

Korie L Zink, MS*; Marcia Perry, MD†; Kory London, MD†; Olivia Floto‡; Benjamin Bassin, MD†; John Burkhardt, MD, MA†§; Sally A Santen, MD, PhD†§

*University of Michigan, University of Michigan Medical School, Ann Arbor, Michigan † University of Michigan, Department of Emergency Medicine, Ann Arbor, Michigan ‡ University of Michigan, Ann Arbor, Michigan § University of Michigan, Department of Learning Health Sciences, Ann Arbor, Michigan

Section Editor: Mark I. Langdorf, MD, MHPE Submission history: Submitted June 3, 2016; Revision received October 28, 2016; Accepted October 28, 2016 Electronically published November 23, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31014

Introduction: As patients become increasingly involved in their medical care, physician-patient communication gains importance. A previous study showed that physician self-disclosure (SD) of personal information by primary care providers decreased patient ratings of the providers' communication skills.

Objective: The objective of this study was to explore the incidence and impact of emergency department (ED) provider self-disclosure on patients' ratings of provider communication skills.

Methods: A survey was administered to 520 adult patients or parents of pediatric patients in a large tertiary care ED during the summer of 2014. The instrument asked patients whether the provider self-disclosed and subsequently asked patients to rate the provider's communication skills. We compared patients' ratings of communication measurements between encounters where self-disclosure occurred and those where it did not.

Results: Patients reported provider SD in 18.9% of interactions. Provider SD was associated with more positive patient perception of provider communication skills (p<0.05), more positive ratings of provider rapport (p<0.05), and higher satisfaction with provider communication (p<0.05). Patients who noted SD scored their providers' communication skills as "excellent" (63.4%) more often than patients without self-disclosure (47.1%). Patients reported that they would like to hear about their providers' experiences with a similar chief complaint (64.4% of patients), their providers' education (49%), family (33%), personal life (21%), or an injury/ailment unlike their own (18%). Patients responded that providers self-disclose to make patients comfortable/at ease and to build rapport.

Conclusion: Provider self-disclosure in the ED is common and is associated with higher ratings of provider communication, rapport, and patient satisfaction. [West J Emerg Med. 2017;18(1):43-49.]

INTRODUCTION
Strong communication skills are crucial to effective interactions with patients in the emergency department (ED).1 As patient-centered care and shared decision-making become central to medicine, effective physician communication continues to gain importance. Successful communication increases patient and physician satisfaction.2 Several studies

have demonstrated that high satisfaction levels correlate with medical compliance, return to the same ED for future care, and increased referral of others to that ED.3-6 Given the time constraints of ED interactions, some providers use self-disclosure (SD), or the sharing of personal information and/or details of their experiences, to gain trust and build rapport with their patients. One article suggested that patients do not respond


favorably to doctors who show vulnerability, which is a risk during physician SD.7 Other studies maintain that establishing robust physician-patient relationships is health-promoting and that SD may play a role.8 A recent perspective in the New England Journal of Medicine discussed the tension between developing rapport and the observation that "sharing personal experiences exposes both our biases and our vulnerabilities, which may not be socially, professionally, or emotionally safe."9 Self-disclosure is somewhat controversial, as some patients may appreciate personal anecdotes while others find them irrelevant or intrusive to their care. Previous studies have found mixed results regarding the influence of SD on patient satisfaction in primary care and surgical settings.10-12 This suggests that clinical setting may have an impact on patient perception of SD. The effects of physician SD on patient-doctor relationships in the ED have not yet been studied.

The objective of this study was to determine the incidence of provider SD and explore the impact of ED provider SD on patients' assessment of physician communication skills. Specifically, we investigate whether provider SD increases or decreases patients' assessment of providers' communication and rapport.

those used by Beach.12 Demographic data were collected. Patients also indicated whether or not their providers self-disclosed and, if so, the content of SD. Specifically, to indicate SD, patients were asked "Did your doctor talk about herself/himself today?" The instrument used a Likert-type five-point scale to rate communication skills, rapport building, and satisfaction with communication (Table 1, Appendix A). The remaining questions asked patients whether or not they would like to have their ED or primary care provider (PCP) talk about her/his educational background, family, social life, or medical ailments or injuries. Finally, patients were asked how likely they were to follow their providers' medical recommendations. The survey was piloted with 20 patients to collect validity evidence (response process and internal consistency) and discussed with patients to ascertain points of confusion. Two questions were subsequently revised. The outcomes were the frequency of reported physician SD and patients' ratings of provider communication, rapport, and satisfaction with communication skills. We compared patients' ratings of these communication measurements between encounters where the provider self-disclosed and encounters where the provider did not.

METHODS Study Design, Population, and Setting We conducted this observational, cross-sectional, mixedmethods survey in the adult and pediatric EDs of an academic Level 1 tertiary hospital. The institutional review board determined this study to be exempt. Surveys were administered between April and July of 2014 by trained student research assistants. The study population consisted of a convenience sample of adult patients or parents of pediatric patients. We excluded patients if they could not communicate effectively in English, were critically ill, or cognitively impaired.

Data Analysis We performed descriptive statistics. The Likert-type ordinal data were analyzed using nonparametric KruskalWallis tests to investigate the relationship between the providers’ SD and patient ratings of provider communication, rapport, and satisfaction with communication (SPSS 19). We estimated a multinomial logistic regression with outcomes of below average to average (1-3), good (4), and very good (5) communications scores. Independent variables were provider role and whether the provider talked about her/himself (STATA 12). Answers to open-ended questions were coded for frequencies of response using qualitative analysis to develop categories.13 We used qualitative thematic analysis approach, in which a single author read iteratively through the comments. Codes were generated inductively according to a reading and rereading of the primary data. Once the primary codes were determined, all of the comments were coded accordingly.

Study Protocol Patients were approached for the study after they had been evaluated by a medical provider (an attending, a resident, or a physician assistant). Patients were enrolled after verbal consent and informed that their individual results would not be shared with providers or in any way affect their care. Patients were shown a picture of their care provider and completed the written survey for this provider. If patients were unable to complete the survey, either the research assistant or family member assisted in completion. Patients were not aware of the purpose of the study before their encounters with providers. Providers were not made aware of the purpose of the study until the data collection was complete. Measurements The survey is provided in the Appendix A. For validity purposes, researchers performed a literature review, and the survey was designed to cover topics frequently mentioned in previous studies.9-12 Further, we modeled the questions after Western Journal of Emergency Medicine

RESULTS
During the study period, 520 patients completed the survey. The mean age was 44 years; 55% were female, and 59% had an education level greater than a high school diploma/GED. Of the 520 patients surveyed, 18.9% indicated that their provider talked about her/himself during their ED visit, 69.8% said that there was no SD, and 11.3% were unsure whether or not their providers self-disclosed. When we examined SD by provider, nearly half of the 84 physicians (52.4%) self-disclosed information during at least one encounter. Further, patients felt it was important to build a good relationship with their ED care providers, with 96% responding "very important" or "somewhat important."


Table 1. Patient ratings of provider interaction in the emergency department (SD = self-disclosure).

Provider communication skills, No. (%): Presence of SD (n = 82*) | Unsure of SD (n = 43) | No SD noted (n = 314*)
  Excellent: 52 (63.4%) | 27 (63.8%) | 148 (47.1%)
  Very good: 27 (32.9%) | 15 (34.9%) | 129 (41.1%)
  Adequate: 0 (0.0%) | 1 (2.3%) | 32 (10.2%)
  Poor: 2 (2.4%) | 0 (0.0%) | 3 (1.0%)
  Very poor: 1 (1.2%) | 0 (0.0%) | 2 (0.6%)

Provider rapport, No. (%): Presence of SD (n = 82*) | Unsure of SD (n = 43) | No SD noted (n = 309*)
  Excellent: 44 (53.7%) | 19 (44.2%) | 115 (37.2%)
  Very good: 30 (36.6%) | 22 (51.2%) | 139 (45.0%)
  Adequate: 6 (7.3%) | 2 (4.7%) | 50 (16.2%)
  Poor: 1 (1.2%) | 0 (0.0%) | 3 (1.0%)
  Very poor: 1 (1.2%) | 0 (0.0%) | 2 (0.6%)

Importance of building a good relationship with provider, No. (%): Presence of SD (n = 95) | Unsure of SD (n = 57) | No SD noted (n = 349)
  Very important: 68 (71.6%) | 36 (63.2%) | 244 (69.9%)
  Somewhat important: 25 (26.3%) | 19 (33.3%) | 87 (24.9%)
  Not at all important: 2 (2.1%) | 2 (3.5%) | 18 (5.2%)

Satisfaction with provider communication skills, No. (%): Presence of SD (n = 91*) | Unsure of SD (n = 54) | No SD noted (n = 337*)
  Very satisfied: 66 (72.5%) | 39 (72.2%) | 199 (59.1%)
  Satisfied: 21 (23.1%) | 14 (25.9%) | 108 (32.0%)
  Neutral: 0 (0.0%) | 0 (0.0%) | 25 (7.4%)
  Dissatisfied: 2 (2.2%) | 0 (0.0%) | 2 (0.6%)
  Very dissatisfied: 2 (2.2%) | 1 (1.9%) | 3 (0.9%)

*p<0.05 comparing self-disclosure and no self-disclosure.

Table 1 shows patient ratings of provider communication skills, rapport, and satisfaction with communication, which are the outcome variables. Encounters with SD were rated more highly than encounters where the provider did not self-disclose. Provider SD was associated with more positive patient ratings of provider communication skills (p<0.05), more positive ratings of provider rapport (p<0.05), and higher satisfaction with provider communication (p<0.05) (Table 1). Patients who noted provider SD scored their providers' communication skills as excellent 63.4% of the time, compared to 47.1% of the time for patients without SD. Patients who noted provider SD were "very satisfied" with the providers' communication skills 72.5% of the time, compared to 59.1% without SD. Neither pain nor reason for presenting to the ED (a new versus recurrent problem) was a statistically significant variable in single-variable regression analysis with any of our outcomes (rapport, communication score, or satisfaction with provider communication skills).

Patients were asked what the physicians disclosed. SDs followed several themes, including casual conversation (28.6%), rapport building (23.4%), reassurance (20.8%), humor (14.3%), counseling (6.5%), and extended narratives (6.5%). An example of a casual SD was: "I just bonked my head a few minutes ago." One physician built rapport by sharing that she had family in the same state that the patient was from. An SD used to reassure a patient was: "I am 34 years old; I've been an emergency physician for seven years." Several physicians used humor, such as: "I like your nail polish. You don't want to see my toes after I do them, they look horrible!" Some physicians self-disclosed while counseling patients: "I used to have these premature heart beats a lot. I cut back on my caffeine intake..." The extended narrative SDs typically involved stories about the physician's children. More examples can be found in Appendix B.

Providers who talked about themselves were more likely to score very good (5) on patient perception of communication skills even when accounting for provider role (p<0.05).


Table 2. Communication score related to self-disclosure by emergency department provider (reference categories: very good score, attending, did self-disclose). Standard errors in parentheses.

Outcome: Very poor to adequate
  Physician assistant: logit coefficient -13.66 (547.7); relative risk ratio 0.00 (0.00); 95% CI -1,087, 1,060
  Resident: logit coefficient -0.75 (0.57); relative risk ratio 0.47 (0.27); 95% CI -1.86, 0.37
  Did not self-disclose: logit coefficient 1.43* (0.63); relative risk ratio 4.16* (2.61); 95% CI 0.19, 2.66
  Constant: logit coefficient 2.50; relative risk ratio 0.08

Outcome: Good
  Physician assistant: logit coefficient -0.26 (0.48); relative risk ratio 0.77 (0.37); 95% CI -1.20, 0.68
  Resident: logit coefficient -0.56 (0.33); relative risk ratio 0.57 (0.19); 95% CI -1.20, 0.09
  Did not self-disclose: logit coefficient 0.60* (0.29); relative risk ratio 1.81* (0.51); 95% CI 0.04, 1.15
  Constant: logit coefficient 0.54; relative risk ratio 0.58

*p<0.05

Providers who did not self-disclose had an increased relative risk of 4.2 (95% CI 1.2, 14.2) of scoring very poor to adequate (Table 2). This means that the relative risk of a "very poor to adequate" score relative to a "very good" score is expected to change by a factor of 4.2 when no SD was provided, holding the other variables in the model constant. The reported results are relative to each other and not absolute odds. There was no statistically significant difference between provider roles when included with SD as a factor (Table 2).

When asked how likely respondents were to follow their

provider’s medical recommendations, 89.5% of patients indicated very likely, 9.3% somewhat likely, and 1.2% were not likely to follow recommendations. There were no significant differences in intentions to follow medical recommendations between the groups that did and did not experience SD. Of the patients who experienced SD, 61% said they liked it, 29% did not care, and 7% disliked the SD. For patients who did not experience SD, 27% said that they thought they would like it, 53% said they would not care, and 13% said they would have disliked it if their provider self-disclosed.

Figure. Patient preferences regarding types of self-disclosure. Multivariate tests of means completed by encoding categorical responses (1, 0, -1). P<0.05, meaning there is a statistically significant difference between at least one of the means compared to the others. Mean scores above 0 had more positive responses; mean scores below 0 had more negative responses.
Mean (standard deviation) by topic: education/training background 0.4 (0.67); family life 0.2 (0.68); personal/social life -0.1 (0.73); medical ailment/injury unrelated to patient's -0.2 (0.75); medical ailment/injury related to patient's 0.5 (0.68).
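As a small illustration of the 1/0/-1 coding named in the figure caption, the sketch below shows how per-topic means and standard deviations could be computed; the response data and topic labels are hypothetical, and this is not the authors' SPSS procedure.

```python
# Encode categorical preference responses as in the figure caption
# (1 = positive, 0 = neutral, -1 = negative) and summarize each topic.
# The response data below are hypothetical.
import numpy as np

coding = {"would like it": 1, "would not care": 0, "would dislike it": -1}

responses_by_topic = {
    "education/training background": ["would like it", "would not care", "would like it", "would dislike it"],
    "personal/social life":          ["would not care", "would dislike it", "would like it", "would not care"],
}

for topic, answers in responses_by_topic.items():
    scores = np.array([coding[a] for a in answers])
    print(f"{topic}: mean = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")
```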


Table 3. Preferred self-disclosure content for emergency department (ED) provider vs primary care provider (PCP).

What information would you want to know about your doctor? | ED (%) | PCP (%)
Education/training background | 43.9% | 58.6%*
Family life | 7.6% | 26.3%*
Personal/social life | 6.1% | 18.6%*
Medical ailment/injury unrelated to patient's | 13.1% | 21.6%*
Medical ailment/injury related to patient's | 34.5% | 39.6%*
Would not like to know anything about provider | 36.9% | 20.4%

*p<0.05

With regard to specific types of SD, patients indicated a preference for some types of information over others (Figure). Almost two-thirds of patients reported that they would like to hear about their providers' experiences with an ailment/injury similar to their chief complaint. Patients also were interested in hearing about their providers' education and family. Patients were less interested in hearing about a provider's personal life or about an injury/ailment unlike their own. A multivariate test of means demonstrated a significant difference between at least one of these question responses and the others (p<0.05). When asked about the benefits of SD, patients responded that providers self-disclose to make patients comfortable/at ease and to build rapport. To gauge whether or not patients would want to know similar information about their ED providers as their PCPs, patients indicated the type of information they would want to know about both types of providers (Table 3). Patients indicated that they preferred to know more about

their PCP than their ED provider (p<0.05). Individual patient comments indicated that providers self-disclose to make patients comfortable/at ease and to build rapport (Table 4).

DISCUSSION
In social contexts, SD is often used to build rapport, make connections, and try to relate to those around us. In the ED, provider SD is associated with significantly higher patient ratings of provider communication skills, rapport, and satisfaction with communication. Although relationships in the ED are often brief and usually without a previous ongoing relationship, the majority of patients in this sample think that it is very important to build a good relationship with their ED provider. This suggests that even when patients perceive their medical conditions to be acute, they often want to be taken care of by someone they feel they can trust and who will put effort into building rapport with them.

Table 4. Themes of open-ended responses in a study of the effect of provider self-disclosure on patient satisfaction.

How would you/did you feel if your doctor talked about herself/himself regarding other topics not covered during your visit today? Why?* (n = 300)
Would/did like it
  Generally positive (makes patient feel better, more personal, humanizing, good communication): 129 (43.0%)
  Rapport/relationship/trust building: 48 (16.0%)
  Makes patient comfortable/more at ease: 29 (9.7%)
  Patient is interested to know more about provider: 13 (4.3%)
Would/did not care
  Depends on nature of SD/don't know: 12 (4.0%)
Would/did dislike it
  Irrelevant/poor use of time: 31 (10.3%)
  Generally negative: 23 (7.7%)

Why do you think a doctor might talk about her/himself?* (n = 369)
  To make patient comfortable/at ease: 119 (32.2%)
  To build rapport/relationship: 80 (21.7%)
  To connect/relate/empathize with patient: 62 (16.8%)
  To educate/share experiences: 39 (10.6%)
  To build trust/prove credibility: 23 (6.2%)
  Doctor arrogance/insecurity/just to chat: 26 (7.1%)

47

Western Journal of Emergency Medicine


Zink et al.

Though extensive research exists regarding communication patterns between physicians and patients, little is known about the influence of provider SD in the ED. Previous work by Beach found that physician SD in a primary care setting is negatively correlated with patient satisfaction, while SD in a surgical setting is positively correlated with patient satisfaction.11 The ED can be thought of as an environment encompassing features of both the PCP office and the surgical clinic. Many patients present for acute care, while others use the ED as a resource to help manage chronic illness or minor medical problems. Our findings suggest that for patients, the experience of being in the ED may be more akin to that of surgical clinic settings in terms of the communication expected of medical providers. Within the multivariate analysis, provider role was included and not found to be a significant factor when SD was considered.

Prior research has demonstrated that strong communication skills are associated with effective ED interactions1 and that provider empathy has a positive relationship to medical outcomes.14 In our study, patients recognized that providers used SD to make the patient comfortable and build rapport. While a few patients thought that a provider might self-disclose because of arrogance or insecurity, the vast majority saw it as evidence of a provider trying to communicate more effectively with the patient by building a relationship.

The high communication and rapport ratings indicate that ED patients are generally happy with their experience. When patients were asked how they felt if their provider talked about him/herself, the majority liked the experience. Patients whose physician did not self-disclose were asked the same question. Those who had conversations with doctors who disclosed information about themselves generally liked it, but those who did not have those conversations did not seem to miss the experience.

Despite the acute or anxiety-provoking circumstances of many ED visits, patients are interested in hearing about providers' personal experiences. The information that patients want to learn about their ED providers tends to fall into two main categories: (1) provider education and training background, and (2) a provider's personal experience with a medical ailment/injury that is similar to that of the patient. Many patients felt that hearing a personal story of medical injury from their provider could help them make decisions about their care and demonstrates a more personal touch. Patients are not as interested in hearing about ED providers' families, their personal/social lives, or unrelated personal medical history.

Based on these findings, we would encourage ED providers to think of SD as a potential tool to build rapport, put patients at ease, and communicate effectively. However, not all personal topics may be received positively, and effective SD may include provider educational background or similar medical experiences to help build patient confidence

and comfort. Dr. Curran recommends that "by asking simple questions— What is my purpose in making this disclosure? How could it benefit my patient? Could it hurt our relationship? —and answering truthfully, we can weigh the risks and benefits within the context of the particular physician–patient relationship."9 Further studies might investigate the effective use of SD without over-sharing, as well as when providers decide to self-disclose.

LIMITATIONS
This study has several limitations. First, although patients were told that their providers would not receive the results of their surveys, patient responses may have been affected by the fact that they were still being cared for by the providers at the time of survey completion and thus may have felt uncomfortable responding. In addition, while we attempted to administer the survey towards the end of the encounter, it is possible that the provider interacted with the patient after the survey was administered; this interaction might have included further SD and affected the patients' assessment of communication. We did not collect the number of patients who refused to participate. The refusal rate was low, but these refusals may have introduced bias. Further, it is unclear whether survey responses were an accurate reflection of the providers' interaction with patients or a composite evaluation of the entire care team, or whether they were related to other confounding variables. Attempts to mitigate this included using pictures of providers to specifically prompt patient recall and using questions focused on single providers. Additionally, there may be some recall bias, with patients not remembering exactly what was said; this is demonstrated by some patients marking that they were not sure if there was SD. Finally, this study was performed at a single site and may not be representative of other EDs.

CONCLUSION
In summary, providers self-disclose in about 20% of encounters. Self-disclosure in the ED was associated with higher ratings of provider communication and rapport and higher patient satisfaction ratings. Patients are most interested in SDs that relate to their presenting ailment/injury.

Address for Correspondence: Sally Santen, MD, PhD, University of Michigan Medical School, Department of Emergency Medicine, 6114 Taubman Health Sciences Library SPC 5726, 1135 E. Catherine Street, Ann Arbor, MI 48109-2038. Email: ssanten@umich.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.


Copyright: © 2016 Zink et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Taylor C and Benger JR. Patient satisfaction in emergency medicine. Emerg Med J. 2004;21:528-32.
2. Makoul G and Curry RH. The value of assessing and addressing communication skills. JAMA. 2007;298:1057-9.
3. Björvell H and Stieg J. Patients' perceptions of the health care received in an emergency department. Ann Emerg Med. 1991;20:734-8.
4. Hansagi H, Carlsson B, Brismar B. The urgency of care need and patient satisfaction at a hospital emergency department. Health Care Manage Rev. 1992;17:71-5.
5. Hostutler JJ, Taft SH, Snyder C. Patient needs in the emergency department: Nurses' and patients' perceptions. J Nurs Adm. 1999;29:43-50.
6. Rydman RJ, Roberts RR, Albrecht GL, et al. Patient Satisfaction with an Emergency Department Asthma Observation Unit. Acad Emerg Med. 1999;6:178-83.
7. Nadelson C and Notman M. Boundaries in the Doctor-Patient Relationship. Theor Med Bioeth. 2002;23:191-201.
8. Beach M and Inui T. Relationship-centered care. J Gen Intern Med. 2006;21:3-8.
9. Curran KA. Too Much Information - The Ethics of Self-Disclosure. N Engl J Med. 2014;371:8-9.
10. McDaniel SH, Beckman HB, Morse DS, et al. Physician self-disclosure in primary care visits: Enough about you, what about me? Arch Intern Med. 2007;167:1321-6.
11. Beach M, Roter D, Larson S, et al. What do physicians tell patients about themselves? J Gen Intern Med. 2004;19:911-6.
12. Beach M, Roter D, Rubin H, et al. Is physician self-disclosure related to patient evaluation of office visits? J Gen Intern Med. 2004;19:905-10.
13. Corbin J and Strauss A. Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage; 2008.
14. Wasserman RC, Inui TS, Barriatua RD, et al. Pediatric Clinicians' Support for Parents Makes a Difference: An Outcome-Based Analysis of Clinician-Parent Interaction. Pediatrics. 1984;74:1047-53.



Original Research

Who to Interview? Low Adherence by U.S. Medical Schools to Medical Student Performance Evaluation Format Makes Resident Selection Difficult

Megan Boysen-Osborn, MD, MHPE*; Justin Yanuck, MD, MS*; James Mattson, MD*†; Shannon Toohey, MD, MAEd*; Alisa Wray, MD*; Warren Wiechmann, MD, MBA*; Shadi Lahham, MD, MS*; Mark I. Langdorf, MD, MHPE*

*University of California, Irvine, Department of Emergency Medicine, Irvine, California
†New York Presbyterian Hospital, New York, New York

Section Editor: Andrew W. Phillips, MD, MHPE
Submission history: Submitted August 23, 2016; Accepted October 27, 2016
Electronically published November 29, 2016
Full text available through open access at http://escholarship.org/uc/uciem_westjem
DOI: 10.5811/westjem.2016.10.32233

Introduction: The Medical Student Performance Evaluation (MSPE) appendices provide a program director with comparative performance for a student's academic and professional attributes, but they are frequently absent or incomplete.

Methods: We reviewed MSPEs from applicants to our emergency medicine residency program from 134 of 136 (99%) U.S. allopathic medical schools, over two application cycles (2012-13, 2014-15). We determined the degree of compliance with each of the five recommended MSPE appendices.

Results: Only three (2%) medical schools were compliant with all five appendices. The medical school information page (MSIP, appendix E) was present most commonly (85%), followed by comparative clerkship performance (appendix B, 82%), overall performance (appendix D, 59%), preclinical performance (appendix A, 57%), and professional attributes (appendix C, 18%). Few schools (7%) provided student-specific, comparative professionalism assessments.

Conclusion: Medical schools inconsistently provide graphic, comparative data for their students in the MSPE. Although program directors (PD) value evidence of an applicant's professionalism when selecting residents, medical schools rarely provide such useful, comparative professionalism data in their MSPEs. As PDs seek to evaluate applicants based on academic performance and professionalism, rather than standardized testing alone, medical schools must make MSPEs more consistent, objective, and comparative. [West J Emerg Med. 2017;18(1)50-55.]

INTRODUCTION
The Medical Student Performance Evaluation (MSPE), formerly the "Dean's Letter," is a critical part of a medical student's application to residency. The Association of American Medical Colleges (AAMC) guidelines, released in 1989 and updated in 2002 and 2016, emphasize that the document is an evaluation.1,2 Specifically, the MSPE should provide "an assessment of academic performance and professional attributes" that is "comparative, relative to [the student's] peers."1


According to the 2002 MSPE guidelines,1 the body of an MSPE highlights the student's unique characteristics and narrative performance in basic sciences and clerkships, but it is difficult to extract tangible information from these sections to rank or judge the candidates.3 The appendices are meant to provide a program director (PD) with a "graphic representation of the student's performance, relative to his/her peers," in areas of pre-clinical courses, clerkships, professional attributes, and overall performance (appendices A, B, C, and D, respectively).1 Appendix E is the medical school information page (MSIP) and contains essential information about the school's assessment methods and compliance with various standards. The MSPE appendices enable a PD to evaluate a candidate's academic performance during medical school, because grading policies are variable across United States (U.S.) medical schools.4 On September 29, 2016, the AAMC published updated guidelines for the MSPE, which now integrate the content of the appendices into the body of the MSPE.5

In general, MSPEs are written with inconsistent methods.3,6-9 While Shea and colleagues assessed the frequency of the appendices as part of a larger work,3 no studies have done a detailed evaluation of MSPE appendix variability. The purpose of this study is to determine each medical school's compliance with the five recommended MSPE appendices, more than 10 years after the 2002 guidelines.1

METHODS
We collected these data as part of another study that evaluated MSPE ranking practices, but the methodology for the current study differed slightly, as described below.9 We reviewed MSPE documents from applicants to the University of California, Irvine emergency medicine (EM) residency program in 2012-13 and 2014-15. We did not have the 2013-2014 application cycle documents electronically. We included MSPEs from U.S. allopathic medical schools, including Puerto Rico. We reviewed one MSPE per institution for each application cycle, according to which name appeared first alphabetically in the Electronic Residency Application Service (ERAS). We reviewed an MSPE from the University of California, Irvine internal medicine (IM) residency program application files for schools for which we did not have a 2014-15 MSPE. After this, we contacted the associate dean for student affairs at any school for which we did not have an MSPE.

A non-blinded, trained, single reviewer (JM or MBO) reviewed the MSPE in its entirety and completed a data abstraction form for the 2012-13 application cycle. A second, trained reviewer (JY or MBO) recorded data from the 2014-15 cycle on the same form. If data differed between years, we rechecked documents to ensure proper recording and used the practice pattern in the 2014-15 cycle for analysis. Just prior to data analysis, the primary study author (MBO) re-reviewed all data to ensure accuracy.

We reviewed the MSPE and recorded the following: 1) if the required information was in the appendix or elsewhere in the MSPE; 2) if the appendix was appropriately labeled; 3) if the student's performance was specifically noted on the appendix; and 4) if each of the 10 suggested MSIP elements was present. These MSIP elements are listed in supplementary Addendum 1 and described in A Guide to the Preparation of the Medical Student Performance Evaluation.1 The emphases, strengths, mission, and goals of the medical school were frequently indistinguishable from unusual characteristics of the educational program, so we counted these as one item. We also recorded whether the school used a pass/fail grading system, without the possibility of honors, or other equivalent two-tier grading system in the basic sciences and clerkships.

To meet criteria for professional attributes (Appendix C), the school needed a separate appendix discussing the school's professionalism assessment or directing the reader to another area of the MSPE. Schools that mentioned generalities about their professionalism assessment in their MSIP did not meet criteria for the professional attributes appendix. To qualify as an MSIP, the school needed an appendix that mentioned at least one of the 10 suggested MSIP elements (e.g., average length of enrollment). We did not include cover letters, unless they were labeled as a "Medical Student Information Page," but we did report the number of non-MSIP cover letters in our results.

To ensure that there was no variation between the IM MSPEs and the EM MSPEs, the primary study author (MBO) reviewed a portion (20% of the sample size) of IM MSPEs and calculated Cohen's unweighted kappa.10 As a final measure of quality, the senior author (ML) reviewed a portion (20% of the sample size) of the EM study sample and calculated Cohen's unweighted kappa.10 We calculated descriptive statistics for each question. The University of California, Irvine and the University of Illinois, Chicago, human subjects institutional review boards approved this study.

RESULTS
Subjects Enrolled
There were 136 U.S. allopathic medical schools with graduating classes in 2015; there were 132 in 2013.11 For each application cycle, our EM program receives approximately 650 applications and our IM program receives 2,000. We analyzed MSPEs from 134 of the 136 (99%) U.S. allopathic medical schools. We had MSPEs for both application cycles (2012-13 and 2014-15) for 114 (85%) of these medical schools; we had only the 2012-13 MSPEs for one school (1%) and only the 2014-15 MSPEs for 19 schools (14%).9 We reviewed 27 charts from the IM program to measure correlation; kappa was greater than 0.83 for all study questions and was equal to 1.00 for most (16/26 questions). Kappa for correlation between reviewers was greater than 0.86 for all questions and was equal to 1.00 for most (15/26 questions).

Pre-clinical Performance (Appendix A)
Seventy-six (57%) schools had an appendix with comparative data for preclinical performance (Table 1) and four had the information in the MSPE body or a transcript.
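For readers who want to reproduce the type of agreement statistic reported above, the following is a minimal sketch, not the authors' code, of how an unweighted Cohen's kappa can be computed for two reviewers' codings of the same MSPEs. The example labels and the use of scikit-learn are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): unweighted Cohen's kappa for two reviewers
# who abstracted the same MSPEs. The labels below are hypothetical answers to a
# single yes/no abstraction question (e.g., "Was Appendix A present?").
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
reviewer_2 = ["yes", "yes", "no", "yes", "yes", "yes", "no", "no"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)  # unweighted by default
print(f"Cohen's unweighted kappa: {kappa:.2f}")
```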

Table 1. Degree of compliance with each of the recommended medical student performance evaluation (MSPE) appendix items among U.S. medical schools (n = 134). Values are the number (percent) of schools for Appendix A (pre-clinical performance), Appendix B (clerkship performance), Appendix C (professional attributes), Appendix D (overall performance), and Appendix E (medical school information page, MSIP).

Had the information present in the appendices: A, 76 (57%); B, 112 (82%); C, 24 (18%); D, 79c (59%); E, 114 (85%).
Had the appendix present and appropriately labeled: A, 46 (34%); B, 50 (37%); C, 22 (16%); D, 37c (28%); E, 68 (51%).
Information presented in graphic form (e.g., bar graph): A, 68 (51%); B, 102 (76%); C, 6 (6%); D, 46 (34%); E, N/A.
Schools that indicated the student's performance on the appendix: A, 51 (38%); B, 67 (50%); C, 11a (8%); D, 32 (24%); E, N/A.
Information found elsewhere in the MSPE or in the transcript: A, 4 (3%); B, 8 (6%); C, 3b (2%); D, 11c,d (8%); E, 8e (6%).
Schools that indicated that the data could not be provided in the respective appendix: A, 12 (9%); B, 2 (1%); C, 3 (2%); D, 2 (2%); E, 0 (0%).
Schools fully compliant with this appendix (appropriately labeled + comparative + in graphic form + student-specific): A, 29 (22%); B, 34 (25%); C, 5 (4%); D, 19 (14%); E, 54f (40%).

a: Some of these only mentioned that the student met the professionalism standards for the school, without other specific data.
b: Found in the MSPE in a professionalism section or graph.
c: These particular values are similar and related to results for a separate related study that looked at different features of the MSPE (ranking methods).9 The number for the first and second row is larger in this study than in the previously published study,9 accounting for schools that had an appendix present that directed the reader to a part of the MSPE which contained the class rank, but did not fully explain the ranking system in appendix D.
d: Six were found in a cover letter and five were found in the body of the MSPE.
e: This number represents schools that had an opening cover letter that was not labeled as a medical student information page.
f: Fully compliant for appendix E means that the MSIP contained 10 of 10 MSIP elements and was appropriately labeled.

Forty-six (34%) were appropriately labeled as Appendix A. Many (n=51, 38% of total MSPEs) indicated the student’s performance on the graph (e.g., bolding, arrows). Of the schools that did not provide comparative preclinical data in an appendix or MSPE body, 32 of 54 (59%) used a pass/fail or other two-tiered grading system (i.e. a system that could not provide comparative data). For all parts of Appendix A, 29 (22%) schools were fully compliant, having an “Appendix A” with comparative preclinical data in graph or chart form, indicating the student’s performance on the graph.

Clinical Clerkship Performance (Appendix B)
One hundred and twelve (82%) schools had graphic comparative data for the clerkships in the appendix and eight (6%) had this information in the body of the MSPE (Table 1). Two schools without comparative clerkship data used a two-tiered grading system.

Professional Attributes (Appendix C)
Twenty-four schools (18%, Table 1) had a professional attributes appendix and three (2%) had a similar professionalism section in the body of the MSPE.

Table 2. Description of professionalism assessments used in U.S. medical schools' medical student performance evaluations (MSPE) (includes those found in appendix C or the MSPE body, n = 27). Values are n (% of 27).

1. Refers reader to the MSPE clerkship narratives or summary paragraph: 10 (37%).
2. Refers reader to the MSPE, which contains a professionalism score: 3a (11%).
3. Provides Likert score for professionalism behavior(s), without comparative class data: 2 (7%).
4. Provides Likert score for professionalism behavior(s), with a class mean: 7b (26%).
5. Describes the school's general assessment methods and states that the student met those expectations, or gives a brief qualitative description of the student's professional behaviors: 5c (19%).

a: One school has a professionalism distinction for the top students only.
b: The authors feel this is a best practice.
c: One of these did not have a sentence stating that the student met those expectations.


Table 2 categorizes how each school provided their professionalism assessments. Only 10 of these assessments (7% of total MSPEs) were both specific to the student and comparative to the class. The figure provides examples of specific, comparative professionalism assessments from representative U.S. medical schools. The following were some of the professionalism behaviors assessed or mentioned in the MSPE appendices: time-keeping, preparedness for activities, teamwork, appearance, respect, compassion, reliability, interprofessional relationships, altruism, honesty/integrity, response to feedback, patient interactions, responsibility, pursuit of excellence, medical ethics, confidentiality, punctuality, self-confidence, and verbal and written communication.

Overall Performance (Appendix D)
Seventy-nine schools (59%) had information on overall comparative performance in their appendices (Table 1). This is not to be confused with the number of medical schools that provided comparative performance or rank (n=101, 75%) for their students at any point in the MSPE (for example, stating that their student is in the "second quartile," but not depicting the comparative performance in an appendix), which we report in a separate study.9

Medical Student Information Page (Appendix E)
One hundred and fourteen schools (85%) had an MSIP (Table 1). The majority of medical schools had at least seven of the 10 MSIP elements (n=103, 77%) and more than half had all 10 (n=76, 58%). (See supplementary Addendum 1.) Among schools without an MSIP (n=20, 15%), eight had an opening cover letter, but only one of these had at least seven of the suggested MSIP elements.

Overall Compliance by Medical Schools with the Appendices
Twelve schools (9%) had five appendices present and 59 (44%) had four of five, not necessarily labeled correctly. Three schools (2%) were fully compliant with all appendices, having each one appropriately labeled, graphic, comparative, and student-specific; however, one of these schools was missing one of 10 MSIP elements.

Grading Systems
Overall, 42 (31%) medical schools use a two-tiered grading system (e.g., pass/fail) for the basic sciences and two (1%) use one for the clinical clerkships.

Figure. Representative professionalism assessment from two U.S. medical schools.



DISCUSSION
Despite the 2002 AAMC recommendations for better standardization of the MSPE, there is still considerable variation.3,6-9 The MSPE is the only comprehensive description of a student's academic performance, personal qualities, and professionalism. Threats to the validity of the document, through inconsistency and lack of objectivity, compromise its value in residency admissions. While only 2% of medical schools were fully compliant with all five appendices, most schools complied with at least one. Furthermore, most schools provided an MSIP and more than half of these had every necessary element. This suggests that student affairs officers are aware of the AAMC guidelines but have not modified their processes to comply.

It is unclear why medical schools do not comply with the MSPE guidelines. Some possibilities are that medical schools want a PD to read the MSPE in its entirety, rather than focusing on comparative data alone. Student affairs officers may fear that students will not successfully match if the student's comparative data fall below the class mean. Furthermore, schools may not want to provide both positive and negative information for students unless every medical school agrees to do the same. PDs, however, must have some basis on which to judge candidates. Grade distributions vary tremendously between schools, with the number of students receiving an honors or equivalent top grade in the clinical clerkships ranging from 2-87% in one study.4 Furthermore, "honors" is a second-best grade at some schools.4 When the appendices are not present, a PD may find it difficult to extract concrete, comparative information from the MSPE. It is our opinion that narrative comments in the body of the MSPE are near-uniformly positive, with little information to differentiate students. Without the appendices, a PD is unable to judge an applicant's academic performance with respect to other candidates.4 Another possible reason for a lack of comparative performance is a two-tiered grading system (e.g., pass/fail), which inherently hinders discrimination in performance. We found that 41% of schools without comparative preclinical performance and 92% of schools without comparative clinical performance use a three- or more-tiered grading system (e.g., honors/pass/fail). Therefore, these schools do differentiate among their students in their internal grading system, but do not provide a legend that allows PDs to interpret this system.

While it is straightforward to provide comparative information for grades, extracting objective data for qualities such as professionalism can be more difficult. In our study, only a minority of schools provided a professionalism appendix, and fewer were student-specific and comparative. Understandably, it may be difficult for schools to provide a comparative professionalism assessment for each student, since the majority of U.S. medical students should meet or exceed expectations in this area. However, it would be useful to highlight students who stray from the mean positively or negatively, since PDs value this information.12 A professionalism assessment tool was developed by the AAMC in 2005 (and is used by one school in our study), but this tool was never widely distributed and is no longer available online.13 The 2002 MSPE guidelines depict a histogram for the professional attributes appendix,1 but few schools provided this. Of schools that did provide specific, comparative professionalism assessments, most provided the student's Likert scale score for one or more professional behaviors, compared to a class mean and standard deviation; these scores were commonly derived from assessments during the clinical clerkships. It is our opinion that this is a best practice for professionalism assessments on the MSPE.

There are many implications to the observations in this study. First, inconsistency in the MSPE decreases its value to PDs. As the MSPE is devalued, so are overall academic performance and professionalism, since the MSPE is largely the source for this information. As a result, PDs may overemphasize more objective data, such as United States Medical Licensing Examination scores, which could have negative consequences on medical education.14 Second, difficulty in interpreting the MSPE adds time to the already arduous job of screening over 800 applications each year.12 This takes a PD's time away from other important aspects of the residency selection process, as well as from curriculum development and program administration. Furthermore, it undermines the time spent by all parties in the composition of the MSPE.

It is crucial that medical schools and the AAMC act to preserve the value of the MSPE by increasing its objectivity, consistency, and usability. Lack of comparative, student-specific assessments in the MSPE does not force PDs to consider the entire document. Rather, it hinders the PD's ability to compare applicants during the residency selection process. As a result, many of the qualities described in the MSPE are lost, which may lead to overemphasis on standardized test scores. The AAMC's 2016 MSPE guidelines emphasize the importance of graphic, comparative information regarding students' academic performance.5 Comparative clerkship performance should now be integrated into the body of the MSPE. Comparative overall performance and comparative performance in the core competencies should now be included in the summary. These revised guidelines must be introduced systematically, with medical schools being held accountable for compliance with them. We recommend that future guidelines provide clear instructions on how medical schools should assess overall professional attributes.


LIMITATIONS
This study relied on a convenience sample of MSPEs submitted to our EM and IM residency programs, but our sample reflected 99% of U.S. allopathic medical schools. We analyzed one document per school per application year, but we minimized this limitation by analyzing two application years and comparing a portion of MSPEs between two specialties. We did not determine the degree to which the MSPE affected candidate interview or ranking decisions.

CONCLUSION
The content of the MSPE appendices (now integrated into the body and summary of the MSPE) is designed to provide PDs with graphic, comparative, student-specific information regarding academic performance and professionalism. Medical schools have low overall compliance with the appendices, most notably the professional attributes appendix (Appendix C). Low compliance in providing graphic, comparative performance information among medical schools decreases a PD's ability to use the MSPE to compare candidates.

Address for Correspondence: Megan Boysen-Osborn, MD, MHPE, University of California, Irvine, Department of Emergency Medicine, 333 City Boulevard West, Suite 640, Orange, CA 92868. Email: mbo@uci.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. Dr. Boysen-Osborn is a residency program director. Her opinions are those of a residency program director.

Copyright: © 2016 Boysen-Osborn et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Association of American Medical Colleges. A guide to the preparation of the medical student performance evaluation. Available at: https://www.aamc.org/download/64496/data/mspeguide.pdf. Accessed August 20, 2015.
2. Swanson AG. A guide to the preparation of the medical school dean's letter. Association of American Medical Colleges; 1989.
3. Shea JA, O'Grady E, Morrison G, et al. Medical student performance evaluations in 2005: An improvement over the former dean's letter? Acad Med. 2008;83(3):284-91.
4. Alexander EK, Osman NY, Walling JL, et al. Variation and imprecision of clerkship grading in US medical schools. Acad Med. 2012;87(8):1070-6.
5. Recommendations for Revising the Medical Student Performance Evaluation. Available at: https://www.aamc.org/download/470400/data/mspe-recommendations.pdf. Accessed November 4, 2016.
6. Kiefer CS, Colletti J, Bellolio MF, et al. The "good" dean's letter. Acad Med. 2010;85(11):1705-8.
7. Naidich JB, Grimaldi GM, Lombardi P, et al. A program director's guide to the medical student performance evaluation (former dean's letter) with a database. J Am Coll Radiol. 2014;11:611-5.
8. Naidich JB, Lee JY, Hansen EC, et al. The meaning of excellence. Acad Radiol. 2007;14(9):1121-6.
9. Boysen-Osborn M, Mattson J, Yanuck J, et al. Ranking practice variability in the Medical Student Performance Evaluation: So bad, it's "good." Acad Med. 2016;91(11):1540-5.
10. VassarStats: Website for Statistical Computation. Available at: www.vassarstats.net. Accessed August 15, 2016.
11. Liaison Committee on Medical Education. Accredited MD Programs in the US. Available at: http://www.lcme.org/directory.htm. Accessed June 17, 2015.
12. National Resident Matching Program, Data Release and Research Committee. Results of the 2014 NRMP Program Director Survey. Washington, DC: National Resident Matching Program; 2014.
13. MSPE Advisory Committee Professionalism Assessment Form. Available at: https://www.aamc.org/download/105888/data/professionalismassessmentform.pdf. Accessed December 2015.
14. Prober CG, Kolars JC, First LR, et al. A Plea to Reassess the Role of United States Medical Licensing Examination Step 1 Scores in Residency Selection. Acad Med. 2016;91(1):12-15.



Original Research

Teaching the Emergency Department Patient Experience: Needs Assessment from the CORD-EM Task Force

Kory S. London, MD*; Jeffrey Druck, MD†; Matthew Silver, MD‡; Douglas Finefrock, DO§

*Thomas Jefferson University, Sidney Kimmel Medical College, Department of Emergency Medicine, Philadelphia, Pennsylvania
†University of Colorado, Denver, Colorado
‡Kaiser Permanente, San Diego Medical Center, San Diego, California
§Hackensack University Medical Center, Hackensack, New Jersey

Section Editor: Sally A. Santen, MD, PhD
Submission history: Submitted April 18, 2016; Revision received September 30, 2016; Accepted September 30, 2016
Electronically published November 8, 2016
Full text available through open access at http://escholarship.org/uc/uciem_westjem
DOI: 10.5811/westjem.2016.9.30667

Introduction: Since the creation of Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) patient satisfaction (PS) scores, patient experience (PE) has become a metric that can profoundly affect the fiscal balance of hospital systems, reputation of entire departments and welfare of individual physicians. While government and hospital mandates demonstrate the prominence of PE as a quality measure, no such mandate exists for its education. The objective of this study was to determine the education and evaluation landscape for PE in categorical emergency medicine (EM) residencies.

Methods: This was a prospective survey analysis of the Council of Emergency Medicine Residency Directors (CORD) membership. Program directors (PDs), assistant PDs and core faculty who are part of the CORD listserv were sent an email link to a brief, anonymous electronic survey. Respondents were asked their position in the residency, the name of their department, and questions regarding the presence and types of PS evaluative data and PE education they provide.

Results: We obtained 168 responses from 139 individual residencies, representing 72% of all categorical EM residencies. This survey found that only 27% of responding residencies provide PS data to their residents. Of those programs, 61% offer simulation scores, 39% provide third-party attending data on cases with resident participation, 37% provide third-party acquired data specifically about residents and 37% provide internally acquired quantitative data. Only 35% of residencies reported having any organized PE curricula. Of the programs that provide an organized PE curriculum, most offer multiple modalities; 96% provide didactic lectures, 49% small group sessions, 47% simulation sessions and 27% specifically use standardized patient encounters in their simulation sessions.

Conclusion: The majority of categorical EM residencies do not provide either PS data or any organized PE curriculum. Those that do use a heterogeneous set of data collection modalities and educational techniques. American Osteopathic Association and Accreditation Council for Graduate Medical Education residencies show no significant differences in their resident PS data provision or formal curricula. Further work is needed to improve education given the high stakes of PS scores in the emergency physician's career. [West J Emerg Med. 2017;18(1)56-59.]



INTRODUCTION
In 1976, Ware, Snyder and Wright published the first rigorous and validated patient satisfaction (PS) healthcare questionnaire, the PSQ.1,2 Within a decade, two Notre Dame professors, Irwin Press and Rod Ganey, founded Press Ganey Associates, whose mission of "improving the patient experience through compassionate, connected care" became the basis of a healthcare revolution.3 Hospitals saw the competitive advantage that could be gained by measuring their patients' satisfaction and comparing these scores to other similar organizations. Service quality, as measured through PS scores, became a key component of measuring the quality and value of healthcare.4

As the single largest payer of healthcare dollars in the United States, the federal government followed suit. In 2002, through a partnership with the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare and Medicaid Services (CMS) first developed and then implemented the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. As part of the Deficit Reduction Act of 2005, and further through the Patient Protection and Affordable Care Act of 2010, hospitals received financial incentives for participating in the HCAHPS survey. The HCAHPS data are not only used to provide financial incentives to hospitals, but are also publicly reported on CMS' consumer-oriented website,5 further emphasizing the importance of these scores to hospital systems and their administrators.

Several studies have linked PS to improved outcome measures,6-10 but physicians are still skeptical of the link between satisfaction and quality. A well-publicized study published by Fenton et al in 2012 further sparked the controversy, revealing that higher PS scores were associated with higher overall healthcare and prescription drug expenditures, and increased mortality.10 Despite the conflicting evidence, PS scores have become a key component of the metric-driven environment in which physicians practice today.

The Accreditation Council for Graduate Medical Education (ACGME), through the Next Accreditation System and Milestones, developed a framework for the assessment of residents in each of several core competency areas.11 Included in the milestones are several competencies relating to how well residents connect with their patients, including professionalism, interpersonal and communication skills, and systems-based practice. Residency programs will need to train their residents in effective communication strategies, educate them on the importance of PS scores and prepare them for a practice where metrics drive hospital reimbursement and physician performance assessment. The objective of this study was to determine the education and evaluation landscape for patient experience in categorical emergency medicine (EM) residencies in the U.S.

METHODS
The needs assessment survey was created using plain language and consensus questions developed by the authors and task force. In the interest of acquiring a large dataset, we kept the number of questions to a minimum to respect the varied duties of the respondents. Survey questions were tested for content and response process issues by the authors' own departmental leadership prior to survey release. Further validity evidence was not collected. We collected data about participants' departmental role and residency name, but that information was used solely to assist in culling duplicate program responses and to analyze ACGME vs. American Osteopathic Association (AOA) differences, respectively. All data relating to identity were strictly separated from program responses. The institutional review board reviewed this study and deemed it exempt.

We obtained access to the faculty through the Council of Residency Directors for Emergency Medicine (CORD-EM) faculty listserv. The CORD-EM membership includes the departments of categorical U.S. residencies, prospective U.S. residencies and select international EM residency programs. Specifically, the membership is restricted to program directors (PDs), assistant PDs and core faculty of the departments' education divisions. While patient experience is an international movement, we decided to limit participation to categorical U.S. residencies that already exist. The only inclusion criteria were that respondents had to work at currently running U.S. categorical residencies and participate in the CORD-EM faculty listserv. Exclusion criteria included international faculty and faculty of residencies not yet in operation. Given the likelihood of multiple responses from some institutions, it was decided that in the case of heterogeneity, the most senior respondent's data would be used (PD > APD > core faculty). The listserv contains 194 residencies, which split into 30 AOA or joint AOA/ACGME accredited programs and 164 ACGME accredited programs. The AOA and joint accredited programs were combined for analysis, given that AOA accreditation was the variable being studied.

The survey itself was administered using the online survey service SurveyMonkey®. An initial attempt at data collection was made by a form email sent through the listserv. When responses began to decrease, we sent a second form email through the listserv to encourage those who had overlooked the first request. Finally, individual program directors from non-responsive departments were sent targeted emails asking for participation during the third and final round of data collection. The authors analyzed data using the built-in tools from SurveyMonkey and Microsoft Excel. We compared AOA and ACGME programs using chi-square testing, with the significance threshold set at p = 0.05.
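To illustrate the comparison described in the Methods, the following is a minimal sketch, not the study's actual analysis code, of how a chi-square test of AOA/joint versus ACGME response rates could be run in Python with SciPy. The counts are derived from the listserv totals and program responses reported in this article; the use of SciPy is an assumption for illustration only.

```python
# Minimal sketch (not the study's analysis code): chi-square comparison of survey
# response rates between AOA/joint-accredited and ACGME-accredited programs.
# Counts are derived from the listserv totals and responses reported in this article.
from scipy.stats import chi2_contingency

#            [responded, did not respond]
aoa_joint = [20, 10]   # 30 AOA or joint AOA/ACGME programs on the listserv
acgme = [119, 45]      # 164 ACGME programs on the listserv

chi2, p, dof, expected = chi2_contingency([aoa_joint, acgme])
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")  # compare p to the 0.05 threshold
```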

RESULTS
We received a total of 168 individual responses from 139 programs. This represents a program participation rate of 72%. Of the 139 programs that provided data, 15 were AOA accredited, 119 were ACGME accredited and five were jointly AOA/ACGME accredited. This represents 62.5% of the AOA residencies that participate in CORD-EM, 72% of ACGME residencies and 83% of jointly accredited programs. There was no significant difference in rates of response between AOA/joint and ACGME programs (p=0.51). Of those 168 responses, 107 were by PDs, 46 by APDs and 15 by academic core faculty. Given multiple responses by 29 programs, the final participant count was 107 PDs (77%), 24 APDs (17%) and eight academic core faculty (6%). No program had more than two responses. Categorical EM programs exist in 43 states and Puerto Rico; we obtained responses from 41 of those.

This survey found that only 27% of responding residencies provide any PS data to their residents. Of those programs, most offer multiple modalities: 37% provide internally acquired quantitative data, 21% provide internally acquired anecdotal data, 37% provide third-party metrics specifically about residents, 39% provide third-party attending metrics about resident cases, 61% provide simulation scores (quantitative data taken from simulation encounters), and 21% use other modalities.

Only 35% of residencies provided any organized patient experience (PE) curriculum. Of these programs, again, most offer multiple modalities: 96% provide didactic lectures, 49% small group sessions, 47% simulation sessions, and 27% specifically use standardized patient encounters in their simulation sessions. Finally, 35% provide online or asynchronous resources for their residents. There was no significant difference in the numbers of AOA and ACGME programs providing curricula (p=0.32). Of the programs that do provide PE education, 47% describe the differences between different PS surveys. Again, there was no significant difference between AOA and ACGME programs (p=0.27). Finally, 100% of programs that provide a PE curriculum describe methods to improve PS scores.

DISCUSSION
Our study demonstrates that residency programs do not have a uniform approach to resident instruction on PE training or satisfaction measurement, with 65% of all residency programs having no formal curriculum on PE at all. Other aspects of communication have also been assessed in resident education and seem to be addressed more consistently than the patient experience. Hern et al recently found that 57% of residency programs have a curriculum focused on transitions of care (handoffs), a much higher percentage than for PE.12 Another study found that 93% of residency programs had a curriculum focused on operations and administration.13 AOA and ACGME rates were similar and suboptimal.

There were insignificant trends toward AOA programs being better at providing scores and educating their residents. PE education will likely only fall farther down the list of AOA program priorities given the preparation required for their merger with the ACGME, due in 2020.

Why is PE training a neglected area of medical education? Although it is a relatively new topic in medical care, private practice emphasis and incentive-based compensation have skewed dramatically towards focusing on PS scores.14 It is possible that, as academic institutions have been slower to emphasize this, it has taken longer to introduce this critical element to residency education. Only 37% of programs provided resident-specific survey information about PE data; in private practice, almost all facilities provide provider-specific patient data in the form of PS scores. It is also possible that academic practitioners may discount the value of patient satisfaction, as there is controversy as to the usefulness of PS scores as a marker of excellent care. Alternatively, as PE is a relatively new field, there is less definitive evidence regarding the elements that contribute to a successful patient experience, possibly making educators less willing to teach on a subject they know little about and believe has been inadequately studied.

LIMITATIONS
Our study does have a number of limitations. First, our response rate was not universal. Most likely, the bias associated with this response rate would be towards responders being more likely to have a curriculum; as a result, we expect that our results overestimate the implementation of curricula and data collection for residents. In addition, we had 29 instances where two faculty members of the same residency program responded. Of those 29 programs, 13 had concordant responses (45%) and another four had the same responses except with respect to a single question (14%). This leaves 12 others with large and varied degrees of disagreement (41%). This variance has a minimal effect on the overall statistics, but it does deserve further evaluation. While the ultimate cause of this discordance is unclear, it likely represents further evidence of a paucity of focus on PE in EM graduate medical education.

CONCLUSION
The overall message of our study is the need for a more robust emphasis on patient experience education for EM residents. As PS is an element on which physicians are judged and penalized, EM residencies are doing their residents a disservice by not preparing them adequately for clinical practice. We hope future research on PS will demonstrate best practices in resident education and further national standardization of curricular elements that help to improve the EM patient experience and EM physician patient-satisfaction scores.


Address for Correspondence: Kory S. London, MD, Thomas Jefferson University, Sidney Kimmel Medical College, Department of Emergency Medicine, 1020 Sansom Street, Suite 1651T, Philadelphia, PA 19107. Email: kory.london@jefferson.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 London et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Ware JE, Snyder MK, Wright WR. Development and validation of scales to measure patient satisfaction with Medical Care Services. Vol I, Part A: review of literature, overview of methods, and results regarding construction of scales. (NTIS Publication No. PB 288-329). Springfield, VA: National Technical Information Service; 1976.
2. Ware JE, Snyder MK, Wright WR. Development and validation of scales to measure patient satisfaction with Medical Care Services. Vol I, Part B: results regarding scales constructed from the patient satisfaction questionnaire and measures of other health care perceptions. (NTIS Publication No. PB 288-300). Springfield, VA: National Technical Information Service; 1976.
3. Press Ganey. History and Mission. Available at: http://www.pressganey.com/about/history-mission.
4. Graff L, Stevens C, Spaite D, et al. Measuring and improving quality in emergency medicine. Acad Emerg Med. 2002;9(11):1091-107.
5. Medicare.gov. Hospital Compare. The Official U.S. Government Site for Medicare. Available at: http://www.medicare.gov/hospitalcompare.
6. Zolnierek KB and Dimatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009;47(8):826-34.
7. Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010;3(2):188-95.
8. Boulding W, Glickman SW, Manary MP, et al. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41-8.
9. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1).
10. Fenton JJ, Jerant AF, Bertakis KD, et al. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405-11.
11. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system--rationale and benefits. N Engl J Med. 2012;366(11):1051-6.
12. Hern HG Jr, Gallahue FE, Burns BD, et al. Representing the Council of Residency Directors, Transitions of Care Task Force. Handoff Practices in Emergency Medicine: Are We Making Progress? Acad Emerg Med. 2016;23(2):197-201.
13. Watase T, Yarris LM, Fu R, et al. Educating Emergency Medicine Residents in Emergency Department Administration and Operations: Needs and Current Practice. J Grad Med Educ. 2014;6(4):770-3.
14. Zgierska A, Rabago D, Miller MM. Impact of patient satisfaction ratings on physicians and clinical care. Patient Prefer Adher. 2014;8:437-46.



Original Research

An Analysis of the Top-cited Articles in Emergency Medicine Education Literature

Brendan W. Munzer, MD*; Jeffery Love, MD†; Barbara L. Shipman, MA‡; Brendan Byrne, MD*§; Stephen J. Cico, MD, MEd¶; Robert Furlong, MD*; Sorabh Khandelwal, MD||; Sally A. Santen, MD, PhD#

*University of Michigan, Department of Emergency Medicine, Ann Arbor, Michigan
†Georgetown University Hospital/Washington Hospital Center, Department of Emergency Medicine, Washington, D.C.
‡University of Michigan, Alfred Taubman Health Sciences Library, Ann Arbor, Michigan
§Naval Medical Center Portsmouth, Department of Emergency Medicine, Portsmouth, Virginia
¶Indiana University, Department of Emergency Medicine, Indianapolis, Indiana
||Ohio State University, Department of Emergency Medicine, Columbus, Ohio
#University of Michigan, Department of Learning Health Sciences, Ann Arbor, Michigan

Section Editor: David A. Wald, DO
Submission history: Submitted July 8, 2016; Accepted October 27, 2016
Electronically published November 23, 2016
Full text available through open access at http://escholarship.org/uc/uciem_westjem
DOI: 10.5811/westjem.2016.10.31492

Introduction: Dissemination of educational research is critical to improving medical education, promotion of faculty and, ultimately, patient care. The objective of this study was to identify the top 25 cited education articles in the emergency medicine (EM) literature and the top 25 cited EM education articles in all journals, as well as report on the characteristics of the articles.

Methods: Two searches were conducted in the Web of Science in June 2016 using a list of education-related search terms. We searched 19 EM journals for education articles as well as all other literature for EM education-related articles. Articles identified were reviewed for citation count, article type, journal, authors, and publication year.

Results: With regard to EM journals, the greatest number of articles were classified as articles/reviews, followed by research articles, on topics such as deliberate practice (cited 266 times) and cognitive errors (cited 201 times). In contrast, in the non-EM journals research articles were predominant. Both searches found several simulation and ultrasound articles to be included. The most common EM journal was Academic Emergency Medicine (n=18), and Academic Medicine was the most common non-EM journal (n=5). A reasonable number of articles included external funding sources (six EM articles and 13 non-EM articles).

Conclusion: This study identified the most frequently cited medical education articles in the field of EM education, published in EM journals as well as all other journals indexed in Web of Science. The results identify articles impactful to medical education, providing a resource to educators while identifying trends that may be used to guide EM educational research and publishing efforts. [West J Emerg Med. 2017;18(1)60-68.]

INTRODUCTION
Dissemination of educational research evidence is critical to improving medical education and, ultimately, patient care. One reasonable measure of the impact of a publication is the number of citations a particular work receives. This number is indicative of the dissemination of its results and serves as a measure of the extent to which the work provides a foundation for supporting further research.



Publications and the number of citations also serve as important criteria on which faculty are promoted at some institutions, though other institutions do not use the number of citations as a promotional criterion. The number of citations does not solely reflect the work of an individual or team, but also serves as a metric for evaluating the research performed at a departmental, institutional, or even national level.1 The number of citations of publications may provide one measure by which to determine the impact of work. Additionally, other factors, such as funding, are often used to assess productivity in the research community.

Examination of citations is rarely performed except for the purpose of putting together a promotion package. Azer performed bibliometric analyses evaluating the top-cited articles in medical education;2 however, there is currently no literature describing the top-cited education articles within the field of emergency medicine (EM). This type of intentional examination can have a number of benefits. First, when examining which articles have received the most citations, it becomes possible for researchers to more easily familiarize themselves with the landmark articles within a field. Second, it provides researchers with information on which topic areas, journals, and research methods tend to be more highly cited. This indicates not only the quality of the research but also the translational impact of the work.3 Third, as funding for medical education research is difficult to obtain, evaluating the funding sources of highly cited articles allows researchers to identify sources of potential funding.

The objective of this study was to identify the top 25 cited education articles in the EM literature and report on their characteristics, as well as the top 25 cited EM education articles in all other indexed journals. We sought to provide clinicians, educators, and researchers with resources for identifying the highest-impact literature in emergency medical education and a database of options to explore when looking to publish within the field of medical education.

METHODS
Within two distinct searches in the Web of Science index, we identified the top 25 articles related to education and EM. The EM journal search was limited to 19 EM journals (Appendix 1). EM-related journals eligible for inclusion were English-language journals indexed within PubMed. Exclusion criteria for these journals included non-English language journals, prehospital journals, and journals with a non-physician focus. We ran the second search within all other indexed journals in Web of Science, excluding the EM journals from the first search. Inclusion criteria for each individual article were the following: it had to be primarily EM-related or include emergency physician subjects and be relevant to EM education; its content was deemed educational; it had to be published in English; and it had to have subjects that included physicians or future physicians. Among the exclusion criteria for individual articles were these: the subjects did not include EM residents or physicians or medical students in an emergency department setting; or the research subjects were non-hospital based (such as EMS or community-based teaching).

From May 18-June 2, 2016, the authors used keywords and search tools within the Web of Science database to retrieve the top-cited articles in both categories. The aim of this search was to identify not only the most highly cited education articles published in EM journals but also the most highly cited EM-based articles related to education that had been published in other literature. The keywords were, for the most part, those used by Azer.2 (See Appendix 1 for the search strategy.) Articles were placed in descending order of number of citations in an Excel spreadsheet. Two of the authors then independently assessed both search categories and applied the inclusion and exclusion criteria. Inter-rater reliability among assessors for selection of the top-cited EM articles, calculated using Cohen's kappa, was acceptable (0.69). The top 25 cited articles involving EM and education were identified and placed into a final list ("Top Cited Education Articles in EM Literature" and "Top Cited EM-related Education Articles").

Articles were assigned for review and divided evenly among the author group. Two authors independently reviewed the full text of each article and recorded the following information: (1) article name; (2) first author; (3) source journal; (4) year of publication; (5) number of citations and funding source (if applicable); (6) journal impact factor; (7) journal discipline; (8) article type; (9) educational content; (10) subjects; and (11) research method (if applicable). Each author-pair discussed the outcomes of this data collection to reach a consensus. If any discrepancies arose, a third author evaluated the article in question and provided a tiebreaker. Finally, findings were discussed in conference with all authors.

RESULTS
Table 1 summarizes the 25 top-cited medical education articles in EM journals.4-28 Table 2 summarizes the 25 top-cited medical education articles involving EM in all other journals.29-52 Articles are listed in descending order with a rank from 1-25 based upon the number of citations, as found in Web of Science at the time of the search.

With regard to EM journals, the greatest number of articles were classified as articles/reviews. The most frequently cited article was "Deliberate Practice and Acquisition of Expert Performance: A General Overview" by Ericsson, published in Academic Emergency Medicine in 2008 and based on a consensus preconference.12 It had been cited 266 times. Six articles were research papers and seven were curriculum papers, four of which included a research methodology. One article was a needs assessment. The topics included simulation, learning theory, ultrasound, assessment, learner retention, and interprofessional education. The top three most-cited articles all exhibited a focus on learning theory. Table 3 summarizes these results.


Top Cited Articles in EM Education Literature Table 1. Most cited education articles from emergency medicine journals. Rank

First author; year

Title

Journal: impact factor

Category

Funding (if present)

Number of citations

1

Ericsson, KA; Deliberate Practice and Acquisition of Academic Emergency Article/ 2008 Expert Performance: A General Overview Medicine; 2.0 Review

2

Croskerry, P; 2002

Achieving Quality in Clinical Decision Making: Cognitive Strategies & Detection of Bias

Academic Emergency Article/ Medicine; 2.0 Review

Combined (AHQR Grant)

201

3

Croskerry, P; 2003

Cognitive Forcing Strategies in Clinical Decisionmaking

Annals of Emergency Article/ Medicine; 4.7 Review

External (AHQR Grant)

132

4

Mateer, J; 1994

Model Curriculum for Physician Training in Emergency Ultrasonography

Annals of Emergency Curriculum Medicine; 4.7 No Data

127

5

Small, SD; 1999

Demonstration of High-fidelity Simulation Team Training for Emergency Medicine

Academic Emergency Curriculum - External Medicine; 2.0 No Data (Medsim-Eagle Simulation, Inc./ Army Research Laboratory)

123

6

Rudolph, JW; Debriefing as Formative Assessment: 2008 Closing Performance Gaps in Medical Education

Academic Emergency Article/ Medicine; 2.0 Review

100

7

Vozenilek, J; 2004

See one, Do one, Teach one: Advanced Technology in Medical Education

Academic Emergency Article/ Medicine; 2.0 Review

89

8

Reznek, M; 2003

Emergency Medicine Crisis Resource Management: Pilot Study of a Simulation-based Crisis Management Course for Emergency Medicine

Academic Emergency Curriculum Medicine; 2.0 Yes Data

86

9

Swing, SR; 2002

Assessing the ACGME General Competencies: General Considerations and Assessment Methods

Academic Emergency Article/ Medicine; 2.0 Review

10

Campbell, JC; 2001

An Evaluation of a System-change Training Model to Improve Emergency Department Response to Battered Women

Academic Emergency Curriculum - External (Centers 81 Medicine; 2.0 Yes Data for Disease Control)

11

Perkins, GD; 2007

Simulation in Resuscitation Training

Resuscitation; 4.2

12

Mower, WR; 1999

Evaluating Bias and Variability in Diagnostic Test Reports

Annals of Emergency Article/ Medicine; 4.7 Review

67

13

McLaughlin, SA; 2002

Human Simulation in Emergency Medicine Training: A Model Curriculum

Academic Emergency Curriculum Medicine; 2.0 No Data

64

14

Mandavia, DP; 2000

Ultrasound Training for Emergency Physicians - A Prospective Study

Academic Emergency CurriculumMedicine; 2.0 Yes data

63

15

Kuhn, GJ; 2002

Diagnostic Errors

Academic Emergency Article/ Medicine; 2.0 Review

62

16

Cooper, S; 2010

Rating Medical Emergency Teamwork Performance: Development of the Team Emergency Assessment Measure (TEAM)

Resuscitation; 4.2

57

17

Bond, WF; 2007

The Use of Simulation in Emergency Medicine: A Research Agenda

Academic Emergency Needs Medicine; 2.0 Assessment

56

18

Blaivas, M; 2003

Short-axis Versus Long-axis Approaches for Teaching Ultrasound-guided Vascular Access on a New Inanimate Model

Academic Emergency Research Medicine; 2.0

56



266

External (Robert Wood Johnson Foundation)

Article/ Review

External (DH [NIHR] Clinician Scientist Award)

Research

86

73





Table 1 Cont’d. Most cited education articles from emergency medicine journals. 19

Jabbour, M; 1996

Life Support Courses: Are They Effective?

Annals of Emergency Article/ Medicine; 4.7 Review

54

20

Jones, AE; 2003

Focused Training of Emergency Medicine Residents in Goal-Directed Echocardiography: A Prospective Study

Academic Emergency Research Medicine; 2.0

52

21

Counselman, The Status of Bedside Ultrasonography FL; 2003 Training in Emergency Medicine Residency Programs

Academic Emergency CurriculumMedicine; 2.0 Yes data

50

22

Kovacs, G; 1999

Clinical Decision Making: An Emergency Medicine Perspective

Academic Emergency Article/ Medicine; 2.0 Review

50

23

Burdick, WP; 1995

Observation of Emergency-Medicine Residents at the Bedside - How often Does It Happen?

Academic Emergency Research Medicine; 2.0

50

24

Gisondi, MA; 2004

Assessment of Resident Professionalism using High-fidelity Simulation of Ethical Dilemmas

Academic Emergency Research Medicine; 2.0

48

25

Santora, TA; 1996

Video Assessment of Trauma Response: Adherence to ATLS Protocols

American Journal of Research Emergency Medicine; 1.3

48

ATLS, advanced trauma life support

Articles were published most commonly in Academic Emergency Medicine (n = 18; 72%), Annals of Emergency Medicine (n = 4; 16%), and Resuscitation (n = 2; 8%). The majority of articles (n = 19; 76%) listed no funding, while five articles (20%) received external funding alone. One article received both internal and external funding. With regard to the articles in other journals, the majority were research papers. There were seven curriculum articles and two articles/reviews. The topics included simulation; professionalism; management practice; ultrasound; assessment; cross-cultural care; error in diagnosis; learner retention; specialty choice; and supervision. Articles in the second literature search were from a wide variety of journals, including Academic Medicine (n = 5; 20%); Medical Education (n = 3; 12%); British Medical Journal (n = 2; 8%); Journal of the American Medical Association; and Pediatrics (n = 2; 8%). The most frequently cited article was “Prospective Analysis of a Rapid Trauma Ultrasound Examination Performed by Emergency Physicians” by Ma, published in Journal of Trauma in 1995 and cited 193 times. Eight articles had no funding (32%), 13 (52%) were funded by external awards, one article (4%) was funded solely through internal grants, while three (12%) received both internal and external funding. Table 4 summarizes these results.

We calculated the Pearson correlation coefficient (r) to determine if the age of the article was correlated with the number of citations received. For the articles in the non-EM journals, there was a negative correlation between the year of publication and the number of citations (r = -0.42), meaning that the more recently published articles were cited less often.
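As a rough illustration of this correlation check, the sketch below computes Pearson's r between publication year and citation count with scipy.stats.pearsonr; the small arrays are hypothetical stand-ins for the extracted Web of Science data, not the values analyzed in the study.

```python
# Minimal sketch: Pearson correlation between publication year and citation count.
# The arrays are hypothetical stand-ins, not the values reported in the study.
from scipy.stats import pearsonr

years = [1995, 1999, 2003, 2004, 2007, 2009, 2010]
citations = [193, 115, 86, 83, 56, 45, 37]

r, p_value = pearsonr(years, citations)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a negative r means newer articles have fewer citations
```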

For the EM journals, however, this correlation was not seen (r = 0.2). Articles from Table 2 (non-EM) published in higher impact journals were cited more often (r = 0.46). This was not the case for articles in the EM journal search (r = 0.03).

DISCUSSION
This study identified the top 25 most frequently cited EM education articles in both the EM literature and the remainder of journals indexed in Web of Science. The findings of this study provide information regarding pertinent trends and topics in EM education, as noted in Table 3 and Table 4, while providing an accessible location to identify some of the highest-impact literature within this field. Additionally, it allows us to take note of the journals in which EM education is most often recognized and published, serving as a historical perspective for those seeking to publish work. It is apparent that there are trends both with regard to the overall field of EM education and the journals in which these articles are published. Non-EM journals have, on average, a higher impact factor (up to 35 for JAMA), indicating that they have a higher number of cited articles and therefore are likely distributed to a wider audience. It then makes sense that, when appropriate, authors would seek to submit articles to a wider-reaching journal. For instance, the topic with the largest number of highly cited articles in both groups of journals was simulation. This suggests that simulation is a topic that has both specialty-specific and wide-reaching interest. The top two cited articles focused on simulation located in non-EM journals (Shapiro et al. and Barsuk et al.) are both more highly cited than the top-ranked simulation




Table 2. Most-cited articles from other (non-emergency medicine) journals.
Rank | First author; year | Title | Journal: impact factor

Category

Funding (if present)

Research

Number of citations

1

Ma, OJ; 1995

Prospective Analysis of a Rapid Trauma Journal of Trauma Ultrasound Examination Performed by Injury Infection and Emergency Physicians Critical Care; 2.7

193

2

Shapiro, MJ; 2004

Simulation-based Teamwork Training for Emergency Department Staff: Does It Improve Clinical Team Performance when Added to an Existing Didactic Teamwork Curriculum?

Quality & Safety in Curriculum Healthcare; 2.2 (2012, Yes Data no longer active, title changed to BMJ Quality & Safety)

3

Barsuk, JH; 2009

Use of Simulation-Based Education to Reduce Catheter-Related Bloodstream Infections

Archives of Internal Medicine; 17.3

Curriculum - Combined Yes Data (Excellence in Academic Medicine Act)

4

Stiell, I; 1995

Multicenter Trial to Introduce the Ottawa Ankle Rules for use of Radiography in Acute Ankle Injuries

British Medical Journal; 17.4

Curriculum - External (Institute 147 Yes Data for Clinical Evaluative Sciences)

5

Weissman, JS; Resident Physicians’ Preparedness to 2005 Provide Cross-Cultural Care

Journal of the American Medical Association; 35.3

Research

External (The California Endowment, The Commonwealth Fund)

123

6

Papp, KK; 2004

The Effects of Sleep Loss and Fatigue on Resident-physicians: A Multiinstitutional, Mixed-method Study

Academic Medicine; 3.1

Research

External (National Heart, Lung and Blood Institute)

115

7

Weller, JM; 2004

Simulation in Undergraduate Medical Education: Bridging the Gap between Theory and Practice

Medical Education; 3.2

Curriculum Yes Data

83

8

Larsen, DP; 2009

Repeated Testing Improves Long-term Retention Relative to Repeated Study: A Randomized Controlled Trial

Medical Education; 3.2

Curriculum - Internal Yes Data

78

9

Wright, RJ; 1997

Response to Battered Mothers in the Pediatric Emergency Department: A Call for an Interdisciplinary Approach to Family Violence

Pediatrics; 5.5

Research

10

Kennedy, TJT; 2007

Clinical Oversight: Conceptualizing the Relationship Between Supervision and Safety

Journal of General Article/ Internal Medicine; 3.4 Review

11

Bond, WF; 2004

Using Simulation to Instruct Emergency Academic Medicine; Medicine Residents in Cognitive 3.1 Forcing Strategies

Research

12

Wallin, CJ; 2007

Target-focused Medical Emergency Team Training using a Human Patient Simulator: Effects on Behaviour and Attitude

Medical Education; 3.2

Curriculum - External Yes Data (Wallenberg Global Learning Network)

13

Baraff, LJ; 1991

Management of the Febrile Child - A Survey of Pediatric and EmergencyMedicine Residency Directors

Pediatric Infectious Research Disease Journals; 5.5

52

14

Isaacson, JH; 2000

A National Survey of Training in Substance use Disorders in Residency Programs

Journal of Studies on Research Alcohol; 2.8

48

External (Army Research Laboratory Contract, AHRQ grants)

170

164

Combined 62 (Centers for Disease Control and Prevention, NIH training grant) External 60 (Canadian Institutes of Health Research) External (Leonard 56 Parker Pool Healthcare Trust) 54

AHRQ, Agency for Healthcare Research and Quality; ACEP, American College of Emergency Physicians







Table 2 Cont’d. Most-cited articles from other (non-emergency medicine) journals. 15

Thomas, EJ; 2010

Team Training in the Neonatal Resuscitation Program for Interns: Teamwork and Quality of Resuscitations

Pediatrics; 5.5

Research

16

Vaidya, NA; 2004

17

Baernstein, A; 2003

Promoting Reflection on Professionalism: A Comparison Trial of Educational Interventions for Medical Students

18

Kennedy, TJT; 2009

19

Combined (NIH)

47

Relationship between Specialty Choice Teaching and and Medical Student Temperament and Learning in Medicine; Character Assessed with Cloninger 0.7 Inventory

Research

46

Academic Medicine; 3.1

Research

45

Preserving Professional Credibility: Grounded Theory Study of Medical Trainees’ Requests for Clinical Support

British Medical Journal; 17.4

Research

Hobgood, C; 2005

The Influence of the Causes and Contexts of Medical Errors on Emergency Medicine Residents’ Responses to their Errors: An Exploration

Academic Medicine; 3.1

Research

20

Gogalniceanu, P; 2010

Is Basic Emergency Ultrasound Training Feasible as Part of Standard Undergraduate Medical Education?

21

Harvey, A; 2010

Threat and Challenge: Cognitive Appraisal and Stress Responses in Simulated Trauma Resuscitations

22

Kennedy, TJT; 2009

23

External (Canadian Institute for Health Research)

45

41

Journal of Surgical Curriculum Education; 1.38 Yes Data

External (Siemens Ultrasound)

37

Medical Education; 3.2

Research

External (Physicians Services Inc. Foundation)

37

‘It’s a Cultural Expectation...’ The Pressure on Medical Trainees to Work Independently in Clinical Practice

Medical Education; 3.2

Research

External (Canadian Institute for Health Research)

37

Revicki, DA; 1993

Organizational Characteristics, Perceived Work Stress, and Depression in Emergency-Medicine Residents

Behavioral Medicine; 1

Research

External (ACEP Grant)

36

24

Kennedy, TJT; 2008

Point-of-Care Assessment of Medical Trainee Competence for Independent Clinical Work

Academic Medicine; 3.1

Research

External (Canadian Institute for Health Research)

34

25

Kendall, JL; 2007

History of Emergency and Critical Care Ultrasound: The Evolution of a New Imaging Paradigm

Critical Care Medicine; 6.3

Article/ Review

33

Table 3. Papers by topic in emergency medicine journals.
Topic by list                  Article/review   Curriculum   Research paper   Other   Total number (%)
Simulation                     3                3            2                1       9 (36%)
Learning theory                6                -            -                -       6 (24%)
Ultrasound                     -                3            2                -       5 (20%)
Assessment                     1                -            2                -       3 (12%)
Learner retention              1                -            -                -       1 (4%)
Interprofessional education    -                1            -                -       1 (4%)
Totals                         11               7            6                1       25



Table 4. Papers by topic in non-EM journals. Topic by list Article/review

Curriculum

Research paper

1

5

6 (24%)

1

2

4 (16%)

Management practice

1

2

3 (12%)

Stress response

1

2

3 (12%)

2

3 (12%)

1

1 (4%)

Simulation Professionalism

Ultrasound

1

1

Assessment

Other

Total number (%)

Cross-cultural care

1

1 (4%)

Error in diagnosis

1

1 (4%)

Learner retention

1

1 (4%)

Specialty choice

1

1 (4%)

Supervision

1

1 (4%)

16

25

Totals

2

article in EM journals. An author looking to publish an article involving simulation would therefore need to balance the benefits of publishing within the field of EM, with associated peer recognition, against the benefits of publishing in a journal with higher impact. It is interesting to note that the top two cited articles in this study are located in EM journals. This runs counter to the idea that a wider audience will provide a greater number of citations overall, and is possibly related to specialty association and peer recognition. Table 1 and Table 2 provide information regarding article type: the vast majority of highly cited articles in the non-EM journals were research articles, while EM journals tended to have more highly cited articles/reviews. This suggests that there is a preference for research-driven articles in non-EM journals. In contrast, Ericsson’s theory-based article was significant in EM, as were Croskerry’s, indicating that publishing a key learning theory paper in EM may also provide a meaningful foundation.11,12 The most common EM journal in which highly cited medical education articles were published was Academic Emergency Medicine. Medical education articles previously considered for publication in Academic Emergency Medicine will now be directed to the new journal Academic Emergency Medicine Education and Training. It should be noted that because this journal will not be indexed for several years, articles published in it would not have been considered for this ranking list. Within non-EM journals, roughly half were specialty focused (e.g., Pediatrics, Journal of Trauma: Injury, Infection, and Critical Care) and half were general medical journals. Articles included within specialty journals tended to have a focus that was less specific to the specialty of EM. It was of note that some of these articles chose a generalizable topic such as simulation or depression and used a subject population that included EM residents as well as other specialties. The articles published within non-EM journals more often had authors who received funding, whether internal or external, than articles published in EM journals. It should be noted, however, that the top-cited articles in both the EM journals and the non-EM journals did not have any funding. While it can be helpful to have the support that funding provides, this finding suggests that unfunded work is worthwhile and can still be impactful.

One goal of education research is to disseminate educational practices.3 Many articles have been widely disseminated despite not being highly cited. For instance, all EM residencies use the “Standardized Letter of Evaluation” (SLOE) as one way of reviewing potential applicants; however, the paper describing its predecessor, the “Standardized Letter of Recommendation” (SLOR), and the subsequent SLOE papers would not appear on the top 25 cited lists in this article.53,54 This suggests that citation numbers alone do not provide all information regarding the reach of research being performed.

This study provides a repository for some of the most impactful literature in EM medical education. For example, the articles on deliberate practice and cognitive strategies for debiasing are important foundations for EM education. Additionally, some research articles form the basis for further research and educational development. Collecting these articles in one location allows others to discover landmark articles within the field of medical education, identify trends in EM education research, note common funding sources, and advance the field of medical education.

LIMITATIONS
Limitations of this study include the fact that articles were searched in only one database, the Web of Science. It is possible that a search performed in a different database, such as Scopus, may have provided additional articles or slightly different findings. For instance, the EM journals were identified a priori, before the search. The Journal of Trauma - Injury Infection and Critical Care was not originally identified in that list but could be considered an EM journal; we chose to leave it in the non-EM list based on the a priori listing of journals. Additionally, we excluded articles and journals that were not in the English language. This may have skewed search results to favor a Western viewpoint while neglecting articles that may have had additional global influence. Another limitation was our attempt to define what constitutes education research as related to education and training. This may have added a measure of subjectivity, although our inter-rater kappa was acceptable. One final limitation is that this article did not identify where articles were cited, the subsequent reach of those articles, or self-citations. As related above, it is difficult to determine impact based on citation count alone, as there are widely read articles that are not highly cited. Citation counts do provide a foundation; further research could identify what qualities make an article more likely to be disseminated.

CONCLUSION
This study identified the most frequently cited medical education articles in the field of emergency medicine, published in EM journals as well as all other journals indexed in Web of Science. The results identify impactful articles that are collected in one location, providing a resource to others while identifying trends that may be used to guide emergency medicine educational research and publishing efforts.

Address for Correspondence: Brendan W. Munzer, MD, University of Michigan, Department of Emergency Medicine, 3440 Wexford Ct., Ann Arbor, MI 48108. Email: Bmunzer@med.umich.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Munzer et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Perry M, Hopson L, House JB, et al. Model for Developing Educational Research Productivity: The Medical Education Research Group. West J Emerg Med. 2015;16:947-51.
2. Azer SA. The top-cited articles in medical education: a bibliometric analysis. Acad Med. 2015;90:1147-61.
3. Santen SA, Deiorio NM, Gruppen LD. Medical education research in the context of translational science. Acad Emerg Med. 2012;19:1323-7.
4. Blaivas M, Brannam L, Fernandez E. Short-axis versus long-axis approaches for teaching ultrasound-guided vascular access on a new inanimate model. Acad Emerg Med. 2003;10:1307-11.
5. Bond WF, Lammers RL, Spillane LL, et al. The use of simulation in emergency medicine: a research agenda. Acad Emerg Med. 2007;14:353-63.
6. Burdick WP and Schoffstall J. Observation of emergency medicine residents at the bedside: how often does it happen? Acad Emerg Med. 1995;2:909-13.
7. Campbell JC, Coben JH, McLoughlin E, et al. An evaluation of a system-change training model to improve emergency department response to battered women. Acad Emerg Med. 2001;8:131-8.
8. Cooper S, Cant R, Porter J, et al. Rating medical emergency teamwork performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation. 2010;81:446-52.
9. Counselman FL, Sanders A, Slovis CM, et al. The status of bedside ultrasonography training in emergency medicine residency programs. Acad Emerg Med. 2003;10:37-42.
10. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9:1184-204.
11. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110-20.
12. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15:988-94.
13. Gisondi MA, Smith-Coggins R, Harter PM, et al. Assessment of resident professionalism using high-fidelity simulation of ethical dilemmas. Acad Emerg Med. 2004;11:931-7.
14. Jabbour M, Osmond MH, Klassen TP. Life support courses: are they effective? Ann Emerg Med. 1996;28:690-8.
15. Jones AE, Tayal VS, Kline JA. Focused training of emergency medicine residents in goal-directed echocardiography: a prospective study. Acad Emerg Med. 2003;10:1054-8.
16. Kovacs G and Croskerry P. Clinical decision making: an emergency medicine perspective. Acad Emerg Med. 1999;6:947-52.
17. Kuhn GJ. Diagnostic errors. Acad Emerg Med. 2002;9:740-50.
18. Mandavia DP, Aragona J, Chan L, et al. Ultrasound training for emergency physicians--a prospective study. Acad Emerg Med. 2000;7:1008-14.
19. Mateer J, Plummer D, Heller M, et al. Model curriculum for physician training in emergency ultrasonography. Ann Emerg Med. 1994;23:95-102.
20. McLaughlin SA, Doezema D, Sklar DP. Human simulation in emergency medicine training: a model curriculum. Acad Emerg Med. 2002;9:1310-8.
21. Mower WR. Evaluating bias and variability in diagnostic test reports. Ann Emerg Med. 1999;33:85-91.
22. Perkins GD. Simulation in resuscitation training. Resuscitation. 2007;73:202-11.
23. Reznek M, Smith-Coggins R, Howard S, et al. Emergency medicine crisis resource management (EMCRM): pilot study of a simulation-based crisis management course for emergency medicine. Acad Emerg Med. 2003;10:386-9.
24. Rudolph JW, Simon R, Raemer DB, et al. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med. 2008;15:1010-6.
25. Santora TA, Trooskin SZ, Blank CA, et al. Video assessment of trauma response: adherence to ATLS protocols. Am J Emerg Med. 1996;14:564-9.
26. Small SD, Wuerz RC, Simon R, et al. Demonstration of high-fidelity simulation team training for emergency medicine. Acad Emerg Med. 1999;6:312-23.
27. Swing SR. Assessing the ACGME general competencies: general considerations and assessment methods. Acad Emerg Med. 2002;9:1278-88.
28. Vozenilek J, Huff JS, Reznek M, et al. See one, do one, teach one: advanced technology in medical education. Acad Emerg Med. 2004;11:1149-54.
29. Baernstein A and Fryer-Edwards K. Promoting reflection on professionalism: a comparison trial of educational interventions for medical students. Acad Med. 2003;78:742-7.
30. Baraff LJ. Management of the febrile child: a survey of pediatric and emergency medicine residency directors. Pediatr Infect Dis J. 1991;10:795-800.
31. Barsuk JH, Cohen ER, Feinglass J, et al. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009;169:1420-3.
32. Bond WF, Deitrick LM, Arnold DC, et al. Using simulation to instruct emergency medicine residents in cognitive forcing strategies. Acad Med. 2004;79:438-46.
33. Harvey A, Nathens AB, Bandiera G, et al. Threat and challenge: cognitive appraisal and stress responses in simulated trauma resuscitations. Med Educ. 2010;44:587-94.
34. Hobgood C, Hevia A, Tamayo-Sarver JH, et al. The influence of the causes and contexts of medical errors on emergency medicine residents’ responses to their errors: an exploration. Acad Med. 2005;80:758-64.
35. Isaacson JH, Fleming M, Kraus M, et al. A national survey of training in substance use disorders in residency programs. J Stud Alcohol. 2000;61:912-5.
36. Kendall JL, Hoffenberg SR, Smith RS. History of emergency and critical care ultrasound: the evolution of a new imaging paradigm. Crit Care Med. 2007;35:S126-30.
37. Kennedy TJ, Lingard L, Baker GR, et al. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22:1080-5.
38. Kennedy TJ, Regehr G, Baker GR, et al. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83:S89-92.
39. Kennedy TJ, Regehr G, Baker GR, et al. Preserving professional credibility: grounded theory study of medical trainees’ requests for clinical support. BMJ. 2009;338:b128.
40. Kennedy TJ, Regehr G, Baker GR, et al. ‘It’s a cultural expectation...’ The pressure on medical trainees to work independently in clinical practice. Med Educ. 2009;43:645-53.
41. Larsen DP, Butler AC, Roediger HL, 3rd. Repeated testing improves long-term retention relative to repeated study: a randomised controlled trial. Med Educ. 2009;43:1174-81.
42. Ma OJ, Mateer JR, Ogata M, et al. Prospective analysis of a rapid trauma ultrasound examination performed by emergency physicians. J Trauma. 1995;38:879-85.
43. Papp KK, Stoller EP, Sage P, et al. The effects of sleep loss and fatigue on resident-physicians: a multi-institutional, mixed-method study. Acad Med. 2004;79:394-406.
44. Revicki DA, Whitley TW, Gallery ME. Organizational characteristics, perceived work stress, and depression in emergency medicine residents. Behav Med. 1993;19:74-81.
45. Shapiro MJ, Morey JC, Small SD, et al. Simulation based teamwork training for emergency department staff: does it improve clinical team performance when added to an existing didactic teamwork curriculum? Qual Saf Health Care. 2004;13:417-21.
46. Stiell I, Wells G, Laupacis A, et al. Multicentre trial to introduce the Ottawa ankle rules for use of radiography in acute ankle injuries. Multicentre Ankle Rule Study Group. BMJ. 1995;311:594-7.
47. Thomas EJ, Williams AL, Reichman EF, et al. Team training in the neonatal resuscitation program for interns: teamwork and quality of resuscitations. Pediatrics. 2010;125:539-46.
48. Vaidya NA, Sierles FS, Raida MD, et al. Relationship between specialty choice and medical student temperament and character assessed with Cloninger Inventory. Teach Learn Med. 2004;16:150-6.
49. Wallin CJ, Meurling L, Hedman L, et al. Target-focused medical emergency team training using a human patient simulator: effects on behaviour and attitude. Med Educ. 2007;41:173-80.
50. Weissman JS, Betancourt J, Campbell EG, et al. Resident physicians’ preparedness to provide cross-cultural care. JAMA. 2005;294:1058-67.
51. Weller JM. Simulation in undergraduate medical education: bridging the gap between theory and practice. Med Educ. 2004;38:32-8.
52. Wright RJ, Wright RO, Isaac NE. Response to battered mothers in the pediatric emergency department: a call for an interdisciplinary approach to family violence. Pediatrics. 1997;99:186-92.
53. Keim SM, Rein JA, Chisholm C, et al. A standardized letter of recommendation for residency application. Acad Emerg Med. 1999;6:1141-6.
54. Love JN, Smith J, Weizberg M, et al. Council of Emergency Medicine Residency Directors’ standardized letter of recommendation: the program director’s perspective. Acad Emerg Med. 2014;21:680-7.


Original Research

Supplemental Milestones for Emergency Medicine Residency Programs: A Validation Study
Andrew R. Ketterer, MD, MA*† David H. Salzman, MD, MEd*† Jeremy B. Branzetti, MD‡ Michael A. Gisondi, MD*†

*Northwestern University Feinberg School of Medicine, Department of Emergency Medicine, Chicago, Illinois † Feinberg Academy of Medical Educators, Department of Medical Education, Chicago, Illinois ‡ University of Washington School of Medicine, Division of Emergency Medicine, Seattle, Washington

Section Editor: David P. Way, MEd Submission history: Submitted July 8, 2016; Revision received September 22, 2016; Accepted October 10, 2016 Electronically published November 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31499

Introduction: Emergency medicine (EM) residency programs may be 36 or 48 months in length. The Residency Review Committee for EM requires that 48-month programs provide educational justification for the additional 12 months. We developed additional milestones that EM training programs might use to assess outcomes in domains that meet this accreditation requirement. This study aims to assess the content validity of these supplemental milestones using a methodology similar to that of the original EM Milestones validation study. Methods: A panel of EM program directors (PD) and content experts at two institutions identified domains of additional training not covered by the existing EM Milestones. This led to the development of six novel subcompetencies: “Operations and Administration,” “Critical Care,” “Leadership and Management,” “Research,” “Teaching and Learning,” and “Career Development.” Subject-matter experts at other 48-month EM residency programs refined the milestones for these subcompetencies. PDs of all 48-month EM programs were then asked to order the proposed milestones using the Dreyfus model of skill acquisition for each subcompetency. Data analysis mirrored that used in the original EM Milestones validation study, leading to the final version of our supplemental milestones. Results: Twenty of 33 subjects (58.8%) completed the study. No subcompetency or individual milestone met deletion criteria. Of the 97 proposed milestones, 67 (69.1%) required no further editing and remained at the same level as proposed by the study authors. Thirty milestones underwent level changes: 15 (15.5%) were moved one level up and 13 (13.4%) were moved one level down. One milestone (1.0%) in “Leadership and Management” was moved two levels up, and one milestone in “Operations and Administration” was moved two levels down. One milestone in “Research” was ranked by the survey respondents at one level higher than that proposed by the authors; however, this milestone was kept at its original level assignment. Conclusion: Six additional subcompetencies were generated and assessed for content validity using the same methodology as was used to validate the current EM Milestones. These optional milestones may serve as an additional set of assessment tools that will allow EM residency programs to report these additional educational outcomes using a familiar milestone rubric. [West J Emerg Med. 2017;18(1)69-75.]






INTRODUCTION
The Accreditation Council for Graduate Medical Education (ACGME) has fully implemented the Next Accreditation System, a framework of continuous accreditation that uses outcomes-based, specialty-specific milestones for resident assessment.1,2 The ACGME, the Residency Review Committee for Emergency Medicine (RRC-EM), and the American Board of Emergency Medicine (ABEM) co-convened the Emergency Medicine (EM) Milestones Working Group to create the EM Milestones.3,4 As described by ABEM, “the EM Milestones are a matrix of the knowledge, skills, abilities, attitudes, and experiences that should be acquired during specialty training in EM.”5 Validated and published in 2013, the EM Milestones are used to track and report residents’ progress in 23 different content domains described as subcompetencies.3,4,6 Residency programs in EM are configured in 36-month or 48-month formats. The EM Milestones are used by all ACGME-accredited EM residency programs, regardless of program length.1 However, EM residency programs with a 48-month training format are expected to provide the RRC-EM with “justification describing the additional educational goals and outcomes to be achieved by residents in the incremental 12 months of education.”1 Different programs use this additional training time in different ways, including extra elective time, built-in mini-fellowships, scholarly tracks, and other means of academic and professional development.7 The EM Milestones were not designed to reflect the “additional educational goals and outcomes” of 48-month residency programs, but rather were intended to evaluate resident progress during training irrespective of program length.1 The authors of this study developed supplemental milestones to track their residents’ progress within domains not reflected in the current EM Milestones (Appendix). Importantly, these supplemental milestones were designed to augment the self-study process by providing a concrete means of resident assessment using the already-familiar EM Milestone format.8 The goals of this study were to assess the content validity of these supplemental milestones, and to refine them using the same methodology established by the EM Milestone Working Group to create the current EM Milestones.3,4

METHODS
Study Design, Setting and Population
This was a cross-sectional survey of program directors (PD) at ACGME-accredited, allopathic, 48-month EM residency programs during the study period. Associate/assistant program directors (APD) were excluded from this survey. This study was considered exempt by the institutional review board of Northwestern University.

Supplemental Milestone Development
A seven-person panel of EM educators at two 48-month

training programs, including an active PD and multiple APDs, convened to create supplemental milestones that described educational domains common to many, but not all, 48-month training programs, that are not otherwise reflected in the EM Milestones. Using an iterative process, six new subcompetencies were drafted by consensus, each with its own set of defining milestones, which were sequentially reviewed and refined by the authors. Next, four subject-matter experts consisting of experienced APDs at other 48-month EM training programs were tasked with reviewing content, survey format, quality and clarity of instructions, and usability of these supplemental milestones. Their comments were incorporated into the final version of our proposed supplemental milestones. Our subject-matter experts were asked to keep the content of this study confidential from the intended study subjects (i.e., their respective PDs).

Survey Administration and Content
The validation phase used a computer-based survey platform powered by Qualtrics© LLC (Provo, Utah). We emailed the survey to all eligible subjects between February 6, 2015, and May 31, 2016, during which a total of five interval reminders were sent to nonresponders. We de-identified all data, and individuals’ responses were kept confidential from the study authors. For each of the individual six subcompetencies proposed, respondents reviewed a complete list of corresponding milestones, presented in randomized order. Respondents were asked to click and drag each milestone to an area on their screen corresponding to one of five levels. Like the original EM Milestones project,4 we used the Dreyfus model of skill acquisition9 to define levels of competency from novice (Level 1) to expert (Level 5, indicative of aspirational performance). Detailed instructions for this task were included in the survey instrument, providing a functional description of the Dreyfus model to survey respondents. The option to mark individual milestones as inappropriate for inclusion was also provided as part of the survey, as was a free-text area for comments. The primary outcome of this study was the frequency of milestone assignment into a specific level designation. The authors then used the survey results to amend the inclusion or assignment of milestones within a level using a set of predefined decision rules described in the validation study of the current EM Milestones.4 The decision rules included the following:
• Milestones were not altered if 50% or more of respondents assigned a milestone to the same level as was proposed by our study team.
• Milestones were deleted if more than 50% of respondents recommended deletion.
• The assignment of a milestone level was changed when 50% or more of respondents assigned a milestone to a different level than was proposed by our study team.
• If a milestone was not assigned to a single level by more than 50% of respondents, the milestone was assigned to the level at which a cumulative 50% of respondents chose that level or below.
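To make the level-assignment logic concrete, the sketch below applies these decision rules to one milestone's survey tallies. The function name and the example vote counts are illustrative assumptions, not part of the study instrument or data.

```python
# Minimal sketch of the level-assignment decision rules described above.
# Function name and example tallies are illustrative, not from the study.
def assign_level(votes_by_level, proposed_level, delete_votes, n_respondents):
    """votes_by_level: dict mapping level (1-5) to number of respondents choosing it."""
    if delete_votes > 0.5 * n_respondents:
        return None  # milestone deleted
    for level in range(1, 6):
        if votes_by_level.get(level, 0) >= 0.5 * n_respondents:
            return level  # a single level (proposed or otherwise) reached 50% or more
    # No single level reached 50%: use the level at which the cumulative
    # proportion of respondents first reaches 50%.
    cumulative = 0
    for level in range(1, 6):
        cumulative += votes_by_level.get(level, 0)
        if cumulative >= 0.5 * n_respondents:
            return level
    return proposed_level  # fallback; not expected if all votes are counted

# Example: 20 respondents, milestone proposed at level 2
print(assign_level({1: 2, 2: 6, 3: 9, 4: 3}, proposed_level=2, delete_votes=0, n_respondents=20))  # -> 3
```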





Table 1. Geographic data of emergency medicine program directors of 48-month programs who responded to a survey regarding proposed new milestones created to supplement existing EM milestones.
Society for Academic Emergency Medicine Region    Number of Respondents    Number of 48-month Programs in Region    % Total
New England     5    13    38.5
Mid-Atlantic    4    4     100
Southeastern    1    1     100
Midwest         2    2     100
Great Plains    0    3     0
Western         8    11    72.7

Data Analysis
We tallied response rates for each of the milestones using the Qualtrics© survey software and entered their allocations into an Excel (version 15.14, Microsoft©) spreadsheet. Frequencies and cumulative frequencies were calculated and charted, decision rules applied, and final milestone levels assigned.

RESULTS
Of the 34 eligible subjects, one was excluded because of his authorship on this paper and involvement in developing the proposed supplemental milestones. Twenty of the remaining 33 recipients (58.8%) completed the survey within the study period. Responses were received from five of six Society for Academic Emergency Medicine (SAEM) geographic regions (Table 1).10 Of the 97 proposed supplemental milestones, 67 (69.1%) were kept at the same level as proposed by the study authors, without further editing. Three of the proposed subcompetencies demonstrated high rates of agreement between the survey respondents and the proposed milestones: Eleven of 15 milestones (73.3%) in “Operations and

Administration” were kept at the same level as proposed by the study authors (Figures 1 and 2); for “Critical Care,” 11 of 14 milestones (78.6%) were unchanged; and for “Leadership and Management,” 13 of 16 milestones (81.3%) were unchanged. The remaining three subcompetencies showed moderate levels of agreement: For “Research,” 9 of 17 milestones (52.9%) were kept at the same level as proposed by the authors; for “Teaching and Learning,” 12 of 18 milestones (66.7%) were unchanged; and for “Career Development,” 11 of 17 milestones (64.7%) were unchanged. In all, 30 milestones underwent level changes based on survey responses; 15 (15.5%) were moved one level up and 13 (13.4%) were moved one level down. One milestone (1.0%) in “Leadership and Management” was moved two levels up, and one milestone in “Operations and Administration” was moved two levels down. One milestone in “Research” met decision-rule criteria to be moved one level up; however, this milestone was ultimately kept at its original level after review by the study authors (Table 2). No milestones met criteria for deletion. The final distribution of milestones across the six supplemental subcompetencies is: 9 in Level 1 (9.3% of 97 total milestones), 25 in Level 2 (25.8%), 26 in Level 3 (26.8%), 22 in Level 4 (22.7%), and 15 in Level 5 (15.5%).

DISCUSSION
This is the first study to replicate the methodology used by the EM Milestones Working Group to create a set of supplemental milestones.3,4 In our study, the content validity of these milestones was assessed specifically for potential use within a cohort of 48-month EM residency programs. Similar to the development of the EM Milestones, this study shows that a set of “objective, observable actions” can be assigned by PDs “into progressive levels of competency acquisition” for the assessment of residents in distinct educational domains.4 The ACGME Program Requirements for EM, as recently updated by the RRC-EM, mandate that 48-month EM programs provide an educational justification for the additional training time in their programs.1 The existing EM Milestones were meant to capture clinical competency for all EM residents, and therefore they may not reflect added educational goals and objectives for the additional 12 months

Table 2. Frequency of supplemental milestone-level changes based on survey results.
Subcompetency                    No. milestones    Single-level change (% total)    Two-level change (% total)    Deleted milestones
Operations and Administration    15                3 (20.0%)                        1 (6.7%)                      0
Research                         17                8 (47.1%)                        0 (0.0%)                      0
Critical Care                    14                3 (21.4%)                        0 (0.0%)                      0
Teaching and Learning            18                6 (33.3%)                        0 (0.0%)                      0
Career Development               17                6 (35.3%)                        0 (0.0%)                      0
Leadership and Management        16                2 (12.5%)                        1 (6.3%)                      0




Figure 1. Original matrix of the supplemental milestone “Operations and Administration” as proposed by the study authors. *Emergency Medical Treatment and Labor Act; †Emergency Department; ††Plan-Do-Study-Act; §American College of Emergency Physicians.

of training in 48-month residency programs. The subcompetencies developed in this study reflect six potential content domains that could be used to meet the aforementioned educational justification required of 48-month EM programs. We may now assess residents’ skill acquisition in these domains as a progression to competence, using a reporting framework similar to the standard EM Milestones.8,11 These supplemental milestones are not meant to replace, direct, or alter the existing curricula of any of the other 48-month EM residency programs. Each 48-month program may use its additional training time to meet its own unique program-specific aims.7,8,11 The supplemental milestones described in this study were developed with the intention that they might serve as a potential tool to assess and track already-existing curricula. We chose subcompetency domains thought to be common to 48-month EM programs, thereby performing this validation study within our cohort of programs. We recognize that topics such as “Critical Care” might have appeal to most but not all 48-month EM programs, while topics such as “Research” might be common to many but not the majority of these programs. Certainly there are other domains that could be explored using a methodology similar to this study, for example, the validation of milestones in areas such as global health, emergency medical services, or ultrasound. Obviously not all programs aim to train residents in these additional domains, but for those that do, the option to assess and report residents’ skill acquisition in these content areas may be appealing. Residency programs may choose to use this methodology to create similar self-study assessment tools to track their residents’ progress within specific elements of their current curricula. An alternative approach would be to use a similarly robust development process to generate these tools and forgo the content validation phase by external experts. This is particularly appealing in light of the labor-intensive nature of external content validation, as well as the relatively low frequency of level reassignment by survey respondents. However, we felt that the high stakes of assessment imparted by the milestones suggest a need for the robust content validation process described in this study, and we would recommend a similar approach to the development of such tools in the future.

The ACGME Next Accreditation System includes a sequence of eight steps intended to guide programs in conducting a self-study. Their recommended fourth step is to “Aggregate and analyze data to generate a longitudinal assessment of the program’s improvement.”8 In addition to






Figure 2. Revised matrix for the proposed supplemental milestone, “Operations and Administration,” based on survey results. 1. This milestone was originally at level 1 and was moved up 1 level. 2. This milestone was originally at level 5 and was moved down 2 levels. 3. This milestone was originally at level 3 and was moved up 1 level. 4. This milestone was originally at level 4 and was moved up 1 level. *Emergency Medical Treatment and Labor Act; †Emergency Department; ††Plan-Do-Study-Act; §American College of Emergency Physicians.

tracking individual resident progress, our proposed supplemental milestones may be used to assess the impact of longitudinal curricular changes. The impact of changes in program curricula may be monitored by tracking cumulative resident progress as class cohorts using these supplemental milestones year after year. We believe this satisfies the ACGME’s recommended fourth step of self-study by generating data that track how well a program’s own specific educational goals are being met by its trainees, and how this progress changes in response to curriculum modification over time. The data generated by these tools could also be used to strengthen a program’s presentation for internal review or an ACGME site visit.

With the exception of one milestone, our final set of supplemental milestones reflects the positions assigned according to the aforementioned set of predefined decision rules. The single exception was milestone 2.6 in the “Research” subcompetency (“Identifies and explains methods of statistical analysis commonly used in the medical literature”). The survey respondents assigned that milestone to level 3 by a slight majority; however, we felt it represented a stepwise progression in sophistication of research skills between “describing common research designs” (milestone 1.1) and “leading critical discussions of medical literature” (milestone 3.2). Finally, we felt justified in leaving this milestone at its originally assigned level because only a very slight majority of respondents indicated that it should be assigned to level 3; if a single respondent had assigned it to level 2 rather than level 3, the majority would have agreed with the study authors.

LIMITATIONS
Our study includes some important limitations. Our overall response rate represents just over half of the potential subjects, which could indicate results that are biased and not representative of our intended population. However, multiple factors may mitigate this potential bias. Our response rate of 58.8% is higher than that in the original EM Milestone validation study, which had a response rate of 36.6%.4 While the original study sampled a larger population of “key faculty,”4 we focused specifically on PDs in this study. Therefore, while there may be fewer overall respondents, data from directors who are experienced with the milestone process may be of higher quality and less prone to bias than data from the more general population of key faculty surveyed by Korte et al.4 Moreover, the survey respondents had a broad geographic distribution, suggesting that the final version of the supplemental milestones accurately represents the attitudes of PDs at a variety of EM training programs. This protects against potential bias from the attitudes of any one geographic region, although this protection is limited by the response of only a minority of programs in New England, and none of the eligible programs in the Great Plains region. Finally, the methodology of our study included multiple layers of data acquisition and review beyond simple collection of survey responses: the initial solicitation of expert opinions to generate the new milestone domains and content, review of the drafted milestones by subject-matter experts, survey respondents’ assignments of milestones to specific levels, and final study group review of all generated data to ensure that each milestone was an appropriate match for the level to which our validation cohort assigned it.

The original proposals for the subcompetencies and milestones in this study were all written and edited by faculty from two institutions. While each faculty member had content expertise pertinent to their tasked subcompetency (APD, PD, operations directors, etc.), the content selected for inclusion may be biased by specific institutional strengths, norms, or expectations. We believe the use of external subject-matter experts for review of the proposed milestones, as well as the use of directors of 48-month EM programs as subjects, mitigates this potential bias. As this validation study was conducted among programs of a similar length of training, we chose to title this manuscript “Supplemental Milestones for EM Residency Programs.” It is likely that these tools could also be used by 36-month EM programs that have program-specific aims and curricula similar to those in our validation cohort, much like the current EM Milestones themselves. Similarly, there may be other potential educational domains that are more relevant to specific programs than the six options resulting from this study. Programs may choose to adopt one or more of the subcompetencies that we developed, or instead create ones that are more ideally suited to their needs. This study demonstrates a process that can be followed by any cohort of similar residency programs. As with the current EM Milestones, no editing or review of our proposed milestones based on real-world implementation has been performed. It is conceivable that such post-hoc experiential data may prove valuable enough to necessitate editing of the milestone content, phrasing, or level assignment, as is planned in future iterations of the Milestones by the ACGME and RRC-EM.12,13

CONCLUSION
This study resulted in the development of six supplemental subcompetencies and corresponding milestones for EM that were assessed for content validity among a cohort of 48-month EM residency program directors, using the methodology of the EM Milestones Working Group. These optional tools may be used to track residents’ skill acquisition in educational domains that are distinct from those of the original EM Milestones. Further study will be needed to assess the implementation and longitudinal utility of these new milestones by residency programs in EM.

ACKNOWLEDGMENTS
Special thanks to Peter Pang, Emilie Powell, Christopher Beach, Dave Lu, and Colleen Hickey for their help in drafting the milestone language, as well as Sonal Batra, Douglas Franzen, Erin McDonough, Joel Moll, Leslie Oyama, Scott Sherman, and Taku Taira for their review of the draft milestones prior to survey distribution.

Address for Correspondence: Michael A. Gisondi, MD, Northwestern University Feinberg School of Medicine, Department of Emergency Medicine, 211 East Ontario Street, Suite 200, Chicago IL 60611. Email: mgisondi@northwestern.edu. Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none. Copyright: © 2016 Ketterer et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/ licenses/by/4.0/

REFERENCES
1. Program Requirements for Graduate Medical Education in Emergency Medicine. Accreditation Council for Graduate Medical Education website. Available at: http://www.acgme.org/acgmeweb/portals/0/pfassets/2013-pr-faq-pif/110_emergency_medicine_07012013.pdf. Accessed August 3, 2015.
2. Milestones. Accreditation Council for Graduate Medical Education website. Available at: http://www.acgme.org/acgmeweb/tabid/430/ProgramandInstitutionalAccreditation/NextAccreditationSystem/Milestones.aspx. Accessed February 1, 2016.
3. Beeson MS, Carter WA, Christopher TA, et al. The Development of the Emergency Medicine Milestones. Acad Emerg Med. 2013;20(7):724-9.
4. Korte RC, Beeson MS, Russ CM, et al., the Emergency Medicine Milestones Working Group, Reisdorff EJ. The Emergency Medicine Milestones: A Validation Study. Acad Emerg Med. 2013;20(7):730-5.
5. Emergency Medicine Milestones. American Board of Emergency Medicine website. Available at: https://www.abem.org/public/publications/emergency-medicine-milestones. Accessed February 1, 2016.
6. The Emergency Medicine Milestones Project. Accreditation Council for Graduate Medical Education. Available at: https://www.acgme.org/acgmeweb/Portals/0/PDFs/Milestones/EmergencyMedicineMilestones.pdf. Published December 2012. Updated July 2015. Accessed February 1, 2016.
7. Regan L, Stahmer S, Nyce A, et al. Scholarly Tracks in Emergency Medicine. Acad Emerg Med. 2010;17(S2):S87-S94.
8. Eight Steps for Conducting the ACGME Program Self-Study. Accreditation Council for Graduate Medical Education website. Available at: http://www.acgme.org/acgmeweb/tabid/473/ProgramandInstitutionalAccreditation/Self-Study.aspx. Accessed October 13, 2015.
9. Dreyfus SE and Dreyfus HL. A five-stage model of the mental activities involved in directed skill acquisition. Berkeley, CA: Operations Research Center, University of California, Berkeley; 1980.
10. SAEM Regional Meetings. Society for Academic Emergency Medicine website. Available at: http://www.saem.org/meetings/regional-meetings. Accessed March 5, 2016.
11. Philibert I and Lieh-Lai M. A Practical Guide to the ACGME Self-Study. J Grad Med Ed. September 2014;6(3):612-14.
12. Holmboe E, Yamazaki K, Edgar L, et al. Reflections on the First Two Years of Milestone Implementation. J Grad Med Ed. September 2015;7(3):506-11.
13. Philibert I, Brigham T, Edgar L, et al. Organization of the Educational Milestones for Use in the Assessment of Educational Outcomes. J Grad Med Ed. March 2014;6(1):177-82.


Original Research

Are All Competencies Equal in the Eyes of Residents? A Multicenter Study of Emergency Medicine Residents’ Interest in Feedback
Suzanne Bentley, MD, MPH* Kevin Hu, MD† Anne Messman, MD‡ Tiffany Moadel, MD§ Sorabh Khandelwal, MD¶ Heather Streich, MD|| Joan Noelker, MD#

* Icahn School of Medicine at Mount Sinai, Elmhurst Hospital Center, Department of Emergency Medicine, Department of Medical Education, New York, New York † Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York ‡ Wayne State University School of Medicine, Department of Emergency Medicine, Detroit, Michigan § Yale School of Medicine, Department of Emergency Medicine, New Haven, Connecticut ¶ The Ohio State University, Department of Emergency Medicine, Columbus, Ohio || University of Virginia, Department of Emergency Medicine, Charlottesville, Virginia # Washington University in St. Louis, Department of Medicine, Division of Emergency Medicine, St. Louis, Missouri

Section Editor: David P. Way, MEd Submission history: Submitted September 23, 2016; Accepted November 30, 2016 Electronically published December 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.11.32626

Introduction: Feedback, particularly real-time feedback, is critical to resident education. The emergency medicine (EM) milestones were developed in 2012 to enhance resident assessment, and many programs use them to provide focused resident feedback. The purpose of this study was to evaluate EM residents' level of interest in receiving real-time feedback on each of the 23 competencies/sub-competencies.

Methods: This was a multicenter cross-sectional study of EM residents. We surveyed participants on their level of interest in receiving real-time on-shift feedback on each of the 23 competencies/sub-competencies. Anonymous paper or computerized surveys were distributed to residents at three four-year training programs and three three-year training programs, with a total of 223 resident respondents. Residents rated their level of interest in each milestone on a six-point Likert-type response scale. We calculated the average level of interest for each of the 23 sub-competencies, for all 223 respondents and separately by postgraduate year (PGY) level of training. One-way analyses of variance were performed to determine if there were differences in ratings by level of training.

Results: The overall survey response rate across all institutions was 82%. Emergency stabilization had the highest mean rating (5.47/6), while technology had the lowest rating (3.24/6). However, we observed no differences between levels of training on any of the 23 competencies/sub-competencies.

Conclusion: Residents seem to ascribe much more value to receiving feedback on domains involving high-risk, challenging procedural skills than to low-risk technical and communication skills. Further studies are necessary to determine whether residents' perceived importance of competencies/sub-competencies needs to be considered when developing an assessment or feedback program based on these 23 EM competencies/sub-competencies. [West J Emerg Med. 2017;18(1)76-81.]


INTRODUCTION
Real-time feedback during a clinical shift in the emergency department is an important component of a resident physician's medical education and can have a profound impact on clinical practice.1-4 Despite this, many residents feel they do not get adequate or useful feedback during clinical shifts. Specific, tailored, learner-initiated feedback is crucial but rarely provided.1-4 Valid self-assessment strategies are recognized as fundamental to continuing professional competence and to developing lifelong learning and improvement practices, but these skills remain understudied in resident physicians.5,6 The Accreditation Council for Graduate Medical Education (ACGME) introduced the Next Accreditation System (NAS) in 2012, which includes 23 emergency medicine (EM) competency/sub-competency domains, each comprising five levels of specific developmental milestones. This model is the main assessment framework of the NAS. Physicians are expected to progress through the milestone levels of each competency/sub-competency from novice intern to expert.2,7-10
Various EM studies have revealed widespread dissatisfaction with feedback despite the use of a wide variety of feedback methods. Most studies on feedback involve attending- or program leader-initiated feedback; few have explored learner-initiated feedback.1-4,9-11 To date, few studies have explored EM resident interest in feedback on specific competencies/sub-competencies despite the widespread use of this structured feedback mechanism. The objective of this research project was to evaluate EM residents' level of interest in receiving real-time feedback on each of the 23 competencies/sub-competencies. Identifying the areas of most importance to learners may be a first step toward mitigating poor feedback and giving learners more autonomy over the feedback they receive.

METHODS
This was a multicenter cross-sectional study of EM residents at six ACGME-accredited academic EM residency programs in the United States. The programs span various

regions of the country, with three three-year and three four-year residency programs in both urban and suburban settings (Table 1). Participants were surveyed on their level of interest in receiving real-time feedback on each of the competencies/sub-competencies. Anonymous paper or computerized surveys using SurveyMonkey (a commercially available online survey creation and distribution program: http://www.surveymonkey.com) were distributed to residents of all postgraduate year (PGY) levels at each of the six training programs, with a total of 272 possible resident respondents. The project was deemed exempt by the IRB at the Icahn School of Medicine at Mount Sinai, followed by review at the remaining institutions.
We surveyed all residents at the six academic EM residency programs regarding their interest in receiving feedback from the EM attending during a clinical shift on specific topic areas covering the 23 ACGME EM competencies/sub-competencies. Surveys were distributed at each institution during the middle of the academic calendar year via paper survey and then subsequently via email to capture residents who were not able to complete paper forms. Completion of the survey was considered consent for the study. Study participation was anonymous and voluntary. We provided residents the survey questionnaire (Appendix 1a) along with milestone descriptions (Appendix 1b).
For content validity, the survey was designed to include all 23 competencies/sub-competencies. To optimize content and internal structure evidence, we created the survey instrument using an iterative editing approach. This included extensive testing among the authors for item generation, survey functionality, matching of item content to the construct, optimal item phrasing, and overall quality control. For response process validity, the survey was piloted by six EM attending physicians and six EM senior resident physicians and subsequently revised. Residents rated their level of interest in receiving on-shift feedback on each competency/sub-competency using a six-point Likert-type response scale (1=no interest; 2=minimal interest; 3=mild interest; 4=moderate interest; 5=very interested; 6=maximal interest).

Table 1. Demographic information on six emergency medicine residency programs and survey return rates for 272 emergency medicine residents from those programs.
Program | Residents/year | Total residents | Surveys returned, n (%) | Geographic region | Program setting | Program length (years) | Annual patient volume
1 | 15 | 60 | 49 (81.7) | Northeast | Urban | 4 | 100,000
2 | 12 | 48 | 35 (72.9) | Midwest | Urban | 4 | 95,000
3 | 13-15 | 56 | 51 (91.1) | Northeast | Urban | 4 | 100,000
4 | 12 | 36 | 25 (69.4) | Midwest | Urban | 3 | 105,000
5 | 10 | 30 | 28 (93.3) | Midatlantic | Suburban | 3 | 61,000
6 | 16 | 42 | 35 (83.3) | Midwest | Urban | 3 | 80,000


Table 2. Descriptive statistics and results of one-way analysis of variance comparing 217 emergency medicine residents on their ratings of interest in feedback on 23 competencies/sub-competencies. Mean ratings (standard deviations in parentheses).*
Competencies/sub-competencies | All (N=217) | PGY1 (N=60) | PGY2 (N=62) | PGY 3&4 (N=95) | F | df | p
Emergency stabilization | 5.47 (0.82) | 5.48 (0.77) | 5.48 (0.84) | 5.44 (0.85) | 0.10 | 2, 214 | 0.90
Airway management | 5.35 (0.87) | 5.43 (0.87) | 5.48 (0.74) | 5.23 (0.94) | 1.63 | 2, 214 | 0.20
Medical knowledge | 5.07 (1.05) | 5.09 (1.13) | 5.08 (0.87) | 5.06 (1.11) | 0.02 | 2, 214 | 0.98
Diagnosis | 4.90 (1.03) | 5.17 (0.91) | 4.75 (0.99) | 4.83 (1.10) | 2.88 | 2, 214 | 0.06
Approach to procedures | 4.85 (1.13) | 4.95 (1.15) | 4.93 (0.92) | 4.72 (1.24) | 1.00 | 2, 214 | 0.37
Pharmacotherapy | 4.83 (1.03) | 4.86 (1.22) | 4.80 (1.01) | 4.85 (0.93) | 0.08 | 2, 210 | 0.93
Goal-directed focused ultrasound | 4.76 (1.17) | 5.03 (1.13) | 4.65 (1.18) | 4.67 (1.16) | 2.24 | 2, 214 | 0.11
Team management | 4.74 (1.21) | 4.50 (1.27) | 4.80 (1.10) | 4.82 (1.23) | 1.38 | 2, 214 | 0.25
Diagnostic studies | 4.60 (1.05) | 4.78 (1.02) | 4.54 (1.07) | 4.45 (1.07) | 0.92 | 2, 215 | 0.40
Multi-tasking/task-switching | 4.60 (1.26) | 4.57 (1.13) | 4.43 (1.34) | 4.71 (1.29) | 0.80 | 2, 215 | 0.45
Anesthesia & pain management | 4.58 (1.16) | 4.78 (1.12) | 4.44 (1.18) | 4.57 (1.16) | 1.41 | 2, 214 | 0.25
Disposition | 4.53 (1.19) | 4.65 (1.11) | 4.46 (1.22) | 4.51 (1.23) | 0.46 | 2, 213 | 0.64
Practice-based improvement | 4.26 (1.36) | 4.19 (1.32) | 4.08 (1.48) | 4.43 (1.30) | 1.22 | 2, 214 | 0.30
Vascular access | 4.17 (1.29) | 4.36 (1.29) | 3.98 (1.32) | 4.18 (1.26) | 1.72 | 2, 214 | 0.18
Wound management | 4.11 (1.28) | 4.16 (1.43) | 4.11 (1.19) | 4.06 (1.25) | 0.12 | 2, 214 | 0.89
Patient safety | 4.00 (1.31) | 3.81 (1.33) | 4.02 (1.25) | 4.11 (1.33) | 0.92 | 2, 212 | 0.40
Systems-based practice | 3.96 (1.27) | 3.83 (1.26) | 3.77 (1.31) | 4.15 (1.24) | 1.94 | 2, 214 | 0.15
Observation-reassessment | 3.84 (1.26) | 3.88 (1.39) | 3.66 (1.21) | 3.92 (1.20) | 0.75 | 2, 213 | 0.47
Patient-centered communication | 3.83 (1.35) | 3.84 (1.40) | 3.67 (1.35) | 3.95 (1.32) | 0.66 | 2, 214 | 0.52
Accountability | 3.80 (1.47) | 3.78 (1.63) | 3.62 (1.39) | 3.88 (1.41) | 0.43 | 2, 213 | 0.65
Performance of H&P | 3.69 (1.41) | 3.90 (1.27) | 3.41 (1.53) | 3.74 (1.41) | 1.67 | 2, 214 | 0.19
Professional values | 3.60 (1.46) | 3.74 (1.53) | 3.34 (1.50) | 3.71 (1.38) | 1.20 | 2, 214 | 0.30
Technology/EHR | 3.24 (1.44) | 3.45 (1.38) | 2.95 (1.45) | 3.27 (1.46) | 1.59 | 2, 214 | 0.21
*Bonferroni adjustment is used to control for Type 1 error rates. The adjusted p value for considering a mean difference statistically significant is 0.05/23 = 0.002.
ANOVA, one-way analysis of variance; PGY, postgraduate year; H&P, history and physical; EHR, electronic health record.
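As a concrete illustration of the analysis summarized in Table 2 (a one-way ANOVA across PGY levels for each competency, evaluated against the Bonferroni-adjusted threshold of 0.05/23 ≈ 0.002 noted in the footnote), a minimal sketch is shown below. This is not the authors' code: the data layout, column names, and grouping variable are assumptions made purely for illustration.

```python
# Illustrative sketch: per-competency one-way ANOVA across PGY groups with a
# Bonferroni-adjusted significance threshold. Hypothetical data layout: one row
# per resident, a 'pgy_group' column ('PGY1', 'PGY2', 'PGY3_4'), and one column
# of 1-6 Likert ratings per competency/sub-competency.
import pandas as pd
from scipy import stats

N_TESTS = 23                 # one test per competency/sub-competency
ADJ_ALPHA = 0.05 / N_TESTS   # ~0.002, as in the footnote to Table 2

def anova_by_pgy(ratings: pd.DataFrame, competency: str):
    """Return (F, p, significant) for one competency column."""
    groups = [grp[competency].dropna() for _, grp in ratings.groupby("pgy_group")]
    f_stat, p_val = stats.f_oneway(*groups)
    return f_stat, p_val, p_val < ADJ_ALPHA

# Example use over hypothetical column names:
# for col in ["emergency_stabilization", "airway_management", "technology_ehr"]:
#     f, p, sig = anova_by_pgy(ratings, col)
#     print(f"{col}: F={f:.2f}, p={p:.3f}, significant={sig}")
```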

We calculated average levels of interest for each of the 23 competencies/sub-competencies for all respondents and by PGY level of training. A one-way analysis of variance (ANOVA) was conducted to determine whether differences in desire for feedback existed by level of training (PGY level). To control for Type 1 error rates from multiple comparisons, we adjusted the p-value for significance using the Bonferroni correction suggested by Bland (p = 0.05/23 tests = 0.002).12

RESULTS
The overall survey response rate was 82% (223/272). Return rates and residency characteristics are detailed in Table 1. The number of survey participants was almost equivalent


for PGY levels 1-3 (60, or 27%, for PGY-1s and 62, or 27.8%, each for PGY-2s and PGY-3s). The number of PGY-4 participants was considerably lower at 34 (15.2%). One-way ANOVA (Table 2) showed no statistically significant differences between residents at different levels of training for any of the 23 competencies/sub-competencies after adjustment with the Bonferroni correction. Looking at the average ratings from all residents combined, we noted considerable variability across the 23 competencies/sub-competencies (Table 2). The competencies/sub-competencies with the highest average ratings were emergency stabilization (5.47), airway management (5.35), and medical knowledge (5.08). These ratings indicate


Figure. Resident feedback interest by competencies/sub-competencies.

that residents are very or maximally interested in receiving feedback on these competencies/sub-competencies. Ratings on an additional nine competencies/sub-competencies indicate that residents are very interested in feedback; these mean ratings ranged from 4.54 to 4.90. Residents indicated that they would be moderately interested in feedback on 10 competencies/sub-competencies (rated 3.61 to 4.27). Only one competency/sub-competency received a rating indicating that residents had mild interest: technology/EHR (3.24).

DISCUSSION
The EM Milestones project, developed by the ACGME and the American Board of Emergency Medicine, provides residency programs with descriptive, objective criteria by which to assess a resident's progress throughout his or her training. While program directors and academic faculty in residency programs are familiar with the milestone sub-competencies, it is less clear whether residents have similar investment in the tools being used to evaluate them. Some residents may have little to no knowledge of the individual competencies/sub-competencies and the criteria used to differentiate levels of performance on the milestones scale. Residents may also not internalize feedback on competencies/sub-competencies that they feel are not relevant to them at a given time. This study aimed to assess EM residents' interest in receiving real-time feedback on each of the 23 EM competencies/sub-competencies.


Of the 23 competencies/sub-competencies, residents were most interested in receiving feedback on three: emergency stabilization, airway management, and medical knowledge. Compared to the other milestones, these seem to reflect core values of the practice of EM: complicated skill sets that are high reward if done well and that have a significant impact on patient outcomes. Of these, emergency stabilization and medical knowledge encompass broad content areas covered during residency education. There was one outlier competency on which residents were least interested in receiving feedback: technology and electronic health records. This competency had the lowest average interest rating at 3.24 out of 6, reflecting mild interest in receiving feedback. Possible explanations for why this milestone was least interesting to residents include lack of understanding of its importance to their future careers, lack of perceived relevance to direct patient outcomes, difficulty in receiving feedback on this work, or perceived adequacy of prior or current feedback on this competency. All other competencies/sub-competencies received ratings between 3.6 and 4.9, reflecting significant resident interest in receiving feedback on these topics. By rating all of the competencies/sub-competencies as at least mildly interesting with respect to feedback, residents validated the idea that the competencies/sub-competencies represent relevant learning objectives throughout residency that are perceived as applicable to their future practice. There were no statistically significant differences between residents based on PGY


level regarding their interest in milestone-based feedback, suggesting that feedback on any of the competencies/sub-competencies would be appreciated at any learner level. Prior work suggests that a trainee's prior experiences, confidence level, fear of appearing incompetent, and biases in cognitive reasoning processes can affect their responsiveness to feedback.13 Learners who are learning goal-oriented may prioritize feedback on topics in which they feel weaker, as they are more likely to use unsatisfactory performance as an impetus for improvement. In contrast, learners with performance-based goals may seek to validate their competency relative to their peers by seeking out favorable judgments and avoiding negative comments about their competence.14,15 Understanding the subtle differences in a resident's interest in receiving feedback on each competency, and the motivation behind these differences, will be useful for programs seeking to provide desired, well-rounded, relevant, actionable feedback that furthers the development of their residents.

LIMITATIONS
A limitation of this study is the variability in response rates across the participating institutions; the lowest site-level response rate was 69%, while the highest was 93%. However, such a diverse subject population supports the generalizability of the aggregate survey responses to the larger group of EM trainees nationally. To obtain the highest possible response rate, some residents were given a paper survey while others participated online. The different vehicles by which residents responded may have affected the responses given.

CONCLUSION
Providing effective feedback to residents is essential to their education and professional growth. Residents frequently report discontent with the feedback they receive, and a better understanding of their preferences may allow attending physicians to provide more useful feedback. We observed no differences between resident levels of training, suggesting that preference for feedback is unrelated to PGY level. Future areas of research in this domain include elucidating whether feedback is more effective when it involves a sub-competency of particular interest to the resident, and whether sub-competencies deemed "less interesting" require particular attention to reinforce their importance to a resident physician's career.

ACKNOWLEDGMENT
This project was undertaken as part of the MERC at CORD Scholars Program, and the authors would like to acknowledge the support and mentorship from the MERC at CORD leadership. The authors would also like to acknowledge the support and oversight from Mr. David Way in the finalization of this manuscript.

Address for Correspondence: Suzanne Bentley, MD, MPH, Elmhurst Hospital Center, Emergency Department, B1-27, 79-01 Broadway, Elmhurst, NY 11373. Email: Suzanne.bentley@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Bentley et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Yarris LM, Linden JA, Gene Hern H, et al. Attending and resident satisfaction with feedback in the emergency department. Acad Emerg Med. 2009;16(2):S76-81.
2. Yarris LM, Jones D, Kornegay JG, et al. The Milestones Passport: A Learner-Centered Application of the Milestone Framework to Prompt Real-Time Feedback in the Emergency Department. J Grad Med Educ. 2014;6(3):555-60.
3. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-81.
4. Yarris LM, Fu R, LaMantia J, et al. Effect of an educational intervention on faculty and resident satisfaction with real-time feedback in the emergency department. Acad Emerg Med. 2011;18(5):504-12.
5. Sadosty AT, Bellolio MF, Laack TA, et al. Simulation-based emergency medicine resident self-assessment. J Emerg Med. 2011;41(6):679-85.
6. Evans AW, McKenna C, Oliver M. Self-assessment in medical practice. J R Soc Med. 2002;95(10):511-3.
7. The Emergency Medicine Milestone Project: A joint initiative of the Accreditation Council for Graduate Medical Education and the American Board of Emergency Medicine. Available at: http://www.acgme.org/acgmeweb/Portals/0/PDFs/Milestones/EmergencyMedicineMilestones.pdf. Copyright 2012. Accessed May 26, 2016.
8. Lewiss RE, Pearl M, Nomura JT, et al. CORD-AEUS: consensus document for the emergency ultrasound milestone project. Acad Emerg Med. 2013;20(7):740-5.
9. Beeson MS, Carter WA, Christopher TA, et al. The development of the emergency medicine milestones. Acad Emerg Med. 2013;20(7):724-9.
10. Goldflam K, Bod J, Della-Giustina D, et al. Emergency Medicine Residents Consistently Rate Themselves Higher than Attending Assessments on ACGME Milestones. West J Emerg Med. 2015;16(6):931-5.
11. Rose JS, Waibel BH, Schenarts PJ. Disparity between resident and faculty surgeons' perceptions of preoperative preparation, intraoperative teaching, and postoperative feedback. J Surg Educ. 2011;68(6):459-64.
12. Bland JM. Multiple significance tests: the Bonferroni method. BMJ. 1995;310(21):170.
13. Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract. 2012;17(1):15-26.
14. Crommelinck M and Anseel F. Understanding and encouraging feedback-seeking behaviour: a literature review. Med Educ. 2013;47(3):232-41.
15. VandeWalle D and Cummings LL. A test of the influence of goal orientation on the feedback-seeking process. J Appl Psychol. 1997;82(3):390-400.


Brief Educational Advances

Clinical Assessment of Medical Students in the Emergency Department, a National Consensus Conference

Katherine M. Hiller, MD, MPH*; Douglas Franzen, MD, MEd†; Luan Lawson, MD, MAEd‡; David Manthey, MD§; Jonathan Fisher, MD, MPH*; Marianne Haughey, MD¶; Matthew Tews, DO, MS||; Nicole Dubosh, MD#; Joseph House, MD**; Arleigh Trainor, MD††; David Wald, DO‡‡; Julianna Jung, MD§§

*University of Arizona, Department of Emergency Medicine, Tucson, Arizona † University of Washington, Department of Medicine, Division of Emergency Medicine, Seattle, Washington ‡ East Carolina University, Department of Emergency Medicine, Greenville, North Carolina § Wake Forest University, Department of Emergency Medicine, Winston-Salem, North Carolina ¶ St. Barnabas Medical Center, Department of Emergency Medicine, Bronx, New York || Medical College of Wisconsin, Department of Emergency Medicine, Milwaukee, Wisconsin # Harvard University, Department of Emergency Medicine, Cambridge, Massachusetts **University of Michigan, Department of Emergency Medicine, Ann Arbor, Michigan †† University of South Dakota, Department of Emergency Medicine, Vermillion, South Dakota ‡‡ Lewis Katz School of Medicine, Philadelphia, Pennsylvania §§ Johns Hopkins University, Department of Emergency Medicine, Baltimore, Maryland

Section Editor: Andrew Phillips, MD Submission history: Submitted September 28, 2016; Accepted November 1, 2016 Electronically published November 23, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.11.32686 [West J Emerg Med. 2017;18(1)82-83.]

BACKGROUND
The clinical assessment of medical students in the emergency department (ED) is a highly variable process in which clerkship directors (CDs) create and use institution-specific tools, many with unproven reliability or validity, to assess students of differing experience and from different institutions.1,2

OBJECTIVES
Standardization of assessment practices and tools could enhance grading, improve the reliability and validity of information on the standardized letter of evaluation (SLOE) for program directors, and, most importantly, provide consistent, valid, and reliable formative feedback for students.

DESIGN
A consensus conference on end-of-shift assessment of medical students in the ED was held in the Clerkship Directors in Emergency Medicine (CDEM) track of the Council of Emergency Medicine Residency Directors (CORD) Academic Assembly in Nashville, TN, in March 2016. Themes surrounding the practice of end-of-shift assessment of medical students were derived from small-group discussions among

the executive committee and refined at a large-group planning meeting at the 2015 CORD Academic Assembly (Table). In May 2015, theme leaders were identified and tasked with recruiting relevant stakeholders to their respective small groups, synthesizing the background literature, and articulating key issues surrounding their theme. Simultaneously, the executive committee derived "building blocks" of assessment from foundational source materials.1,3-9 Each contained the following: name, background and definition, benefits/drawbacks/alternatives to use in the clinical setting, areas of



overlap with other domains of assessment, examples of how an assessment of this domain would appear on an assessment form in three formats (narrative, dichotomous, and an anchored rating scale), and references.

Table. Themes of assessment discussed at the CDEM national end-of-shift consensus conference.
Criterion vs norm-referenced assessment
Learners at different levels of learning
Translation of assessment data into other products
Utilization of clinical assessment tools
Ensuring post-implementation validation/research
CDEM, Clerkship Directors in Emergency Medicine.

On Day 1 of the conference, participants were divided into small groups. Each theme leader met with each small group, providing background and guiding further discussion. Predetermined questions with discrete responses were asked within each small group. During the second morning of the conference, the "building blocks" were discussed. Participants voted using an electronic audience response system (www.polleverywhere.com).

IMPACT/EFFECTIVENESS
Sixty people participated on Day 1 and 70 participated on Day 2 of the conference. Participants agreed on 63.4% of the theme questions and 87.5% of the domains of assessment. The group felt that both norm- and criterion-based assessment should be incorporated, that EM faculty and senior residents should be allowed to complete the form, that the unit of observation should be a single shift, and that 6-10 shifts would be adequate to accurately assess a student. Medical students (MS3 and MS4) should be assessed using the same tools, but grading should differ. Learners with varying experience within a year present a challenge; however, this is not prohibitive to using a common form or grading rubric. Clinical assessment data should be translated into a grade and onto the SLOE. Of 16 domains of assessment presented, nine were included, five were omitted, and two did not reach consensus. All domains should be assessed via rating scale except professionalism, for which a combined narrative/dichotomous approach was preferred.
Based on the variability of assessment forms currently in use, we anticipated a large range of opinion on the topics presented. Instead, we were surprised by the strength of consensus on most topics. Limitations of this process include that only approximately half of the CDEM Academy membership was present, despite extensive advertisement of the conference. Additionally, voting may have been affected by the order in which the building blocks of assessment were presented; participants may have been more apt to comment later, once they had a better understanding of and more familiarity with how the materials were presented and referenced. We attempted to mitigate this effect by providing the materials to participants beforehand and providing preparatory background material in discussion groups. Finally, participants were able to change their vote while group discussion occurred. Large-group discussion did sway votes; however, we feel this culminated in a better representation of the group's actual opinions. Discussion helped guide decisions in real time and allowed minority opinions to be heard and considered.
This conference was a critical first step in the development of national guidelines and a standardized clinical assessment tool in EM. The education and discussion that the conference provided elevated the level of conversation around assessment

in our specialty. The creation of a reliable and valid assessment tool will provide a critical method for measuring outcomes in educational innovations and research in the future. Please see the Appendix for the CDEM Consensus Conference on End-of-shift Assessment of Medical Students: Executive Summary.

Address for Correspondence: Katherine M. Hiller, MD, MPH, University of Arizona, Department of Emergency Medicine, 1501 N Campbell Ave, Tucson, AZ 85724. Email: khiller@aemrc.arizona.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Hiller et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Hiller K, Lawson L, Franzen D, et al. End-of-shift Emergency Medicine Medical Student Clinical Evaluation Forms: A Taxonomy. Acad Emerg Med. 2015;22(S1):S196-7.
2. Lawson L, Jung J, Franzen D, et al. Clinical Assessment of Medical Students in Emergency Medicine Clerkships: A Survey of Current Practice. J Emerg Med. Sep 7 2016.
3. Santen SA, Peterson WJ, Khandelwal S, et al. Medical student milestones in emergency medicine. Acad Emerg Med. Aug 2014;21(8):905-11.
4. AAMC. Core Entrustable Professional Activities for Entering Residency--Curriculum Developers Guide. Available at: https://members.aamc.org/eweb/upload/core%20EPA%20Curriculum%20Dev%20Guide.pdf. Accessed Feb 11, 2016.
5. Tews MC, Ditz Wyte CM, Coltman M, et al. Implementing a Third-Year Emergency Medicine Medical Student Curriculum. J Emerg Med. Mar 27 2015.
6. Manthey DE, Ander DS, Gordon DC, et al. Emergency medicine clerkship curriculum: an update and revision. Acad Emerg Med. 2010;17(6):638-43.
7. Englander R, Cameron T, Ballard AJ, et al. Toward a common taxonomy of competency domains for the health professions and competencies for physicians. Acad Med. Aug 2013;88(8):1088-94.
8. CORD. Standardized Letter of Evaluation in Emergency Medicine. 2016. Available at: http://www.cordem.org/files/DOCUMENTLIBRARY/SLOR/SLOE%20Standard%20Letter%20of%20Evaluation%202015.pdf. Accessed May 25, 2016.
9. Beeson MS, Carter WA, Christopher TA, et al. Emergency medicine milestones. J Grad Med Educ. Mar 2013;5(1 Suppl 1):5-13.



Brief Educational Advances

Incorporation of a Graduate Student Writer into a Productive Research Team

Jonathan P. Fischer, MPH*‡; Joseph B. House, MD†; Laura R. Hopson, MD†; Marcia A. Perry, MD†; Nikhil Theyyuni, MD†; Margaret S. Wolff, MD†; Cemal B. Sozener, MD†; Sally A. Santen, MD, PhD†§

*Alumnus of University of Michigan, School of Public Health, Ann Arbor, Michigan † University of Michigan, Department of Emergency Medicine, Ann Arbor, Michigan ‡ University of Chicago Pritzker School of Medicine, Chicago, Illinois § University of Michigan, Department of Learning Health Sciences, Ann Arbor, Michigan

Section Editor: David P. Way, MEd Submission history: Submitted June 14, 2016; Revision received July 25, 2016; Accepted September 15, 2016 Electronically published November 8, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.9.31253

[West J Emerg Med. 2017;18(1)84-85.]

BACKGROUND
An academic physician faces the unique challenge of balancing clinical practice with demanding education and research obligations. These competing tasks often leave little time dedicated to research, which can result in incomplete and unpublished projects. One study found that abstracts presented at emergency medicine conferences were subsequently published only 23%-47% of the time.1 Another study found a similar rate of only 33%.2 A lack of time is cited as the primary reason physicians do not prepare more papers for publication.3 Although some physicians overcome this hurdle through professional writing companies, this practice is discouraged in academia.4 Given the importance of publications to faculty for promotion and to the community of educators for advancing their practice, we sought to create a more productive research model that reduces the time burden of manuscript preparation for busy teams of physicians.5,6

OBJECTIVES
The objective of this innovation is to describe a novel approach to scholarly productivity: intentionally incorporating a graduate student research assistant (GSRA) into a research team to help with manuscript preparation.

INNOVATION DESIGN
The Medical Education Research Group (MERG) at the University of Michigan, the structure of which has been described elsewhere, created the position of a GSRA as part of its research team in 2013.5 Student research assistants commonly engage in data collection in the emergency department, but they much less frequently assist in publication. The idea behind the


GSRA was that time constraints made it difficult for many members of MERG to complete the process of moving abstracts to manuscript preparation. Graduate students, on the other hand, are frequently required during their coursework to gather resources about a topic they have not previously been exposed to, synthesize the information, and produce a term paper. Thus, MERG leadership postulated that successful GSRAs could apply the same skills they use for their courses to assist physicians in bringing projects to completion. A master’s student from the University of Michigan’s School of Public Health was hired as a GSRA, and was paid $15 per hour for approximately 10 hours per week. Half of this funding came from the federal Work-Study Program and half of it through departmental funding. The GSRA was supervised by the leader of MERG, who spent about two hours per month on this task. Steps of integrating the GSRA were as follows: First, the GSRA provided input into data analysis, interpretation, and determining the scope of the project. Second, the GSRA conducted a literature review and began working on writing the introduction with the first author. Meanwhile, another member of the team produced a draft of the methods and results sections. Third, the research team as a whole discussed ideas for how to frame the paper, which relevant background topics to include, and what conclusions should be drawn. Fourth, the GSRA organized these ideas, fit them into the framework of the existing literature on the topic, and completed a draft of the introduction, discussion, and conclusion sections for a paper. Finally, these sections were disseminated to the entire team, who actively revised them. All of the GSRA’s projects involved preexisting data or data that were being collected. Because of this, the GSRA had a rapid turnaround of projects and completed about one per month. Since



the GSRA made significant contributions during each of these steps, from clarification of the research question to background data interpretation to initial writing and final revision, the GSRA met authorship requirements for each project.7

IMPACT & EFFECTIVENESS
The addition of a GSRA was associated with more rapid project completion and paper submission. During the first academic year, seven papers were completed with the GSRA. Five of them have since been published, one was published as an abstract and is being edited for resubmission, and one is unpublished and is being edited for resubmission.5,8-12 The GSRA also provided minor assistance on an additional project that has not been published. Therefore, at a cost of less than $2,000, the GSRA helped publish five papers. Precise measures of increased productivity remain unknown; however, MERG members reported reduced time to paper submission.
A key to the success of MERG has been maintaining a high degree of structure with well-defined roles, regular meetings, and committed leadership. We found that a GSRA was a cost-effective and productive addition to our team. A GSRA may not be appropriate for every research group, however. This model may not increase productivity when limiting factors beyond time constraints exist, such as less robust data, a lack of research mentorship, or limited faculty motivation. In addition, many groups may find it difficult to recruit graduate students with interests in both writing and medical education. Finally, our results are not generalizable beyond structured research teams, as it remains unknown what effect a GSRA could have when working with individual physicians. In teams such as MERG, however, our experience shows that a GSRA can provide valuable writing assistance and lead to more efficient research output.

Address for Correspondence: Sally Santen, MD, PhD, University of Michigan, Department of Emergency Medicine, 1500 E. Medical Center Drive, Ann Arbor, MI 48109-5305. Email: ssanten@umich.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. Dr. Santen is the co-Principal Investigator on the Accelerating Change in Medical Education Initiative with the American Medical Association.

Copyright: © 2016 Fischer et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Korn CS HS. Publication of Research abstracts presented at four emergency medicine research forums. Acad Emerg Med. 2000;7(5):425-595.
2. Chan JW and Graham CA. Full text publication rates of studies presented at an international emergency medicine scientific meeting. Emerg Med J. 2011;28(9):802-3.
3. Defining the Role of Authors and Contributors. International Committee of Medical Journal Editors website. Available at: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html. Published 2016. Accessed July 22, 2016.
4. House JB, Theyyunni N, Barnosky AR, et al. Understanding ethical dilemmas in the emergency department: views from medical students' essays. J Emerg Med. 2015;48(4):492-8.
5. Sozener CB, Lypson ML, House JB, et al. Reporting Achievement of Medical Student Milestones to Residency Program Directors: An Educational Handover. Acad Med. 2016;91(5):676-84.
6. Hauff SR, Hopson LR, Losman E, et al. Programmatic Assessment of Level 1 Milestones in Incoming Interns. Acad Emerg Med. 2014;21(6):694-8.
7. House J, Santen SA, Carney M, et al. Implementation of an Education Value Unit (EVU) System to Recognize Faculty Contributions. West J Emerg Med. 2015;16(6):952-6.
8. Wei E, Bassin B, Santen S, et al. 145 Patient Safety Culture in the Emergency Department. Ann Emerg Med. 2015;66(4):S51-2.



Educational Advances

Transition of Care Practices from Emergency Department to Inpatient: Survey Data and Development of Algorithm

Sangil Lee, MD, MS*; Jaime Jordan, MD†; H. Gene Hern, MD, MS‡; Chad Kessler, MD, MHPE§; Susan Promes, MD¶; Sarah Krzyzaniak, MD||; Fiona Gallahue, MD#; Ted Stettner, MD**; Jeffrey Druck, MD††

*The University of Iowa Carver College of Medicine, Department of Emergency Medicine, Iowa City, Iowa † Harbor-UCLA Medical Center, Department of Emergency Medicine, Torrance, California ‡ Alameda Health System, Highland Hospital, Department of Emergency Medicine, Oakland, California § Duke University, Department of Emergency Medicine and Internal Medicine, Durham, North Carolina ¶ Pennsylvania State University, Department of Emergency Medicine, State College, Pennsylvania || University of Illinois at Peoria, Department of Emergency Medicine, Peoria, Illinois # University of Washington, Department of Emergency Medicine, Seattle, Washington **Emory University, Department of Emergency Medicine, Atlanta, Georgia †† University of Colorado, Department of Emergency Medicine, Aurora, Colorado

Section Editor: Jeffrey Love, MD Submission history: Submitted May 23, 2016; Revision received September 30, 2016; Accepted September 30, 2016 Electronically published November 8, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.9.31004

Introduction: We aimed to assess the current scope of handoff education and practice among resident physicians in academic centers and to propose a standardized handoff algorithm for the transition of care from the emergency department (ED) to an inpatient setting.

Methods: This was a cross-sectional survey targeted at the program directors, associate or assistant program directors, and faculty members of emergency medicine (EM) residency programs in the United States (U.S.). The web-based survey was distributed to potential subjects through a listserv. A panel of experts used a modified Delphi approach to develop a standardized algorithm for ED to inpatient handoff.

Results: 121 of 172 programs responded to the survey, for an overall response rate of 70.3%. Our survey showed that most EM programs in the U.S. have some form of handoff training, and the majority of it occurs either during orientation or in the clinical setting. The handoff structure from ED to inpatient is not well standardized, and in those places with a formalized handoff system, over 70% of residents do not uniformly follow it. Approximately half of responding programs felt that their current handoff system was safe and effective. About half of the programs did not formally assess the handoff proficiency of trainees. Handoffs most commonly take place over the phone, though respondents disagree about the ideal place for a handoff to occur, with nearly equivalent responses among programs favoring the bedside, the phone, or face-to-face at a computer. Approximately two-thirds of responding programs reported that their residents were competent in performing ED to inpatient handoffs. Based on this survey and on a review of the literature, we developed a five-step algorithm for the transition of care from the ED to the inpatient setting.

Conclusion: Our results identified the current trends of education and practice in transitions of care from the ED to the inpatient setting in U.S. academic medical centers. An algorithm to guide this process is proposed to address the gap in the standardized approach to ED to inpatient handoffs that was identified in the survey's needs assessment. [West J Emerg Med. 2017;18(1)86-92.]



INTRODUCTION
The handoff has been defined as "the exchange between health professionals of information about a patient accompanying either a transfer of control over or of responsibility for the patient."1 Patient handoffs have been found to be responsible for medical errors and harmful to the patient, and the Institute of Medicine's report, "To Err is Human," highlighted handoffs as a potential area of improvement.2 The Joint Commission and the Accreditation Council for Graduate Medical Education (ACGME) recommended that sponsoring programs ensure and monitor an effective and structured handoff process.3-5 Several studies have reported the current practice of transition of care within the emergency department (ED), including previous studies by the Council of Residency Directors (CORD).6-8 The CORD survey showed that over half of the respondents from academic EDs indicated that their EDs use a standardized handoff.9 However, it is not known how emergency medicine (EM) residency programs are providing training around care transitions from the ED to inpatient settings. The authors aimed to assess the current scope of handoff education and practices among resident physicians and to propose a standardized handoff algorithm to improve the transition of care from the ED to the inpatient setting.

METHODS
Survey Content
The authors conducted a cross-sectional survey targeted at EM residency programs in the United States. The survey was developed to address the initial two steps of the Kern model for medical curriculum development: 1) problem identification and general needs assessment; and 2) needs assessment for targeted learners.10 Content experts created a web-based survey to assess the current handoff practice from the ED to inpatient providers (Appendix 1. Survey questions).

Survey Administration
We piloted surveys among the CORD Transition of Care (TOC) task force members and revised them before final administration. The survey was designed using the SurveyMonkey® platform (SurveyMonkey Inc., Palo Alto, California, USA; www.surveymonkey.com) and distributed to all members through the CORD listserv. The validity of using the CORD listserv as a sample population has been described elsewhere.7,11,12 The responses were collected, duplicate responses were removed, and the data were compiled for analysis.

Transition of Care Algorithm
Given the identified needs and opportunities in the transition of care, the authors performed a review of the literature (Appendix 2. Search strategy). We used a modified Delphi technique to develop an algorithmic approach to conducting efficient handoffs from the ED to the inpatient setting, which served as a primer for the following two steps of the six-step Kern model: 3) goals and objectives, and 4) educational strategies.10,13 The algorithm was initially derived from the CORD TOC EM to EM handoff by Kessler et al.7 and implemented based on the literature review.5,7,8,14-29 The algorithm was modified and approved by seven experts.

Statistical Analysis
Sample size calculations demonstrated that, of the 172 programs surveyed as the true target population, 121 responses would give a 95% confidence interval with a 5% margin of error. We reported data using descriptive statistics and analyzed them with a two-sample test of proportion or Fisher's exact test, as appropriate. We completed the statistical analysis with JMP®, Version 10.0 (SAS Institute Inc., Cary, NC), and we reported p-values. This study was declared exempt by the Alameda Health System.
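As an illustrative cross-check of the sample size statement above (not the authors' calculation), the short sketch below computes the worst-case margin of error for 121 respondents drawn from a finite target population of 172 programs, using a standard finite population correction.

```python
# Illustrative sketch: worst-case (p = 0.5) margin of error at 95% confidence
# for n responses from a finite population of N programs.
import math

def margin_of_error(n: int, population: int, p: float = 0.5, z: float = 1.96) -> float:
    se = math.sqrt(p * (1 - p) / n)                       # simple random sampling SE
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc

print(round(margin_of_error(121, 172), 3))  # ~0.049, i.e., about a 5% margin of error
```

The result of roughly 4.9% is consistent with the reported 5% margin of error at 95% confidence.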

Table 1. Transition of care curriculum, handoff structure, and safety perception in emergency medicine training programs. Values are response rate/total (%).*
Transition of care curriculum
  Attendings or senior residents provide handoff instruction in the clinical environment: 90/121 (74.4)
  Handoff training offered during the initial orientation: 87/121 (71.9)
  Structured workshop/classes to teach proper handoff procedure: 27/121 (22.3)
  Educational packets or guides for handoff: 14/121 (11.6)
  Other methods (simulation, policy and online instructions): 7/121 (5.8)
Handoff structure
  Structured handoff for ED to inpatient providers in place: 45/119 (37.2)
  How often do residents use a structured handoff?
    Always: 9/45 (20)
    Usually: 13/45 (29)
    Sometimes: 19/45 (42)
    Rarely: 3/45 (6.7)
Safety perception
  Current handoff process is:
    Extremely safe and effective: 2/121 (1.7)
    Safe and effective: 57/121 (47.1)
    Somewhat safe and effective: 56/121 (42.3)
    Not safe or effective: 6/121 (5.0)
*Multiple choices were allowed.



Table 2. Do you formally assess the handoff proficiency of your residents? If yes, how? Values are response rate/total (%).
No, I do not formally assess the handoff of the residents: 59/121 (48.8)
Yes, assessment is done through scheduled one-on-one discussion with each resident: 7/121 (5.8)
Yes, assessment is done through regular written feedback/evaluation from EM personnel: 31/121 (25.6)
Yes, I ask the senior EM residents to assess the handoff proficiency of the junior residents: 15/121 (12.4)
Yes, residents/faculty from other services provide informal feedback on the quality of admission handoffs: 26/121 (21.5)
Yes, residents/faculty from other services provide regular formalized feedback on the quality of admission handoffs: 3/121 (2.5)
Other methods: 16/121 (13.2)

RESULTS
Response Rate
A survey response was obtained from 121 of 172 programs, for an overall response rate of 70.3%.

Transition of Care Curriculum, Handoff Structure and Safety Perception
Most programs offer handoff training to their resident physicians (Table 1). The type of training varied, with the most common form being instruction in the clinical setting, followed by handoff training during orientation, structured workshops/classes, educational packets or guides, and other methods. Less than half of the programs responded that they have a structured formal handoff process, and compliance among residents was variable. About half of responding programs reported that their current handoff system was safe and effective (Table 1).

Handoff Assessment
Nearly half of responding programs stated that they do not formally assess handoff proficiency in resident physicians (Table 2). Table 2 also shows the types of formal assessment methods of handoff proficiency used for trainees.

Current Mode of Handoff and Recommended Handoff
Eighty-nine programs responded to the question of which mode of handoff process was used, and a handoff via phone was most common (Figure). Of the 116 programs that responded to where the formal handoff should occur, answers were variable (Figure).

Handoff Competency Assessment
Lastly, two-thirds of programs responded that their residents were extremely competent or competent in giving ED to inpatient handoffs (extremely competent, 8/121, 6.6%; competent, 71/121, 58.7%; somewhat competent, 41/121, 33.9%; incompetent, 1/121, 0.8%). There was a statistically significant association between achieving competency and instruction offered by an attending or senior resident in the clinical setting (p=0.006), but not with handoff training during initial orientation (p=0.23), a structured workshop (p=0.12), or an educational packet (p=0.5).

Handoff Algorithm
Given the identified need for handoff education and the existing literature, the authors developed a handoff algorithm, "PREP-4Cs."8,15-18,30 The handoff algorithm consists of five steps (Table 3).

DISCUSSION
Statement of Principal Findings
Our survey showed that most EM programs in the U.S. have some form of handoff training, with the majority of this training


Figure. The ideal handoff location and the reality.

occurring in the clinical setting. However, the handoff structure from ED to inpatient is neither well standardized nor consistently followed. Only half of responding programs felt that their current handoff system was safe and effective, and about half of the programs did not have a formal assessment process. Handoffs most commonly take place over the phone, though respondents disagreed about the ideal place for a handoff to occur.

Interpretation of Results Compared to Other Studies
The majority of EM programs in the U.S. now have some form of handoff training, which is in compliance with the ACGME common program requirement.3 The overall rate of handoff education has increased from 13% in 2013 to 94%, reflecting the successful dissemination of handoff education.7 Hern et al. surveyed EM providers and concluded that mandatory handoff training remains insufficient, with varying results.9 Our study supports this finding and implies a further need for effective handoff education.
This study demonstrates that handoff practice from the ED to inpatient settings is not standardized, and even in places where a formal system exists, the compliance rate is not high. This is consistent with the existing literature, which showed that less than half of EM programs had a standardized handoff practice in 2013.12 A standardized handoff practice has been


introduced for several inter-unit handoff processes, namely using mnemonics and checklists.14,31 A recent study showed that the use of communication training, mnemonics, and handoff structures decreased medical error in the pediatric inpatient setting.32,33 This implies that the introduction of standard mnemonics can be a starting point, yet programs may have to expand their curricula into a handoff bundle tailored to ED to inpatient transitions of care.
Only approximately half of the responding programs felt that their current handoff system was safe and effective, and about half of programs reported using a formal evaluation process for trainee proficiency. The existing literature identified a knowledge gap and the potential benefit of evaluation tools.34,35 It is prudent to develop validated evaluation tools to accurately assess the effectiveness and safety of handoffs.
About half of the respondents reported that the handoff occurred over the phone, yet there was no consensus on what mode of handoff would be ideal (Figure). A previous survey study demonstrated that ED to ED end-of-shift handoffs should ideally occur at the patient bedside, although many found that the handoffs actually occurred at the computer station.7 The most effective and safest practice model needs to be elucidated.
Lastly, while approximately two-thirds of programs reported that their residents were competent, this still leaves room for improvement in either training or assessment. Our


Table 3. EM-IM transition of care algorithm "PREP-4Cs."
Step 1. Preparation
  Immediate access to patient information, assessment, access to images, labs, and medical record
  Time commitment (2-5 min)
  Space with minimal interruption
Step 2. Contact
  Sender and receiver identify themselves, including name and service
  "Face to face or voice to voice" to share real-time information
Step 3. Communicate patient information
  Structured sign-out format for each institution
  Recommended as feasible mnemonics (alphabetical order) for EM-IM transition, cited from Riesenberg table14:
    1. HANDOFFS (Hospital location, Allergies, Name, DNR, Ongoing problem, Fact about hospitalization, Follow up, Scenarios)
    2. I PASS (Introduction, Patient name, Assessment, Situation, Safety concerns)
    3. SBAR (Situation, Background, Assessment, Recommendation)
    4. SBARR (Situation, Background, Assessment, Recommendation, Read back)
    5. SHARQ (Situation, History, Assessment, Recommendation, Questions)
    6. SIGNOUT (Sick, Identifying data, General hospital course, New events, Overall health status, Upcoming possibilities, Tasks)
    7. SOAP (Subjective, Objective, Assessment, Plan)
  Identification of the high-risk patient; if high risk, explain the following:
    a) Why they are high risk
    b) How they may decompensate
    c) Planning for continued care
    d) Frequency of reassessment
    e) Code status or POLST
Step 4. Closing the loop
  Invitation for asking questions
  Discuss pending tests and treatment, and delegate clear delineation of responsibility for follow-ups
  Receiver verification of information
Step 5. Conclusion
  Documentation of the transition of care
  Documentation of plan
  Open invitation for re-contact and discussion if a future need arises
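For programs that wish to build the five steps into an electronic sign-out aid, one possible encoding is sketched below. This is purely illustrative and not part of the published algorithm: the dictionary paraphrases Table 3, and the function name and template format are hypothetical.

```python
# Illustrative sketch: the PREP-4Cs steps (condensed from Table 3) encoded as a
# checklist that can be rendered into a fill-in sign-out template.
PREP_4CS = {
    "Preparation": ["Patient information, images, labs, and record at hand",
                    "2-5 minute time commitment",
                    "Space with minimal interruption"],
    "Contact": ["Sender and receiver identify name and service",
                "Face to face or voice to voice"],
    "Communicate patient information": ["Institutional structured sign-out format (e.g., a mnemonic such as SBAR)",
                                        "If high risk: why, how the patient may decompensate, plan, reassessment frequency, code status"],
    "Closing the loop": ["Invite questions",
                         "Delegate pending tests, treatments, and follow-ups",
                         "Receiver verifies the information"],
    "Conclusion": ["Document the transition of care and plan",
                   "Open invitation for re-contact if a future need arises"],
}

def signout_template() -> str:
    """Render the checklist as plain-text checkboxes for a sign-out note."""
    lines = []
    for step, items in PREP_4CS.items():
        lines.append(f"{step}:")
        lines.extend(f"  [ ] {item}" for item in items)
    return "\n".join(lines)

print(signout_template())
```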

analysis showed that only the presence of handoff training in the clinical setting was associated with competency. Currently, there is no universally accepted competency assessment.36 As the program requirements include the milestones for resident education, the level of competency needs to be accurately evaluated.

Proposed Handoff Algorithm
The proposed algorithm "PREP-4Cs" (Table 3) is meant to provide some standardization while still allowing flexibility, so that institutions and programs can ensure their unique needs are met. Some institutions may already use a handoff mnemonic or template that can be incorporated into this algorithm. Prospective validation of this algorithm is required.


LIMITATIONS
The study has several limitations. First, the responses were based on each responder's perception of the transition of care. Second, construct underrepresentation and construct-irrelevant variance could have affected the validity of the survey questions.37 Third, rater and recall bias need to be considered in the results, as responses were anonymous.

CONCLUSION
This study identified current trends in transitions of care from the ED to inpatient settings among academic medical centers in the U.S. and developed an algorithm to provide a foundation and springboard for educational strategies.




Address for Correspondence: Sangil Lee, MD, MS, The University of Iowa Carver College of Medicine, Department of Emergency Medicine, 200 Hawkins Drive, Iowa City, IA 52242. Email: sangillee@uiowa.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Lee et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Cohen MD and Hilligoss PB. The published literature on handoffs in hospitals: deficiencies identified in an extensive review. Qual Saf Health Care. 2010;19(6):493-7.
2. Kohn LT, Corrigan JM, and Donaldson MS (Institute of Medicine). To err is human: building a safer health system. Washington, DC: National Academy Press. 2000.
3. Accreditation Council for Graduate Medical Education. Common Program Requirements, Section VI.B.2. Available at: http://www.acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed Nov 9, 2015.
4. Joint Commission. 2006 Critical Access Hospital and Hospital National Patient Safety Goals #2E, 2006. Available at: http://www.neodevices.com/resources/CR_NationalPatientSafetyGoals.pdf. Accessed Nov 9, 2015.
5. Riesenberg LA. Shift-to-Shift Handoff Research: Where Do We Go From Here? J Grad Med Educ. 2012;4(1):4-8.
6. Ye K, Mc DTD, Knott JC, et al. Handover in the emergency department: deficiencies and adverse effects. Emerg Med Australas. 2007;19(5):433-41.
7. Kessler C, Shakeel F, Hern HG, et al. An algorithm for transition of care in the emergency department. Acad Emerg Med. 2013;20(6):605-10.
8. Riesenberg LA, Leitzsch J, Little BW. Systematic review of handoff mnemonics literature. Am J Med Qual. 2009;24(3):196-204.
9. Hern HG Jr, Gallahue FE, Burns BD, et al. Handoff Practices in Emergency Medicine: Are We Making Progress? Acad Emerg Med. 2016;23(2):197-201.
10. Kern D. Curriculum development for medical education: a six step approach. Baltimore, MD: Johns Hopkins University Press. 1998.
11. Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37(9):830-7.
12. Kessler C, Shakeel F, Hern HG, et al. A survey of handoff practices in emergency medicine. Am J Med Qual. 2014;29(5):408-14.
13. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008-15.
14. Kessler CS, Kalapurayil PS, Yudkowsky R, et al. Validity evidence for a new checklist evaluating consultations, the 5Cs model. Acad Med. 2012;87(10):1408-12.
15. Kessler CS, Afshar Y, Sardar G, et al. A prospective, randomized, controlled study demonstrating a novel, effective model of transfer of care between physicians: the 5 Cs of consultation. Acad Emerg Med. 2012;19(8):968-74.
16. Brannen ML, Cameron KA, Adler M, et al. Admission handoff communications: clinician's shared understanding of patient severity of illness and problems. J Patient Saf. 2009;5(4):237-42.
17. Apker J, Mallak LA, Gibson SC. Communicating in the "gray zone": perceptions about emergency physician hospitalist handoffs and patient safety. Acad Emerg Med. 2007;14(10):884-94.
18. Collins SA, Stein DM, Vawdrey DK, et al. Content overlap in nurse and physician handoff artifacts and the potential role of electronic health records: a systematic review. J Biomed Inform. 2011;44(4):704-12.
19. Maughan BC, Lei L, Cydulka RK. ED handoffs: observed practices and communication errors. Am J Emerg Med. 2011;29(5):502-11.
20. Apker J, Mallak LA, Applegate EB 3rd, et al. Exploring emergency physician-hospitalist handoff interactions: development of the Handoff Communication Assessment. Ann Emerg Med. 2010;55(2):161-70.
21. Patterson ES, Roth EM, Woods DD, et al. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125-32.
22. Cheung DS, Kelly JJ, Beach C, et al. Improving handoffs in the emergency department. Ann Emerg Med. 2010;55(2):171-80.
23. Telem DA, Buch KE, Ellis S, et al. Integration of a formalized handoff system into the surgical curriculum: resident perspectives and early results. Arch Surg. 2011;146(1):89-93.
24. Arora VM, Manjarrez E, Dressler DD, et al. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433-40.
25. Sinha M, Shriki J, Salness R, et al. Need for standardized sign-out in the emergency department: a survey of emergency medicine residency and pediatric emergency medicine fellowship program directors. Acad Emerg Med. 2007;14(2):192-6.
26. Riesenberg LA, Leitzsch J, Massucci JL, et al. Residents' and attending physicians' handoffs: a systematic review of the literature. Acad Med. 2009;84(12):1775-87.
27. Wayne JD, Tyagi R, Reinhardt G, et al. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65(6):476-85.
28. Wohlauer MV, Arora VM, Horwitz LI, et al. The patient handoff: a comprehensive curricular blueprint for resident education to improve continuity of care. Acad Med. 2012;87(4):411-8.
29. Hinami K, Farnan JM, Meltzer DO, et al. Understanding communication during hospitalist service changes: a mixed methods study. J Hosp Med. 2009;4(9):535-40.
30. Kessler C, Scott NL, Siedsma M, et al. Interunit handoffs of patients and transfers of information: a survey of current practices. Ann Emerg Med. 2014;64(4):343-9.e345.
31. Boat AC and Spaeth JP. Handoff checklists improve the reliability of patient handoffs in the operating room and postanesthesia care unit. Paediatr Anaesth. 2013;23(7):647-54.
32. Starmer AJ, Sectish TC, Simon DW, et al. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle. JAMA. 2013;310(21):2262-70.
33. Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-12.
34. Tapia NM, Fallon SC, Brandt ML, et al. Assessment and standardization of resident handoff practices: PACT project. J Surg Res. 2013;184(1):71-7.
35. Smith CJ, Britigan DH, Lyden E, et al. Interunit handoffs from emergency department to inpatient care: A cross-sectional survey of physicians at a university medical center. J Hosp Med. 2015.
36. Doty CI, Roppolo LP, Asher S, et al. How Do Emergency Medicine Residency Programs Structure Their Clinical Competency Committees? A Survey. Acad Emerg Med. 2015.
37. Downing SM. Threats to the validity of locally developed multiple-choice tests in medical education: construct-irrelevant variance and construct underrepresentation. Adv Health Sci Educ Theory Pract. 2002;7(3):235-41.


Educational Advances

Student-Advising Recommendations from the Council of Residency Directors Student Advising Task Force

Emily Hillman, MD*; Lucienne Lutfy-Clayton, MD†; Sameer Desai, MD‡; Adam Kellogg, MD†; Xiao Chi Zhang, MD, MS§; Kevin Hu, MD¶; Jamie Hess, MD||

*Truman Medical Center, University of Missouri-Kansas City School of Medicine, Department of Emergency Medicine, Kansas City, Missouri
†Baystate Medical Center, University of Massachusetts Medical School-Baystate Health, Department of Emergency Medicine, Worcester, Massachusetts
‡University of Kentucky, Department of Emergency Medicine, Lexington, Kentucky
§Alpert Medical School of Brown University, Department of Emergency Medicine, Providence, Rhode Island
¶Icahn School of Medicine, Department of Emergency Medicine, New York, New York
||University of Wisconsin School of Medicine and Public Health, Department of Emergency Medicine, Madison, Wisconsin

Section Editor: David P. Way, MEd Submission history: Submitted June 15, 2016; Revision received August 23, 2016; Accepted October 27, 2016 Electronically published November 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31296

Residency training in emergency medicine (EM) is highly sought after by U.S. allopathic medical school seniors; recently there has been a marked increase in the number of applications per student, raising costs for students and programs. Disseminating accurate advising information to applicants and programs could reduce excessive applying. Advising students applying to EM is a critical role for educators, clerkship directors, and program leaders (residency program director, associate and assistant program directors). A variety of advising resources is available through social media and individual organizations; however, currently there are no consensus recommendations that bridge these resources. The Council of Residency Directors (CORD) Student Advising Task Force (SATF) was initiated in 2013 to improve medical student advising. The SATF developed best-practice consensus recommendations and resources for student advising. Four documents (Medical Student Planner, EM Applicant’s Frequently Asked Questions, EM Applying Guide, and EM Medical Student Advisor Resource List) were developed and are intended to support prospective applicants and their advisors. The recommendations are designed for the mid-range EM applicant and will need to be tailored to students’ individual needs. [West J Emerg Med. 2017;18(1)93-96.]

BACKGROUND
Students considering applying to emergency medicine (EM) frequently look to medical school educators, EM faculty and residents, clerkship directors and program leaders (residency program director, associate and assistant program directors) for advising and mentorship. Advisors can help prepare students for a successful future by discussing topics such as individually-based career options, potential clinical experiences, and the application process.1,2 Effective advising is an acquired skill that necessitates careful consideration to help foster the student's personal, professional, and educational growth while offering individualized guidance, with direct and honest answers to address the student's anxieties and fears.3,4

One important factor in effective advising is knowledge of the issues specific to each student applying to an EM residency.4 Students considering a career in EM may lack access to faculty who can provide accurate advising information. Although advising is considered to be critically important, many prospective applicants seek an advisor late in their training or do not have an advisor before the application process.5,6 Students without access to local mentors may seek out "distance mentoring;" however, this requires that students first be aware of potential mentoring resources.7,8 The literature on mentoring in EM is scarce.9 While there is limited literature correlating measurable benefits of undergraduate mentoring, a recent study published in the Western Journal of Emergency Medicine found a positive relationship between match outcomes and perceived mentor effectiveness.6,10

In 2016, EM ranked as the third most commonly matched specialty for United States (U.S.) allopathic medical school seniors (U.S. seniors), with fewer match positions relative to internal medicine and pediatrics, the top two matched specialties.11 While the percentage of EM postgraduate year (PGY)-1 positions filled per U.S. senior has remained stable over the last five years,11 U.S. seniors are applying to more programs.12,13 According to the National Resident Matching Program (NRMP) applicant survey, EM-matched U.S. seniors applied to an average of 26 programs in 2011 and 39 in 2015.12,13 During the same period, U.S. seniors who did not match applied to nearly twice as many programs (32 to 60), but received half as many interview offers (15 and 7).12,13 Though the overall competitiveness of EM has remained stable, increased applications have resulted in a heightened sense of EM competitiveness.

In 2013, during a Council of Residency Directors (CORD) Academic Assembly meeting, the Student Advising Task Force (SATF) was established to improve student advising. Variation in the quality and availability of student advising, as well as the increasing number of applications, led the task force to develop consensus documents to guide prospective EM applicants and their advisors. The task force formed as a joint venture with members of CORD, Clerkship Directors of EM (CDEM), the American Academy of Emergency Medicine (AAEM), and the Emergency Medicine Residents Association (EMRA). SATF members include faculty and residents representing programs throughout the country.

OBJECTIVES
The goals of the SATF in creating and disseminating consensus recommendations and advising documents are two-fold: 1) to provide advising resources and advice for students considering applying to EM; and 2) to equip faculty in advising roles with the knowledge and resources to provide high-quality advising to students.

CURRICULAR DESIGN
To identify best-practice advising information, SATF working groups were established; members self-selected to participate in working groups based on interest and expertise. Collation of available literature, existing advising resources, and member opinion and experience guided the development of the consensus recommendations and documents. Group leaders worked between groups to ensure consistency between documents and to distribute materials to the task force as a whole for comments, revisions, and approval. The resources developed include the following:

1. Medical Student Planner – a chronological planner for each semester and year of medical school with recommendations for what to prioritize to maximize a student's potential.
2. EM Applicant's Frequently Asked Questions (EM-FAQ) – a brief question-and-answer guide that addresses the most commonly asked questions from applicants.
3. EM Applying Guide – a comprehensive document that provides in-depth answers on a broad range of topics, including planning visiting rotations, obtaining letters of recommendation, preparing an ERAS application, and navigating the interview and ranking process.
4. EM Medical Student Advisor Resource List – a comprehensive list of available high-quality advising resources, including embedded links.

The CORD-SATF developed the following recommendations as best practice for student advising. These recommendations can be found within the aforementioned resources and were approved by the CORD Board of Directors, as well as by AAEM and CDEM. The recommendations are intended to serve as a general guide, as each student needs an individualized approach.

1. Pre-Clinical Years: Students with an early interest in EM should be encouraged to consider how early academic achievement, volunteer activities, and career exposure can positively impact their ability to match in EM. Students should aim to be in the top half of their class in basic science courses. Consistent longitudinal volunteer experiences are valued. Research is not required for EM applicants to match but is considered a strength. Joining an EM interest group (EMIG) can help solidify the student's career choice and open the opportunity for mentorship and research opportunities. For students at institutions without an interest group or EM faculty advisors, it would be especially beneficial to consider joining EMRA, which can provide resident mentorship opportunities and advising resources.

2. Emergency Medicine Rotations: Doing two rotations in EM at institutions with training programs is recommended to allow for a variety of experiences, development of EM skills, and multiple perspectives on performance. A third rotation may be appropriate for some students depending on prior academic performance and application goals. Optimal timing is during the summer and fall months of a student's fourth year. Most students at an institution with an academic EM program will do one rotation at their home school and a visiting rotation at another program. Students should consider participating in rotations that expose them to different practice varieties, locations, and program designs. Students who excel in their rotations come prepared to work hard, are enthusiastic, develop EM presentation skills, create full management plans, and read to expand their medical knowledge.

3. The Role of the Standardized Letter of Evaluation (SLOE): Letters of evaluation from within the specialty of EM are highly important factors in selecting applicants to interview, rivaling the importance of United States Medical Licensing Examination (USMLE) performance.14 Obtaining two SLOEs is recommended, preferably one from each EM rotation at a training program. These letters, often written by the education team, can provide a meaningful comparison group and are considered less biased than other letters.

4. USMLE Step 1 and Step 2 Clinical Knowledge (CK) Performance: Each program will weigh test scores differently in its applicant review process. With a USMLE Step 1 or Step 2 CK score > 230, many programs will grant interviews.14 Students with a Step 1 score < 220 should be encouraged to take Step 2 CK early, to allow for results to be included in their initial application review.

5. Electronic Residency Application Service (ERAS) Application: It is recommended that students submit their application as early as possible, ideally on September 15 when ERAS opens. The personal statement is an area for students to set themselves apart and explain their interest, dedication and aptitude for EM. It is also an opportunity to address any discrepancies, delays, or perceived deficiencies in the student's training and application. There is no standard number of applications that will guarantee matching in EM. Applicants should apply to a variety of programs across a spectrum of perceived competitiveness. The typical range for an applicant is 20-30 programs; applying to more than 40 programs is rarely warranted and often leads to diminishing returns. The number of applications is particularly individualized and is best discussed directly with an EM advisor.

6. Interviews: Interviewing at 10-12 programs correlates with a very high match rate.14 Students who are couples matching may need more interviews to reach a similar match rate. Independent applicants and those with "red flags" have a lower match rate overall but can still successfully match with fewer interviews. Once students decide they will not attend an interview, they should cancel immediately; a minimum of two weeks' notice is recommended so the program can fill the interview spot.

7. Rank List: Students should rank programs based on their order of preference, not based on where they think they will appear on a program's rank list. Other important factors to consider are location, program type, and the student's personal experience.

Developing consensus documents to tackle the needs of advisors and the breadth of EM applicants met with multiple challenges:

1. Applicant uniqueness: No resource can meet all of the needs and questions of an individual applicant. Our resources represent consensus best-practice advice but are generalized to the average applicant. They do not supply the scalability applicants require to maximize their individual application. Students are encouraged to meet with individual advisors; when no advisor is available, these resources can serve as a starting point. Over the next year our task force will produce addenda to better guide specific groups of applicants.

2. Pre-existing resources: In addition to SATF member expertise and opinion, the consensus recommendations and resources were developed by reviewing existing resources from the NRMP, CORD, EMRA, CDEM and AAEM, as well as blogs and social media. While these resources were developed using the best current advice, they are also largely based on opinion, remain subject to prejudices, and are inherently biased by their sources.

3. Dissemination of materials: Currently the resources are available on the AAEM, CORD, CDEM and EMRA websites, and will be propagated via social media. Additionally, SATF resources and recommendations will need refinement and continuous revision.

4. Lack of published data: While associations can be inferred, there is no research on the success of applicants based on the advising received. Given the lack of evidence-based studies, the consensus recommendations are limited in that they are based on reviews of pre-existing resources, opinions, experience, and unanimity of the members participating in the SATF.

IMPACT/EFFECTIVENESS
The CORD-SATF developed four resources through consensus recommendations to improve the advising of EM applicants and simultaneously support their advisors. These resources are endorsed by CORD, CDEM and AAEM and will be disseminated via multiple avenues. They will also be used to support distance advising for students without access to advising locally. These resources form a foundation for students and advisors to better understand the application process. Time and continued application will reveal if the development of consensus advising recommendations improves the application experience for stakeholders. The SATF looks forward to the upcoming application cycles and NRMP data to evaluate the impact these resources have.

Address for Correspondence: Emily Hillman, MD, Truman Medical Center, Department of Emergency Medicine, 2301 Holmes Street, Kansas City, Missouri 64108. Email: emily.hillman@tmcmed.org.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Hillman et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Garmel G. Mentoring Medical Students in Academic Emergency Medicine. Acad Emerg Med. 2004;11:1351-7.
2. Zerzan JT, Hess R, Schur E, et al. Making the Most of Mentors: A Guide for Mentees. Acad Med. 2009;84:140-4.
3. Bynny R. Mentoring and coaching in medicine. The Pharos. 2012;13. Available at: https://fhs.mcmaster.ca/macortho/documents/Mentoring-ThePharos-Winter2012.pdf. Published 2012. Accessed June 12, 2016.
4. Sambunjak D, Straus SE, Marusic A. A Systematic Review of Qualitative Research on the Meaning and Characteristics of Mentoring in Academic Medicine. J Gen Intern Med. 2009;25:72-8.
5. Sambunjak D, Straus SE, Marusic A. Mentoring in academic medicine: a systematic review. JAMA. 2006;296:1103-15.
6. Dehon E, Cruse M, Dawson B, et al. Mentoring during Medical School and Match Outcome among Emergency Medicine Residents. West J Emerg Med. 2015;16(6):927-30.
7. EMRA Hangouts. Emergency Medicine Residents' Association Web Site. Available at: https://www.emra.org/students/EMRA-Hangouts/. Published 2015. Accessed June 7, 2016.
8. EMRA Student-Resident Mentorship Program. Emergency Medicine Residents' Association Web Site. Available at: https://www.emra.org/students/Mentorship/. Published 2015. Accessed June 7, 2016.
9. Yeung M, Nuth J, Stiell IG. Mentoring in emergency medicine: the art and the evidence. CJEM. 2010;12:143-9.
10. Frei E, Stamm M, Buddeberg-Fischer B. Mentoring programs for medical students – a review of the PubMed literature 2000-2008. BMC Med Educ. 2010;10:32.
11. National Resident Matching Program. Results and Data: 2016 Main Residency Match®. National Resident Matching Program, Washington, DC. 2016.
12. National Resident Matching Program, Data Release and Research Committee. Results of the 2011 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC. 2011.
13. National Resident Matching Program, Data Release and Research Committee. Results of the 2015 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC. 2015.
14. National Resident Matching Program, Data Release and Research Committee. Results of the 2014 NRMP Program Director Survey. National Resident Matching Program, Washington, DC. 2014.


Educational Advances

Emergency Medicine Resident Orientation: How Training Programs Get Their Residents Started

Jillian McGrath, MD; Michael Barrie, MD; David P. Way, MEd

The Ohio State University College of Medicine, Department of Emergency Medicine, Columbus, Ohio

Section Editor: Sally A. Santen, MD, PhD Submission history: Submitted June 15, 2016; Revision received September 1, 2016; Accepted October 1, 2016 Electronically published November 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31275

Introduction: The first formal orientation program for incoming emergency medicine (EM) residents was started in 1976. The last attempt to describe the nature of orientation programs was by Brillman in 1995. Now almost all residencies offer orientation to incoming residents, but little is known about the curricular content or structure of these programs. The purpose of this project was to describe the current composition and purpose of EM resident orientation programs in the United States.

Methods: In autumn of 2014, we surveyed all U.S. EM residency program directors (n=167). We adapted our survey instrument from one used by Brillman (1995). The survey was designed to assess the orientation program's purpose, structure, content, and teaching methods.

Results: The survey return rate was 63% (105 of 167). Most respondents (77%) directed three-year residencies, and all but one program offered intern orientation. Orientations lasted an average of nine clinical (Std. Dev.=7.3) and 13 non-clinical days (Std. Dev.=9.3). The prototypical breakdown of program activities was 27% lectures, 23% clinical work, 16% skills training, 10% administrative activities, 9% socialization and 15% other activities. Most orientations included activities to promote socialization among interns (98%) and with other members of the department (91%). Many programs (87%) included special certification courses (ACLS, ATLS, PALS, NRP). Course content included the following: use of electronic medical records (90%), physician wellness (75%), and chief complaint-based lectures (72%). Procedural skill sessions covered ultrasound (94%), airway management (91%), vascular access (90%), wound management (77%), splinting (67%), and trauma skills (62%).

Conclusion: Compared to Brillman (1995), we found that more programs (99%) are offering formal orientation and allocating more time to them. Lectures remain the most common educational activity. We found increases in the use of skills labs and specialty certifications. We also observed increases in time dedicated to clinical work during orientation. Only a few programs reported engaging in baseline or milestone assessments, an activity that could offer significant benefits to the residency program. [West J Emerg Med. 2017;18(1)97-104.]

INTRODUCTION
Emergency medicine (EM) residency programs commonly offer dedicated curricula designed for orientation of beginning residents. An orientation curriculum was first developed for incoming EM residents at the University of Cincinnati in 1976.1 Major objectives of that first orientation were to identify and delineate the subject matter of EM and to review the basic elements of EM. In 1995, Brillman et al. surveyed EM residency program directors regarding composition of orientation curricula. At that time, 93% of EM programs offered an orientation program. Orientation consisted mainly of lectures and certification courses; had variable lengths, composition, goals, and associated courses; and very few programs offered procedural labs or special skills training sessions (2-11%).2 More recently, Lucas et al. described a redesigned resident orientation curriculum using the Kern model of curriculum development.3,4 Components of their redesigned curriculum included instruction on administrative procedures and policy, skills training, instruction on medical knowledge, setting expectations for learning, introductory performance assessment, and socialization. Min et al. also described an optional introductory clinician development course (intern "boot camp") prior to the start of residency, which focused on core medical content, common patient presentations, basic procedural skills instruction and supervised clinical shifts.5 Both Lucas and Min asked new EM residents to rank components of their curriculum in terms of perceived "helpfulness."

Since 1995 there have been no general descriptions or studies describing EM orientation practices throughout the U.S. Additionally, we found no standards to guide program development. The literature is rich, however, with conversations about bridging the gap between undergraduate medical education (UME) and graduate medical education (GME),6 which include specifics about assessing medical students and medical graduates at these critical stages of professional development (e.g. Entrustable Professional Activities, and Accreditation Council for Graduate Medical Education [ACGME] Milestones).7-9 There is less in the literature about how orientation programs contribute to the transition from UME to GME, particularly in the specialty of EM.

The purpose of this project was to profile the current state of orientation programs for entering EM residents across the U.S. At the outset, we anticipated an increase in the number of formal orientation programs and also predicted that we would find considerable variability in program characteristics, length, and goals. Further, we expected to find that residency programs had increased their use of benchmark assessments for incoming interns to determine where they were in their progress towards achieving the ACGME milestones. Finally, we hoped to find significant innovation in program activities and assessments that might be generalizable to others.

METHODS
Study Participants
We surveyed the EM residency program directors of all ACGME-accredited programs in the U.S. Survey participants were identified through three different residency program directories: The Society for Academic Emergency Medicine Residency Directory,10 the American College of Emergency Physicians Directory of Approved ACGME Residencies,11 and the American Medical Association's FREIDA Online® Services.12

Instrument Development
We adapted our survey instrument from one used by Brillman (1995).2 Adaptations included changes to the types of questions asked, and the addition of questions regarding contemporary teaching methods. Instead of open-ended questions to gather program information, we asked respondents to choose items from checklists with instructions to select all that apply. We also added questions about the use of high-fidelity simulation, simulated patient encounters, and social activities. Finally, unlike Brillman's survey, we asked respondents to give us an idea of the overall program structure by estimating the percentage of time allocated to each of 10 types of program activities.

The survey was developed collaboratively among former and current residency program directors and associate program directors, under the direction of a survey development specialist. Two of the developers have designed and administered an orientation program for our local residency. All developers have participated in an orientation program as residents. Survey developers were presented with a draft survey derived from the Brillman article. They were asked to add, modify, or delete items to create an instrument that contained only items they believed were important for profiling a modern residency orientation program. The subsequent results were fine-tuned into proper survey format and then presented to the developers as a pilot, which led to an additional round of modifications. The final instrument contained 18 items: 13 checklist items, one multiple choice, one fill in the blank and three open-ended comment items. To shorten the survey administration time, each of the checklist items was preceded with a skip-logic question, a yes/no filter item that directs the respondent only to applicable subsequent checklists.

Survey Implementation
We used the Dillman tailored design method (TDM) for electronic (e-mail) surveys to guide this national study.13 Notices about the study were sent in advance to residency directors. Email communications were personalized. The cover letter and survey were delivered within three days of the initial notice. Respondents were offered an alternative method for sending back their responses. Finally, including the initial notice, we contacted program directors up to five times with reminder notifications or personal requests to complete the survey. Email addresses were verified and updated at all stages of survey implementation. Our institution's human subjects review board approved this survey project.




Table 1. Demographic profile of emergency medicine residency programs in the U.S. by survey respondents and non-respondents: program size, program length, and region of the country.*

Demographics                Respondents    Non-respondents    Total
Program size
  Below 25th percentile     14 (52%)       13 (48%)           27 (17%)
  25th-50th percentile      37 (65)        20 (35)            57 (36)
  51st-75th percentile      24 (62)        15 (39)            39 (24)
  Above 75th percentile     27 (73)        10 (27)            22 (23)
  Data unavailable                                            7 (4)
  X2=3.13, df=3, p=.37
Program length
  3-Year                    83 (62%)       50 (38%)           133 (80%)
  4-Year                    21 (67)        10 (32)            31 (47)
  Data unavailable                                            3 (2)
  X2=0.31, df=1, p=.68
Region
  Northeast                 33 (60%)       22 (40%)           55 (33%)
  Central                   26 (65)        14 (35)            40 (24)
  South                     30 (63)        18 (38)            48 (29)
  West                      16 (67)        8 (33)             24 (14)
  X2=0.42, df=3, p=.94
Total                       105 (63%)      62 (37%)           167 (100%)

*The authors surveyed residency program directors of 167 emergency medicine residency programs in the United States. The respondents of the survey are profiled demographically using residency program characteristics: program size (number of residents), program length (three- or four-year program), and region of the country. Chi-square tests of proportion (X2) are used to evaluate whether the sample obtained is representative of the population at large.

Data Analysis
We analyzed electronic survey data with IBM-SPSS for Windows, Version 22.0.14 We compared the respondents and non-respondents on demographic characteristics to check that our respondent data were representative of the population, using chi-square tests of proportions (X2). The program demographics that we tested for bias included region of the country (Northeastern, Central, Southern, and Western); program length (three- or four-year program); and program size (number of residents) by percentile rank (1-25th percentile, 26th-50th percentile, 51st-75th percentile, and 76th-99th percentile). We used descriptive statistics to profile the orientation programs for EM residency programs. (Note: Since 1995, three-year programs that start in the postgraduate year 2 have been phased out.)
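As a concrete illustration of this representativeness check, the short sketch below runs an uncorrected chi-square test of proportions on respondent versus non-respondent counts, using the program-length counts reported in Table 1. The original analysis was performed in SPSS; the Python/scipy code here is only an illustrative re-creation, not the authors' implementation.

```python
# Illustrative re-creation (not the authors' SPSS code) of the chi-square
# representativeness check described under Data Analysis, using the
# program-length counts reported in Table 1.
from scipy.stats import chi2_contingency

# Rows: respondents, non-respondents; columns: 3-year, 4-year programs.
counts = [
    [83, 21],   # respondents
    [50, 10],   # non-respondents
]

# correction=False gives the uncorrected Pearson statistic, the usual form of
# a simple chi-square test of proportions (here roughly 0.31 with df=1).
chi2, p_value, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.2f}")
# A non-significant result is consistent with the conclusion that respondents
# resemble the full population of programs on this characteristic.
```

The same approach can be applied to the program-size and region rows of Table 1 to reproduce the remaining comparisons.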

RESULTS
The overall survey return rate was 63% (105 of 167). The respondent sample was evaluated for representativeness using chi-square tests of proportion (Table 1). Survey participants were representative of the population of residency program directors in EM with regard to program size, program length, and region of the country.

All but one of the EM residency program director respondents said that they conduct intern orientations (99%; or 104 of 105). Orientation programs were most frequently sponsored by the Department of EM (97%), but some programs obtain additional sponsorship through the following: academic health centers (AHCs) (59%), medical schools (14%), or other affiliated hospitals (12%). One program said that their orientation program was sponsored through their Graduate Medical Education Office and that most of the orientation activities were shared with interns from other specialties' residency programs.

The length of EM orientation programs averaged 22 days (SD=11.8). Residents spent an average of 8.9 days (SD=7.3) in clinical orientation, i.e. clinical work in the ED. Nonclinical activities accounted for 13.2 orientation days (SD=9.3).


Figure 1. Average percentage of time allocated to emergency medicine resident orientation activities.

Table 2. Frequency and percentages of purposes served by EM orientation programs as reported by 104 U.S. residency program directors (directors were permitted to select more than one purpose).

Purpose                                                       Frequency   Percent
Getting to know each other                                    103         99.0
Familiarizing interns with hospital and department policies   99          95.2
Acclimation to a new emergency department                     97          93.3
Getting to know members of the department                     95          91.3
Administrative tasks and chores                               93          89.4
Promoting positive environment                                90          86.5
Team building                                                 85          81.7
Teaching new skills                                           78          75.0
Teaching new knowledge                                        76          73.1
Earning additional credentials such as ACLS, ATLS, etc.      70          67.3
Reviewing skills learned in medical school                    61          58.7
Baseline assessment of clinical skills                        57          54.8
Reviewing medical knowledge learned in medical school         57          54.8
Baseline assessment of medical knowledge                      52          50.0
Other purpose not listed                                      12          11.5

ACLS, advanced cardiac life support; ATLS, advanced trauma life support


Figure 2. Percentage of programs reporting various orientation activities.

We asked directors to estimate how they allocated their orientation time across various activities (Figure 1). Directors reported that about a quarter of their time was allocated to classroom didactics (27%) and a quarter to clinical work, including pediatric clinical work (23%). The other half was mostly comprised of skills training and assessment (18%), administrative activities (10%), socialization (9%), ED acclimation (5%), and miscellaneous other activities (5%).

The most frequently expressed goals of orientation programs were an opportunity for interns to get to know each other (98%), familiarization with hospital and departmental policy (95%), acclimating to a new ED (93%), opportunity to get to know other members of the department (91%), and completion of administrative tasks (89%). Less frequently expressed goals were review of skills or medical knowledge learned in medical school (59% and 55%), baseline assessment of clinical skills and medical knowledge (54% and 50%), and other purposes, which included additional baseline assessment and certification courses (11%) (Table 2).

Specific orientation activities offered by programs included social activities (100%), lectures/didactic sessions (98%), procedure labs (95%), special certification courses (87%), high-fidelity simulation (82%), simulated patient encounters/objective structured clinical examination (OSCE) (34%), and baseline assessment (33%) (Figure 2).

The most frequent topics included in lectures or didactic sessions were use of the electronic medical record (90%), physician wellness (75%), and chief complaint-based lectures (72%). Table 3 lists the other topics covered by didactic sessions. Specific procedural or skill sessions offered by programs included ultrasound (94%), airway management (91%), vascular access (90%), suturing/wound management (77%), splinting (67%), trauma-related skills (62%), cadaver-based lab (25%), head, eyes, ears, nose and throat (HEENT) emergencies (17%), animal-based lab (12%), dental lab (8%), and other skills (10%) (Table 4).

Table 3. Frequency and percentages of topics covered through didactics or lectures during EM orientation programs as reported by 102 U.S. residency program directors (directors were permitted to select more than one topic).

Topic                                     Frequency   Percent
Electronic medical record                 92          90.2
Wellness                                  76          74.5
Clinical chief complaint-based lectures   73          71.6
Patient safety/quality                    69          67.6
EKG interpretation                        67          65.7
Trauma                                    67          65.7
Nursing integration                       60          58.8
Work-life balance                         58          56.9
Clinical topic-based lectures             56          54.9
Consultation                              55          53.9
Radiology interpretation                  51          50.0
Social media                              51          50.0
Research                                  47          46.1
Coding/billing                            46          45.1
Impaired physician                        46          45.1
Regulatory/legal                          46          45.1
Electronic communication                  43          42.2
Ethics                                    38          37.3
EMS                                       32          31.4
EBM                                       31          30.4
Culture/diversity                         27          26.5
Crew resource management                  17          16.7
Other topics not listed                   17          16.2
Personal financial                        15          14.7
Palliative care/advanced directives       8           7.8

EKG, electrocardiogram; EMS, emergency medical services; EBM, evidence based medicine

Table 4. Frequency and percentages of topics covered through procedural skill sessions during EM orientation programs as reported by 99 U.S. residency program directors (directors were permitted to select more than one topic).

Skill session                                            Frequency   Percent
Ultrasound                                               93          93.9
Airway management                                        90          90.9
Vascular access                                          89          89.9
Wound management/suturing                                76          76.8
Splinting                                                66          66.7
Trauma-related procedures (e.g. chest tube placement)    61          61.6
Cadaver-based lab                                        25          25.3
HEENT                                                    17          17.2
Animal-based lab                                         12          12.1
Dental emergencies                                       8           8.1
Other skill set not listed                               10          10.1

Other skill sets listed by respondents: arthrocentesis; pericardiocentesis; CV insertion; decontamination; venous pacing; transcutaneous pacing; OB delivery; line placement (3); common bedside procedures such as Foley catheters and NG tube placement; sexual assault forensic examination; slit lamp usage (3); incision and drainage of abscesses.
OB, obstetrics; CV, central venous; HEENT, head, eyes, ears, nose, throat

Specialized certification courses were offered by 88% of programs. Specific specialized courses offered during the orientation curriculum were ACLS (77% of all programs), PALS (74%), ATLS (68%), NRP or other neonatal courses (27%), and other specialized courses (12%).

The most frequent social activities offered by programs were social events for both EM residents and faculty (87%), team-building activities (55%), and social events for EM interns only or EM residents only (40% and 40%). Twenty-eight percent of programs offer a formal retreat that occurs off-site, and 18% offer social events that include other ED personnel such as nursing or staff.

For programs that perform baseline assessment of new EM residents, baseline assessment practices focused on medical knowledge (79%), patient communication (49%), history-taking skills (42%), physical exam skills (36%), EKG interpretation (36%), emotional intelligence/personality assessments (27%), radiology interpretation (18%), learning style tests, evidence-based medicine (EBM) knowledge (9%), or other skills such as Level 1 Milestones or procedural skills (12%).

We identified several themes through analysis of verbatim comments. First, EM residency program directors appreciated having time within an orientation curriculum for bonding and socialization. They also valued dedicated time to introduce their care delivery system and expectations for the program. However, there was an expressed desire to further streamline administrative requirements and tasks, continue to move away from a lecture-based curriculum while placing more emphasis on interactive didactics (small groups, procedural or skills labs, simulation, and OSCEs), and incorporate more assessment of baseline skills.

Survey respondents described innovations such as active learning experiences that include procedural assessment, and simulation experiences that involve breakdowns of critical steps and opportunities to train and remediate specific steps. Programs also included many special topics such as EBM skills, patient safety and quality, consultation skills, crew resource training, electronic communication and social media use, and work-life balance. Among programs that conducted baseline milestone assessment, some confirm that all Level 1 Milestones are met, while others evaluate only select Level 1 Milestones.

DISCUSSION
In comparison to the 1995 survey by Brillman et al., slightly more EM programs are offering a formal orientation curriculum (2014=99% vs. 1995=93%). Orientations now average 22 days, a 57% increase over the 14 days reported by Brillman in 1995. The difference appears to come from the 8.9 days, on average, of additional clinical time working in the ED, almost triple Brillman's reported 2.4 days. Similar to the 1995 survey, the activity with the most dedicated time during orientation was lecture-based didactics (2014=34 h vs. 1995=35 h). However, we note an overwhelming increase in the number of programs that offer procedure labs and specialty sessions during their nonclinical orientation (2014=95% vs. 1995=52%). As Brillman reported in 1995, EM programs continue to offer specialized courses during orientation (ACLS: 2014=74% vs. 1995=84%; and ATLS: 2014=65% vs. 1995=68%). Considerably more programs are now offering a PALS course (2014=71% vs. 1995=39%). We report an increase in the proportion of dedicated time applied to formal clinical orientation (ED clinical work) as well, 44% in 2014 vs. 17% in 1995.

The three-fold increase in ED clinical work during orientation between 1995 and 2014 is perhaps best explained by the survey participants' responses regarding the "purpose" of orientation. Most of the purposes provided seem to involve enculturation: getting to know one another, familiarizing interns with hospital and department policies, acclimating to a new ED, getting to know members of the department, and team building. Since only 33% of respondents said that they include formal baseline assessment during orientation, an alternative explanation is that the additional ED clinical work is designed for informal assessment of an intern's baseline clinical skills. Innovations described by respondents include an increasing number of specialty topics and sessions, expanded active learning experiences, and incorporation of introductory assessment and baseline EM milestone assessment.

With the increased focus on competency-based assessment introduced by the ABEM/ACGME Milestone Project,9 we were surprised to observe that only 32.7% of program directors reported the incorporation of baseline assessment of clinical skills during orientation. We speculate that this survey project, conducted in late 2014, was out ahead of residency programs' implementation of formal milestone assessments (such as the one described by Hauff et al.15), and that the landscape has likely shifted from informal to formal assessment over the past two years. Documentation of medical student progress towards Level 1 Milestones could offer significant benefits to residents, their residency programs, and ultimately their patients. Deficiencies could be identified and remediated earlier, or customized learning plans based on milestone achievement could be developed. Competency-based assessments that document milestone progress or measure attainment of "Entrustable Professional Activities" are being developed and are beginning to surface in the literature.7,15,16 However, when assessments should be conducted and who should be responsible for assessment, whether UME or GME programs, are questions that remain unanswered.7 Future research should contribute to identifying "best practices" for improving the learner "hand-off" process from UME to GME.

LIMITATIONS
We demonstrated that our respondents were representative of the population as a whole, but because we did not receive a survey from every program, generalizability to all programs is not assured. Additionally, we should note the limitations common to survey research. First is the potential that selection bias occurred, which in our case would have been the tendency for residencies with no orientation program to have avoided participation in the survey. Second is the potential for recall bias among those who completed the survey. Finally, we cannot be certain that we captured the rich detail of every residency orientation program. By seeking a general profile of residency orientation, some unique and creative program details may have remained undetected.

CONCLUSION
Since the last national survey of EM residency program directors about their orientation programs, much has changed. Now, nearly every program has an established orientation program for incoming residents. Overall, the duration of orientation has increased by almost 60%, which is primarily attributable to increases in dedicated clinical work during orientation. The most common activities remain didactic sessions and social activities, but with improvements in technology and simulation, there has been an increase in skills training sessions. A minority of programs implement baseline assessments of their learners, which is an opportunity for programs to develop early interventions for incoming residents not meeting minimum expectations.


Address for Correspondence: Jillian McGrath, MD, The Ohio State University College of Medicine, Department of Emergency Medicine, 774 Prior Hall, 376 W 10th Ave, Columbus, OH 43210. Email: jillian.mcgrath@osumc.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 McGrath et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Levy R and Anwar RA. Orientation program for emergency medicine residents. J Am Col Emerg Physicians. 1979;8(2):77-9.
2. Brillman JC, Sklar DP, Viccellio P. Characteristics of emergency medicine resident orientation programs. Acad Emerg Med. 1995;2(1):25-31.
3. Lucas R, Choudhri T, Roche C, et al. Developing a Curriculum for Emergency Medicine Residency Orientation Programs. J Emerg Med. 2014;46(5):701-5.
4. Kern DE, Thomas PA, Hughes MT. Curriculum development for medical education: a six step approach. 2nd edition. Baltimore, MD: The Johns Hopkins University Press. 2009.
5. Min AA, Stoneking LR, Grall KH, et al. Implementation of the introductory clinician development series: an optional boot camp for emergency medicine interns. Adv Med Educ Pract. 2014;5:275-9.
6. Wijnen-Meijer M, Ten Cate O, van der Schaaf M, et al. Vertical integration in medical school: effect on the transition to postgraduate training. Med Educ. 2010;44(3):272-9.
7. Santen SA, Rademacher N, Heron SL, et al. How competent are emergency medicine interns for Level 1 Milestones: who is responsible? Acad Emerg Med. 2013;20(7):736-9.
8. Ten Cate O, Chen HC, Hoff RG, et al. Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE Guide No. 99. Med Teach. 2015;37(11):983-1002.
9. Emergency Medicine Milestone Project. Chicago, IL: Accreditation Council for Graduate Medical Education, & East Lansing, MI: The American Board of Emergency Medicine. 2012. Available at: https://www.abem.org/public/docs/default-source/migrated-documents-and-files/em-milestones.pdf?sfvrsn=6. Accessed: April 29, 2014.
10. SAEM Residency Directory. Des Plaines, IL: The Society for Academic Emergency Medicine. Available at: http://www.saem.org/membership/services/residency-directory. Accessed on August 27, 2014.
11. Sulton LD. Approved ACGME Residencies. Chicago, IL: The American College of Surgeons, Residency Review Committee for Emergency Medicine. Available at: https://www.acep.org/acgmeresidencies/. Accessed on June 14, 2016.
12. FREIDA Online Services. Program and institution search for ACGME-accredited residency and fellowship programs. Chicago, IL: The American Medical Association. Available at: http://www.ama-assn.org/ama/pub/education-careers/graduate-medical-education/freida-online/about-freida-online/freida-online-services.page? Accessed on August 25, 2014.
13. Dillman DA. Mail and internet surveys: the tailored design method (2nd Ed.). Hoboken, NJ: John Wiley & Sons, Inc. 2007.
14. IBM Corp. Released 2013. IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY: IBM Corp.
15. Hauff SR, Hopson LR, Losman E, et al. Programmatic assessment of Level 1 Milestones in incoming interns. Acad Emerg Med. 2014;21:694-8.
16. Sozener CB, Lypson ML, House JB, et al. Reporting achievement of medical student milestones to residency program directors: an educational handover. Acad Med. 2016;91:676-84.


Educational Advances

Trends in NRMP Data from 2007-2014 for U.S. Seniors Matching into Emergency Medicine

David E. Manthey, MD*; Nicholas D. Hartman, MD*; Aileen Newmyer, MD†; Jonah C. Gunalda, MD*; Brian C. Hiestand, MD*; Kim L. Askew, MD*

*Wake Forest School of Medicine, Department of Emergency Medicine, Winston-Salem, North Carolina
†Henry Ford Hospital, Department of Emergency Medicine, Detroit, Michigan

Section Editor: Andrew W. Phillips, MD, MEd Submission history: Submitted June 14, 2016; Accepted October 27, 2016 Electronically published November 23, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31237

Introduction: Since 1978, the National Residency Matching Program (NRMP) has published data demonstrating characteristics of applicants who have matched into their preferred specialty in the NRMP main residency match. These data have been published approximately every two years. There is limited information about trends within these published data for students matching into emergency medicine (EM). Our objective was to investigate and describe trends in NRMP data to include the following: the ratio of applicants to available EM positions; United States Medical Licensing Examination (USMLE) Step 1 and Step 2 scores (compared to the national means); number of programs ranked; and Alpha Omega Alpha Honor Medical Society (AOA) membership among U.S. seniors matching into EM.

Methods: This was a retrospective observational review of NRMP data published between 2007 and 2016. We analyzed the data using analysis of variance (ANOVA) or Kruskal-Wallis testing, and Fisher's exact or chi-squared testing, as appropriate to determine statistical significance.

Results: The ratio of applicants to available EM positions remained essentially stable from 2007 to 2014 but did increase slightly in 2016. We observed a net upward trend in overall Step 1 and Step 2 scores for EM applicants. However, this did not outpace the national trend increase in Step 1 and 2 scores overall. There was an increase in the mean number of programs ranked by EM applicants over the years studied from 7.8 (SD 4.2) to 9.2 (SD 5.0, p<0.001), driven predominantly by the cohort of U.S. students successful in the match. Among time intervals, there was a difference in the number of EM applicants with AOA membership (p=0.043) due to a drop in the number of AOA students in 2011. No sustained statistical trend in AOA membership was identified over the seven-year period studied.

Conclusion: NRMP data demonstrate trends among EM applicants that are similar to national trends in other specialties for USMLE board scores, and a modest increase in number of programs ranked. AOA membership was largely stable. EM does not appear to have become more competitive relative to other specialties or previous years in these categories. [West J Emerg Med. 2017;18(1)105-109.]

INTRODUCTION
Since 1978, the National Residency Matching Program (NRMP) has published data demonstrating characteristics of applicants who have matched into their preferred specialty in the NRMP main residency match. Data available on the NRMP

website approximately every two years include a summary entitled “Charting the Match Outcomes,” as well as the results of the NRMP Applicant Survey and the most recent Main Residency Match data.1-10 Although this information is publicly available and fairly easy to interpret, there is limited




information about trends within these published data for students matching into emergency medicine (EM). In a recent commentary, a respected EM educator stated in the American College of Emergency Physicians (ACEP) News, “It is getting tougher every year to match in EM. In 2015, the average United States Medical Licensing Exam (USMLE) Step 1 score of a student who matched in EM was 230, up from 219 in 2006.”11 This sentiment has been echoed in multiple arenas by EM residency leaders as well as by those tasked with advising medical students applying for residency positions, including medical student educators in EM. Therefore, we sought to evaluate the available data for any trends that might suggest that EM was becoming more competitive. Although an increased average USMLE Step 1 or 2 score may indicate that the quality of applicants to EM has improved and therefore the competitiveness of the specialty has grown, it may also be that overall Step 1 or 2 scores are increasing across all medical students. Other ways to suggest increased competitiveness in EM applicants would include an increased number of applicants per available spot. As the Alpha Omega Alpha Medical Honor Society (AOA) is considered by many to be a marker of a more competitive applicant, an increasing percentage of applicants attaining AOA status would suggest a trend towards EM becoming a more competitive specialty in which to match. Our objective was to investigate and describe secular trends in the NRMP data to include the ratio of applicants to available EM positions, USMLE Step 1 and Step 2 scores (taken in context with trends in all match participant scores), number of programs ranked by each student, and AOA membership among U.S. seniors matching into EM.

METHODS
We performed a retrospective cohort analysis of NRMP data generated between 2007 and 2016. Summary data were available for students participating in the match process in 2007, 2009, 2011 and 2014, with limited data available for 2016. Data included both successful and unsuccessful participants, as well as both U.S. and international medical graduates (categorized as “independent” in the NRMP products).

USMLE score distributions for students matching in EM were available as proportions of participants scoring within 10-point intervals. As an example, in 2009, 15.52% of U.S. participants successfully matching in EM had a Step 1 score between 201-210. To transform these categories into continuous data distributions, we calculated weighted averages using the midpoint of each range to generate an overall average score and variance. Other data elements were taken directly from the NRMP reports without transformation or alteration of definitions. We used analysis of variance (ANOVA) to compare continuous variables when sufficient detail was available from NRMP sources. The data were sufficient for EM applicants, but not for the overall cohort of students participating in the U.S. residency match. Therefore, comparisons of means and reported standard deviations involving the total U.S. cohort were performed without hypothesis-based testing. The equality-of-variances assumption was violated for the evaluation of number of programs ranked per year, so we performed Kruskal-Wallis testing in lieu of one-way ANOVA, and used Dunn’s test for post-hoc comparisons. We calculated Fisher’s exact test or chi-squared test as appropriate to compare categorical data. All statistics were two-tailed, and a p<0.05 was held to represent statistical significance. Given that the sizes of the cohorts were fixed, we did not perform sample-size calculations when the intent was to use all available data. We calculated statistics using Stata IC 11.2 (College Station, TX). As no subject-level data were provided, the Biomedical Institutional Review Board at Wake Forest University Health Sciences determined this study to be non-human subjects research and exempt from formal review.
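The midpoint-weighting step described above is straightforward to reproduce. The short Python sketch below shows the calculation on a hypothetical set of 10-point score bins; the bin proportions are invented for illustration and are not the actual NRMP figures.

```python
# Sketch of the midpoint-weighting approach described in the Methods:
# NRMP reports give the share of matched applicants in 10-point USMLE
# score bins; a continuous mean/SD is approximated from bin midpoints.
# The bins and proportions below are illustrative, not the NRMP data.

def binned_mean_sd(bins):
    """bins: list of ((low, high), proportion) pairs; proportions sum to 1."""
    midpoints = [(low + high) / 2 for (low, high), _ in bins]
    weights = [p for _, p in bins]
    mean = sum(m * w for m, w in zip(midpoints, weights))
    variance = sum(w * (m - mean) ** 2 for m, w in zip(midpoints, weights))
    return mean, variance ** 0.5

example_bins = [
    ((191, 200), 0.10),  # hypothetical proportions for illustration
    ((201, 210), 0.16),
    ((211, 220), 0.28),
    ((221, 230), 0.26),
    ((231, 240), 0.20),
]

mean, sd = binned_mean_sd(example_bins)
print(f"weighted mean = {mean:.1f}, SD = {sd:.1f}")
```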

RESULTS
The total number of U.S. and independent applicants for EM increased steadily from 1,669 in 2007 to 2,476 in 2016 (Table 1), while the total number of EM positions available in the main match increased from 1,384 to 1,895 over the same time period. Across U.S. and independent seniors (matched and unmatched), the ratio of applicants to available EM positions remained relatively flat from 2007 to 2014 but did increase in 2016.

Table 1. Number of applicants to emergency medicine residency programs from 2007 to 2016. Data are presented as counts and percentages. Subgroup data not available (n/a) for 2016.

                                    2007         2009         2011         2014         2016
Total number of applicants          1669         1817         2025         2106         2476
Total number of positions           1384         1515         1626         1786         1895
Applicant to position ratio         1.21         1.20         1.25         1.18         1.30
U.S. matched applicants             1092 (65.4)  1153 (63.5)  1259 (62.2)  1371 (65.1)  n/a
U.S. unmatched applicants           89 (5.3)     92 (5.1)     137 (6.8)    106 (5.0)    n/a
Independent* matched applicants     265 (15.9)   317 (17.5)   330 (16.3)   370 (17.6)   n/a
Independent unmatched applicants    223 (13.4)   255 (14.0)   299 (14.8)   259 (12.3)   n/a
*Independent, international applicant.
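The stability of these applicant-category proportions across match years, reported in the Results as χ2(9) = 16.67, p=0.054, can be checked directly from the counts in Table 1. The sketch below assumes a standard Pearson chi-squared test of homogeneity on the 4x4 table of applicant categories by year (a conventional choice; the paper does not spell out the exact procedure).

```python
# Pearson chi-squared test of homogeneity on the Table 1 counts
# (rows: applicant categories; columns: 2007, 2009, 2011, 2014).
# This should reproduce approximately the chi2(9) = 16.67, p = 0.054
# reported in the Results.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([
    [1092, 1153, 1259, 1371],  # U.S. matched
    [  89,   92,  137,  106],  # U.S. unmatched
    [ 265,  317,  330,  370],  # independent matched
    [ 223,  255,  299,  259],  # independent unmatched
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```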




Table 2. USMLE scores among all U.S. seniors participating in the match and U.S. seniors who matched in EM. Scores are presented as mean (standard deviation). P values were generated via ANOVA.

                   2007        2009        2011        2014        P value   F statistic
U.S. EM Step 1     220 (18.5)  222 (17.8)  221 (17.5)  230 (16.9)  <0.001    F(3,4737) = 83.09
All U.S. Step 1    220 (20.3)  224 (20.3)  225 (20.6)  230 (18.8)  n/a
U.S. EM Step 2     227 (19.5)  229 (19.2)  234 (17.9)  243 (14.9)  <0.001    F(3,3899) = 156.10
All U.S. Step 2    225 (22.3)  230 (21.8)  234 (20.4)  242 (16.6)  n/a

Table 3. Trends in numbers of programs ranked. Data are presented as means (standard deviation).

                         2007       2009       2011        2014        P value   χ2
All EM applicants        7.8 (4.2)  8.0 (4.3)  8.5 (4.6)   9.2 (5.0)   <0.001    χ2(3) = 127.5
U.S. matched             9.5 (3.4)  9.8 (3.4)  10.7 (3.4)  11.6 (3.4)  <0.001    χ2(3) = 293
U.S. unmatched           5.2 (3.4)  5.3 (3.6)  4.9 (3.5)   4.4 (2.5)   0.11      χ2(3) = 6.1
Independent matched      5.9 (3.9)  6.3 (4.0)  6.7 (4.0)   6.7 (4.4)   0.13      χ2(3) = 5.7
Independent unmatched    2.8 (2.8)  3.0 (3.1)  3.1 (3.1)   2.5 (2.9)   0.02      χ2(3) = 9.5
Independent, international applicant.

The proportions of U.S., independent, matched, and unmatched students remained stable from 2007 to 2014 (χ2(9) = 16.67, p=0.054). We observed a statistically significant upward trend in overall USMLE Step 1 and Step 2 scores for EM applicants in the time period studied (Table 2). However, the mean USMLE Step 1 exam score for matched U.S. seniors in EM increased at a rate similar to all U.S. seniors. The mean USMLE Step 2 exam score for matched U.S. seniors in EM and other U.S. seniors rose by 16 and 17 points respectively from 2007 to 2014. Table 3 shows the pattern of number of ranked programs for EM applicants. Overall, there was a statistically significant increase in the average number of programs ranked by EM applicants among the years studied (p<0.001 by Kruskal-Wallis testing). An overall decrease in the number of programs ranked by independent unmatched students from 2007 to 2014 (p=0.018 by Dunn’s test) was offset by the larger cohort of U.S. students who matched, which demonstrated a consistent year-to-year increase in the

number of programs ranked from 2007 to 2014 (p<0.001 by Dunn’s test). Across the study period, there was a statistical difference in the number of applicants who matched in EM and were AOA (p=0.043 by Fisher’s exact test), primarily due to a drop in the number of AOA students in 2011 (Table 4). There was no statistical difference in AOA membership among students who did not match (p=0.30 by Fisher’s exact test).

DISCUSSION
Although the data reviewed here regarding the EM residency match are publicly available, we attempted to consolidate the information and interpret trends. The growth of EM as a specialty has coincided with a perception that entry into the field has become more competitive in recent years. The findings in our study challenge this assertion in a few ways while suggesting a possible source for this perception.

Table 4. AOA medical society status among U.S. seniors pursuing a match in emergency medicine.

                                            2007   2009   2011   2014
Percent AOA among matched U.S. seniors      12.4   10.9   9.1    12
Percent AOA among unmatched U.S. seniors    1.1    3.3    0.7    4.1
AOA, Alpha Omega Alpha Medical Honor Society.



First, while the number of EM applicants has steadily increased each year, this increase has mostly been matched by an increase in the number of EM positions available in the match. The 2016 match revealed the first increase during this time period in the ratio of applicants to positions available. Time will tell whether this is an anomaly or the beginning of a trend. Second, while scores on entry examinations have increased in EM applicants, this mirrors the increase in USMLE scores seen in students applying to all specialties. Third, there appears to be a small but statistically significant increase in the number of programs ranked by individual applicants, driven largely by an increase in programs ranked by students who ultimately match. Fourth, after accounting for minor year-to-year variation, the percentage of EM applicants who are members of AOA has also remained fairly constant. Thus, by most of these measures, entry into EM has not become more competitive over the past decade, with the possible exception of the 2016 match, for which complete data are not yet available. The effect of the increased number of programs ranked by ultimately matched applicants is not clear, but may be driving a perception of competitiveness by lessening opportunities for students whose applications are less competitive. Despite the lack of relative change in the composition of the applicant pool with regard to test scores and AOA status, there are other domains in which applicants to EM may be becoming increasingly competitive. For example, in this analysis we are not able to comment on changes in clerkship or medical school grades or interview performance, factors that programs and applicants both rate as important.12 These factors may predict success as well as the factors that have been quantified and examined here.13 Test scores and memberships in an honorary society should not be taken as evidence of an entirely unchanging applicant pool. However, in light of previously reported difficulties that EM faculty members have in accurately assessing applicants for letters of evaluation and in predicting position on rank lists, it is important that the relative meaning of these scores and designations be understood.14,15 Another area of recent discussion has involved applicant behavior regarding rank lists and interviews. We used programs ranked as a proxy for interviews taken, since the data for this metric were more complete and could be better analyzed. There does appear to be a small but apparent increase in mean programs ranked by successful U.S. applicants. The decrease in the number of ranked programs seen in the unmatched applicant cohort may reflect the reality of fewer interview opportunities for those applicants at the lower end of the competitive spectrum. We also noted that independent applicants, both matched and unmatched, appeared to rank fewer programs than their U.S. counterparts. This is likely due to independent applicants being granted fewer interviews.


Appropriately assessing growth in both medical school enrollment and available residency positions necessitates close monitoring of the applicant pool. Our findings suggest that over the past decade these trends have been appropriately matched and the quality of the applicant pool for EM has remained relatively stable. Further study is needed to more accurately identify changes in interviewing behavior among programs and applicants; there are trends suggesting more interviews are taken by competitive (and ultimately matched) applicants while less competitive applicants interview at and rank fewer programs. However, these trends require confirmation before firm conclusions can be made.

LIMITATIONS
This study relied upon data available on the NRMP website. We requested additional data to allow more in-depth analysis; however, these data were not accessible to the authors. Information published on the EM match in 2016 is limited at this time, precluding further analysis of the two most recent match years. As more data are published by the NRMP, these analyses should be revisited. Additional data on residency applicants to EM, such as characteristics on a standardized letter of evaluation (SLOE), are not accessible to allow for more granular analysis. Finally, the study looks at the general pool of applicants, and not at specific cohorts or individual applicants. Intangible, and therefore unquantifiable, characteristics of the applicant may have as much impact on the competitiveness of the application as numeric data.

CONCLUSION
NRMP data demonstrate trends among EM applicants that are similar to national trends in other specialties for USMLE board scores, a modest increase in the number of programs ranked, and stability in AOA membership. EM does not appear to have become more competitive relative to other specialties or previous years in these categories.

Address for Correspondence: David E. Manthey, MD, Wake Forest School of Medicine, Department of Emergency Medicine, 1 Medical Center Blvd, Winston-Salem, NC 27157. Email: DManthey@wakehealth.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Manthey et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/




REFERENCES
1. National Resident Matching Program. Results and Data: 2016 Main Residency Match®. National Resident Matching Program, Washington, DC; 2016.
2. National Resident Matching Program, Data Release and Research Committee. Results of the 2015 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC; 2015.
3. National Resident Matching Program, Data Release and Research Committee. Results of the 2013 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC; 2013.
4. National Resident Matching Program, Data Release and Research Committee. Results of the 2011 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC; 2011.
5. National Resident Matching Program, Data Release and Research Committee. Results of the 2009 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC; 2009.
6. National Resident Matching Program, Data Release and Research Committee. Results of the 2008 NRMP Applicant Survey by Preferred Specialty and Applicant Type. National Resident Matching Program, Washington, DC; 2008.
7. National Resident Matching Program. Charting Outcomes in the Match, 2014. National Resident Matching Program, Washington, DC; 2014.
8. National Resident Matching Program. Charting Outcomes in the Match, 2011. National Resident Matching Program, Washington, DC; 2011.
9. National Resident Matching Program. Charting Outcomes in the Match, 2009. National Resident Matching Program, Washington, DC; 2009.
10. National Resident Matching Program. Charting Outcomes in the Match, 2007. National Resident Matching Program, Washington, DC; 2007.
11. House H. The 2016 NRMP: emergency medicine remains one of the most popular specialties. ACEP Now. 2016;35(5):10.
12. McCann S, Nomura J, Terzian W, et al. Importance of the emergency medicine application components: the medical student perception. J Emerg Med. 2016;50(3):466-470.
13. Bhat R, Takenaka K, Levine B, et al. Predictors of a top performer during emergency medicine residency. J Emerg Med. 2015;49(4):505-12.
14. Grall K, Hiller K, Stoneking L. Analysis of evaluative components on the standard letter of recommendation (SLOR) in emergency medicine. West J Emerg Med. 2014;15(4):419-423.
15. Oyama L, Kwon M, Fernandez J, et al. Inaccuracy of the global assessment score in the emergency medicine standard letter of recommendation. Acad Emerg Med. 2010;17:S38-S41.



Educational Advances

Defining Uniform Processes for Remediation, Probation and Termination in Residency Training
Jessica L. Smith, MD*; Monica Lypson, MD, MHPE†; Mark Silverberg, MD‡; Moshe Weizberg, MD§; Tiffany Murano, MD¶; Michael Lukela, MD†||; Sally A. Santen, MD, PhD#

*Alpert Medical School of Brown University, Department of Emergency Medicine, Providence, Rhode Island † University of Michigan, Department of Internal Medicine, Ann Arbor, Michigan ‡ SUNY Downstate/Kings County Hospital, Department of Emergency Medicine, Brooklyn, New York § Staten Island University Hospital, Northwell Health, Staten Island, New York ¶ Columbia University Medical College-NY Presbyterian Hospital, Department of Emergency Medicine, New York, New York || University of Michigan, Department of Pediatrics, Ann Arbor, Michigan # University of Michigan, Department of Learning Health Sciences, Ann Arbor, Michigan

Section Editor: Sorabh Khandelwal, MD Submission history: Submitted July 7, 2016; Revision received October 3, 2016; Accepted October 26, 2016 Electronically published November 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31483

It is important that residency programs identify trainees who progress appropriately, as well as identify residents who fail to achieve educational milestones as expected so they may be remediated. The process of remediation varies greatly across training programs, due in part to the lack of standardized definitions for good standing, remediation, probation, and termination. The purpose of this educational advancement is to propose a clear remediation framework including definitions, management processes, documentation expectations and appropriate notifications. Informal remediation is initiated when a resident’s performance is deficient in one or more of the outcomes-based milestones established by the Accreditation Council for Graduate Medical Education, but not significant enough to trigger formal remediation. Formal remediation occurs when deficiencies are significant enough to warrant formal documentation because informal remediation failed or because issues are substantial. The process includes documentation in the resident’s file and notification of the graduate medical education office; however, the documentation is not disclosed if the resident successfully remediates. Probation is initiated when a resident is unsuccessful in meeting the terms of formal remediation or if initial problems are significant enough to warrant immediate probation. The process is similar to formal remediation but also includes documentation extending to the final verification of training and employment letters. Termination involves other stakeholders and occurs when a resident is unsuccessful in meeting the terms of probation or if initial problems are significant enough to warrant immediate termination. [West J Emerg Med. 2017;18(1)110-113.]

BACKGROUND
Residency training ensures physicians develop the knowledge, skills, and attitudes required to practice medicine independently, and provides the foundation for professional growth.1 Recently, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties created the Milestones Project to provide

competency-based outcomes for trainees. Milestones serve many purposes in both graduate medical education and the accreditation process. Among them, milestones provide transparent expectations, support better longitudinal assessment of trainees, and enhance public accountability through aggregate reporting of competency by specialty.2 Residents achieve ACGME milestones at different




stages during training.3 Some residents require remediation (additional training, assistance or supervision) to meet expectations.2,4-7 The remediation continuum ranges from residents needing minimal guidance to those who cannot successfully complete training.8-10 The process of remediation, however, varies greatly across training programs, in part related to the inconsistency in definitions and procedures. Lack of standardized definitions for good standing, remediation, probation and termination creates challenges for program directors (PDs) and residents.11-16

OBJECTIVES
Establishing shared and consistent definitions of remediation processes will enable training programs to achieve the goals intended by the ACGME, hold the medical profession accountable, and further engender the public trust.1 Although categorizing specific resident deficiencies is beyond the scope of this paper, by incorporating the remediation practices from multiple specialties and identifying the common threads, we provide guidelines for the remediation process independent of the medical specialty. This paper proposes uniform definitions and processes for informal remediation, formal remediation, probation and termination. We examine classification definitions and triggers and elaborate on the documentation and notification requirements. Definitions of these four domains were identified through review of the literature. To achieve consensus and external content validity of our model, we conducted semi-structured interviews of seven PDs from different programs and five designated institutional officers (DIOs) or deans of graduate medical education (GME) from five institutions. Interviews began with open-ended questions to allow PDs, DIOs, and the GME office to describe their processes for remediation, including probation and termination. They were then asked to provide definitions for each of the four domains identified within our model. Themes were abstracted and compared with our definitions. While we noted minor variances regarding how programs or institutions applied different aspects of the remediation process, the central definitions and sequence followed were consistent with our model.

DESIGN
Informal remediation
Informal remediation represents the first step in the process and is initiated when warning signs of problems exist but problems are not significant enough to warrant immediate formal remediation.17 This stage serves as a critical opportunity to document the process if the resident fails to improve and there is an ultimate need to escalate the remediation. After surveying various PDs and GME officials in the authors’ own institutions, we found that some programs create official documentation in the resident’s official file; others use e-mail communication with the resident to document the informal remediation conversation;

“Confidential Notes” may be created to remain peer-review protected; other PDs use separate “shadow files,” which are disposed of once the resident course corrects over time. It is important to document the resident’s strengths, deficiencies, expectations for improvement, an observation period and progress during remediation. If the resident subsequently requires formal remediation, this initial documentation will serve as the official file. During informal remediation, the PD, resident and clinical competency committee (CCC) are engaged, but not the GME office (which consists of the DIO and/or the deans of GME). Provided the resident remediates, informal remediation is not disclosed in the final verification of training or employment letters. Formal remediation Formal remediation represents the next step in the process of managing residents with deficiencies. This stage should be implemented when the resident has failed to correct identified deficiencies during informal remediation, or problems are significant enough to warrant immediate formal remediation. The length of formal remediation is determined by the PD, often at the recommendation of the CCC, and should be well defined. First, the failed informal remediation process and the unresolved deficiencies should be documented to provide evidence that formal remediation is necessary. Next, an updated corrective action plan should be documented with expected outcomes, a time frame for reassessment, and potential consequences if the remediation is not successful. Program and/or institutional grievance and due process policies should be made available to the resident. The PD should provide the resident a formal letter to be signed by both parties to acknowledge receipt and understanding. This documentation should be maintained in the resident’s permanent file. The GME office should be notified that the resident has been placed on formal remediation. Some GME offices may want to review and contribute to the formal remediation letter or plan. In most cases, provided the resident successfully remediates the deficiency, formal remediation documentation is not disclosed in the resident’s final verification of training or employment letters. Probation Probation is initiated when a resident fails to correct deficiencies during formal remediation or if problems are significant enough to warrant immediate probation. Some programs prescribe a maximum of six months of formal remediation, after which the resident is placed on probation if identified deficiencies are not corrected. Further, if resident difficulties require extension of training, the resident may need to be placed on probation, depending on institutional guidelines. The time period for probation should be concrete and follow due process if there is consideration of non-



renewal of contract or termination. The process during probation is similar to formal remediation. The PD should place formal documentation into the resident’s file noting the status, expected outcomes, revised remediation plan, a time frame and consequences if the remediation during probation is not successful. Probation may include limitations on clinical responsibilities. Both the PD and the trainee sign the documentation to ensure receipt and understanding. Institutional grievance policies and due process policies must be given to the resident. The resident’s training responsibilities may need to be modified. The GME office must be involved in resident probation. In addition, the CCC, department chair, and faculty participating in the resident’s remediation should collaborate. Based on GME office guidance, the institution’s legal counsel might be involved to ensure due process. Probation is disclosed in the final verification of training, employment letters and letters of reference. If the resident fails to meet the requirements of probation, the program may choose to not renew the employment contract or to terminate the resident. The resident on probation should be informed that the contract will not be renewed for the following academic year, and PDs have the ability to rescind the non-renewal process should the resident demonstrate significant progress. Alternatively, the program may proceed with termination.

Termination
Termination occurs when a resident fails to meet the terms of probation or if initial problems are significant enough to warrant immediate termination. It is important to document how the resident failed to resolve the identified deficiencies during remediation and probation. The GME office, legal counsel and human resources are often involved in termination. If there is a house officer union, a representative may need to be involved. Termination disclosure is included in the final verification of training, employment letters, and in letters of reference.

IMPACT
There is significant variation among programs regarding definitions and processes of remediation, probation, and termination.16 We provide a consensus framework for defined triggers, associated documentation, and disclosure practices. If remediation is not adequately documented and a clear process is not followed, formal grievance processes can be hampered and their outcomes affected; this schema therefore includes a standardized component to avoid that pitfall. PDs are responsible for resident remediation and may be bound by requirements from the DIO, the GME office, the department, human resources, legal counsel or unions. It is important for PDs to work closely with key stakeholders, reach out early in the remediation process, and be aware of

Figure. Remediation schema for residents at risk of not meeting educational milestones during their training.

Warning/Informal Remediation
• Process: initiate after identifying a resident’s performance is deficient in one or more Milestones or core competencies
• Documentation: record placeholder information in the resident’s file
• Disclosure: warning/informal remediation not disclosed if the deficiency is corrected

Formal Remediation
• Process: initiate if the resident demonstrates a substantial deficiency, or fails to correct an identified deficiency in the designated observation period of informal remediation
• Documentation: record the failed informal remediation process, an updated corrective action plan with expected outcomes/consequences, and the time frame for resolution
• Disclosure: notify the GME* office in accordance with institutional guidelines; formal remediation is not necessarily disclosed if the deficiency is corrected

Probation
• Process: initiate if the resident demonstrates a substantial deficiency, or if the resident fails to correct the deficiency identified in the formal remediation stage
• Documentation: record the failed formal remediation process and update the expected outcomes, consequences, and time frame for resolution
• Disclosure: notify the GME office, include probation status in letters of recommendation and in the final verification of training

Termination
• Process: terminate the resident if a substantial deficiency warranting immediate removal from training is demonstrated, or if the resident fails to meet the terms outlined in probation
• Documentation: work with the GME office, human resources, and often legal counsel, to assure due process
• Disclosure: include termination status in letters of recommendation and in the final verification of training

*GME, graduate medical education



local policies. Although we provide clear lines of distinction at each remediation stage, the lines sometimes blur. This blurring results from insufficient documentation, lack of transparency, and poor communication. Therefore, creating clarity through good documentation and open communication is critical. This is an initial model to help clarify the definitions in the remediation process. Our remediation schema (Figure) will prove a valuable reference for PDs, providing clear instructions on how to navigate remediation and the documentation and disclosures that are required. This will help communication between residents and faculty, so trainees are aware of the process and consequences if their performance requires remediation. Ultimately, every program must ensure that it has well-defined guidelines to deal with issues of remediation, probation and termination. Next steps might be to collect further evidence of the model’s validity and utility.
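Programs that track remediation electronically may find it useful to encode the schema's stages and their documentation and disclosure rules explicitly. The short Python sketch below is one illustrative way to do so; the field names and simplified true/false rules are paraphrased from the Figure and are not part of any existing GME information system.

```python
# Illustrative encoding of the remediation schema in the Figure.
# Stage names, triggers, and disclosure rules are paraphrased from the text;
# booleans simplify wording such as "not necessarily disclosed".
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str
    trigger: str
    documentation: str
    notify_gme: bool
    disclosed_in_verification: bool

SCHEMA = [
    Stage("Informal remediation",
          "deficiency in one or more milestones or core competencies",
          "note strengths, deficiencies, expectations, and observation period in the file",
          notify_gme=False, disclosed_in_verification=False),
    Stage("Formal remediation",
          "substantial deficiency or failed informal remediation",
          "signed corrective action plan with outcomes, time frame, consequences",
          notify_gme=True, disclosed_in_verification=False),
    Stage("Probation",
          "substantial deficiency or failed formal remediation",
          "updated plan; grievance and due-process policies provided",
          notify_gme=True, disclosed_in_verification=True),
    Stage("Termination",
          "substantial deficiency or failed probation",
          "documentation with GME office, HR, and often legal counsel",
          notify_gme=True, disclosed_in_verification=True),
]

for stage in SCHEMA:
    print(f"{stage.name}: disclose in final verification = {stage.disclosed_in_verification}")
```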

Address for Correspondence: Jessica L. Smith, MD, Alpert Medical School of Brown University, Department of Emergency Medicine, 593 Eddy St. c/o Claverick, First Floor #121, Providence, RI 02903. Email: jessicasmithmd@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Smith et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements. Available at: https://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/CPRs2013.pdf. Accessed October 25, 2014.
2. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system - rationale and benefits. N Engl J Med. 2012;366(11):1051-1056.
3. Visconti A, Gaeta T, Cabezon M, et al. Focused Board Intervention (FBI): a remediation program for written board preparation and the medical knowledge core competency. J Grad Med Educ. 2013;5(3):464-467.
4. Bok HGJ, Teunissen PW, Favier RP, et al. Programmatic assessment of competency-based workplace learning: when theory meets practice. BMC Med Educ. 2013;13:123.
5. Carraccio C, Burke AE. Beyond competencies and milestones: adding meaning through context. J Grad Med Educ. 2010;2(3):419-422.
6. Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: developmental milestones for internal medicine residency training. J Grad Med Educ. 2009;1(1):5-20.
7. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84(3):301-309.
8. Dupras DM, Edson RS, Halvorsen AJ, et al. “Problem residents”: prevalence, problems and remediation in the era of core competencies. Am J Med. 2012;125(4):421-425.
9. Yao DC, Wright SM. The challenge of problem residents. J Gen Intern Med. 2001;16(7):486-492.
10. Steinert Y. The “problem” junior: whose problem is it? BMJ. 2008;336(7636):150-153.
11. Riebschleger MP, Haftel HM. Remediation in the context of the competencies: a survey of pediatrics residency program directors. J Grad Med Educ. 2013;5(1):60-63.
12. Guerrasio J, Brooks E, Rumack CM, et al. Association of characteristics, deficits, and outcomes of residents placed on probation at one institution, 2002-2012. Acad Med. 2015 (online ahead of print).
13. Guerrasio J, Garrity MJ, Aagaard EM. Learner deficits and academic outcomes of medical students, residents, fellows, and attending physicians referred to a remediation program, 2006-2012. Acad Med. 2014;89(2):352-358.
14. Hauer KE, Ciccone A, Henzel TR, et al. Remediation of the deficiencies of physicians across the continuum from medical school to practice: a thematic review of the literature. Acad Med. 2009;84(12):1822-1832.
15. Zbieranowski I, Takahashi SG, Verma S, et al. Remediation of residents in difficulty: a retrospective 10-year review of the experience of a postgraduate board of examiners. Acad Med. 2013;88(1):111-116.
16. Weizberg M, Smith JL, Murano T, et al. What does remediation and probation status mean? A survey of emergency medicine residency program directors. Acad Emerg Med. 2015;22(1):113-116.
17. Silverberg M, Weizberg M, Murano T, et al. What is the prevalence and success of remediation in emergency medicine residents? West J Emerg Med. 2014;16(6):839-844.



Educational Advances

Adapting Gel Wax into an Ultrasound-Guided Pericardiocentesis Model at Low Cost
Robert Daly, MD*; Jason H. Planas, MD†; Mary Ann Edens, MD†

*University of Florida Health Shands Hospital, Department of Emergency Medicine, Gainesville, Florida † LSU-HSC Shreveport, Emergency Medicine Department, Shreveport, Louisiana

Section Editor: David A. Wald, DO Submission history: Submitted July 8, 2016; Accepted October 27, 2016 Electronically published November 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31506

Cardiac tamponade is a life-threatening emergency for which pericardiocentesis may be required. Real-time bedside ultrasound has obviated the need for routine blind procedures in cardiac arrest, and the number of pericardiocenteses being performed has declined. Despite this fact, pericardiocentesis remains an essential skill in emergency medicine. While commercially available training models exist, cost, durability, and lack of anatomical landmarks limit their usefulness. We sought to create a pericardiocentesis model that is realistic, simple to build, reusable, and cost efficient. We constructed the model using a red dye-filled ping pong ball (simulating the right ventricle) and a 250cc normal saline bag (simulating the effusion) encased in an artificial rib cage and held in place by gel wax. The inner saline bag was connected to a 1L saline bag outside of the main assembly to act as a fluid reservoir for repeat uses. The entire construction process takes approximately 16-20 hours, most of which is attributed to cooling of the gel wax. Actual construction time is approximately four hours at a cost of less than $200. The model was introduced to emergency medicine residents and medical students during a procedure simulation lab and compared to a model previously described by dell’Orto.1 The learners performed ultrasound-guided pericardiocentesis using both models. Learners who completed a survey comparing realism of the two models felt our model was more realistic than the previously described model. On a scale of 1-9, with 9 being very realistic, the previous model was rated a 4.5. Our model was rated a 7.8. There was also a marked improvement in the perceived recognition of the pericardium, the heart, and the pericardial sac. Additionally, 100% of the students were successful at performing the procedure using our model. In simulation, our model provided both palpable and ultrasound landmarks and held up to several months of repeated use. It was less expensive than commercial models ($200 vs up to $16,500) while being more realistic in simulation than other described “do-it-yourself models.” This model can be easily replicated to teach the necessary skill of pericardiocentesis. [West J Emerg Med. 2017;18(1)114-116.]

BACKGROUND
Cardiac tamponade is a life-threatening emergency in which pericardiocentesis may be required. Real-time bedside ultrasound (US) has obviated the need for routine blind procedures in cardiac arrest, and the number of pericardiocenteses being performed has declined. Despite this fact, pericardiocentesis remains an essential skill in emergency medicine that can be performed with a high degree of success.2 While commercially available training models exist, cost,3 durability, and lack of anatomical landmarks limit their usefulness. Cheaper, do-it-yourself (DIY) models have been described in the literature. Dell’Orto described one in 2013,1

in which a tennis ball was placed in a fluid-filled balloon, set on a layer of gel wax in a square container, and then submersed in US gel. This model was easy to build but lacked realism, durability, and cleanliness during use. OBJECTIVE Although low cost, simple, and reusable DIY models have been described,1 we sought to create a model that retains those qualities while being more realistic. CURRICULAR DESIGN The model used a red dye-filled ping pong ball




(simulating the right ventricle) and a 250cc normal saline (NS) bag (simulating the effusion) encased in an artificial rib cage, held in place by a gel wax/flour solution. The inner saline bag was connected to a 1L saline bag outside of the main assembly to act as a fluid reservoir for repeat uses. The model was mounted loosely to a piece of plywood and covered with TheraBand™ latex exercise resistance bands, which proved to be an excellent skin analog. The materials listed make the cost of the assembly <$200 (axial skeleton from Amazon $85, gel wax $35, molding bucket $8, ping pong ball $1, NS bag $6, infusion tubing $14, stopcock $3, cyanoacrylate glue $3, flour $2, TheraBand $30; total ~$195).
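As a quick check on the itemized costs above, the short sketch below simply totals them; the line items and prices are the ones quoted in the text, and the sum comes in under the $200 target.

```python
# Itemized material costs quoted above; summing them confirms the
# assembly stays under the $200 target cited in the text.
materials = {
    "axial skeleton (Amazon)": 85,
    "gel wax": 35,
    "molding bucket": 8,
    "ping pong ball": 1,
    "NS bag": 6,
    "infusion tubing": 14,
    "stopcock": 3,
    "cyanoacrylate glue": 3,
    "flour": 2,
    "TheraBand": 30,
}

total = sum(materials.values())
print(f"total materials cost: ${total}")  # $187 with these figures
```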

Complete instructions for construction are outlined below.

Construction
1. Prep the artificial rib cage. Remove the posterior portion of the rib cage at the mid-axillary line. This can be done with heavy-duty scissors or trauma shears as seen in Figure 1. The prepped rib cage was then placed into a 20 x 14 x 8 inch container, sternum down.

Figure 1. Prepping an artificial rib cage for the first step in creating an inexpensive simulation alternative for teaching pericardiocentesis.

2. Prepare the internal plumbing.
a. Fill a ping pong ball with red-dyed water via 18g needle. A small second hole next to the injection hole will allow air to escape. Once the ball is as fluid filled and air free as possible, seal the hole with cyanoacrylate glue.
b. Spike a 250cc bag of saline with accessory IV tubing. Inject the bag with 1cc food coloring and mix. Attach a three-way stopcock to the tubing and flush the tubing of air. Manipulate the 250cc bag when flushing the line to ensure that as much air is removed from the bag as possible.
3. Prepare the gel-wax solution. Melt one gallon of gel wax (available at most craft stores) over low-medium heat. Once melted, mix in three tablespoons of flour until dissolved. The flour should be added very slowly to avoid boil-over. Once dissolved, strain away surface foam and discard.
4. Pour the wax to create the mold. This is done in three steps.
a. The first pour: Slowly pour the melted wax solution over the sternum down the rib cage. Fill the container until the wax is just high enough to cover the exposed sternum. Allow mold to cool for 20 minutes.
b. The second pour: First place the 250cc bag on the model in the anatomical location of the anterior portion of the pericardial sac. The IV tubing was allowed to drape over the cooled wax and extend out of the container. The ping pong ball was then placed on the 250cc bag in the position of the right ventricle. The wax mixture was re-heated and returned to liquid form. A wooden spoon or stick was used to apply slight pressure to the ping pong ball to hold it in place and maintain safety while the mixture was poured to cover ~1/2 of the ping pong ball. Now allow the mold to cool for 20 min (Figure 2).

Figure 2. Melted gel wax is used to fill the artificial sternum, covering half of a ping pong ball that simulates the right ventricle.

c. The third pour: Re-heat the wax solution again, and then pour over the mold until the ribs are submerged. Allow to cool for one hour.
5. Model completion. Invert the container holding the model and remove the container, leaving just the mold as seen (Figure 3). Mount this on plywood, apply a small layer of US gel to the mold, and then cover with the TheraBand skin analog. Markers can then be used to draw nipples and the costal borders (Figure 4).

Figure 3. The mold for pericardiocentesis simulation model.

Figure 4. Adding finishing touches to torso model, including a TheraBand skin analog with drawn-on nipples.

Figure 5. Residents performed ultrasound-guided pericardiocentesis using a model constructed of inexpensive materials.

The dell’Orto model was also constructed per the directions outlined in their paper.1 Twenty-three learners, comprising 20 EM residents and three medical students, used and rated both models with a four-question survey. The questions rated from 1 (not well) to 9 (very well) the realism of the models, as well as the ease of recognition of the pericardium, heart, and pericardial effusion.

IMPACT/EFFECTIVENESS
The model was introduced to EM residents (n=20) and medical students (n=3) during a procedure simulation lab and compared to a model previously described by dell’Orto. The learners performed US-guided pericardiocentesis using both models (US demonstration of our model seen in Figure 5). Learners were given a survey comparing realism of the two models and rated ours 7.8/9 vs 4.5/9 for the previously described model. The survey also showed perceived improvement in the recognition of important structures: pericardium (5.7/9 to 8/9), the heart (5.8/9 to 8.1/9), and the pericardial sac (6.2/9 to 8.4/9). The model performed well for repeated uses over one year. Once the model begins to lose functionality due to multiple needle punctures through the wax and internal plumbing, the wax can be pulled off and re-melted. This limits subsequent reproduction costs to just the replacement of the internal plumbing.

LIMITATIONS
This model was tested with a small number of residents and medical students, limiting statistical power for results.

CONCLUSION
In simulation, this model provided both palpable and ultrasound landmarks and held up to several months of repeated use. It was less expensive than commercial models ($200 vs $16,500) while being more realistic in simulation than other described “DIY” models. This model can be replicated to teach the necessary skill of pericardiocentesis.

Address for Correspondence: Jason H. Planas, MD, LSU-HSC Health Shreveport, Department of Emergency Medicine, 1501 Kings Highway, P.O. Box 33932, Shreveport, Louisiana 71130-3932. Email: jasonplanas@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Daly et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Dell’Orto M, Hempel D, Starzetz A, et al. Assessment of a low-cost ultrasound pericardiocentesis model. Emerg Med Int. 2013;2013:376415.
2. Callahan JA, Seward JB. Pericardiocentesis guided by two-dimensional echocardiography. Echocardiography. 1997;14:497-504.
3. Transthoracic Echocardiography and Pericardiocentesis Ultrasound Training Model. CAE Healthcare. Available at: http://www.bluephantom.com/product/Transthoracic-Echocardiography-andPericardiocentesis-Ultrasound-Training-Model.aspx?cid=411.



Educational Advances

Novel Airway Training Tool that Simulates Vomiting: Suction-Assisted Laryngoscopy Assisted Decontamination (SALAD) System
James DuCanto, MD*; Karen D. Serrano, MD†; Ryan J. Thompson, MD‡

*Aurora St. Luke’s Medical Center, Department of Anesthesiology, Milwaukee, Wisconsin † University of North Carolina, Department of Emergency Medicine, Chapel Hill, North Carolina ‡ University of Wisconsin, Department of Emergency Medicine, Madison, Wisconsin

Section Editor: Jeffrey Love, MD Submission history: Submitted May 15, 2016; Revision received August 29, 2016; Accepted September 18 Electronically published November 8, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.9.30891

Introduction: We present a novel airway simulation tool that recreates the dynamic challenges associated with emergency airways. The Suction-Assisted Laryngoscopy Assisted Decontamination (SALAD) simulation system trains providers to use suction to manage emesis and bleeding complicating intubation.

Methods: We modified a standard difficult-airway mannequin head (Nasco, Ft. Atkinson, WI) with hardware-store equipment to enable simulation of vomiting or hemorrhage during intubation. A pre- and post-survey was used to assess the effectiveness of the SALAD simulator. We used a 1-5 Likert scale to assess confidence in managing the airway of a vomiting patient and comfort with suction techniques before and after the training exercise.

Results: Forty learners participated in the simulation, including emergency physicians, anesthesiologists, paramedics, respiratory therapists, and registered nurses. The average Likert score of confidence in managing the airway of a vomiting or hemorrhaging patient pre-session was 3.10±0.49, and post-session 4.13±0.22. The average score of self-perceived skill with suction techniques in the airway scenario pre-session was 3.30±0.43, and post-session 4.03±0.26. The average score for usefulness of the session was 4.68±0.15, and the score for realism of the simulator was 4.65±0.17.

Conclusion: A training session with the SALAD simulator improved trainees’ confidence in managing the airway of a vomiting or hemorrhaging patient. The SALAD simulation system recreates the dynamic challenges associated with emergency airways and holds promise as an airway training tool. [West J Emerg Med. 2017;18(1)117-120.]

INTRODUCTION Emergency airway management is a critical skill in emergency medicine. Traditional training in airway management relies on use of airway mannequins and intubations in the controlled setting of the operating room in fasting, preoxygenated patients.1,2 Neither of these methods duplicates the dynamic, challenging conditions surrounding emergency airways, including actively vomiting patients and


those with blood and secretions contaminating the glottic view.3 Blood and vomitus in the airway have been identified as predictors of difficult intubation.4-7 A training model that could simulate the challenges of an actively vomiting patient or a bloody airway would be ideal to prepare trainees to face these situations in real clinical practice. Here we present a novel airway training tool that simulates the airway of a vomiting patient.




The SALAD Simulation System The Suction-Assisted Laryngoscopy Assisted Decontamination (SALAD) simulation system pushes the boundaries of traditional mannequin-based simulations to present the trainee with the experience of using suction to control emesis and/or bloody secretions during an airway management scenario. An airway mannequin is adapted using simple hardware-store equipment to allow pumping of simulated vomit (simulated airway contaminant, or SAC) into the airway. Trainees are presented with two airway scenarios, one in which they must clear a static pool of vomit contaminating the glottic view, and one in which they must contend with continuous flow rates of SAC to suction the glottis and pass an endotracheal tube. This model has been pioneered among various trainee groups, including physicians, medical students, paramedics, nurses, and respiratory therapists. The objective of this study was to pilot an innovative airway management simulator and demonstrate learner satisfaction and self-reported comfort with difficult airways.

Table 1. Components and approximate associated costs of the Suction-Assisted Laryngoscopy-Assisted Decontamination (SALAD) simulation system.

SALAD component                         Price
Nasco airway head                       $895
Vinyl tubing                            $19
Quick connect hose kit x 2              $6
Drill pump                              $12
Corded electric variable speed drill    $20
Remote control switch                   $15
Rheostat                                $10
5 gallon reservoir                      $10
Total simulator cost                    $987
1 gallon white vinegar                  $3
8 oz xanthan gum                        $10
Total SAC cost                          $13
Total cost                              $1,000
SAC, simulated airway contaminant

METHODS
Institutional review board exemption was sought and granted. The Suction-Assisted Laryngoscopy Assisted Decontamination (SALAD) simulation mannequin was built from commercially available materials. We modified a standard difficult airway mannequin head (Nasco, Ft. Atkinson, WI) to enable simulation of vomiting or hemorrhage during intubation. The modifications involved fitting clear vinyl 5/8 inch I.D. x 7/8 inch O.D. (1/8 inch wall) tubing to the existing esophagus of the mannequin, and using clear acrylic glue to secure this tubing. Quick connect hose parts were used to link the esophagus to a self-priming drill-powered fluid pump, which was connected via vinyl tubing to a large plastic reservoir that contained the SAC. The flow of SAC is controlled using a variable rheostat, which the drill is plugged into. A simple on/off switch mechanism with wireless radio control permits the instructor to control the timing and flow of SAC that the trainee must clear from the oropharynx. We created the SAC by mixing white vinegar with xanthan gum powder, in a ratio of 10 ml of xanthan gum powder to 1L of white vinegar. Food coloring, either red or green, is added to the mixture to simulate either vomit or hematemesis. If a different consistency of vomit is desired, more xanthan gum powder could be added for thicker vomit, and less for thinner vomit. For the purposes of the study, we kept the mixture consistent. Vinegar is used to add an olfactory component to the vomit and also to help prevent the growth of mold in the system. Table 1 lists components of the SALAD simulator system and approximate associated costs. Learners were run through two scenarios: one in which they must decontaminate a static pool of vomit in the airway prior to intubation, and one in which there is continuous vomiting that must be actively suctioned during the intubation. Students used a video-assisted laryngoscopy device (C-MAC, Karl Storz, Tuttlingen, Germany) during both intubations. The C-MAC was chosen because it allows the instructor to view the oropharynx on the video screen and provide feedback to the learner. We used a pre- and post-session survey to collect information on learner perceptions of confidence in managing the airway in a vomiting or hemorrhaging patient on a 1-5 Likert scale, with 1 being “not at all” and 5 being “extremely.” Self-perception of skill in using suction devices and techniques during the management of emergent airways was assessed on a similar 1-5 Likert scale. We also collected learners’ prior experience using simulation to learn airway management, and their prior experience using simulation to learn airway management in a vomiting or hemorrhaging patient. Learner perception of realism of the simulator and usefulness of the session was also assessed using a 1-5 Likert scale after the session.

RESULTS
Forty learners participated in the simulation, including six paramedics, five respiratory therapists, six registered nurses, seven certified registered nurse anesthetists, one nurse practitioner, six emergency physicians, seven anesthesiologists, and two medical students. Thirty-four (85%) had used simulation in the past to learn airway management skills, but only one (2.5%) had used simulation to learn airway management in a vomiting or hemorrhaging patient. The average Likert score of confidence in managing the airway of a vomiting or hemorrhaging patient pre-session was 3.10±0.49, and the post-session score was 4.13±0.22.
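The pre/post summaries reported here (and in Table 2 below) are simple means with a dispersion term for each Likert item; the paper does not state whether the ± values are standard deviations or confidence intervals. A minimal sketch of the calculation, assuming standard deviations and using invented responses since the raw survey data are not published, is shown below.

```python
# Sketch: computing mean +/- SD for pre- and post-session Likert items.
# The responses below are invented for illustration; the paper reports only
# summary values and does not state whether +/- denotes SD or a CI.
from statistics import mean, stdev

pre_confidence = [3, 3, 4, 2, 3, 4, 3, 3]    # hypothetical 1-5 responses
post_confidence = [4, 4, 5, 4, 4, 5, 4, 4]

for label, scores in [("pre", pre_confidence), ("post", post_confidence)]:
    print(f"{label}: {mean(scores):.2f} +/- {stdev(scores):.2f} (n={len(scores)})")
```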


Table 2. Pre- and post-survey results regarding simulation training system for difficult airways. Values are mean Likert scores (1-5).
Pre-course
  I am confident in my ability to manage the airway of a vomiting or hemorrhaging patient: 3.10±0.49
  I am skilled with various suction devices and techniques during the emergent airway: 3.30±0.43
Post-course
  I am confident in my ability to manage the airway of a vomiting or hemorrhaging patient: 4.13±0.22
  I am skilled with various suction devices and techniques during the emergent airway: 4.03±0.26
  I plan to apply the SALAD technique in the future with vomiting patients: 4.53±0.19
  How useful was this session for you? 4.68±0.15
  Was the simulator sufficiently realistic to challenge your skills? 4.65±0.17
SALAD, Suction-Assisted Laryngoscopy Assisted Decontamination.

The average pre-session score of self-perceived skill with suction devices and techniques in the emergent airway was 3.30±0.43, and the post-session score was 4.03±0.26 (Table 2). The average score for usefulness of the session was 4.68±0.15, and the score for realism of the simulator was 4.65±0.17.

DISCUSSION
Blood, secretions, and active vomiting have all been identified as predictors of difficult intubation.8,9,10 Current airway training models use traditional airway mannequins and intubations in the controlled setting of the operating room. Trainees are then expected to apply these basic airway skills in the more complicated, real-life airway emergencies in which emesis, blood, and secretions contaminate the glottic view. These true airway emergencies occur relatively infrequently in clinical practice, so even seasoned providers often do not feel comfortable in these scenarios, adding stress to the already challenging care of a critically ill patient.11,12,13 We believe the SALAD system adds value to traditional airway teaching models by providing learners with unlimited opportunity to master the most challenging of airway skills. While our study did not evaluate retention of skill or real-world clinical outcomes, prior research suggests that simulation is an excellent method to teach procedural competence. Simulation has been shown to be superior to non-simulation-based methods of instruction in skill acquisition and retention,14 and to generate a stress response in learners similar to that of real-world resuscitations,15 preparing learners to perform in high-stress situations. Retention rates of complex procedural skills after simulation training are also high,13 and simulation-based airway management training has been shown to improve clinical metrics such as first-pass success.16

The SALAD system teaches a complex set of tasks required to manage an airway contaminated with vomit or secretions. The trainee, upon opening the mannequin's mouth


and inserting the laryngoscope blade, will see the oropharynx filling rapidly with simulated vomit. Students must learn to grip the suction catheter, clear the airway of vomit, visualize the glottic structures, and pass the endotracheal tube. In the scenario with continuous vomiting, we instruct learners to position the suction catheter directly into the esophagus after clearing the glottic field to prevent additional contamination of the airway. This requires using the non-dominant forearm to keep the suction catheter lodged in position while the non-dominant hand holds the laryngoscope blade and the dominant hand manages the endotracheal tube. This manual dexterity can be quickly learned in the training sessions.

The SALAD simulation training system also allows monitoring of the learners' progress. Skill acquisition can be easily measured and documented as students master endotracheal tube placement while contending with low flow rates of simulated vomiting and must then demonstrate these same skills at higher flow rates. Residency and fellowship training programs can track the progression of their learners, and this can be correlated with airway milestone acquisition per Accreditation Council for Graduate Medical Education requirements.17

LIMITATIONS
The primary limitation of this study is that the outcome measure was self-reported confidence with managing the airway of a vomiting patient. Additional research is needed to evaluate whether this subjective outcome translates to improved patient-oriented outcomes, such as time to intubation or first-pass intubation success in a vomiting patient. The data show a highly statistically significant increase in self-reported confidence for the airway management of a vomiting patient. However, the post-test was taken immediately after the training, and the possibility of skill decay is real; the duration of this improved confidence level is unknown. Additionally, this study is limited by a


relatively small number of participants. Furthermore, the training exercise was multidisciplinary in nature, including emergency physicians as well as medical students, nurses, respiratory therapists, and nurse anesthetists. A minority of participants were emergency physicians, the providers arguably most likely to encounter the difficult airway in clinical practice.

CONCLUSION
In summary, we feel the SALAD simulation system holds promise as an educational tool to provide experience in managing difficult airways. Participants' self-reported confidence in managing the airway of a vomiting patient improved with the training session, and trainees shared anecdotal reports that the training session helped them in a subsequent clinical encounter. Further research is needed to evaluate whether training with the SALAD simulator improves patient-related outcomes in the management of emergency airways.

Address for Correspondence: Ryan J. Thompson, MD, University of Wisconsin, Department of Emergency Medicine, 800 University Bay Drive, Suite 310, Madison, WI 53705. Email: RThompson@medicine.wisc.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 DuCanto et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Gott L, Randel G, Straker T, et al. A survey of airway training among US and Canadian anesthesiology residency programs. J Clin Anesth. 2001;23(1):15-26.
2. Rai M and Popat M. Evaluation of airway equipment: Man or manikin? Anaesthesia. 2011;66(1):1-3.
3. Prekker M, Kwok H, Shin J, et al. The process of prehospital airway management: challenges and solutions during paramedic endotracheal intubation. Crit Care Med. 2014;42:1372-8.
4. Combes X, Jabre P, Jbeili C, et al. Prehospital standardization of medical airway management: Incidence and risk factors of difficult airway. Acad Emerg Med. 2006;13:828-34.
5. Gaither J, Spaite D, Stolz U, et al. Prevalence of difficult airway predictors in cases of failed prehospital endotracheal intubation. J Emerg Med. 2014;47:294-300.
6. Mosier J, Stolz U, Chiu S, et al. Difficult airway management in the emergency department: GlideScope videolaryngoscopy compared to direct laryngoscopy. J Emerg Med. 2012;42:629-34.
7. Burns B, Habig K, Eason H, et al. Difficult intubation factors in prehospital rapid sequence intubation by an Australian helicopter emergency medical service. Air Med J. 2016;35:28-32.
8. Gaither J, Spaite D, Stolz U, et al. Prevalence of difficult airway predictors in cases of failed prehospital endotracheal intubation. J Emerg Med. 2014;47:294-300.
9. Combes X, Jabre P, Jbeili C, et al. Prehospital standardization of medical airway management: Incidence and risk factors of difficult airway. Acad Emerg Med. 2006;13:828-34.
10. Burns B, Habig K, Eason H, et al. Difficult intubation factors in prehospital rapid sequence intubation by an Australian helicopter emergency medical service. Air Med J. 2016;35:28-32.
11. Crosby E, Cooper R, Douglas M, et al. The unanticipated difficult airway with recommendations for management. Can J Anaesth. 1998;45:757-76.
12. Mort T. Emergency tracheal intubation: Complications associated with repeated laryngoscopic attempts. Anesth Analg. 2004;99(2):607-13.
13. Sakles J, Laurin E, Rantapaa A, et al. Airway management in the emergency department: A one-year study of 610 tracheal intubations. Ann Emerg Med. 1998;31:325-32.
14. Kennedy C, Cannon E, Warner D, et al. Advanced airway management simulation in medical education: A systematic review and meta-analysis. Crit Care Med. 2014;42(1):169-78.
15. Daglius D and Scalabrini N. Stress levels during emergency care: A comparison between reality and simulated scenarios. J Crit Care. 2016;33:8-13.
16. Mosier J, Malo J, Sakles J, et al. The impact of a comprehensive airway management training program for pulmonary and critical care medicine fellows: A three-year experience. Ann Am Thorac Soc. 2014;12:539-48.
17. Accreditation Council for Graduate Medical Education and American Board of Emergency Medicine. The Emergency Medicine Milestone Project. Available at: https://www.acgme.org/acgmeweb/Portals/0/PDFs/Milestones/EmergencyMedicineMilestones.pdf. Accessed March 7, 2016.


Educational Advances

Creation of a Realistic Model for Removal of a Metallic Corneal Foreign Body for Less than $75

Julie S. Sayegh, MD, MA.Ed*†
Sari Lahham, MD, MBA*†
Logan Woodhouse, BS*
Jenny Seong, BS*
C. Eric McCoy, MD, MPH*†

* University of California, Irvine, School of Medicine, Irvine, California † University of California, Irvine, UC Irvine Medical Center, Orange, California

Section Editor: Andrew W. Phillips, MD, MEd Submission history: Submitted August 27, 2016; Accepted October 27, 2016 Electronically published December 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.32234

[West J Emerg Med. 2017;18(1)121-125.]

BACKGROUND
Metallic corneal foreign bodies (MCFB) are one of the most common causes of ocular injury presenting to the emergency department.1-4 Patients are at risk of developing tissue necrosis, infection, and even vision loss if the foreign body is not removed in a timely manner.1-3,5,6 Traditionally, these foreign bodies are removed under slit-lamp examination using a sterile, large-gauge needle, followed by an electric burr to remove the subsequent rust ring.1,3,5-7 Forceful attempts to remove the MCFB can lead to perforation of the anterior chamber, corneal scarring, and worsening of vision.1,6,7 Healthcare providers must use proper technique in removing ocular foreign bodies to prevent these complications.1,6,7

Several studies have shown that simulation improves procedural confidence and skill in MCFB removal.4,6,7 Models made from bovine eyes, agar plates, gelatin, and corneas created from glass and paraffin wax have previously been described; however, the use of corneas created from ballistics gel for MCFB removal has not been studied.4,6,7 We propose a realistic, sustainable, cost-effective MCFB task-trainer to introduce the fundamental skills required for MCFB removal. This task-trainer also helps teach the kinesthetics required for measurement of intraocular pressure (IP) with a Tono-Pen. In a brief PubMed search using the terms Tono-Pen task-trainer, IP task-trainer, ocular glaucoma task-trainer, and IP simulator, only one study, which used a rubber glove filled with water, was found to simulate this task.8

OBJECTIVE
The objective of this article is to provide educators with an easy-to-follow, step-by-step recipe to create a realistic, sustainable, cost-effective MCFB and Tono-Pen task-trainer to train healthcare providers who will be responsible for patients


requiring these procedures. Creation of this task-trainer takes approximately 90 minutes and costs less than $75. The task-trainer comprises three major components: the head, the eyeball, and the cornea.

CURRICULAR DESIGN
Items Needed
Head
12-inch Styrofoam mannequin head: $7-13 (amazon.com)
Liquid latex: 4 oz. bottle, $6-10 (ebay.com)
11-blade scalpel/small, sharp knife: $1 (Dollar Tree/99 Cents Only Store)

Molds for Eyeball and Cornea
100% clear silicone tube: $3-6 (Home Depot)
1-pt paint thinner: $3-4 (Home Depot)
Baby oil: $2-4 (Walmart)
1 box of cornstarch: $1 (This item and items below: Dollar Tree/99 Cents Only Store)
Small ¾-inch plastic balls: $1
Bag of small, round 0.5-inch marbles: $1
Jar of petroleum jelly: $1
Disposable plastic cups: $1
Disposable plastic spoons: $1
Disposable latex gloves: $1

Cornea
1-lb block of ballistics gel: $14 (http://store.clearballistics.com)
0.5-inch circle hole punch: $5 (amazon.com, brand: EK tools)
Microwave-safe glass cup: $1 (This item and item below: Dollar Tree/99 Cents Only Store)
Instant glue: $1


Metallic Foreign Body
Paper clips: $1/packet (This item and items below: Dollar Tree/99 Cents Only Store)
Tweezers: $1
Wire cutters: $1
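As a rough sanity check on the sub-$75 budget, the itemized price ranges above can be totaled. The sketch below is illustrative only; item names are paraphrased from the list, and fixed prices are treated as a range of width zero.

```python
# Illustrative sketch only: total the itemized cost ranges (low, high) in USD.
items = {
    "Styrofoam mannequin head": (7, 13), "Liquid latex": (6, 10), "Scalpel/knife": (1, 1),
    "Clear silicone": (3, 6), "Paint thinner": (3, 4), "Baby oil": (2, 4),
    "Cornstarch": (1, 1), "Plastic balls": (1, 1), "Marbles": (1, 1),
    "Petroleum jelly": (1, 1), "Plastic cups": (1, 1), "Plastic spoons": (1, 1),
    "Latex gloves": (1, 1), "Ballistics gel": (14, 14), "Hole punch": (5, 5),
    "Glass cup": (1, 1), "Instant glue": (1, 1), "Paper clips": (1, 1),
    "Tweezers": (1, 1), "Wire cutters": (1, 1),
}
low = sum(lo for lo, _ in items.values())
high = sum(hi for _, hi in items.values())
print(f"Estimated materials cost: ${low}-${high}")  # roughly $53-$69, under the $75 budget
```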

Assembly Instructions
Begin creation of the task-trainer by preparing the head first so that ample time can be allowed for it to dry.

The Head
1. Take the 12-inch Styrofoam mannequin head and, using an 11-blade scalpel or sharp knife, cut out the eyes to create approximately one-inch-deep eye sockets.
2. Paint the Styrofoam head with a thin coating of flesh-colored liquid latex, and set aside to dry.

After preparing the head, begin making the molds for the eyeball and cornea. Make sure to wear gloves prior to handling the silicone mixture. Also, ensure the work is done in a well-ventilated space, as both silicone and paint thinner release a noxious odor. (See material safety data sheets [MSDS] for safety and handling information.)9,10

The Molds
Eyeball Mold
1. In a disposable plastic cup, squeeze ¼ cup of 100% clear silicone.
2. Add ¼ cup of cornstarch.
3. Add 5-6 drops of baby oil to the silicone/cornstarch mixture. Combine thoroughly using a disposable plastic spoon. The final mixture should have the consistency of dough.
4. Using gloved hands, remove the mixture from the plastic cup and form it into a ball.
5. Press a ¾-inch plastic ball halfway into the mold, making sure not to press the ball through the mold (Image 1A).
6. Allow the mold to cure for 20 minutes until firm.
7. After curing, remove the plastic ball from the mold.
8. Using a cleanly gloved finger, coat the impression of the ¾-inch ball with a thin layer of petroleum jelly. (This functions as a releasing agent for the material that will be placed inside to create the eyeball.)

Cornea Mold
1. Squeeze ¼ cup of 100% clear silicone into a disposable plastic cup.
2. Add ¼ cup of cornstarch.
3. Add 5-6 drops of baby oil to the silicone/cornstarch mixture, and combine thoroughly using a disposable plastic spoon. The final mixture should have the consistency of dough.
4. Using gloved hands, remove the mixture from the plastic cup and press it into a ¼-inch-thick rectangle that is approximately 2 inches wide x 3 inches long. (Size may vary slightly depending on the amount of silicone/cornstarch mixture.)
5. Press six marbles, evenly spaced, halfway into the mold, ensuring not to press them through the mold.
6. Allow the mold to cure for 20 minutes until firm.
7. After curing, remove the marbles from the mold (Image 1B).
8. Using a cleanly gloved finger, coat the impressions from the marbles with a thin layer of petroleum jelly to act as a releasing agent for the cornea material.

The Eyeball
1. Squeeze ¼ cup of 100% clear silicone into a disposable plastic cup.

Image 1. Mold for creating the eyeball, and finished eyeball with cornea, 1A; and mold for creating the cornea, and completed cornea with iris, 1B.


2. Add ¼ cup of cornstarch.
3. Rather than adding baby oil, add 2 tablespoons of paint thinner to the silicone/cornstarch mixture and mix thoroughly using a disposable plastic spoon. The paint thinner will soften the silicone, creating a more pliable and life-like feel to the eye. (Caution: this step releases a noxious smell; see MSDS for additional safety information.)10 The final mixture should have the consistency of pudding.
4. Fill the petroleum-lined eyeball mold with the eyeball mixture using the plastic spoon. Make sure to have gloves on during this step if not already wearing them.
5. Allow the eyeball mixture to cure for 15-20 minutes. (Once cured, the eyeball should feel firm. If not, allow it to cure for an additional 5-10 minutes.)
6. After the curing process, apply lateral pressure to the eyeball to remove it from the mold.
7. Insert the eyeball into the hollowed-out eye socket on the Styrofoam mannequin head (Image 2A).
8. Repeat steps 1-7 to create the second eyeball.

The Cornea
1. To create the iris for the cornea, obtain the desired image of an iris from the Internet.
2. Resize the iris image to 0.5 inch using a printable document program. Paste several images of the iris on the same document and print it, preferably in color for a more life-like eye.
3. Using the 0.5-inch circular hole punch, cut out the irises and place them aside (Image 1A).
4. Using the sharp knife or scalpel, cut a 1-inch cube of the clear ballistics gel and place it into a microwave-safe glass cup.
5. Melt the ballistics gel in the microwave, using 2-4 minute intervals until fully melted. (Do not stir or mix the ballistics gel until fully melted, as this will introduce bubbles into the gel and result in an unclear cornea.)
6. Add 1 tablespoon of paint thinner to the melted ballistics gel and, using a plastic spoon, stir the solution slowly until thoroughly mixed, taking caution not to introduce bubbles into the mixture. (Caution: this step releases volatile gas, so perform it in a well-ventilated area.)10
7. Slowly pour the mixture into the petroleum-lined cornea molds to prevent formation of bubbles.
8. Place a cut-out iris (from steps 1-3) over the top of each cornea, ensuring that the colored image side is downward. The paper will adhere to the ballistics gel, creating a cornea with its respective iris.
9. Allow the corneas to harden for about five minutes.
10. Using a finger, apply gentle lateral pressure to each cornea to remove it from the mold (Image 1B).
11. Using instant glue, secure the cornea to the center of the eyeball that was placed inside the mannequin head (Image 2B). Take caution not to press the cornea too firmly, as this will cause a deformity.

The Metallic Foreign Body
Using wire cutters, cut a paper clip at a diagonal angle into 2-mm pieces to create a sharp MCFB.
1. Use the tweezers to embed the small metallic foreign body into the cornea at the desired location (Image 2B).
2. After assembly, an 18-gauge needle attached to a syringe can be used to teach and practice the technique required to remove a MCFB, and to measure IP using a Tono-Pen.

Image 2. Completed task-trainer once assembled, A; and close-up image of the eye with embedded metallic corneal foreign body, B.

A realistic click can be felt as the needle scrapes the MCFB. Due to the properties of the ballistics gel, the area around the removed MCFB will re-seal, and the cornea can be reused multiple times as long as the needle tip does not destroy the gel. Once the task-trainer is created, only the corneas will need to be remade as needed, which should take only 15-20 minutes. The shelf life of unused corneas can be extended by several months when they are stored covered in the refrigerator, as the ballistics gel will dry out with time. When using the Tono-Pen to measure IP, the pliability of the ballistics gel provides a realistic feel to the cornea and leads to an accurate pressure reading of less than 20 mmHg. For increased realism, the mannequin head can be secured to a slit lamp using an IV tourniquet or strap. Additional props for realism, such as eyelashes, eyebrows, or a wig, can also be applied.


IMPACT/EFFECTIVENESS
We conducted an informal pilot study with 14 learners, six medical students (MS) and eight emergency medicine residents, to assess the usefulness of this trainer. The learners were given surveys to determine their level of comfort performing MCFB removal and IP measurement with a Tono-Pen prior to, and after, using the task-trainer. None of the MS group felt comfortable performing MCFB removal or using the Tono-Pen prior to using the task-trainer. After use, all (100%) felt at least somewhat comfortable removing a MCFB and using the Tono-Pen (Figures 1 & 2). Only 25% of the first-year residents (R1s) felt some degree of comfort removing a MCFB; however, after using the task-trainer, all felt some degree of comfort performing this procedure (Figure 1). For Tono-Pen use, 25% of R1s initially felt very comfortable using the Tono-Pen, with an increase to 75% after using the task-trainer (Figure 2).

All second-year residents (R2s) felt somewhat comfortable performing MCFB removal, and after using the task-trainer, all felt very comfortable performing this procedure (Figure 1). There was no change noted after using the Tono-Pen task-trainer, as all R2s initially felt very comfortable handling the Tono-Pen (Figure 2). None of the third-year residents (R3s) felt very comfortable performing MCFB removal prior to using the task-trainer. However, 67% felt very comfortable doing this procedure after task-trainer use (Figure 1). Seventy-five percent of R3s felt very comfortable using the Tono-Pen prior to using the task-trainer, which increased to 100% after use (Figure 2). The learners were also asked to examine the task-trainer and comment on its realism. All learners felt that the task-trainer was realistic. Lastly, the learners were asked whether they felt this type of task-trainer would be valuable during their training, and all agreed that it would be valuable for both MCFB removal and measurement of IP with a Tono-Pen (Table).

Figure 1. Level of comfort for medical students and residents year 1-3 before (pre) and after (post) the use of the metallic corneal foreign body task-trainer. MS, medical students, n=6; R1, first year residents, n=4; R2, second year residents, n=1; R3, third year residents, n=3.

Figure 2. Level of comfort for medical students and residents year 1-3 before (pre) and after (post) the use of the Tono-Pen task-trainer. MS, medical students, n=6; R1, first year residents, n=4; R2, second year residents, n=1; R3, third year residents, n=3.


Table. Pilot study survey results obtained for the task-trainer after completion of metallic corneal foreign body removal and intraocular pressure measurement using a Tono-Pen; n = number of medical students or residents who participated in the survey.
Have you performed this procedure before?
  Metallic corneal foreign body: Medical students 0/6 (0%); Residents 2/8 (25%)
  Tono-Pen: Medical students 0/6 (0%); Residents 7/8 (87.5%)
Do you think the task-trainer felt realistic?
  Metallic corneal foreign body: Medical students 6/6 (100%); Residents 8/8 (100%)
  Tono-Pen: Medical students 6/6 (100%); Residents 8/8 (100%)
Do you think this task-trainer is an effective tool for training?
  Metallic corneal foreign body: Medical students 6/6 (100%); Residents 8/8 (100%)
  Tono-Pen: Medical students 6/6 (100%); Residents 8/8 (100%)

As current trends in simulation become more focused on patient safety, task-trainers can provide an invaluable learning experience for residents, medical students and physicians.6,11,12 This task-trainer serves as a realistic, cost-effective, hands-on training tool that can improve the skills required to care for patients presenting with MCFBs, and can also teach the manual skills necessary for measuring intraocular pressures with a Tono-Pen. Creation of the task-trainer required a total time of 90 minutes. The cost of materials to build the model was less than $75.

Address for Correspondence: Julie Sami Sayegh, MD, MA.Ed, UC Irvine Department of Emergency Medicine, 333 The City Blvd, West RT 128-01, Suite 640, Orange, CA 92868. Email: jsayegh@uci.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Sayegh et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Thomas SH and White BA. Foreign Bodies. In: Marx JA, ed. Rosen's Emergency Medicine: Concepts and Clinical Practice. 7th ed. Philadelphia, PA: Saunders; 2010:715-7.
2. Zhonghui L and Gardiner M. The Incidence of Intraocular Foreign Bodies and Other Intraocular Findings in Patients with Corneal Metal Foreign Bodies. Ophthalmology. 2010;117(11):2218-21.
3. Ramakrishnan T, Constantinou M, Jhanji V, et al. Corneal Metallic Foreign Body Injuries Due To Suboptimal Ocular Protection. Arch Environ Occup Health. 2012;67(1):48-50.
4. Collins DW and Coroneo MT. Removal of Corneal Foreign Bodies: An Instructional Model. Ophthalmic Surg. 1994;25(2):99-101.
5. Babineau MR and Sanchez LD. Ophthalmologic procedures in the Emergency Department. Emerg Med Clin North Am. 2008;26:17-34, v-vi.
6. Cheng ML, Fu L, Cackett P. A novel, safe and cost effective way for teaching corneal foreign body removal. Emerg Med J. 2015;32(6):501-2.
7. Austin PE, Ljung M, Dunn K. A New Model for Teaching Corneal Foreign Body Removal. Acad Emerg Med. 1995;2:831-4.
8. Hoonpongsimanont W, Nguyen K, Deng W, et al. Effectiveness of a 40-minute Ophthalmologic Examination Teaching Session on Medical Student Learning. West J Emerg Med. 2015;16(5):721-6.
9. Material Safety Data Sheet: GE5000 12C-Crtrg (0.730 Lbs-0.331 Kg) Silicone Rubber Sealant. Momentive Performance Materials; 2007:1-8. Available at: http://www.homedepot.com/catalog/pdfImages/2d/2d9622c5-268d-4ab0-8dce-32dd186b8b72.pdf.
10. Barr WM. Material Safety Data Sheet: Klean Strip Paint Thinner. Memphis, TN: W.M. Barr and Company; 2015:1-7. Available at: http://www.homedepot.com/catalog/pdfImages/1e/1eaaa4ea-0638-4220-adff-41a15a257c65.pdf.
11. Okuda Y, Bryson E, DeMaria S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med. 2009;76(4):330-43.
12. Chakravarthy B, Ter Haar E, Bhat SS, et al. Simulation in Medical School Education: Review for Emergency Medicine. West J Emerg Med. 2011;12(4):461-6.


Brief Research Report

What Do They Want from Us? A Survey of EM Program Directors on EM Application Criteria

Kevin King, MD
Dara Kass, MD

*The University of Texas Health Science Center at San Antonio, Department of Emergency Medicine, San Antonio, Texas †New York University School of Medicine, Department of Emergency Medicine, New York, New York

Section Editor: Sorabh Khandelwal, MD Submission history: Submitted July 8, 2016; Accepted October 27, 2016 Electronically published November 23, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31496

Introduction: Although a relatively young specialty, emergency medicine (EM) is popular among medical students and is one of the most competitive large specialties. Consequently, students increasingly seek to differentiate themselves from their colleagues by pursuing more clerkships, at the cost of taking out additional loans, despite the fact that those who match in EM typically do so at one of their top three choices. We sought to ascertain what factors EM program directors seek in their typical candidate.

Methods: We recruited EM program directors via the Council of Emergency Medicine Residency Directors email listserv to participate in an anonymous survey regarding the United States Medical Licensing Examination (USMLE), the number of standardized letters of evaluation (SLOE), and the number of EM rotations during the fourth year.

Results: A total of 135 respondents completed the anonymous survey: 59% stated their program did not have a minimum USMLE Step 1 score, but 39% reported a minimum score of 210 or higher; 95% of programs do not require Step 2 to grant an interview, but 46% require it to place the student on the rank list; 80% require only one EM rotation to grant an interview and none require more than two; 95% of programs will accept two SLOEs for both application and rank-list placement.

Conclusion: For the typical EM applicant, there is likely little benefit to performing more than two rotations and obtaining more than two SLOEs. Students can defer USMLE Step 2 but must complete it by the time rank lists are due. Our study was limited by the anonymity of the survey, and comments by the respondents revealed that the questions did not account for some nuances programs apply to their application review process. [West J Emerg Med. 2017;18(1)126-128.]

INTRODUCTION
Although emergency medicine (EM) is a young specialty, it is a popular career choice for graduating medical students. Internal medicine, family medicine, pediatrics and EM offer the highest number of categorical residency positions.1 Despite consistent increases in the number of training spots, last year there was only one unmatched spot in EM, making it the most competitive of the aforementioned specialties.1 The residency match process is stressful for both advisors and medical students. Students applying to competitive specialties, such as EM, are being told to apply to an

increasing number of residency programs.2 Although students are applying to a record number of programs, 80% of matched candidates ultimately match at one of their top three choices.1 Published recommendations for students applying to EM have impacted how we advise students. In 2000, Crane et al surveyed EM program directors (PD) to understand which aspects of the EM application were important to them.3 They found that EM rotation grades, residency interview, clinical grades and clinical recommendations were the most important aspects of the EM application. More recent data from the NRMP corroborate that as well.4


In 2016, Clerkship Directors in Emergency Medicine (CDEM) published an advising guide for students applying to EM. This guide, written by experienced EM clerkship directors, addresses the question, "If 3 is good, 4 is better, right??"5 This referred to the incremental increase in the number of away rotations students were pursuing. Despite knowledge that strong EM rotation grades and the standardized letter of evaluation (SLOE) carry significant weight6 in an EM residency application, students often buffer their applications with multiple away rotations and apply to dozens of residency programs, irrespective of the overall quality of their candidacy.5 While the clerkship directors declared that more is not necessarily better with respect to rotations and SLOEs, we felt that the addition of EM residency PD attitudes would reinforce the message of the CDEM guide.

The increase in the number of away rotations and Electronic Residency Application Service (ERAS) applications comes at a real cost to the student and the system.2 Away rotations are a fixed resource, and without a centralized mechanism for tracking students and rotations, some students struggle to secure even one away rotation. The monetary costs of the residency application process cause EM-bound medical students to take out additional loans in their final year and accrue an average of $4,000 in debt to pay for the residency application process.2

We set out to understand what EM PDs require during two phases of the application process: granting an interview and placing an applicant on the rank list. We hope that students and their advisors can use this information, along with that published by CDEM, to formulate an efficient application strategy. We hoped this information, coupled with the knowledge that the majority of applicants match at one of their top choices, could be used to accurately advise EM-bound medical students, reducing the number of "extra" away rotations and residency applications students seek.

METHODS
We designed a survey to collect information from EM residency PDs regarding their attitudes toward EM residency applications. The survey contained 18 questions pertaining to United States Medical Licensing Examination Step 1 and Step 2, EM rotations, and SLOEs in reference to interview offers and rank-list placement. (See Addendum 1.) We created the survey in Google Forms and distributed an invitation to PDs to participate electronically, on two dates in December 2015 and January 2016, via the Council of Emergency Medicine Residency Directors (CORD) email listserv. Responses were collected over the subsequent 30-day period. Respondents were asked to report their SAEM regional location, but their identity and program affiliation were not tracked. We collected data anonymously via Google Forms. This project was reviewed by the University of Texas Health Science Center Institutional Review Board (IRB) and deemed exempt from informed consent.

Table. Emergency medicine program directors' response to survey on applicant criteria, by SAEM region.
SAEM region        % Respondents (n=135)
Great Plains       5.6% (7)
New England        9.6% (12)
Mid-Atlantic       18.4% (23)
Midwest            25.6% (32)
Southeastern       22.4% (28)
SAEM, Society for Academic Emergency Medicine.

RESULTS
We received 135 responses to the survey. At the time of the survey, there were 198 domestic EM residency programs on the CORD listserv (as per CORD headquarters, personal correspondence). SAEM regional responses can be seen in the Table.

USMLE Scores
Four questions referred to USMLE Step 1 and Step 2: 59% of respondents stated they did not have a minimum score to grant an interview, and 39% required a minimum score of 210 or higher. While 95% of respondents reported they did not require Step 2 to grant an interview, 46% required it to place an applicant on the rank list.

EM Rotations
Eighty percent of respondents required at least one EM rotation to grant an interview and 20% required two. No respondents required more than two. Twenty-seven percent reported that it was "critical" for a student to complete a home rotation, even if they had no intention of attending residency there.

SLOEs
To grant an interview, 97% of respondents required two or fewer SLOEs, and 95% required two or fewer to place an applicant on their rank list. Forty-seven percent of respondents required one letter, and 10% reported requiring no SLOE to grant an interview. Respondents were split on personal letters in lieu of a SLOE, with 46% reporting they would not accept a personal letter.

Application Timing
The timing of completion of application packets is important. Only 15% of respondents reported that it was "highly likely" that an application would be reviewed if it were completed after the ERAS opening date in September.

DISCUSSION
The competitive nature of EM residency applications has resulted in applicant anxiety regarding away rotations, application submission, and USMLE scores. We asked EM PDs what their minimum criteria were for interview extension and rank list submission.
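Because the Results above report only percentages of the 135 respondents, the approximate underlying counts can be recovered with simple arithmetic. The sketch below is illustrative only and assumes all 135 respondents answered each item, which the paper does not explicitly state.

```python
# Illustrative sketch only: convert reported percentages back to approximate respondent counts.
N_RESPONDENTS = 135
reported = {
    "No minimum USMLE Step 1 score to grant an interview": 0.59,
    "Minimum Step 1 score of 210 or higher": 0.39,
    "Step 2 not required to grant an interview": 0.95,
    "Step 2 required before rank-list placement": 0.46,
    "One EM rotation required to grant an interview": 0.80,
}
for item, fraction in reported.items():
    print(f"{item}: ~{round(fraction * N_RESPONDENTS)} of {N_RESPONDENTS} programs")
```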


Most EM programs do not have a minimum Step 1 score to grant an interview. For those that do, cutoff scores fall between 200 and 220, and many programs note that they "almost always" grant interviews to students with a Step 1 score above 230.4 It is reasonable to expect that most students with a Step 1 score greater than 230 will meet the screening criteria for most residency programs. USMLE Step 2 was not required by the vast majority of respondent programs for interview extension. However, nearly half required those results to place an applicant on the program rank list. Therefore, students with adequate Step 1 scores should be able to defer taking Step 2 until the late fall to permit more flexibility in scheduling rotations. Finally, all students should try to have USMLE Step 2 results in ERAS by the time programs have finalized their rank lists for submission. Regarding EM rotations, students may want to rotate at their home institution, regardless of their interest in matching at that program, since one-quarter of respondents wanted to see a SLOE from the home institution. Our data suggest that most students need to complete only two rotations unless their advisor feels an additional rotation is needed to strengthen their application.

CONCLUSION
The anxiety of students and their advisors has resulted in an increase in EM applications and away rotations, with an accompanying increase in cost for both students and medical schools. By using multiple sources of information, such as National Resident Matching Program data and this survey, advisors can reduce the cost and complexity of the EM residency application process while providing a reasonable expectation of a successful match. While our small preliminary study provides insight into the attitudes of EM residency program directors regarding residency applications, it falls short of comprehensive recommendations or guidelines. Further development of this area of study would assist students, residency programs, and medical schools in developing rational, cost-effective application strategies.

LIMITATIONS
This study was designed to provide initial, pilot data to assist the CDEM Student Advisory Task Force in developing student-advising guidelines for the EM residency application process. As a pilot study, it has several important limitations. Anonymous participants were recruited via the CORD listserv, and multiple responses from the same individual or the same residency program were possible. Yet our regional response data suggest that a wide variety of programs participated in the survey. We did allow for free-text comments on our survey, which may have affected how people responded to specific questions; however, we did not analyze these comments due to time and resource constraints. Additionally, due to incomplete or conflicting data and responses, some survey questions were excluded from this analysis. The survey itself was not a validated instrument and was intended only as a barometer of the overall opinion of residency programs. It is inadvisable to draw conclusions about a specific applicant from these data, and advisors should incorporate this information into the overall advising they provide to their students.


Address for Correspondence: Kevin King, MD, University of Texas Health Science Center San Antonio, Department of Emergency Medicine, 7703 Floyd Curl Drive, San Antonio, TX 78229. email: kingk4@uthscsa.edu. Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none. Copyright: © 2016 King et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/ licenses/by/4.0/

REFERENCES


1. NRMP. Results and Data: 2016 Main Residency Match; 2016. Available at: www.nrmp.org.
2. Fried JG. AAMC: Cost of Applying to Residency Questionnaire; 2015.
3. Crane JT and Ferraro CM. Selection criteria for emergency medicine residency applicants. Acad Emerg Med. 2000;7(1):54-60.
4. NRMP. Results of the 2014 NRMP Program Director Survey; 2014. Available at: www.nrmp.org.
5. Student Advising Task Force: Cassandra Zhuang, Xiao C. Zhang, MD, MS, Kevin Hu MD, Sameer Desai MD, Jamie Hess MD, David Tillman MD LL-CM. Emergency Medicine Applying Guide; 2015.
6. Love JN, Smith J, Weizberg M, et al. Council of emergency medicine residency directors' standardized letter of recommendation: The program director's perspective. Acad Emerg Med. 2014;21(6):680-687.



Brief Research Report

Demographics and Fellowship Training of Residency Leadership in EM: A Descriptive Analysis

Josh Greenstein, MD
Ross Hardy, MD
Jerel Chacko, MD
Abbas Husain, MD

*Staten Island University Hospital, Northwell Health, Department of Emergency Medicine, Staten Island, New York

Section Editor: Sorabh Khandelwal, MD Submission history: Submitted July 5, 2016; Revision received October 14, 2016; Accepted October 26, 2016 Electronically published November 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31452

Introduction: Emergency medicine (EM) fellowships are becoming increasingly numerous, and there is a growing trend among EM residents to pursue postgraduate fellowship training. Scant data have been published on the prevalence of postgraduate training among emergency physicians. We aimed to describe the prevalence and regional variation of fellowships among EM residency leadership.

Methods: We conducted an online anonymous survey that was sent to the Council of EM Residency Directors (CORD) membership in October 2014. The survey was a brief questionnaire that inquired about the fellowship training, secondary board certification, gender, and length of time in a leadership position of each member of a program's residency leadership. We separated the responses into four geographic regions, defined by the same classification used by the National Resident Matching Program (NRMP). We defined residency leadership as program director (PD), associate PD, and assistant PD. Residencies that did not complete the survey were then individually contacted to encourage completion. The survey was initially piloted with a select few EM PDs for ease of use and understanding of the questions.

Results: We obtained responses from 145 of the 164 Accreditation Council for Graduate Medical Education (ACGME)-accredited EM residencies (88%). The fellowship prevalence among PDs, associate PDs, and assistant PDs was 21.4%, 20.3%, and 24.9%, respectively. The most common fellowship completed was a fellowship in toxicology. Secondary board certification among PDs, associate PDs, and assistant PDs was 9.7%, 4.8%, and 2.9%, respectively. Eighty-two percent of PDs had at least five years in residency leadership. Seventy-six percent of PDs were male, and there was a near-even split of gender among associate PDs and assistant PDs. The Western region had the highest percentage of fellowship and/or secondary board certification among all levels of residency leadership.

Conclusion: There is a low prevalence of fellowship training and secondary board certification among EM residency leadership, with the most common fellowship being toxicology. Assistant PDs, the majority of whom had less than five years of residency leadership experience, had the highest percentage of fellowship training. There may be a regional variation in the percentage of residency leadership completing postgraduate training. [West J Emerg Med. 2017;18(1)129-132.]

INTRODUCTION
There is a perception that emergency medicine (EM) fellowships are becoming increasingly common due to a

growing trend among EM residents to pursue postgraduate fellowship training. There are scant data on the prevalence of fellowships among EM physicians. We aimed to describe the


prevalence and regional variation of fellowships among EM residency leadership. Completion of secondary board certification among EM residency leadership was also investigated, as this additional training is relevant to the primary study question.

METHODS
We sent an online confidential survey to the Council of EM Residency Directors (CORD) membership list service in October 2014. The survey was closed on January 1, 2015. No incentives were offered. The institutional review board of our hospital approved this study. Individual emails were sent to each member of the CORD membership, with a link to the website of the survey (www.surveymonkey.com). The survey was a brief questionnaire that inquired about the fellowship training, secondary board certification, gender, and length of time in a leadership position of each member of a program's residency leadership. We defined residency leadership as program director (PD), associate PD, and assistant PD. We did not define length of time in a leadership position, and this could have included a member who had completed a leadership role at a different

institution as well. Lastly, we categorized responses into four different geographic regions as used by the National Resident Matching Program (NRMP) and the Association of American Medical Colleges (AAMC).1 Participants self-reported their residency affiliation and programs; we did not clarify which member of residency leadership responded, only which residency the responses were from. We also included responses from the researchers' home institution. PDs of residencies who did not complete the survey were then individually contacted to encourage completion. This took place approximately six weeks after the initial survey request. Residency programs that still did not complete the survey were contacted one last time, one month prior to completion of the study. The authors created the survey. A pilot survey was given to a consensus panel of three EM PDs to complete and comment on its clarity. We were then able to complete the survey without a need for any significant revisions to its content (Appendix). The survey ended with an open-ended question allowing for any additional responses or clarifications of the subject's responses. We included responses only from residencies accredited by the Accreditation Council for Graduate Medical Education (N=164).

Table. Fellowship, secondary board certifications, gender distribution, and length at leadership position of program directors, associate program directors, and assistant program directors.

                              Program directors   Associate program     Assistant program
                              (N = 145)           directors (N = 187)   directors (N = 177)
Fellowship
  None                        114                 149                   149
  Critical care               0                   0                     1 (1%)
  Education                   7 (5%)              2 (1%)                8 (5%)
  EMS                         2 (1%)              3 (2%)                0
  Simulation                  1 (1%)              1 (1%)                3 (2%)
  Pediatric                   0                   6 (3%)                9 (5%)
  Ultrasound                  3 (2%)              8 (4%)                9 (5%)
  Toxicology                  8 (6%)              7 (4%)                6 (3%)
  Other                       10 (7%)             11 (6%)               8 (5%)
Second board certification
  None                        131                 178                   172
  Internal medicine           11 (8%)             8 (4%)                5 (3%)
  Pediatrics                  0                   0                     0
  Surgery                     0                   0                     0
  Other                       3 (2%)              1 (1%)                0
Gender
  Male                        110 (76%)           105 (56%)             96 (54%)
  Female                      35 (24%)            82 (43%)              81 (46%)
Leadership experience
  < 5 years                   25 (17%)            81 (43%)              138 (78%)
  5-10 years                  63 (43%)            74 (40%)              32 (17%)
  > 10 years                  57 (39%)            32 (17%)              11 (6%)

EMS, emergency medical services.
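The prevalence figures quoted in the Results can be reproduced from the fellowship rows of the Table. The following Python sketch is illustrative only; it assumes each non-"None" entry in the fellowship section represents one completed fellowship.

```python
# Illustrative sketch only: derive fellowship prevalence from the Table's fellowship counts.
fellowship_counts = {
    # role: (N, counts for Critical care, Education, EMS, Simulation,
    #        Pediatric, Ultrasound, Toxicology, Other)
    "Program directors":           (145, [0, 7, 2, 1, 0, 3, 8, 10]),
    "Associate program directors": (187, [0, 2, 3, 1, 6, 8, 7, 11]),
    "Assistant program directors": (177, [1, 8, 0, 3, 9, 9, 6, 8]),
}
for role, (n, counts) in fellowship_counts.items():
    prevalence = 100 * sum(counts) / n
    print(f"{role}: {sum(counts)}/{n} = {prevalence:.1f}%")
# Prints 21.4%, 20.3%, and 24.9%, matching the prevalences reported in the Results.
```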


RESULTS
We obtained responses from 145 of 164 residencies (88%). The Table illustrates the breakdown of fellowship and secondary board certifications. The fellowship prevalence among PDs, associate PDs and assistant PDs was 21.4%, 20.3%, and 24.9%, respectively. Secondary board certification prevalence among PDs, associate PDs, and assistant PDs was 9.7%, 4.8%, and 2.9%, respectively. The most common fellowship completed was medical toxicology. The "other" category included various fellowships not listed in the survey; more common responses in this category included global health and wilderness medicine, while less common responses included sports medicine and cardiac emergencies. Internal medicine was the most common secondary board certification completed. Residency leaders who had completed a pediatric emergency medicine fellowship in addition to secondary board certification in pediatrics were tallied only as having completed a fellowship to ensure proper statistical analysis of the data. The Figure demonstrates the regional variation of residency leaders who had completed an EM residency and either a fellowship or secondary board certification. The four

regions were Western, Northeastern, Central, and Southern. The Western region had the highest percentage of fellowship and secondary board certification across all levels of residency leadership. More than 46% of assistant PDs in the Western region had completed a fellowship and/or secondary board certification.

DISCUSSION
We found that there is an overall low prevalence of fellowship and secondary board certifications among residency leadership. The majority of PDs are male, with a near-even split among associate and assistant PDs. The Western region had the highest percentage of fellowship and secondary board certification among all levels of residency leadership. The results from this survey not only define the current demographics of fellowship and secondary board certification among EM residency leadership, but also suggest a growing trend of postgraduate training among residency leadership. PDs averaged the longest time in residency leadership and had the highest percentage of secondary board certifications. With regard to length of time in a residency leadership position, no differentiation was made for whether this was at a single institution or at multiple institutions. Assistant PDs who

Figure. Combined fellowship and secondary board certification percentages of program directors, associate program directors, and assistant program directors, compared across geographic regions (Western, Central, Southern, and Northeastern, as defined by the map).



had the least experience had the highest prevalence of completed fellowships and the lowest prevalence of secondary board certification. It is possible that there is a trend away from obtaining secondary board certifications and toward completion of fellowship training among those interested in obtaining a residency leadership position. This trend further varies by region. The Western region, considered by many to be a competitive academic EM job market, had the highest percentage of fellowship and secondary board certifications, as nearly 46% of assistant PDs had completed either a fellowship or secondary board certification. We speculate that the growing competitive job market for residency leadership positions may be, in addition to other variables, what drives residents to pursue postgraduate fellowship training. We recommend that emergency physicians contemplating a career in academic EM, and more specifically in residency leadership, pursue additional training. We plan to repeat this study in 10 years, and it is our belief that the current trends will continue to reflect the changes already taking place. Moreover, it would be interesting to study whether the increased postgraduate training trend is exclusive to residency leadership or whether there is an overall trend among all emergency physicians to pursue postgraduate training.

LIMITATIONS
While survey-based studies are helpful in obtaining data and outlining trends, it is important to be aware that these studies are more vulnerable to subjectivity and interpretation. As with any survey, it is possible that responders may have interpreted some of the questions and answer choices differently. Even though we piloted the survey for ease of use and understanding, clarification was needed in one particular area: many respondents listed the American College of Emergency Physicians (ACEP) Teaching Fellowship or the Medical Education Research Certificate (MERC) Program as an education fellowship. We did not consider those programs to be true fellowships, as they are training workshops that do not involve the same time investment as other EM fellowships. As a result, respondents were contacted to clarify their responses. We were able to track responses back to a specific residency program because the survey requested the program name. An additional limitation of the study is that


the results obtained could reflect the availability of fellowships at the time current residency leaders completed training. Many experienced faculty in EM residency leadership positions did not have fellowships available early in their careers, which may have led them to pursue secondary board certification instead. Lastly, even though there was an excellent response rate (88%), one could argue that the results might have been different with a higher response rate and might not truly describe additional education among EM residency leadership.

CONCLUSION
There is an overall low prevalence of fellowship training and secondary board certification among EM residency leadership, with the most common fellowship being medical toxicology. However, the assistant PDs, who averaged the shortest length of time in leadership experience, had the highest percentage of additional fellowship training. This indicates a possible trend toward additional postgraduate training among residency leadership. There may be a regional variation, with the Western region exhibiting the highest percentage of fellowship and secondary board certifications across all levels of residency leadership.

Address for Correspondence: Josh Greenstein, MD, Staten Island University Hospital, Northwell Health, Department of Emergency Medicine, 475 Seaview Avenue, Staten Island, NY 10305. Email: jgreenstein1@northwell.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Greenstein et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Organization of Student Representatives. 2016 Handbook. Available at: https://www.aamc.org/download/164942/data/osr_handbook.pdf.



Brief Research Report

Medical Student Documentation in the Electronic Medical Record: Patterns of Use and Barriers

Kathleen Wittels, MD*
Joshua Wallenstein, MD†
Rahul Patwari, MD‡
Sundip Patel, MD§

*Harvard Medical School, Brigham and Women’s Hospital, Department of Emergency Medicine, Boston, Massachusetts † Emory University School of Medicine, Department of Emergency Medicine, Atlanta, Georgia ‡ Rush Medical College, Department of Emergency Medicine, Chicago, Illinois § Cooper University Health Care, Department of Emergency Medicine, Camden, New Jersey

Section Editor: David A. Wald, DO Submission history: Submitted June 15, 2016; Revision received September 26, 2016; Accepted October 27, 2016 Electronically published December 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31294

Introduction: Electronic health records (EHR) have become ubiquitous in emergency departments. Medical students rotating on emergency medicine (EM) clerkships at these sites have constant exposure to EHRs as they learn essential skills. The Association of American Medical Colleges (AAMC), the Liaison Committee on Medical Education (LCME), and the Alliance for Clinical Education (ACE) have determined that documentation of the patient encounter in the medical record is an essential skill that all medical students must learn. However, little is known about current practices or perceived barriers to student documentation in EHRs on EM clerkships.

Methods: We performed a cross-sectional study of EM clerkship directors at United States medical schools between March and May 2016. A 13-question, IRB-approved electronic survey on student documentation was sent to all EM clerkship directors. Only one response from each institution was permitted.

Results: We received survey responses from 100 institutions, yielding a response rate of 86%. Currently, 63% of EM clerkships allow medical students to document a patient encounter in the EHR. The most common reasons cited for not permitting students to document a patient encounter were a hospital or medical school rule forbidding student documentation (80%), concern for medical liability (60%), and inability of student notes to support medical billing (53%). Almost 95% of respondents provided feedback on student documentation, with supervising faculty being the most common group to deliver feedback (92%), followed by residents (64%).

Conclusion: Close to two-thirds of medical students are allowed to document in the EHR on EM clerkships. While this number is robust, organizations such as the AAMC and ACE have issued statements and guidelines aimed at increasing it further to ensure that students are prepared for residency as well as their future careers. Almost all EM clerkships provided feedback on student documentation, indicating the importance of students learning this skill. [West J Emerg Med. 2017;18(1)133-136.]

INTRODUCTION
Electronic health records (EHR) are commonly used in academic medical centers and represent an advance over traditional paper records in healthcare delivery. As EHRs have become more common, it is important to consider the implications of these systems for medical student education.


The Liaison Committee on Medical Education (LCME) and the Association of American Medical Colleges (AAMC) have both identified communication, including written communication, as a key skill to be taught to medical students.1,2 Further, the AAMC has defined 13 Entrustable Professional Activities (EPAs) that all medical students should attain by graduation.




EPA 5 requires that the student be able to “document a clinical encounter in the medical record.”3 The Alliance for Clinical Education (ACE) published a statement in 2012 recommending that students have the opportunity to document in the EHR and that their notes be reviewed.4 In this statement, ACE also recommended that students have the opportunity to practice entering orders in the EHR and that medical schools establish competencies related to charting in the EHR.

In a previous survey of clerkship directors across specialties in 2009, 64% of students had access to their institution's EHR, and of those, two-thirds had the ability to document. While EM clerkship directors were well represented in this multi-specialty study (26%), the study was somewhat limited by its low response rate of 32%.5 EHR use has grown significantly since 2009, in large part due to national incentives including those contained in the Affordable Care Act. While our published national EM curriculum recommends that students have the ability to document in the patient record,6 little is known about current practices with regard to the EHR. Our objective was to better understand the frequency with which students are permitted to document in the EHR on EM clerkships and the perceived barriers to student documentation. In addition, we examined current practices in the review of student notes and their use in feedback and assessment.

METHODS
We conducted a cross-sectional study of EM clerkship directors at U.S. medical schools between March and May 2016. Eligible participants were members of Clerkship Directors in Emergency Medicine (CDEM). The Emory University Institutional Review Board reviewed our study protocol and determined that it was exempt from full committee review.

We developed a 13-question electronic survey that assessed student documentation during EM clerkships. The survey was designed for completion on an institutional level, and the survey instructions requested that only one survey be completed per clerkship. Participants were required to enter their name, title, and institution, and provide consent for use of responses for research purposes. Participants were asked if students in their clerkship document patient-care encounters at one or more clinical sites, and reasons why students do not document in the EHR were requested from those who indicated “no” to the preceding question. Six reasons for non-use that the authors expected to be common among clerkship directors were listed. Clerkship directors were asked to select one or more of the six that applied to their clerkship, or enter a free-text response. For those who indicated that students do document in the EHR, the survey then addressed review of those notes for accuracy and/or student assessment, as well as the process of providing students feedback and assessment. The full survey can be accessed in our online appendix.


We sent out a link to the survey at the end of a lecture at the March 2016 Council of Residency Directors meeting, where many EM clerkship directors were present. Subsequently, email invitations for the electronic survey (SurveyMonkey, Palo Alto, CA) were sent to current members of CDEM in March 2016, totaling 160 individuals representing 116 institutions. Second and third email reminders were sent to clerkship directors in early April 2016 using the CDEM and SAEM listservs to maximize the response rate.

RESULTS
Survey responses were received from 113 individuals. We excluded 13 entries because they were completed by a second clerkship (or assistant) director from within the same clerkship. This yielded 100 unique institutional entries, for a survey response rate of 86%. Three clerkships indicated that their sites do not use an EHR, and those surveys were excluded from further analysis. A representation of our survey distribution, response pattern, and exclusions can be seen in Figure 1. Of the remaining 97 completed surveys, 61 clerkships (63%) indicated that students document patient care encounters in the EHR at one or more sites.

Figure 1. Representation of participants in a survey of medical student use of electronic health records (EHR).
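For readers tracing the denominators, the survey-flow percentages above reduce to simple arithmetic. The short Python sketch below is not part of the original study; it only reproduces the counts reported in the Results.

```python
# Minimal sketch (not from the paper): re-deriving the survey-flow percentages
# from the counts reported in the Results.
responses = 113                 # individual responses received
duplicates = 13                 # second (or assistant) directors from the same clerkship
institutions_surveyed = 116

unique_entries = responses - duplicates              # 100 institutional entries
response_rate = unique_entries / institutions_surveyed
print(f"Institutional response rate: {response_rate:.0%}")          # ~86%

no_ehr = 3
analyzable = unique_entries - no_ehr                 # 97 surveys analyzed
students_document = 61
print(f"Clerkships allowing documentation: {students_document / analyzable:.0%}")        # ~63%
print(f"Clerkships not allowing documentation: {(analyzable - students_document) / analyzable:.0%}")  # ~37%
```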




Table 1. Reasons cited for not allowing students to document in the EHR.
Medical school or hospital rule forbidding student documentation: 80%
Concern for medical liability: 60%
Inability for student notes to support medical billing: 53%
Lack of computer workspace / access: 38%
No documentation educational objective for the clerkship: 15%
Lack of ability to review notes and provide feedback: 11%
EHR, electronic health record

Of the 36 clerkships (37%) that indicated their students do not document in the EHR, the most common reason cited was a hospital or medical school rule forbidding student documentation (80%). Table 1 details all reasons selected for not allowing medical student documentation in the EHR. There were four free-text responses that were closely aligned with our pre-selected choices (hospital policy: two; liability: one; not an educational objective: one) and did not reveal any additional reasons for non-use. Almost all programs (95%) indicated that a portion of notes is reviewed for the purpose of providing feedback. Details on the process of review, feedback, and evaluation can be found in Table 2.

DISCUSSION
Nearly all clerkship directors surveyed indicated that EHRs are used in their departments, a significant increase from the 2009 ACE study. EM educators have recognized the importance of training in EHR documentation, as almost two-thirds of clerkships allow students to document in the EHR. Nearly all programs that allow students to document in the EHR have a mechanism for review of notes, feedback, and assessment. There is significant variation in the patterns of review, feedback, and assessment among clerkships. This

Table 2. Types of feedback provided on student documentation in the electronic health record (EHR).
Clerkships reviewing student notes for feedback: 95%
Fewer than half of notes reviewed: 70%
Half to three-quarters of notes reviewed: 23%
Three-quarters or more of notes reviewed: 7%
Feedback provided by faculty: 92%
Feedback provided by clerkship director: 40%
Feedback provided by residents: 64%
Oral feedback only: 75%
Oral and written feedback: 25%
Documentation considered in final grade: 58%


could be explained by variations in student/learner ratios between clerkships as well as other factors.

An examination of the barriers to student documentation in EHRs could explain why the percentage of clerkships allowing EHR documentation is not even higher. Our study found that the most common reason students were not allowed to document in the EHR was hospital/medical school institutional policy. Given that 90% of medical school deans felt students should document in the chart and 93% felt that student education would be adversely affected if this were not allowed,7 there appears to be a disconnect between educational goals and institutional policies related to documentation. It is possible that some of these policies stem from concerns over medical liability, which emerged as another major reason for non-use even where not explicitly prohibited by hospital policy. While difficult to reliably quantify, there does not seem to be significant evidence indicating a high liability risk specifically associated with medical student documentation. An extensive literature search using PubMed, Ovid Medline, and Google Scholar with the terms “medical student,” “documentation,” “malpractice” and “liability” did not reveal any studies or case reports on student documentation leading to malpractice. The one paper we found discussed the potential for a lawsuit due to student documentation but never cited a case.8 The additional cited barriers to use largely relate to intrinsic challenges faced by all EM clinicians and departments, particularly the need to balance education with clinical productivity and the lack of available workspace in crowded departments. It is notable that of all clerkship directors who indicated non-use, only 15% related this to documentation not being an educational goal of the clerkship.

LIMITATIONS
A number of limitations may affect our survey results and their interpretation. First, our CDEM-member eligible study participants represent a subset of EM clerkships and clerkship directors, whose policies and views may not be representative of all clerkships and leaders. This database was selected as our study population because the authors were unable to locate another database of EM clerkships in the U.S. that was felt to be accurate and current. The Liaison Committee on Medical Education maintains a list of accredited U.S. medical schools; however, not all of those schools have an academic EM department or clerkship. There are multiple databases of EM residency programs, though there are residency programs without affiliated clerkship programs, and clerkships without affiliated residency programs. While we have no reason to believe that clerkship programs whose leaders are CDEM members are not representative of all EM clerkships, this remains a confounding variable. Doctor of osteopathy programs comprise a small minority of the CDEM membership, so our findings may not represent practices at these programs. Second, as with all survey-based research, our respondents may have different characteristics and viewpoints than non-respondents.




CONCLUSION
The large number of EM clerkships that allow students to document in the EHR and provide feedback on that documentation is well aligned with educational recommendations from within and beyond our specialty. An exaggerated fear of medical liability may be a factor preventing more widespread use. While there are certainly valid barriers to broader adoption, we should search for solutions within our departments and advocate at an institutional level.

Address for Correspondence: Joshua Wallenstein, MD, Emory University School of Medicine, Department of Emergency Medicine, 49 Jesse Hill Jr Dr., Atlanta, GA 30303. Email: jwalle2@emory.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Wittels et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Functions and Structure of a Medical School: Standards for Accreditation of Medical Education Programs Leading to the MD Degree. Publication of the Liaison Committee on Medical Education. Available at: http://lcme.org/publications/. Accessed May 29, 2016.
2. Association of American Medical Colleges. Learning objectives for medical student education: Guidelines for medical schools: Report I of the Medical School Objectives Project. Acad Med. 1999;74:461-2.
3. Core Entrustable Professional Activities for Entering Residency. Publication of the AAMC. Available at: aamc.org. Accessed May 29, 2016.
4. Hammoud et al. Medical student documentation in electronic health records: a collaborative statement from the Alliance for Clinical Education. Teach Learn Med. 2012;24:257-66.
5. Hammoud et al. Opportunities and challenges in integrating electronic health records into undergraduate medical education: a national survey of clerkship directors. Teach Learn Med. 2012;24:219-24.
6. Manthey DE, Ander DS, Gordon DC, et al. Emergency medicine clerkship curriculum: an update and revision. Acad Emerg Med. 2010;17:638-43.
7. Friedman E, Sainte M, Fallar R. Taking note of the perceived value and impact of medical student chart documentation on education and patient care. Acad Med. 2010;85:1440-4.
8. Gliatto P, Masters P, Karani R. Medical student documentation in the medical record: a liability? Mt Sinai J Med. 2009;76:357-64.



Original Research

Efficient and Effective Use of Peer Teaching for Medical Student Simulation

Joseph B. House, MD*†
Carol H. Choe, MD¶
Heather L. Wourman, BA*
Kristin M. Berg, MD||
Jonathan P. Fischer, MPH§
Sally A. Santen, MD, PhD*‡

*University of Michigan Medical School, Department of Emergency Medicine, Ann Arbor, Michigan † University of Michigan Medical School, Department of Pediatrics, Ann Arbor, Michigan ‡ University of Michigan Medical School, Department of Learning Health Sciences, Ann Arbor, Michigan § University of Michigan, Department of Health Management and Policy, Ann Arbor, Michigan ¶ Cooper University Hospital, Department of Medicine, Division of Critical Care Medicine, Camden, New Jersey || Memorial Hermann Northeast Hospital, Department of Emergency Medicine, Humble, Texas

Section Editor: David A. Wald, DO Submission history: Submitted October 4, 2016; Revision received October 15, 2016; Accepted November 1, 2016 Electronically published December 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.11.32753

Introduction: Simulation is increasingly used in medical education, promoting active learning and retention; however, increasing use also requires considerable instructor resources. Simulation may provide a safe environment for students to teach each other, which many will need to do when they enter residency. Along with reinforcing learning and increasing retention, peer teaching could decrease instructor demands. Our objective was to determine the effectiveness of peer-taught simulation compared to physician-led simulation. We hypothesized that peer-taught simulation would lead to equivalent knowledge acquisition when compared to physician-taught sessions and would be viewed positively by participants.

Methods: This was a quasi-experimental study in an emergency medicine clerkship. The control group was faculty taught. In the peer-taught intervention group, students were assigned to teach one of three simulation-based medical emergency cases. Each student was instructed to master their topic and teach it to their peers using the provided objectives and resource materials. The students were assigned to groups of three, with all three cases represented; students took turns leading their case. Three groups ran simultaneously. During the intervention sessions, one physician was present to monitor the accuracy of learning and to answer questions, while three physicians were required for the control groups. Outcomes compared pre-test and post-test knowledge and student reaction between control and intervention groups.

Results: Both methods led to equivalent knowledge gains; the mean post-test score was 75% for both groups (p=0.6), and both formats were viewed positively. Students in the intervention group agreed that peer-directed learning was an effective way to learn. However, students in the control group scored their simulation experience more favorably.

Conclusion: Students' response to peer teaching was generally positive; students learned equally well and found peer-taught sessions to be interactive and beneficial. [West J Emerg Med. 2017;18(1)137-141.]




INTRODUCTION
The ability to recognize a patient who requires immediate care, to initiate treatment, and to seek additional support is essential for all graduating medical students.1 While clinical education focuses on common clinical presentations and acute management skills, the emphasis on patient safety, billing, and patient satisfaction in recent years has resulted in the marginalization of medical students in the clinical setting.2 In addition, ethical questions are raised around the traditional practice of “see one, do one, teach one.”3 In order to maintain high-quality education in a safe environment, simulation has become increasingly important in medical education. Simulation has the advantage of introducing students to serious clinical conditions in a standardized and non-threatening manner2 without involving actual patients, and it provides an environment for students to gain practice teaching,4-8 which also increases their knowledge retention.9

The benefits of simulation in medical education have been well documented.2,10,11 However, simulation training requires considerable resources, not the least of which is the faculty time required for preparation and delivery. This study aimed to test the feasibility and effectiveness of student peer-taught simulations in an emergency medicine (EM) clerkship. Our hypothesis was that student peer-taught simulation sessions would lead to comparable knowledge acquisition when compared to physician-taught sessions, as students would be on a similar educational level and thus understand the needs of their peers. Students may also feel more comfortable asking questions of their peers than of a physician. Additionally, we hypothesized that the peer-taught participants would view simulation positively.

METHODS
This was a quasi-experimental design study. The setting was an academic emergency department where EM is a required fourth-year clerkship. The subject population was students rotating through the clerkship from January 2013 – December 2013. The university’s institutional review board reviewed this study and determined it to be exempt.

Students rotating through the ED are required to attend core didactic lectures on basic EM concepts and simulation sessions. These sessions are integrated into the didactic teaching days so students can quickly apply the knowledge. Students were assigned cases on day two of the rotation and presented the cases 1-2 weeks later. Three simulation-based clinical scenarios were developed that are considered high yield for EM, including ACLS algorithms, and could be taught using simulation (Laerdal Little Anne CPR training manikin with a rhythm generator). Each case included management of a basic disease process followed by stabilization of a life-threatening cardiac dysrhythmia. The cases were asthma exacerbation decompensating into supraventricular tachycardia (SVT), acute myocardial infarction developing symptomatic bradycardia, and congestive heart failure leading to ventricular tachycardia. In 2012, as part of a pilot, EM faculty developed the


cases. A group of six fourth-year medical students assisted with the revision of the cases and materials and provided important feedback to improve the process prior to the launch of the study. These students were involved in a pilot test of student-taught simulations to ensure all material was presented and to improve construct validity. Initially, students felt they needed more direction on the peer-taught cases, and the instructions were adjusted accordingly. For the study period, peer-taught simulation (intervention group) and physician-taught simulation (control group) alternated months. Because there were not enough physician volunteers, some months were converted from physician-taught to student-taught, and thus the two cohorts are not equal in size.

Intervention group (111 students): At the beginning of the four-week rotation, each student was assigned a case. Each case had a list of objectives, a patient encounter summary, an outcomes checklist, questions to facilitate debriefing, instructions on how to use the rhythm generator, and a list of resources for the topic.12 Students were encouraged to augment their knowledge of the topic with outside reading. On the day of simulation, the students were assembled into groups consisting of at least one peer leader for each case. During periods in which the number of students was not a multiple of three, the group had more than one student assigned to the same case; in this situation, the cases were co-taught. To ensure standardized delivery of basic instructional components and to minimize the potential confounding effects of multiple instructors, the student peer teachers were given the same set of instructions, the same objectives, and the same debriefing questions. The peer teacher was instructed to run the basic medical simulation (e.g., asthma case) for about five minutes before transitioning to the cardiac dysrhythmia (e.g., supraventricular tachycardia). After another five minutes, they would end the session, allowing the last five minutes for debriefing and discussion. The simulators were set in a “U” shape with the emergency physician in the middle, available to answer questions that were beyond the scope of the peer leader’s knowledge and to monitor teaching and learning of the three groups of students. Each table was given a label “A,” “B,” or “C.” The student who had that case, A, B, or C, presented his/her case. Each group started on a different case, and thus at any one time all three cases were being taught. After each case (15 minutes), the students would rotate, and another peer leader would present his or her case. As such, each student was a peer teacher for one case and a peer learner for two. One physician volunteer was required for each session.
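The rotation described above amounts to a simple round-robin across three stations. The sketch below is an illustrative reconstruction of that schedule, not code from the study; the group and station names are ours.

```python
# Illustrative sketch (not from the paper) of the rotation design: three stations
# labeled A, B, C; each group of three students starts at a different station and
# shifts one station every 15-minute round, so each student teaches one case and
# learns the other two.
CASES = {"A": "asthma -> SVT", "B": "acute MI -> bradycardia", "C": "CHF -> ventricular tachycardia"}

def rotation_schedule(stations=("A", "B", "C"), rounds=3):
    """Return, for each 15-minute round, which group sits at which station."""
    groups = [f"group {i + 1}" for i in range(len(stations))]
    schedule = []
    for r in range(rounds):
        # each group starts on a different case and advances one station per round
        schedule.append({groups[g]: stations[(g + r) % len(stations)] for g in range(len(groups))})
    return schedule

for round_no, assignment in enumerate(rotation_schedule(), start=1):
    print(f"Round {round_no}: {assignment}")
```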




Table. Student reactions to student-led and physician-led simulation teaching sessions (5-point scale); values are mean (SD).
(Leader)-directed learning was an effective way to learn new concepts: Physician 4.6 (0.6), Student 4.2 (0.8), p=0.001
(Leader)-directed modules will help me retain new concepts better: Physician 4.7 (0.5), Student 4.3 (0.7), p<0.001
I find (leader)-directed learning enjoyable: Physician 4.5 (0.6), Student 4.2 (0.7), p=0.004
I found the (leader)-directed learning more interactive than xxx-ran simulation: Physician 4.3 (0.7), Student 3.8 (0.9), p=0.001
I found the (leader)-directed learning more interactive than xxx-ran simulation: Physician 4.3 (0.8), Student 3.3 (1.0), p<0.001
I feel (leaders) were well prepared to run the simulation cases: Physician 4.7 (0.7), Student 4.3 (0.6), p<0.001
Overall, (leader)-directed simulation cases were a positive experience: Physician 4.7 (0.5), Student 4.3 (0.7), p<0.001
The simulation cases did not require too much additional work or time outside of this rotation: Physician 4.5 (0.7), Student 4.1 (0.8), p=0.002
*Stem questions are listed for the student-directed sessions; statistics analyzed with Mann-Whitney U. SD, standard deviation.

Control group (65 students): Resident or faculty volunteers were provided the cases and objectives 1-2 weeks prior to the simulation session. They were given the same case packets and instructions as the students with regard to five minutes for the medical emergency, five minutes for the dysrhythmia, and five minutes for debriefing, along with objectives and debriefing instructions. After the completion of each case, the students moved to the next case. Three physician volunteers were required for each session, one for each case.

The intervention outcomes were evaluated on two levels. The intervention and control groups were given pre- and post-knowledge tests. The teaching objectives for each case were used for test development. The pre-test was administered on the first day of the rotation, prior to providing the cases to the intervention group. The test was piloted in the fall of 2012 and subsequently revised. Students were also surveyed regarding their attitudes toward peer- versus physician-led teaching on a five-point Likert scale from strongly disagree to strongly agree. Attention was paid to content and response-process validity through the instrument design, and also to internal validity (Cronbach’s alpha for the attitudes survey was 0.9).

Statistical Analysis
We obtained descriptive statistics using SPSS 19. The differences in attitudes between the control and peer-teaching groups were compared using the Mann-Whitney U test. We compared the differences in knowledge on the pre-test and post-test using a paired t-test; significance was set at p<0.05. To detect a 10% difference in post-test scores, a sample of 16 per group would be needed (α=0.05 and power of 0.80). Student comments were noted.
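The analysis plan above maps onto standard routines in common statistics packages. A minimal Python sketch follows; the study itself used SPSS 19, the data arrays here are placeholders, and the assumed score SD of roughly 10 percentage points in the power calculation is our assumption, not a number from the paper.

```python
# Illustrative reconstruction of the analysis plan (the study itself used SPSS 19).
# All arrays below are placeholders, not the study data.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

attitude_control = np.array([5, 4, 5, 4, 5])   # physician-led Likert responses (placeholder)
attitude_peer = np.array([4, 4, 5, 3, 4])      # peer-led Likert responses (placeholder)
pre = np.array([0.60, 0.65, 0.70, 0.55])       # pre-test scores (placeholder)
post = np.array([0.70, 0.75, 0.80, 0.70])      # post-test scores (placeholder)

# Attitude items are ordinal, so the groups are compared with the Mann-Whitney U test.
u_stat, p_attitude = stats.mannwhitneyu(attitude_control, attitude_peer, alternative="two-sided")

# Knowledge change: paired t-test of pre- vs post-test scores within the same students.
t_stat, p_knowledge = stats.ttest_rel(pre, post)

# A priori sample size for detecting a 10-point difference: if the score SD is roughly
# 10 points (our assumption), the standardized effect size d is about 1.0, which gives
# on the order of 16-17 students per group at alpha=0.05 and 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=1.0, alpha=0.05, power=0.80)
print(round(n_per_group))
```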


RESULTS
Both methods of teaching led to improved knowledge based on the pre- and post-test. The mean for the pre-test was 66% for the peer-taught group and 65% for the physician-taught group. The mean for the post-test was 75% for both groups (p=0.6). Both methods of simulation were viewed positively (Table). Participants in the peer-taught group agreed that student-directed learning was an effective way to learn. However, students in the physician-taught group rated their experience more favorably than those in the student-taught group (Table).

Student comments on the peer teaching included positive remarks such as, “You really learn the case you are assigned much better than you would just reading about it;” “More interactive, at our learning level, fellow students understand better what may be difficult concepts;” and “They were fun! (AND I learned a lot…) also it’s a more comfortable environment to ask fellow students questions…” But one student noted, “It’s just a personal preference that I tend to learn better from experts than students, but I didn’t mind participating in various modes of learning to accommodate all styles.”

DISCUSSION
This study demonstrates that student peer-taught simulations are both feasible and effective as a training tool during EM clerkships. Participants in peer-taught simulations achieved the same level of knowledge acquisition as those in physician-led sessions. However, students were not as satisfied with peer-taught simulation as with physician-led sessions. There are advantages and disadvantages to peer-taught simulation. Researchers have suggested a number of reasons why student-led teaching is effective.8 It is possible that peers explain




ideas in a more relatable way that fellow students can easily understand. Students may feel more at ease asking questions of peers than of physicians. The act of teaching can also deepen the student instructor’s understanding of a topic.9 Although students have inherently less knowledge of the subject matter than do physicians, the act of teaching and the need for instructor understanding of a topic likely compensate for students’ lower expertise and result in similar teaching and learning outcomes compared to physicians. It is also believed that students would have improved retention of knowledge regarding the cases they taught, which several students commented on months later; this might be confirmed with further study.

There are additional advantages to student-led simulation. Perhaps the most important benefit of peer-assisted learning is introducing students to the art of clinical education. Teaching is an important aspect of being a competent physician. Indeed, mastering the ability to teach peers and patients is a competency required by many medical education accrediting organizations.13,14 Peer-assisted learning allows students to participate in clinical education in a way they may not have previously experienced, yet it is a necessary skill as they transition into residency.

One significant drawback to small-group teaching in simulation is the physician resources required. Peer-taught simulation reduces the amount of physician time required to perform the simulation training, providing a significant advantage over physician-led simulation. Each month, our physician-led simulations required a total of nine hours of physician time, while our student-led simulations required only three hours for the single physician to monitor learning and answer questions. We estimate that this reduction in physician time yields an annual reduction of 60 hours, at an estimated cost savings of about $11,000 (a back-of-the-envelope version of this estimate is sketched below). Thus, student-led simulation is time and cost efficient. Student teachers were also more likely to arrive on time and less likely to cancel compared to physician teachers in our study, which can save additional time and money by making training schedules more reliable and avoiding a last-minute scramble to find a replacement.

There is potential to use peer-led simulations on a broader scale. This model can be expanded to other areas of undergraduate medical education such as preclinical coursework or other clerkship rotations. Additionally, this model can allow for increased use of simulation-based learning in graduate medical education as well as in faculty development. Previous research has shown that simulation-based learning can be effective for both technical and non-technical skills attainment2 and can be used to teach skills such as teamwork or professionalism.15 Future studies should work to increase acceptability and study peer teaching in other simulation settings.
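The resource figures quoted above reduce to simple arithmetic. In the sketch below, the 10 teaching months per year used to annualize the savings is our assumption; the paper reports only the 9 versus 3 hours per month and the approximate 60-hour and $11,000 annual figures.

```python
# Back-of-the-envelope version of the faculty-time savings quoted in the Discussion.
physician_led_hours_per_month = 9
student_led_hours_per_month = 3
teaching_months_per_year = 10          # assumed, not reported by the study

hours_saved = (physician_led_hours_per_month - student_led_hours_per_month) * teaching_months_per_year
reported_annual_savings = 11_000       # dollars, as reported
implied_hourly_cost = reported_annual_savings / hours_saved

print(hours_saved)                     # 60 hours of physician time per year
print(round(implied_hourly_cost))      # ~183 dollars per hour of faculty time (implied)
```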


LIMITATIONS
There are important considerations regarding the limitations of student-led simulation. Participants in the student-led groups viewed their experience less favorably than those in the physician-led groups. Little research has been conducted on student perceptions of student-led teaching, and what has been done is not in the area of simulation.16,17 A possible explanation for the less favorable reactions is that students perceived that their peer teachers provided incomplete or insufficiently detailed information compared to physician-led groups.18 In our study, a physician was present during the student-led simulations to answer questions, monitor what was being taught, correct mistakes, and provide additional explanations. Since students did not experience both the physician-led and the student-led simulations, students in the student-led simulations may have been comparing their experience to a perceived experience with physicians that may not have been realistic. One concern with student-led simulation is the perception that it is “less good” than physician-led simulation, which could affect students’ general perception of simulation in the future.

Another limitation of student-led simulations is that they create additional time demands for students, so tradeoffs may need to be made to ensure that students have adequate time to prepare. We hope the additional time leads to deeper understanding of key emergency care principles, but there was variability in learner engagement. Some learners would rather be told what they need to know instead of spending out-of-classroom time learning the information on their own. Our student-led teaching design also reduced opportunities to develop relationships between students and physicians, so it is important to ensure supplemental opportunities are available for students. Further limitations include the possibility that students may have perceived student-led topics as being less valuable than physician-led topics, so adequate explanation must be given to ensure understanding of the importance of the topics being taught. The pre-test was given after the ACLS didactic on the rotation, but one confounder with regard to post-test performance is that learning may have occurred on EM clinical shifts or via self-directed learning. We have submitted the cases to MedEdPortal.org so that others might reproduce or modify the study.

CONCLUSION
Student-led simulations are feasible, effective, and time and cost efficient as a training tool during EM clerkships. However, while student-led simulation was viewed positively, students were not as satisfied with peer-taught simulations as with physician-taught ones. This study demonstrates the effectiveness of student-led simulations within EM clerkships and invites the possibility of broader use within medical education.




Address for Correspondence: Joseph B. House, MD, University of Michigan Medical School, Department of Emergency Medicine, B1 382 R. TC/EM South Office, 1500 E. Medical Center Dr., Ann Arbor, MI 48109. Email: joshouse@med.umich.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 House et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. American Association of Medical Colleges. Core entrustable professional activities for entering residency 2013 [November 12, 2013]. Available at: https://www.mededportal.org/icollaborative/resource/887.
2. Okuda Y, Bryson EO, DeMaria S, et al. The utility of simulation in medical education: What is the evidence? Mt Sinai J Med. 2009;76(4):330-43.
3. Ziv A, Wolpe PR, Small SD, et al. Simulation-based medical education: An ethical imperative. Acad Med. 2003;78(8):783-8.
4. Ebbert MR, Morgan PM, Harris IB. A comprehensive student peer-teaching program. Acad Med. 1999;74(5):583-4.
5. Glynn L, MacFarlane A, Kelly M, et al. Helping each other to learn - a process evaluation of peer assisted learning. BMC Med Educ. 2006;6:18.
6. Robinson Z, Hazelgrove-Planel E, Edwards Z, et al. Peer-assisted learning: A planning and implementation framework. Guide supplement 30.7 - practical application. Med Teach. 2010;32(9):e366-e8.
7. Ten Cate O, Van de Vorst I, Van den Broek S. Academic achievement of students tutored by near-peers. Int J Med Educ. 2012;3:6-13.
8. Yu T, Wilson N, Singh P, et al. Medical students-as-teachers: A systematic review of peer-assisted teaching during medical school. Adv Med Educ Pract. 2011;2:157-72.
9. Kirkpatrick DL. Evaluating Training Programs: The Four Levels. 2nd ed. San Francisco (CA): Berrett-Koehler; 1998.
10. Gala R, Orejuela F, Gerten K, et al. Effect of validated skills simulation on operating room performance in obstetrics and gynecology residents: A randomized controlled trial. Obstet Gynecol. 2013;121:578-84.
11. Swamy M, Bloomfield T, Thomas R, et al. Role of SimMan in teaching clinical skills to preclinical medical students. BMC Med Educ. 2013;13:20.
12. Authors blinded for publication. Medical Student Peer Teaching Simulation. MedEdPORTAL Publications - Association of American Medical Colleges: 2015. 10181.
13. Accreditation Council for Graduate Medical Education. Program requirements for graduate medical education in emergency medicine 2012 [January 15, 2014]. Available at: http://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013-PR-FAQ-PIF/110_emergency_medicine_07012013.pdf.
14. Liaison Committee on Medical Education. Functions and structure of a medical school 2013 [January 15, 2014]. Available at: http://www.lcme.org/publications/functions2013.pdf.
15. Okuda Y, Bond W, Bonfante G, et al. National growth in simulation training within emergency medicine residency programs, 2003-2008. Acad Emerg Med. 2008;15(11):1113-6.
16. Henning J, Marcy M. Peer assisted learning in clinical education: Literature review. Athletic Training Educ J. 2008;3:84-90.
17. Shakar P, Singh A, Karki B, et al. Student perception about peer-assisted learning sessions in a medical school in Nepal. Webmed Central Medical Education. 2011;2:WMC002459.
18. Hendelman WJ, Boss M. Reciprocal peer teaching by medical students in the gross anatomy laboratory. J Med Educ. 1986;61(8):674-80.



Original Research

Interview Day Environment May Influence Applicant Selection of Emergency Medicine Residency Programs

Jason Lewis, MD*
Nicole Dubosh, MD*
Carlo Rosen, MD*
David Schoenfeld, MD, MPH*
Jonathan Fisher, MD, MPH†
Edward Ullman, MD*

*Beth Israel Deaconess Medical Center, Department of Emergency Medicine, Boston, Massachusetts † Maricopa Medical Center, Department of Emergency Medicine, Phoenix, Arizona

Section Editor: David P. Way, MEd Submission history: Submitted June 14, 2016; Revision received July 29, 2016; Accepted October 10, 2016 Electronically published November 15, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31245

Introduction: The structure of the interview day affects applicant interactions with faculty and residents, which can influence the applicant’s rank list decision. We aimed to determine if there was a difference in matched residents between those interviewing on a day on which didactics were held and had increased resident and faculty presence (didactic day) versus an interview day with less availability for applicant interactions with residents and faculty (non-didactic day).

Methods: This was a retrospective study reviewing interview dates of matched residents from 2009-2015.

Results: Forty-two (61.8%) matched residents interviewed on a didactic day with increased faculty and resident presence versus 26 (38.2%) on a non-didactic interview day with less availability for applicant interactions (p = 0.04).

Conclusion: There is an association between interviewing on a didactic day with increased faculty and resident presence and matching in our program. [West J Emerg Med. 2017;18(1)142-145.]

INTRODUCTION
Over the past five years the number of medical students applying to emergency medicine (EM), as well as the total number of EM residency positions, has increased.1 Furthermore, the average number of programs ranked by residents matching in EM through the National Resident Matching Program now exceeds 11,2 leading to more interviews per applicant. Given the increased competition for candidates, a better understanding of the factors that influence applicants is important.

Previous studies have shown that applicants frequently view the happiness of residents, program personality, ability to interact with residents, enthusiasm of the faculty, geographic location and interview day experience as most important when selecting a residency program, both in EM and in other specialties.3-11 Interactions with current residents


and EM faculty may improve prospective applicants’ understanding of the program. What is less evident, however, is whether the specific structure of an interview day influences an applicant’s rank list. Residency programs typically offer multiple interview days per week during the interview season. This may result in two distinct interview days. Often one day coincides with residency didactics and offers increased availability for interactions with residents and faculty members. In contrast, the second interview date may occur when there are no formal lectures and less availability for interaction with residency members. The goal of this study was to determine if there was a difference in matched residents between those interviewing on a day with increased resident and faculty presence versus an interview day with less availability for applicant interactions.




METHODS
This was a retrospective study performed at a tertiary medical referral center in Boston, Massachusetts, that is home to a three-year academic emergency medicine residency program. The interview date sheets from the 2009-2015 match years were reviewed by three of the study’s authors and confirmed three subsequent times for accuracy. During this time period, residency interviews were offered twice a week. One of these days, referred to as the “didactic day,” is when resident didactic lectures, departmental morbidity and mortality (M&M) conference, and protected faculty administrative time occur. The majority of the department’s core faculty and residents have protected time on these days; they are physically present in the EM administrative suite where the interview day takes place, and there is an increased presence of residents and faculty at the interview day lunch. On the second interview day, the “non-didactic day,” residents and the majority of faculty have no formal administrative or educational activities scheduled. While there is still a resident lunch organized on non-didactic days, often fewer faculty and residents are present.

The three reviewers extracted the following information from each interview data sheet: gender; whether the applicant interviewed on a “didactic” versus “non-didactic” day; whether or not the applicant matched at our residency program; United States Medical Licensing Examination (USMLE) Step 1 score; and applicant competitive score, an aggregate measure incorporating Step 1, letters of recommendation, clerkship grades and application materials used to assess whether an interview should be granted. We excluded matched applicants who completed a clerkship or rotation at the program, as they had extensive exposure to the residency. Proportions and Fisher exact tests were calculated using JMP 12 Pro (Cary, NC). We used the Wilcoxon rank sum test for USMLE and applicant scores, as these were non-normally distributed data. This study was reviewed by the institutional review board at our institution and was determined to be exempt from further review.
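As a rough illustration of the comparisons described above, the 2x2 analysis can be reproduced with standard statistical libraries. The sketch below is not the authors' code (the study used JMP 12 Pro); the counts come from the Results, and the score arrays are placeholders.

```python
# Illustrative reconstruction of the primary comparison (the study itself used JMP 12 Pro).
# Match counts are taken from the Results; the score arrays are placeholders, not study data.
from scipy import stats

# 2x2 table: rows = didactic vs non-didactic day, columns = matched vs not matched.
matched_didactic, total_didactic = 42, 504
matched_nondidactic, total_nondidactic = 26, 509

table = [
    [matched_didactic, total_didactic - matched_didactic],
    [matched_nondidactic, total_nondidactic - matched_nondidactic],
]
odds_ratio, p_value = stats.fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p_value:.2f}")   # should be close to the reported OR 1.69, p=0.04

# Non-normally distributed USMLE Step 1 and applicant scores compared with Wilcoxon rank-sum.
step1_didactic = [239, 241, 238]        # placeholder values
step1_nondidactic = [240, 239, 242]     # placeholder values
stat, p_step1 = stats.ranksums(step1_didactic, step1_nondidactic)
```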

RESULTS
From 2009-2015, 1,029 residency applicants were interviewed during the regular interview season. Fifteen applicants met exclusion criteria, leaving a total of 1,013 for analysis. There was no difference in the distribution of applicants by interview day or gender (Table). Applicants who interviewed on a didactic day had 1.69 times the odds of matching (p=0.04) (Table). There was no difference in applicant competitive score or USMLE Step 1 score for residents matched on a didactic versus non-didactic day (Table).

DISCUSSION
As the number of applicants continues to increase, programs have adjusted the total number of student interviews. To accommodate this increase, we created multiple interview days at our program. This study shows an association between the “didactic day” and applicant matching in our program. There was no difference between the two groups with regard to variables such as USMLE Step 1 or applicant competitive scores. While previous studies have shown that a variety of subjective factors influence an applicant’s decision to rank a residency program highly, this is, to our knowledge, the first study to look at the influence of two distinct interview dates on residents matched in a program.

Prospective residents gather information on residency programs through multiple sources, including online forums, websites and word of mouth.12,13 The brief time spent on site during the interview day, however, is integral to their decision.7 While there is evidence that specific questions asked during an interview can influence an applicant’s decision,14 there are no studies about the specific structure of the interview day and its influence on rank lists. Two general graduate medical education (GME) residency program studies show that the most commonly cited factors that applicants weighed in their ranking were the residency work environment, gleaned from quality time with the program director, faculty and chair and informal interaction with residents, and the relationship between faculty and residents within the program.7,8 Our didactic days include M&M conference during which

Table. Interviewed and matched applicants with applicant score and USMLE Step 1 score by interview day from the 2009-2015 match.
Included applicants: didactic day 504 (49.8%), non-didactic day 509 (50.2%)
Male: didactic day 296 (58.7%), non-didactic day 307 (60.3%), p=0.61
Applicants matched at our program: didactic day 42 (61.8%), non-didactic day 26 (38.2%), p=0.04
Median applicant score: didactic day 35, non-didactic day 34, p=0.34
Median USMLE Step 1 score: didactic day 239.5, non-didactic day 240, p=0.95
USMLE, United States Medical Licensing Examination.




residents and faculty interact candidly in an educational setting. This experience provides applicants with a better understanding of the faculty-resident relationship as well as the teaching skills of the faculty. Furthermore, the increased presence of faculty and residents during the day provides more opportunity for informal conversations and an improved understanding of the general feel of the residency program. In contrast, the non-didactic days typically have fewer faculty and residents in attendance, and the applicants are not exposed to the educational conference, which may influence the applicants’ perception of the program.

EM residency-specific studies have found factors of importance to applicants formulating their rank lists similar to those in the two GME studies.3,4,5,9 Geographic location is frequently cited as one of the most important deciding factors, but this is out of the program’s control. However, factors that programs can influence include overall happiness of residents, faculty enthusiasm, and the interview day.3,4,5,9 Similar factors have been shown to affect applicant rank lists in radiation oncology and radiology residencies.6,10 DeIorio et al. argue that the experience during the interview day influences the applicant’s perception of how happy the residents seemed, program personality, and faculty enthusiasm.9 Likewise, Love et al. suggest that applicants become increasingly more sophisticated over time about their choices and their own personal priorities with respect to selecting a program, which may be influenced by interviewing and communicating with other applicants, residents and faculty.5

With a greater number of faculty members present and engaging in M&M on didactic days, applicants are likely able to more fully appreciate faculty involvement in departmental activities. Furthermore, the increased number of residents present may provide a broader range of applicant-resident interactions. While both of our interview dates share a common “night out” with the residents, which has been shown to be important for applicants,11 there is an increased presence of residents on didactic days. This, combined with experiencing the faculty-resident interaction, can provide greater insight into the program’s personality, which may be influential in determining the applicants’ rank lists. Our results show that increased exposure to departmental activities and increased availability of faculty and resident interactions may positively influence the applicant match. The interview-day experience and the interaction with faculty and residents may be significant modifiable factors within the overall structure of the interview day.

LIMITATIONS
There are several limitations to our study. We had a relatively small sample size at only a single site. While we did account for previous rotators in our ED who matched into our program, we were unable to account for other potential confounding variables such as home institution, couples matching, city preference, and family considerations. Additionally, we could not account for applicants who did not match into our program and the reason for ranking other programs more highly. Finally, we do not have an objective measurement of the number of faculty and residents in attendance on any given interview day.

CONCLUSION
Our study found that the majority of residents who matched into our program interviewed on a “didactic day.” The greater presence of faculty and residents and increased availability for individual interactions among the applicants may provide better insight into the program and may prove beneficial for recruiting applicants. Larger and potentially multicenter studies would be needed to explore the full impact of increased resident and faculty presence and exposure to didactics as part of the interview day.

Address for Correspondence: Jason Lewis, MD, Beth Israel Deaconess Medical Center, Department of Emergency Medicine, One Deaconess Road, Rosenberg Building 2, Boston, MA 02215. Email: jlewis@bidmc.harvard.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Lewis et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. National Resident Matching Program. Results and Data: 2014 Main Residency Match. Available at: http://www.nrmp.org/wpcontent/uploads/2014/04/Main-Match-Results-and-Data-2014.pdf. Published April 2014. Accessed May 10, 2015.
2. National Resident Matching Program. Charting Outcomes in the Match: Characteristics of Applicants Who Matched to Their Preferred Specialty in the 2014 Main Residency Match. Available at: http://www.nrmp.org/wp-content/uploads/2014/09/Charting-Outcomes-2014-Final.pdf. Published April 2014. Accessed May 10, 2015.
3. Yarris LM, DeIorio NM, Lowe RA. Factors applicants value when selecting an emergency medicine residency. West J Emerg Med. 2009;(3):159-162.
4. DeSantis M, Marco CA. Emergency medicine residency selection: factors influencing candidate decisions. Acad Emerg Med. 2005;12:559-561.
5. Love JN, Howell JM, Hegarty CB, et al. Factors that influence medical student selection of an emergency medicine residency program: implications for training programs. Acad Emerg Med. 2012;19(4):455-460.
6. Brower JV, Mohindra P, Bradley KA, et al. Radiation oncology residency selection: a targeted assessment of factor importance among fourth-year medical students. Int J Radiation Oncol Biol Phys. 2014;88(4):967-968.
7. Nagler A, Andolsek K, Schlueter J, et al. To match or not: factors affecting resident choice of graduate medical education program. J Grad Med Educ. 2012:159-164.
8. Nuthalapaty FS, Jackson JR, Owen J. The influence of quality-of-life, academic, and workplace factors on residency program selection. Acad Med. 2004;79:417-425.
9. DeIorio NM, Yarris LM, Gaines SA. Emergency medicine residency applicant views on the interview day process. Acad Emerg Med. 2009;16(12):S67-S70.
10. Pretorius ES, Hrung J. Factors that affect National Resident Matching Program rankings of medical students applying for radiology residency. Acad Radiol. 2002;9(1):75-81.
11. Schlitzkus LL, Schenarts PJ, Schenarts KD. It was the night before the interview: perceptions of resident applicants about the preinterview reception. J Surg Educ. 2013;70(6):750-7.
12. The Student Doctor Network: The Mother of All Residency Reviews Stickies. Available at: http://forums.studentdoctor.net/threads/the-mother-of-all-residency-reviews-stickies.769419/. Published October 10, 2010. Accessed May 10, 2015.
13. Residency Secrets. The Residency Interviews Part 1: Interview Day. Available at: http://www.kaptest.com/blog/residency-secrets/2012/09/13/the-residency-interviews-part-1-interview-day/. Published September 13, 2012. Accessed May 10, 2015.
14. Santen SA, Davis RD, Brady DW, et al. Potentially discriminatory questions during residency interviews: frequency and effects on residents’ ranking of programs in the National Resident Matching Program. J Grad Med Educ. 2010:336-40.



Original Research

Characteristics of Real-Time, Non-Critical Incident Debriefing Practices in the Emergency Department

Nur-Ain Nadir, MD, MEHP(c)*¶
Suzanne Bentley, MD, MPH†
Dimitrios Papanagnou, MD, MPH‡
Komal Bajaj, MD§
Stephan Rinnert, MD¶
Richard Sinert, DO¶

*OSF St. Francis Medical Center, University of Illinois College of Medicine at Peoria, Department of Emergency Medicine, Peoria, Illinois † Elmhurst Hospital Center, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine and Department of Medical Education, Elmhurst, New York ‡ Thomas Jefferson University Hospital, Department of Emergency Medicine, Philadelphia, Pennsylvania § Jacobi Medical Center, Department of Obstetrics and Gynecology, New York, New York ¶ Kings County Hospital and SUNY Downstate Medical Center, Department of Emergency Medicine, New York, New York

Section Editor: David A. Wald, DO Submission history: Submitted July 8, 2016; Revision received October 3, 2016; Accepted October 27, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31467

Introduction: Benefits of post-simulation debriefings as an educational and feedback tool have been widely accepted for nearly a decade. Real-time, non-critical incident debriefing is similar to post-simulation debriefing; however, data on its practice in academic emergency departments (EDs) are limited. Although tools such as TeamSTEPPS® (Team Strategies and Tools to Enhance Performance and Patient Safety) suggest debriefing after complicated medical situations, they do not teach debriefing skills suited to this purpose. Anecdotal evidence suggests that real-time (non-critical incident) debriefings do in fact occur in academic EDs; however, limited research has been performed on this subject. The objective of this study was to characterize real-time, non-critical incident debriefing practices in emergency medicine (EM).

Methods: We conducted this multicenter cross-sectional study of EM attendings and residents at four large, high-volume, academic EM residency programs in New York City. Questionnaire design was based on a Delphi panel and pilot testing with an expert panel. We sought a convenience sample from a potential pool of approximately 300 physicians across the four sites with the goal of obtaining >100 responses. The survey was sent electronically to the four residency listservs with a total of six monthly completion reminder emails. We collected all data electronically and anonymously using SurveyMonkey.com; the data were then entered into and analyzed with Microsoft Excel.

Results: The data elucidate various characteristics of current real-time debriefing trends in EM, including its definition, perceived benefits and barriers, as well as the variety of debriefing formats currently being conducted.

Conclusion: This survey regarding the practice of real-time, non-critical incident debriefings in four major academic EM programs within New York City sheds light on three major, pertinent points: 1) real-time, non-critical incident debriefing definitely occurs in academic emergency practice; 2) in general, real-time debriefing is perceived to be of some value with respect to education, systems and performance improvement; and 3) although it is practiced by clinicians, most report no formal training in actual debriefing techniques. Further study is needed to clarify the actual benefits of real-time, non-critical incident debriefing as well as potential pitfalls of this practice and recommendations for best practices. [West J Emerg Med. 2017;18(1)146-151.]


INTRODUCTION
The emergency department (ED) is a complicated teaching environment. Prolonged patient waiting times, frequent interruptions, a diverse set of learners, and a variety of emergent, often unpredictable clinical cases, compounded with understaffing and limited resources, represent the major barriers to effective bedside teaching and provision of feedback to trainees. This challenging learning environment makes a strong argument for ED-specific teaching and learning strategies.1-3 Anecdotal reports suggest that one teaching tool and feedback strategy being employed by emergency medicine (EM) faculty is real-time, non-critical incident debriefing.

Real-time feedback during a clinical shift in the ED is an important component of a resident physician's medical education and can have a profound impact on clinical practice.2-5 Despite this, many residents feel they do not get adequate or useful feedback during their clinical shifts. Specific, tailored, learner-centered feedback is crucial but rarely performed.2-5 Debriefing is an educational tool based on the principles of adult learning theory that uses a simulated (or real) medical event to generate a discussion of the teachable moments within that event.6 Debriefings are critical to healthcare education because they are usually where the crucial process of feedback occurs and where learning is often clarified and translated into "take-home points" and guidelines for future practice.7,8 An example of such an event would be a resident physician encountering a challenging, agitated patient. The teachable opportunity would include a debriefing of the difficulties encountered by the resident and what went smoothly versus what could have been performed differently. Debriefing can be viewed as a conversation about a medical event, where any observed clinical performance gaps are addressed.9 Learners are asked open-ended questions in order to clarify their individual thought processes and are also asked to self-critique their performance.11,13,14 By promoting constructive self-critique and self-evaluation, medical debriefing instills practices of life-long learning, considered to be important elements of "practice-based learning," one of the six core medical education competencies required by the Accreditation Council for Graduate Medical Education.15

Research has clearly established the importance of feedback. Debriefing builds on many tenets of feedback, including recommendations that it should be timely, specific, tailored, and learner-centered.11,13-14 Most of this research, however, has been conducted in simulated environments. With the advent of communication tools such as TeamSTEPPS16 (Team Strategies and Tools to Enhance Performance and Patient Safety), debriefing is promoted as a means of self-reflection intended to lead to systems and process improvement.

METHODS
We recruited four EM residency programs for the purposes of this study. These four programs were chosen because they are large, high-volume, academic teaching hospitals within the city of New York. We contacted residency leadership from each hospital and obtained permission to distribute a questionnaire to EM staff. Questionnaire design commenced with a PubMed literature search using the terms "medical debriefing," "simulation debriefing," "non-critical incident debriefing" and "real-time debriefing." We then identified major landmark articles on medical educational debriefing practices, techniques, and skills. "Critical incident debriefing" and similar psychological debriefing articles were excluded. Based on the literature search, we drafted a questionnaire examining basic characteristics of debriefing. We identified EM educators and simulation debriefing experts based on their respective research publications and/or involvement in the fields of EM and healthcare simulation and invited them to participate in a Delphi panel for further refinement of the questionnaire. Feedback from the Delphi panel of six experts was incorporated into a second version of the questionnaire that was reviewed by the Delphi panel experts. It was then pilot-tested with a group of 10 emergency physicians. Feedback regarding phraseology and question order was incorporated into the final survey (see Appendix).

We sought a cross-sectional, convenience sample from a potential pool of approximately 300 physicians across the four sites with the goal of obtaining >100 responses. A sample size goal of 100 was instituted for this preliminary survey project convenience sample in order to include approximately 10 subjects per survey item. The survey was sent electronically to the four residency listserves from December 2012 to June 2013, with a total of six monthly completion reminder emails. We collected results electronically and anonymously using SurveyMonkey.com. All data were analyzed using Microsoft Excel. This study was deemed exempt by the local institutional review board.

RESULTS
We collected 157 responses, representing a response rate between 45% and 52%. Of the respondents, 52% were resident physicians and 47% were attending physicians. No other demographic data were collected. Fifty-nine percent of our respondents reported participating in non-critical incident debriefing* in clinical and simulated settings, whereas 14.6% reported debriefing only during clinical practice (Figure 1a).

*"Critical incident debriefing" and "critical incident stress debriefing" are well-established terms in the psychological literature that refer to a deliberate counseling method designed to mitigate the stress response generated by emotionally traumatic cases or "critical incidents," such as pediatric deaths or mass casualty events.17 Because "critical incident debriefing" focuses on stress mitigation rather than education, process, or systems improvement, it was excluded from the literature search.
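As a minimal sketch of the planning arithmetic quoted above (not the authors' analysis code), the snippet below reproduces the sample-size heuristic and the reported response-rate range; the survey item count is an assumption implied by, but not stated in, the text.

```python
# Minimal sketch, assuming roughly 10 scored survey items (not stated explicitly above).
n_survey_items = 10
subjects_per_item = 10          # "approximately 10 subjects per survey item"
sample_size_goal = n_survey_items * subjects_per_item
print(sample_size_goal)         # 100

responses = 157
pool_low, pool_high = 300, 350  # "approximately 300 physicians"; upper bound is illustrative
print(f"{responses / pool_high:.0%} to {responses / pool_low:.0%}")  # ~45% to ~52%
```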


Figure 1a-1d. Practice of real-time debriefing: a) percentage participation in simulated and/or real-time non-critical incident debriefings; b) percentage with formal training in debriefing skills; c) percentage expressing interest in formal debriefing training; d) reported percentages of debriefings occurring per month.

When asked what debriefing meant to them, 87.6% of physicians reported that it was a discussion based on real or simulated cases where participants self-reflect and self-analyze their actions and emotions to improve or sustain performance in the future. Other responses are depicted in Table 1a. With respect to whether respondents had been formally trained in any debriefing technique, only 14% reported affirmatively (Figure 1b). Several comments in this section specified that respondents had learned debriefing skills by watching colleagues or had learned them during simulation debriefing courses. There was significant interest in formal debriefing training in the group surveyed (Figure 1c). Thirty percent of our respondents reported debriefing on clinical shifts between 1-3 times monthly. Three percent reported debriefing between 4-6 times monthly. The majority of respondents reported less than one debriefing a month (Figure 1d).

Perceived benefits of real-time debriefings are depicted in Table 1e. The majority of respondents indicated that they perceive debriefings to be beneficial for clearing the air after an event (47%), providing feedback to learners and colleagues (66%), identifying knowledge and process gaps (55%), identifying systems errors (55%), promoting team unity and cohesiveness (37%), and identifying medico-legal ramifications (60%). With respect to the formats of real-time debriefings conducted (Table 1b), 84% of respondents reported that debriefings were performed as a group, while 37.6% reported that debriefings included other professions such as nursing and ancillary staff; 22.9% reported performing individualized debriefings for each learner. Only 15.3% reported inclusion of other specialties, and in the "comments" section several respondents noted that interdisciplinary debriefings were often met with resistance from the other specialties.

Table 1d reflects the different kinds of situations that emergency physicians are most likely to debrief. The majority of respondents reported debriefing about adverse events, near-adverse events, a colleague who was visibly emotionally upset, difficulties during clinical procedures, and miscommunication or poor teamwork; 24.8% reported debriefing after every cardiac code and 25.5% after every trauma code. One respondent commented that each debriefing was followed up with a personal email to learners to reinforce clinical points learned during debriefings.

Several barriers to real-time medical debriefing were reported by respondents, as illustrated in Table 1c; 85.4% reported lack of time during a busy clinical shift as a major deterrent. Other barriers included lack of appropriate training (48.4%), lack of space (35.7%), disinterested colleagues (34.4%), and work environment considerations such as confrontational or defensive co-workers (29.9%). Under "comments" for this question, a few respondents noted that debriefing was not stressed enough in curricula and therefore was often not on academic physicians' radar.


Table. Characteristics of real-time debriefing as perceived and understood by emergency physicians. Values are percentage of responses (n).

1a. Emergency physicians' understanding of "debriefing"
i) A discussion based on a real or simulated case scenario about its management: 45.9 (72)
ii) A post-medical error discussion at an administrative level, such as a root cause analysis or morbidity and mortality conference: 12.7 (20)
iii) A discussion, based on real or simulated cases, aimed at identifying knowledge or performance gaps: 51.6 (81)
iv) A discussion, based on real or simulated cases, where participants self-reflect and analyze their actions and emotions, to improve or sustain performance in the future: 87.9 (138)

1b. Formats of real-time debriefings being performed
i) Separately for each individual learner: 22.9 (36)
ii) Group of learners (residents or medical students): 84.1 (132)
iii) Inter-professional (with nursing and/or ancillary support staff): 37.6 (59)
iv) Interdisciplinary: 15.3 (24)
v) Initially as a group, followed by individually for learners: 13.4 (21)

1c. Perceived barriers to real-time debriefing
i) A lack of training in debriefing skills: 48.4 (76)
ii) Time constraints: 85.4 (134)
iii) Disinterested colleagues: 34.4 (56)
iv) Lack of appropriate space: 35.7 (54)
v) Work environment considerations (emotional/defensive/confrontational co-workers): 29.9 (47)

1d. Situations most likely to be debriefed
i) Emotionally upset colleagues: 66.2 (104)
ii) Adverse event: 68.8 (108)
iii) Near-adverse event: 59.2 (93)
iv) Difficulties in clinical procedure performance: 59.2 (93)
v) Miscommunications and poor teamwork: 65.6 (103)
vi) Emotionally charged resuscitations: 58.0 (91)
vii) All cardiac codes: 24.8 (39)
viii) All trauma codes: 25.5 (40)
ix) All of the above: 24.8 (39)

1e. Perceived benefits of real-time debriefings
i) Clears the air: 42.0 (66)
ii) Provides a venue for learner and colleague feedback: 65.6 (103)
iii) Provides a venue for addressing learner and colleague knowledge and/or performance gaps: 54.8 (86)
iv) Promotes team cohesiveness and unity with respect to patient care: 55.4 (87)
v) Provides opportunity for discussion of the medico-legal ramifications of adverse or near-adverse events: 15.9 (25)
vi) Identifies systems errors, leading to systems-process improvements: 59.8 (94)
vii) All of the above: 36.9 (58)

DISCUSSION
Real-time feedback, such as that accomplished through

real-time debriefing during a clinical shift in the ED is an important component of a resident physician's medical education and can have a profound impact on clinical practice.2-5 Debriefings are significant because they provide a venue for the crucial processes of feedback, reflection and experiential learning that lead to clinical practice pearls for each learner.7,8 The results from this study confirm that real-time debriefings occur frequently in EDs despite only 14% of respondents reporting formal training in debriefing techniques. The majority of respondents would like formal training, reflecting growing awareness of the potential benefits of real-time debriefing. Although the feedback from debriefing is perceived to have value, whether there is a proven benefit to patient care, morbidity, mortality and learner education is difficult to pinpoint and remains to be investigated.


Any potential pitfalls of real-time debriefing, such as medico-legal ramifications or an unstable work environment as a consequence of debriefing, also remain to be elucidated. It would also be interesting, and likely beneficial, to study the effects of instituting a department-wide debriefing protocol on learner education, staff interaction, and systems/process improvement. The effect of non-critical incident debriefing on patient safety is another potential area of research. Finally, as there is little clarity on the format of debriefing techniques being used, it would be enlightening to investigate which kinds of debriefing occur in the ED environment.

Simulation debriefing is based on Kolb's principles of experiential learning.15 Kolb's cycle of experiential learning is based on learners' experiencing a particular event, reflecting on that event, conceptualizing it abstractly, and actively experimenting with their newly conceptualized knowledge. Experiential learning occurs in clinical practice during medical student clerkships, residency, and beyond. Learners experience a particular clinical case and reflect on the management of the case. Learners then conceptualize the knowledge and use it when seeing a similar case in the future.15 The assumption in this picture is that learners perform this learning cycle independently. While that may be true for some learners, a facilitated approach to reflection and conceptualization may aid in the learning process. Non-critical incident debriefing can be viewed as the facilitation of experiential learning in real time. It can be tailored to complex clinical cases or events. It can be applied to a diverse set of learners, focusing on learner-specific knowledge, process, or procedural gaps. When involving other disciplines and professions, it can also pave the way for effective teamwork. In these ways, real-time, non-critical incident debriefing has the potential to address some of the barriers to effective bedside teaching in the academic and non-academic ED mentioned before.1

LIMITATIONS
This study is limited by the nature of any survey-based project and the potential biases introduced by self-reporting. It is further limited by the modest response rate. In addition, the survey data provide only a brief glimpse into the practice patterns and trends relating to debriefings in academic EDs in one metropolitan city, which may introduce regional bias and may not allow generalization to national characteristics of this phenomenon.

CONCLUSION
This survey regarding the practice of real-time, non-critical incident debriefings in four major academic emergency programs within New York City sheds light on three major, pertinent points: 1) real-time, non-critical incident debriefing definitely occurs in clinical emergency practice; 2) in general, real-time debriefing is perceived to be of some value with respect to education, systems and performance improvement; and 3) although it is practiced by clinicians, most report no formal training in actual debriefing techniques. In conclusion, further studies are needed to clarify the actual benefits of real-time, non-critical incident debriefing, as well as details on potential pitfalls of this practice and recommendations for best practices for use.

Address for Correspondence: Suzanne Bentley, MD, MPH, Elmhurst Hospital Center, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine and Department of Medical Education, Emergency Department, B1-27, 79-01 Broadway, Elmhurst, NY 11373. Email: Suzannebentley@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Nadir et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Sherbino J, Frank J, Lee C, et al. Evaluating "ED STAT!": a novel and effective faculty development program to improve emergency department teaching. Acad Emerg Med. 2006;13:1062.
2. Yarris LM, Linden JA, Gene Hern H, et al. Attending and resident satisfaction with feedback in the emergency department. Acad Emerg Med. 2009;16 Suppl 2:S76-81.
3. Yarris LM, Jones D, Kornegay JG, et al. The Milestones Passport: a learner-centered application of the Milestone framework to prompt real-time feedback in the emergency department. J Grad Med Educ. 2014;6(3):555-560.
4. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-781.
5. Yarris LM, Fu R, LaMantia J, et al. Effect of an educational intervention on faculty and resident satisfaction with real-time feedback in the emergency department. Acad Emerg Med. 2011;18(5):504-512.
6. Rudolph JW, Simon R, Dufresne RL, et al. There's no such thing as "nonjudgmental" debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1:49.
7. Rudolph JW, Foldy EG, Robinson T, et al. Helping without harming: the instructor's feedback dilemma in debriefing--a case study. Simul Healthc. 2013;8(5):304-316.
8. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med. 2008;15:1010.
9. Rudolph JW, Simon R, Rivard P, et al. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25:361.
10. Ahmed M, Sevdalis N, Paige J, et al. Identifying best practice guidelines for debriefing in surgery: a tri-continental study. Am J Surg. 2012;203:523.
11. Okuda Y, Bryson EO, DeMaria S Jr, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med. 2009;76:330.
12. Nasca TJ, Brigham T, Philibert I, et al. The next GME accreditation system — rationale and benefits. N Engl J Med. 2012;366:1051.
13. Bond WF, Deitrick LM, Arnold DC, et al. Using simulation to instruct emergency medicine residents in cognitive forcing strategies. Acad Med. 2004;79:438.
14. Bond WF, Deitrick LM, Eberhardt M, et al. Cognitive versus technical debriefing after simulation training. Acad Emerg Med. 2006;13:276.
15. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2:115.
16. Agency for Healthcare Research and Quality. TeamSTEPPS® Instructor Guide. Available at: http://www.ahrq.gov/teamsteppstools/instructor/fundamentals/index.html. Accessed October 10, 2015.
17. Pia F, Burkle FM, Stanley SAR, et al. ACFASP review: Critical Incident Stress Debriefing (CISD). IJARE. 2011;5:130.


Original Research

Continuing Medical Education Speakers with High Evaluation Scores Use More Image-Based Slides
Ian Ferguson, BA*
Andrew W. Phillips, MD, MEd†
Michelle Lin, MD‡

*Washington University School of Medicine, St. Louis, Missouri † Stanford University, Department of Anesthesia, Division of Critical Care, Stanford, California ‡ University of California, San Francisco, Department of Emergency Medicine, San Francisco, California

Section Editor: David A. Wald, DO Submission history: Submitted July 8, 2016; Revision received September 12, 2016; Accepted October 27, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31484

Introduction: Although continuing medical education (CME) presentations are common across health professions, it is unknown whether slide design is independently associated with audience evaluations of the speaker. Based on the conceptual framework of Mayer's theory of multimedia learning, this study aimed to determine whether image use and text density in presentation slides are associated with overall speaker evaluations.

Methods: This retrospective analysis of six sequential CME conferences (two annual emergency medicine conferences over a three-year period) used a mixed linear regression model to assess whether post-conference speaker evaluations were associated with image fraction (percentage of image-based slides per presentation) and text density (number of words per slide).

Results: A total of 105 unique lectures were given by 49 faculty members, and 1,222 evaluations (70.1% response rate) were available for analysis. On average, 47.4% (SD=25.36) of slides had at least one educationally-relevant image (image fraction). Image fraction significantly predicted higher overall evaluation scores [F(1, 100.676)=6.158, p=0.015] in the mixed linear regression model. The mean (SD) text density was 25.61 (8.14) words/slide but was not a significant predictor [F(1, 86.293)=0.055, p=0.815]. Of note, the individual speaker [χ2(1)=2.952, p=0.003] and speaker seniority [F(3, 59.713)=4.083, p=0.011] significantly predicted higher scores.

Conclusion: This is the first published study to date assessing the linkage between slide design and CME speaker evaluations by an audience of practicing clinicians. The incorporation of images was associated with higher evaluation scores, in alignment with Mayer's theory of multimedia learning. Contrary to this theory, however, text density showed no significant association, suggesting that these scores may be multifactorial. Professional development efforts should focus on teaching best practices in both slide design and presentation skills. [West J Emerg Med. 2017;18(1):152-158.]

INTRODUCTION
Slide-based presentations, created with tools such as Microsoft PowerPoint™ and Apple Keynote™, serve as a common format in continuing medical education (CME) conferences. Consequently, developing effective design principles for such multimedia presentations in health professions education is essential to optimize information delivery, attendee


engagement, and adult learning. Researchers have developed instructional design principles for multimedia learning based on cognitive psychology experiments on learning and instruction. Richard Mayer’s cognitive theory of multimedia learning particularly provides a conceptual framework to describe how learners process multimedia.1,2 According to Allan Paivio and modified


by Mayer, individuals process materials into either a visual or auditory channel within their working memory, each having a finite capacity. This is known as the dual-coding theory.3 Adherence to design principles can optimize learning by balancing the cognitive load for each of these channels. Alley et al. refined Mayer’s design principles to the specific demands of scientific presentations.4 Key tenets include replacing text with visual representations of the evidence and reducing the number of words on a slide, while the presenter tells the story. Presentations using these principles have been shown to improve retention and transfer of new knowledge.5-7 Although such multimedia design principles are supported by established theoretical underpinnings and empirical learning experiments, relevant published studies primarily involved undergraduate and medical students in controlled laboratory or classroom learning environments.5-8 No research has yet determined whether these principles are generalizable to adult learners in the setting of CME conferences. The purpose of this study was to assess the response to evidence-based multimedia design principles in CME conference presentations by an audience of practicing clinicians. Our primary endpoints were the association of image fraction (percentage of image-based slides per presentation) and text density (average number of words per slide) with speaker evaluation scores. We hypothesized that presentation slides with more image-based slides and fewer words would result in higher speaker evaluation scores compared to presentations that did not adhere to these design principles.

METHODS
Participants and Study Design
This retrospective study analyzed attendees' evaluation scores of speakers from six sequential national emergency medicine (EM) CME conferences over a three-year period. More specifically, we extracted data from the High Risk Emergency Medicine (HREM) and Topics in Emergency Medicine (TEM) conferences for 2010, 2011, and 2012. The same institution's academic emergency department hosted both of these conferences. A mixed linear regression model assessed whether speaker evaluations were associated with image fraction (percent of image-based slides per presentation) and text density (number of words per slide), as well as the speaker and his/her academic seniority. This study received exemption status by the institutional review board at the University of California, San Francisco.

Data Collection
We collected three data elements for the six conferences: conference attendee evaluations, slide content, and demographics for each speaker. Anonymized attendee evaluations of the speakers were provided to the study group by the conference planners. Each lecture was evaluated on a five-point Likert scale (1=poor, 5=excellent) in each of the domains of delivery, content, and practical value. We used the overall evaluation score, defined as the mean score across all three domains, as the primary outcome measure because multimedia, slide-based learning is a complex process that includes aspects of all three domains. Each lecture was videotaped and archived by CMEDownload.com. A single study author viewed all of them and collected study data from each lecture (image fraction, text density, and total presentation time). In the pilot phase, the author team corroborated the data and collectively clarified definitions for image fraction and text density for the data collection protocol. Image fraction was defined as the number of image-based slides divided by the total number of slides in the presentation. An image-based slide was any slide with an educationally-relevant image contributing to its teaching point, such as a graph, table, diagram, or illustrative photo. Thus, we did not count non-educational images, such as animations, institutional logos, or personal photos, as "images." For presentations repeated by the same speaker in a different conference or year, only the most recent presentation was included. We excluded presentations by one study author and one study collaborator. Faculty demographics collected included gender and academic rank, defined as clinical instructor, assistant professor, associate professor, or full professor. This information was publicly available on the conference brochure and/or an Internet search of their academic departments.
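To make the two slide-set metrics concrete, the sketch below shows how image fraction and text density can be computed for a single presentation; the Slide structure and the example deck are hypothetical and not drawn from the study data.

```python
# Illustrative sketch only: the two slide-set metrics defined above.
from dataclasses import dataclass

@dataclass
class Slide:
    word_count: int                  # words counted per the protocol's exclusions
    has_educational_image: bool      # graph, table, diagram, or illustrative photo

def image_fraction(slides):
    """Share of slides containing at least one educationally-relevant image."""
    return sum(s.has_educational_image for s in slides) / len(slides)

def text_density(slides):
    """Average number of words per slide across the presentation."""
    return sum(s.word_count for s in slides) / len(slides)

deck = [Slide(30, False), Slide(12, True), Slide(25, True), Slide(40, False)]
print(image_fraction(deck))   # 0.5 -> half of the slides are image based
print(text_density(deck))     # 26.75 words per slide
```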


Data Collection Protocol for Slide Content
The master data-collection form for slide content included the following elements: name of presenter, conference name, year, total presentation time, total number of slides (excluding the title, disclosure, objectives, and summary slides), time per slide, number of teaching points per slide, number of words per teaching point, and whether a slide included an educationally-relevant image (e.g., figure, chart, table, video). A "teaching point" was defined a priori as a discretely readable block of text, explicitly marked by bullets, numbers, or otherwise clearly separated. We excluded words embedded in figures, such as decision trees, tables, image captions, annotations, slide headers, citations, and journal article screenshots, from the final word count per slide.

Statistical Analysis
We analyzed initial univariate tests for factors with theoretical association with overall speaker evaluation using independent t tests, univariate ANOVA, or Pearson's r as appropriate, followed by a fixed multivariate regression for the naïve model, as is standard.9 The naïve model included the primary endpoints of image fraction (percentage of image-based slides, calculated as a decimal value for analysis purposes) and text density (average words per slide). This retrospective analysis contained a large number of


lecturers who each gave a wide range of total presentations (range 1-8), and some speakers gave more than one presentation per conference. We therefore used a mixed linear regression for the final model, a common modeling method in the general education literature.10 (It is similar to a propensity score in that multiple factors are accounted for in a single variable.) In short, the mixed linear regression allows researchers to create a single variable that describes the variance for multiple related categorical factors, rather than create a new dummy variable for each of the categorical factors, thereby retaining statistical power.9 We entered all data initially into Excel 14.2.5 (Microsoft Corporation, Seattle, Washington) and conducted all analyses using SPSS v21 (IBM Corporation, Armonk, NY).
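The analyses above were run in SPSS; purely as an illustration of the model structure described (fixed effects plus a per-speaker random intercept), a rough equivalent in Python's statsmodels might look like the following, with hypothetical column and file names. It is a sketch of the approach, not the study's code.

```python
# Sketch of a mixed linear regression with a random intercept per speaker.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lecture_evaluations.csv")  # hypothetical file: one row per lecture

# Fixed effects: image fraction, text density, speaker seniority;
# random intercept for each speaker absorbs repeated lectures by the same presenter.
model = smf.mixedlm(
    "overall_eval ~ image_fraction + text_density + C(seniority)",
    data=df,
    groups=df["speaker"],
)
result = model.fit()
print(result.summary())  # fixed-effect estimates (b), standard errors, group variance
```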

RESULTS
Table 1 summarizes the data on conference lectures, evaluation response rates, and attendee clinical experience by conference and year. We analyzed a total of 105 unique presentations given by 49 faculty members from three High Risk EM (HREM) and three Topics in EM (TEM) CME conferences (2010-2012). From the video archive of 156 lectures, we included only 105 in this study; those excluded were repeat lectures already included in the analysis and lectures by two speakers who were involved in the design of this study. The minimum and maximum number of lectures provided by a single presenter were one and eight, respectively, with a mean ± standard deviation (SD) of 2.14 ± 1.62 and a median of two lectures. Speaker seniority was distributed by academic rank as follows: clinical instructor (n=2, 1.9%), assistant professor (n=44, 42.9%), associate professor (n=34, 32.4%), full professor (n=25, 23.8%). The mean evaluation score for all speakers was 4.50 ± 0.24 (SD) out of a maximum five points. A total of 1,222 evaluations (70.1% response rate) were completed by conference attendees, who collectively had a mean of 14.9 years of clinical experience. Clinical experience information was erroneously not captured in the 2010 TEM conference evaluation form.

Slide-set characteristics abstracted from the recorded lectures yielded an average image fraction of 0.47 ± 0.25, meaning that 47% of the slides in a presentation were image based. The mean text density (words per slide) was 25.61 ± 8.14.

Univariate and Unadjusted Model Analyses
We performed initial univariate analyses to assess for potential factors in the model. Slide text density did not have a significant relationship with evaluations (r=-0.084, p=0.394). In contrast, image fraction was weakly associated with overall evaluation scores (r=0.197, p=0.044). We anticipated the possibility of a polynomial relationship between slide text density and image fraction with evaluation scores, since too few and too many words or images may negatively impact evaluations. However, both scatter plots demonstrated linear relationships for the available data points. The conference [F(5, 99)=3.49, p=0.006], speaker [F(48, 56)=3.30, p<0.001], and speaker seniority [F(3, 101)=5.89, p=0.001] were each associated with significant differences in mean evaluation scores in univariate tests. Total presentation time (r=0.009, p=0.928), time per slide (r=-0.072, p=0.464), and gender [t(103)=-0.963, p=0.338] were not significantly associated with mean evaluation scores. An unadjusted model with slide image fraction and text density found a trend of image fraction predicting the mean evaluation [F(105)=3.489, p=0.065], while mean text density did not [F(105)=0.016, p=0.90]. Both primary endpoints were retained for the adjusted model because of their theoretical importance.
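As a companion to the univariate screening reported above, the following sketch shows how such tests could be run with scipy; the data frame columns and input file are hypothetical, and this is an illustration rather than the study's analysis code.

```python
# Sketch of univariate screening: Pearson's r, one-way ANOVA, and a t test.
import pandas as pd
from scipy import stats

df = pd.read_csv("lecture_evaluations.csv")  # hypothetical file: one row per lecture

# Continuous predictors vs. mean evaluation score
r_text, p_text = stats.pearsonr(df["text_density"], df["overall_eval"])
r_img, p_img = stats.pearsonr(df["image_fraction"], df["overall_eval"])

# Categorical predictor (speaker seniority) vs. mean evaluation score
groups = [g["overall_eval"].values for _, g in df.groupby("seniority")]
f_sen, p_sen = stats.f_oneway(*groups)

# Gender compared with an independent-samples t test
t_gen, p_gen = stats.ttest_ind(
    df.loc[df["gender"] == "F", "overall_eval"],
    df.loc[df["gender"] == "M", "overall_eval"],
)
print(r_text, p_text, r_img, p_img, f_sen, p_sen, t_gen, p_gen)
```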

Adjusted Model Analysis
We created a mixed linear regression model to account for violations of the independence assumption, required for a standard regression analysis, that arise because presentations are clustered within presenters and conferences. The final adjusted model included image fraction, slide text density, and speaker seniority as fixed effects. The speaker was represented as the random effects intercept. The total presentation time,

Table 1. Recorded conference lectures, evaluation response rates, and attendee clinical experience from the six included conferences. HREM, High Risk Emergency Medicine; TEM, Topics in Emergency Medicine; *, data were not collected for that conference year.

Conference | Included lectures (total conference lectures) | Evaluations completed (% of registered attendees) | Attendee mean years in clinical practice
HREM 2010 | 15 (24) | 266/380 (70%) | 14
HREM 2011 | 13 (22) | 258/290 (84.2%) | 13
HREM 2012 | 22 (28) | 149/245 (60.8%) | 14
TEM 2010 | 9 (19) | 262/306 (85.6%) | *
TEM 2011 | 17 (32) | 204/320 (63.8%) | 12
TEM 2012 | 29 (33) | 83/202 (41.1%) | 16
Total | 105 (158) | 1222/1743 (70.1%) | 14.9


Figure 1. Unadjusted, univariate correlation between overall speaker evaluation scores (on a 5-point scale) and the fraction of image-based slides in their presentations.

conference, time per slide, and speaker’s gender did not significantly impact the model. The text density per slide did not significantly predict overall evaluation scores, [F(1, 86.293)=0.055, p=0.815], in the adjusted model. However, the image fraction significantly predicted overall evaluation scores [F(1, 100.676)=6.158, p=0.015] and had the greatest influence of any of the factors on predicting evaluation scores (b=0.277 on a 5-point Likert scale), as illustrated in Figure 1. Seniority [F(3, 59.713)=4.083, p=0.011] and presenter

[χ2(1)=2.952, p=0.003] also significantly predicted overall evaluation scores. (Presenter significance is given as χ2 because it was the random intercept in the mixed model.) The lowest-rank academic speakers (clinical instructors) received much lower evaluations, but this was in the context of only two speakers with this rank. Table 2 and Figure 2 present the estimates for all variables in the adjusted regression model.

DISCUSSION
This is the first published study assessing the association

Table 2. Mixed linear regression model to predict speaker evaluations. Faculty seniority comparisons are against full professor rank.

Variable | Estimate of variable's effect on the model (b) | Standard error | 95% Confidence interval
Mean text density (words/slide) | -0.0001 | 0.004 | [-0.008, 0.007]
Image fraction | 0.277 | 0.112 | [0.056, 0.498]
Faculty seniority: clinical instructor (n=2) | -0.591 | 0.221 | [-1.035, -0.146]
Faculty seniority: assistant professor (n=44) | -0.092 | 0.075 | [-0.242, 0.057]
Faculty seniority: associate professor (n=34) | 0.037 | 0.079 | [-0.122, 0.196]
Faculty seniority: full professor (n=25) | n/a | n/a | n/a
Presenter | 0.0249 | 0.0081 | [0.0131, 0.0470]
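As a quick illustration of how the image-fraction estimate in Table 2 can be read, the sketch below converts a change in the share of image-based slides into a predicted change in the overall evaluation score; it uses only the point estimate and ignores the confidence interval and the other model terms.

```python
# Back-of-the-envelope reading of the image-fraction coefficient (illustrative only).
b_image_fraction = 0.277   # points on the 5-point scale per unit change in image fraction

def predicted_score_change(delta_image_fraction, b=b_image_fraction):
    """Predicted shift in the overall evaluation score, other terms held fixed."""
    return b * delta_image_fraction

print(predicted_score_change(0.25))  # ~0.07 points for 25 percentage points more image slides
print(predicted_score_change(1.00))  # 0.277 points from no image slides to all image slides
```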


Figure 2. Interval plot showing the estimate of each variable's effect in the mixed linear regression model, with 95% confidence intervals. Faculty seniority comparisons were made against full professor rank.

between slide design and CME speaker evaluations by an audience of practicing clinicians. Higher evaluation scores were associated with presentations that had more image-based slides (image fraction) but, contrary to our hypothesis, not those with fewer words per slide (text density). Speaker seniority was also associated with higher scores. These three findings can be understood in the context of the existing literature and the conceptual framework of Mayer's theory of multimedia learning and the dual-coding theory.

Our primary study finding was that image fraction was associated with higher speaker evaluation scores. The mixed linear regression model demonstrated a b estimate of 0.277 for image fraction. Although this value seems relatively low, it sits within a 95% confidence interval that rises as high as 0.5. Furthermore, conference attendees limited their evaluation scores to a narrow range (3.5-5.0). The functional scale was therefore only 1.5 points, of which 0.277 represents a potential 13% absolute change, a practically significant effect. The association between the use of image-based slides and speaker scores aligns with the fundamental multimedia premise of Mayer's theory. Several studies have demonstrated that students learn and retain knowledge


better when viewing slides with written text plus graphics compared to written text alone.5-7,11 The incorporation of images, however, should be judiciously considered. Not all images are educationally valuable. Images should be used only if they are integral to the teaching point. Humorous icons or animations can distract from learning and violate the multimedia principle of coherence, which advocates for the elimination of extraneous written text, audio, or graphics.1 If included, images should be high resolution and large enough to be read by all audience members.12,13 Blurry and small images (figures or tables) may detract from the message and negatively impact learning.14 If needed, such images should be redrawn, enlarged to the full screen size, or removed altogether.

In contrast to image use, text density was not associated with higher speaker evaluation scores, which is in opposition to Mayer's theory and our hypothesis. Excess text would seem to violate the modality principle, which states that on-screen text should not be repeated aloud. This becomes distracting and adds unnecessary, redundant cognitive load to both the visual and auditory channels in one's working memory. Two explanations might account for why text density showed no association in our study. First, the speakers all incorporated a similar average number of words per slide (25.61 ± 8.14) within a narrow range. This may not have allowed adequate differentiation among the presentations. Second, the modality principle is not as applicable for presentations with many technical terms or symbols.11 CME conference topics generally present more complex concepts compared to non-medical or more basic talks.

In addition to the use of image-based slides, evaluation scores were also associated with speaker seniority. Speaker qualities such as delivery, tone, and confidence may have contributed to these higher scores. Additionally, a speaker's reputation and stature may also have influenced the evaluations.

Our findings argue for more professional development training in health professions education on evidence-based multimedia design principles for slide design, as well as speaking skills. The default templates for PowerPoint encourage poor design elements such as text-heavy bullet points. Instead, slides should be thoughtfully designed with sound multimedia principles to accompany and supplement the speaker's message. For CME conference planners and speakers, our study illustrates that slide design should not be an afterthought in planning a presentation because it can significantly affect learner satisfaction. Subsequent research should focus on reproducing this study in CME conferences of other health professions specialties and with larger audiences to ensure generalizability. Additionally, future work could compare post-test knowledge among CME conference attendees when the same speaker gives the same presentation using a different slide set at another CME conference.


LIMITATIONS
There are several limitations, primarily revolving around the study methodology. The outcome measure was the mean speaker evaluation score. This was a score derived from the domains of delivery, content, and practical value; there was no specific domain for slide design. Many confounding variables likely affected the mean score, such as lecture environment and presentation topic, for which we did not account. Additionally, the CME evaluation forms were not validated. As is common in many CME conferences, custom templates were used; in our study, all six conference events used a similar evaluation template. The response rate for the evaluation forms was 41-86% (mean approximately 70%). Although the lower rates may introduce nonresponse bias, the overall rate falls within the typical response rate range of 60-80%.15

Only one author viewed and recorded data from all of the 105 included presentations. Although this may have introduced human error and interpretive biases in the data collection process, a second author corroborated the text and image counts from sample slides in the pilot phase of finalizing the data collection protocol.

Our mixed linear regression study demonstrated an association between slide design and higher speaker evaluation scores, but this does not equate to causation. Theoretically, more skilled speakers may have been trained to use more image-based slides. Our study is the first to show at least an association between CME speaker scores and slide design.

Attendee evaluation scores of speaker quality do not necessarily equate to learning gains. The Institute of Medicine's Committee on Planning a Continuing Health Care Professional Education Institute has advocated for validated evaluation forms with learning-oriented outcomes for continuing professional development. This committee identified that evaluations of instructors are also important in the multifaceted research on professional development.16 Thus, for our study, we felt that speaker evaluation scores were a reasonable initial outcome measure for CME conferences. Furthermore, conference organizers can use them to assess speaker effectiveness and suitability for future engagements. Future studies should prospectively examine both short- and long-term knowledge retention using post-conference tests.

CONCLUSION
Our study contributes to the growing literature by Mayer, Issa, and others studying and refining the effectiveness of multimedia design principles on slide-based presentations. Uniquely, we focused on a novel learner population, the practicing clinician, in CME conference settings. Application of evidence-based design principles, such as incorporation of images into slides, and speaker seniority are associated with higher speaker evaluation scores. In contrast to design principles, however, text density showed no significant association with speaker evaluation scores. Formal professional development programs for health professions educators should focus on cultivating effective slide design and presentation skills.

Address for Correspondence: Michelle Lin, MD, San Francisco General Hospital, Department of Emergency Medicine, 1001 Potrero Avenue, Suite 1E21, San Francisco, CA 94110. Email: Michelle.Lin@emergency.ucsf.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. Dr. Michelle Lin is the Chief Executive Officer of Academic Life in Emergency Medicine, LLC and a Deputy Editor for EBSCO Health's DynaMed Plus.

Copyright: © 2016 Ferguson et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Mayer RE. Applying the science of learning to medical education. Med Educ. 2010;44(6):543-9.
2. Sweller J, van Merrienboer J, Paas F. Cognitive architecture and instructional design. Educ Psychol Rev. 1998;10(3):251-96.
3. Paivio A. Mental Representations: A Dual Coding Approach. Oxford: Oxford University Press; 1990.
4. Alley M, ed. The Craft of Scientific Presentations: Critical Steps to Succeed and Critical Errors to Avoid. 1st ed. New York, NY: Springer-Verlag; 2003.
5. Issa N, Schuller M, Santacaterina S, et al. Applying multimedia design principles enhances learning in medical education. Med Educ. 2011;45:818-26.
6. Issa N, Mayer RE, Schuller M, et al. Teaching for understanding in medical classrooms using multimedia design principles. Med Educ. 2013;47:388-96.
7. Najjar LJ. Principles of educational multimedia user interface design. Hum Factors. 1998;40:311-23.
8. Mayer RE. The promise of multimedia learning: using the same instruction design methods across different media. Learn Instr. 2003;13(2):125-39.
9. Twisk JWR. Applied Multilevel Analysis. 1st ed. Cambridge: Cambridge University Press; 2006.
10. Raudenbush SW. Educational applications of hierarchical linear models: a review. J Educ Stat. 1988;13(2):85-116.
11. Clark RC, Mayer RE. E-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. 3rd ed. San Francisco, CA: John Wiley & Sons; 2011.
12. Collins J. Educational techniques for lifelong learning: making a PowerPoint presentation. RadioGraphics. 2004;24:1177-83.
13. Collins J, Mullan BF, Holbert JM. Evaluation of speakers at a national radiology continuing medical education course. Med Educ Online. 2002;7:17. Available at: http://med-ed-online.net/index.php/meo/article/view/4540. Accessed July 6, 2016.
14. Bartsch RA and Cobern KM. Effectiveness of PowerPoint presentations in lectures. Comput Educ. 2003;41:77-86.
15. Johnson T and Owens L. Survey response rate reporting in the professional literature. Paper presented at the 58th Annual Meeting of the American Association for Public Opinion Research, Nashville, May 2003. Available at: http://www.srl.uic.edu/publist/Conference/rr_reporting.pdf. Accessed Sep 9, 2016.
16. Committee on Planning a Continuing Health Professional Education Institute, Institute of Medicine. Redesigning Continuing Education in the Health Professions [Internet]. 1st ed. Washington. Available at: https://www.nap.edu/catalog/12704/redesigning-continuing-education-in-the-health-professions. Accessed Sep 9, 2016.



Original Research

Can Emergency Medicine Residents Predict Cost of Diagnostic Testing?
Christopher R. Tainter, MD*
Joshua A. Gentges, DO†
Stephen H. Thomas, MD, MPH‡
Boyd D. Burns, DO†

*University of California, San Diego, Department of Emergency Medicine and Department of Anesthesiology, Division of Critical Care, San Diego, California † The University of Oklahoma, Tulsa, Department of Emergency Medicine, Tulsa, Oklahoma ‡ Weill Cornell College of Medicine in Qatar and Hamad Medical Corporation, Department of Emergency Medicine, Doha, Qatar

Section Editor: Sally A. Santen, MD, PhD Submission history: Submitted June 13, 2016; Revision received August 12, 2016; Accepted October 24, 2016 Electronically published November 21, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31234

Introduction: Diagnostic testing represents a significant portion of healthcare spending, and cost should be considered when ordering such tests. Needless and excessive spending may occur without an appreciation of the impact on the larger healthcare system. Knowledge regarding the cost of diagnostic testing among emergency medicine (EM) residents has not previously been studied.

Methods: A survey was administered to 20 EM residents from a single ACGME-accredited three-year EM residency program, asking for an estimation of patient charges for 20 commonly ordered laboratory tests and seven radiological exams. We compared responses between residency classes to evaluate whether there was a difference based on level of training.

Results: The survey completion rate was 100% (20/20 residents). We noted significant discrepancies between the median resident estimates and the actual charge to the patient for both laboratory and radiological exams. Nearly all responses were underestimates of the actual cost. The group median underestimation was $114 for laboratory testing, $57 for radiographs, and $1,058 for computed tomography exams. There was improvement in accuracy with increasing level of training.

Conclusion: This pilot study demonstrates that EM residents have a poor understanding of the charges burdening patients and health insurance providers. In order to make balanced decisions with regard to diagnostic testing, providers must appreciate these factors. Education regarding the cost of providing emergency care is a potential area for improvement of EM residency curricula, and warrants further attention and investigation. [West J Emerg Med. 2017;18(1):159-162.]

INTRODUCTION
Healthcare expenditures continue to escalate at a significant rate, now representing 17.5% of the gross domestic product (GDP) in the United States.1 Diagnostic testing accounts for a large proportion of this increase, perhaps prompted by a desire to avoid malpractice claims.2 This desire must be balanced against the cost to the patient and the healthcare system. Increased awareness of the cost of diagnostic testing may change practice patterns.3,4


During post-graduate training, emergency medicine (EM) residents learn how diagnostic testing (e.g., laboratory evaluation and radiologic testing) can influence their clinical decision-making. There is a focus on how these tests are interpreted based on current scientific evidence, knowledge of pathophysiology, and emulation of faculty practice patterns. However, often little attention is paid to the potentially detrimental effects of these strategies. In addition to false-positive results, which may lead to unnecessary procedures or


additional testing, these tests represent a significant source of resource utilization for the hospital, increased length of stay, and financial burden to the patient and/or health insurance provider. Needless and excessive spending may occur without an appreciation of the impact on the larger healthcare system. The Accreditation Council for Graduate Medical Education (ACGME) includes cost awareness as a core competency of EM training, although this is recognized as an area for improvement.5,6 Knowledge of the actual cost of diagnostic testing among EM residents has not been studied, and the specific educational needs in this area are not known.

METHODS
An anonymous survey was administered to 20 residents from a single ACGME-accredited three-year EM residency program at a hospital-based emergency department (ED) with approximately 60,000 annual visits. The survey consisted of a fill-in-the-blank questionnaire listing 20 commonly ordered laboratory tests and seven radiological exams. It was administered during a single didactic conference during the 2012-2013 academic year. Residents provided their best estimation of the cost of each to the patient. Cost basis was

provided by the laboratory billing coordinator and represents patient charges (not institutional cost). Radiology charges did not include a radiologist interpretation fee. We compared responses between residency classes to evaluate whether there was a difference based on level of training. All protocols were reviewed and approved by the institutional review board.

RESULTS
The survey completion rate was 100% (20/20 residents). There were six postgraduate year (PGY)-1 residents (interns), six PGY-2 residents, and eight PGY-3 residents. Twenty-five percent of the residents were female. We noted significant discrepancies between the median resident estimates and the actual charge to the patient for both laboratory testing (Table 1) and radiological exams (Table 2) among every residency class, and as a whole. Nearly all estimates were below the actual cost, with only a few estimates above. The group median underestimation was $114 for laboratory testing, $57 for radiographs, and $1,058 for computed tomography (CT) exams. The urine drug screen assay was noted to be a particularly expensive test, as it incorporates individual screening tests for eight different drug types: amphetamine/methamphetamines, barbiturates, benzodiazepines, cannabinoids, cocaine, methadone, opiates, and phencyclidine (PCP). Because the order for the test incorporates all of these screens, the charge is far higher than that of most other laboratory tests studied, even when excluding (as our lab does) further confirmatory testing.
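To show how a group median underestimation like those reported above is assembled, here is a small sketch using a handful of (group median estimate, actual charge) pairs from Table 1; because it covers only a subset of the tests rather than the full response data, it will not reproduce the $114 figure exactly.

```python
# Illustrative sketch: median underestimation from paired estimate/actual charges (USD).
from statistics import median

# (group median resident estimate, actual patient charge), a few rows from Table 1
lab_tests = [(25, 60), (50, 71), (162.5, 102), (50, 119), (150, 1136)]

underestimates = [actual - estimate for estimate, actual in lab_tests]
print(median(underestimates))  # 35 for this toy subset of tests
```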

Table 1. Emergency medicine residents' estimates of laboratory testing charges (U.S. dollars) compared to the actual cost to patients.

Test | EM-1 median | EM-2 median | EM-3 median | Group median | Group range | Actual
UA (dip) | 20 | 40 | 25 | 25 | 10-100 | 60
UA (micro) | 50 | 65 | 75 | 50 | 20-150 | 71
Type/Screen | 60 | 250 | 187.5 | 162.5 | 10-500 | 102
CPK | 37.5 | 70 | 77.5 | 50 | 15-300 | 119
Lipase | 37.5 | 85 | 50 | 50 | 15-300 | 125
Amylase | 27.5 | 77 | 62.5 | 50 | 15-175 | 128
ABO/Rh | 62.5 | 175 | 175 | 87.5 | 25-500 | 135
Rapid strep | 25 | 35 | 32.5 | 30 | 10-200 | 140
CKMB | 37.5 | 70 | 100 | 50 | 15-300 | 164
CBC | 55 | 80 | 107.5 | 83.5 | 20-200 | 166
Trop-I | 37.5 | 75 | 137.5 | 50 | 15-300 | 174
BNP | 37.5 | 57.5 | 147.5 | 55 | 15-800 | 185
ABG | 30 | 200 | 125 | 100 | 20-400 | 185
Blood Cx | 85 | 225 | 275 | 200 | 50-500 | 198
q-βhCG | 57.5 | 80 | 100 | 75 | 25-300 | 201
etOH | 57.5 | 67.5 | 100 | 87.5 | 10-250 | 208
APAP | 50 | 190 | 150 | 125 | 20-800 | 245
BMP | 40 | 70 | 100 | 77.5 | 20-200 | 255
CMP | 60 | 97.5 | 150 | 110 | 30-300 | 274
UDS | 62.5 | 100 | 225 | 150 | 10-500 | 1136

BMP, basic metabolic panel; CMP, comprehensive metabolic panel; CPK, creatine phosphokinase; CKMB, creatine kinase-MB; Trop-I, troponin-I; BNP, brain natriuretic peptide; q-βhCG, quantitative β-human chorionic gonadotropin; UDS, urine drug screen (qualitative); etOH, serum ethanol level; APAP, serum acetaminophen level; Blood Cx, blood culture; UA, urinalysis; CBC, complete blood count; Type/Screen, blood type and antibody screen; ABO/Rh, blood type and Rh type; Rapid strep, group-A strep lateral flow test; ABG, arterial blood gas (iStat G7 cartridge).


Table 2. Residents' estimates of radiology testing charges (U.S. dollars) compared to the actual cost to patients.

Test | EM-1 median | EM-2 median | EM-3 median | Group median | Group range | Actual charge
pCXR | 150 | 175 | 175 | 175 | 50-500 | 200
Ankle radiograph | 150 | 400 | 100 | 150 | 50-750 | 200
Humerus radiograph | 150 | 275 | 112.5 | 137.5 | 69-500 | 300
CT c-spine | 1500 | 750 | 800 | 800 | 300-3500 | 1700
CT chest | 1750 | 1000 | 1750 | 1050 | 400-5000 | 1800
CT brain | 1125 | 825 | 1150 | 950 | 80-4500 | 2000
CT abd/pel | 2500 | 1000 | 1750 | 1257.5 | 400-10000 | 3000

pCXR, portable chest x-ray; EM, emergency medicine; CT, computed tomography.

DISCUSSION
Previous investigations have described a lack of knowledge regarding the cost of testing among internal medicine residents and faculty,5,6 as well as pediatric residents and faculty,7 and awareness may be improved with education.8 There is almost no data describing whether this knowledge deficit exists among EM trainees. The reliance on diagnostic testing in the ED, as well as the escalating costs of providing care, make this a particularly important arena in which to improve this knowledge. Providers cannot make a balanced decision when ordering diagnostic tests without an understanding of costs. Decisions regarding diagnostic evaluation are particularly salient to the ED, where the focus is often diagnosis of undifferentiated complaints. Pursuit of every possible diagnosis in every patient is cost-prohibitive, but cost concerns must be weighed against the possibility of patient harm from missed diagnoses that could require immediate intervention. Charges do not always reflect hospital cost, but they do represent a cost to the healthcare system as a whole. In addition, they may have a significant impact on individuals who do not have the negotiating power of large insurance providers and may be burdened by the entirety of these charges. It is therefore prudent to involve patients in shared decision-making, and this can only be achieved if that information is known by the treating provider.

LIMITATIONS
This investigation is limited to a single institution and has a small sample size. It reflects similar findings from previous investigations in other specialties. Knowledge about the cost of diagnostic testing is lacking in many EM training programs. However, the improved accuracy demonstrated with increased level of training is encouraging.

It is apparent that some degree of familiarity is attained through clinical experience, even if cost estimation is not included as a part of the didactic curriculum. It is worth noting that the survey results may not reflect the thought process used for clinical decision-making among these residents. Because the participants realized that the aim of the survey was to elicit their knowledge regarding the cost of diagnostic testing, they may have minimized or exaggerated their estimates relative to what they have in mind when ordering a test in the ED. However, the demonstrated lack of awareness may also indicate that cost does not play a large role in their decision-making process. Finally, it should be noted that the exact charges studied at this institution may not reflect the charges for testing at other institutions.

CONCLUSION
This pilot study demonstrates that EM residents have a poor understanding of the charges borne by patients and health insurance providers. In order to make balanced decisions regarding diagnostic testing, providers must appreciate these factors. Education regarding the cost of providing emergency care is a potential area for improvement of EM residency curricula and warrants further attention and investigation.

Address for Correspondence: Christopher R. Tainter, MD, University of California, San Diego, Department of Emergency Medicine and Department of Anesthesiology, Division of Critical Care, 200 West Arbor Drive #8676, San Diego, CA 92103. Email: kittainter@gmail.com.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Tainter et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/


REFERENCES
1. Centers for Medicare & Medicaid Services. National Healthcare Expenditures by type of service and sources of funds CY 1960-2014. 2015. Available at: https://www.cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/nationalhealthexpenddata/nationalhealthaccountshistorical.html. Accessed March 18, 2016.
2. Jena AB, Schoemaker L, Bhattacharya J, et al. Physician spending and subsequent risk of malpractice claims: observational study. BMJ. 2015;351:h5516.
3. Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing physicians of the charges for outpatient diagnostic tests. N Engl J Med. 1990;322(21):1499-1504.
4. Goetz C, Rotman SR, Hartoularos G, et al. The effect of charge display on cost of care and physician practice behaviors: a systematic review. J Gen Intern Med. 2015;30(6):835-42.
5. Cook DJ. Physicians' perceptions of laboratory costs in the intensive care unit. Hamilton Regional Critical Care Group. Clin Invest Med. 1992;15(5):476-81.
6. Sehgal RT, Gorman P. Internal medicine physicians' knowledge of health care charges. J Grad Med Educ. 2011;3(2):182-7.
7. Rock TA, Xiao R, Fieldston E. General pediatric attending physicians' and residents' knowledge of inpatient hospital finances. Pediatrics. 2013;131(6):1072-80.
8. Long T, Bongiovanni T, Dashevsky M, et al. Impact of laboratory cost display on resident attitudes and knowledge about costs. Postgrad Med J. 2016.


Original Research

Exploring Scholarship and the Emergency Medicine Educator: A Workforce Study Jaime Jordan, MD*†‡ Wendy C. Coates, MD*†‡ Samuel Clarke, MD, MAS§ Daniel P. Runde, MD¶ Emilie Fowlkes, MD¶ Jacqueline Kurth, MD|| Lalena M. Yarris, MD, MCR#

*Harbor-UCLA Medical Center, Department of Emergency Medicine, Torrance, California † David Geffen School of Medicine at University of California Los Angeles, Los Angeles, California ‡ Los Angeles Biomedical Research Institute at Harbor-UCLA, Torrance, California § University of California Davis Medical Center, Department of Emergency Medicine, Sacramento, California ¶ University of Iowa Hospitals and Clinics, Department of Emergency Medicine, Iowa City, Iowa || University of California Los Angeles, Department of Emergency Medicine, Los Angeles, CA # Oregon Health and Sciences University Medical Center, Department of Emergency Medicine, Portland, Oregon

Section Editor: David A. Wald, DO Submission history: Submitted September 24, 2016; Accepted October 27, 2016 Electronically published December 5, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.32636

Introduction: Recent literature calls for initiatives to improve the quality of education studies and support faculty in approaching educational problems in a scholarly manner. Understanding the emergency medicine (EM) educator workforce is a crucial precursor to developing policies to support educators and promote education scholarship in EM. This study aims to illuminate the current workforce model for the academic EM educator.
Methods: Program leadership at EM training programs completed an online survey consisting of multiple choice, completion, and free-response type items. We calculated and reported descriptive statistics.
Results: 112 programs participated. Mean number of core faculty/program: 16.02 ± 7.83 [14.53-17.5]. Mean number of faculty full-time equivalents (FTEs)/program dedicated to education is 6.92 ± 4.92 [5.87-7.98], including (mean FTE): Vice chair for education (0.25); director of medical education (0.13); education fellowship director (0.2); residency program director (0.83); associate residency director (0.94); assistant residency director (1.1); medical student clerkship director (0.8); assistant/associate clerkship director (0.28); simulation fellowship director (0.11); simulation director (0.42); director of faculty development (0.13). Mean number of FTEs/program for education administrative support is 2.34 ± 1.1 [2.13-2.61]. Determination of clinical hours varied; 38.75% of programs had personnel with education research expertise.
Conclusion: Education faculty represent about 43% of the core faculty workforce. Many programs do not have the full spectrum of education leadership roles and educational faculty divide their time among multiple important academic roles. Clinical requirements vary. Many departments lack personnel with expertise in education research. This information may inform interventions to promote education scholarship. [West J Emerg Med. 2017;18(1)163-168.]


INTRODUCTION
Education research is an important component of the advancement of any medical discipline, and recent publications have outlined a need for initiatives to improve the quality of education studies and support educators who wish to approach educational challenges, questions, and theory in a scholarly manner.1-9 Medical educators have reported being limited by the following: 1) time to develop and maintain research skills and engage in all phases of the research process; 2) funding to support time and provide research resources; 3) access to expertise for study design and statistical analyses; 4) access to mentors, both within and outside of emergency medicine (EM); and 5) a sense that education research does not result in extrinsic or intrinsic reward in our current educator paradigm.10-12 However, there is a gap in our knowledge of how EM educators perceive these barriers, and what solutions would be most helpful to them in achieving their education research goals. Although workforce studies have described the landscape of emergency physicians in general, little is known about what the academic model looks like for EM educators, and how much variability may exist between departments.13,14
The Council of Emergency Medicine Residency Directors (CORD) Education Scholarship Task Force and CORD Academy for Scholarship in Education in Emergency Medicine recommended that the EM education research community analyze the specific needs of EM educators in a rigorous workforce study and needs assessment. Understanding the job descriptions, available resources and staffing for conducting their educational missions, and the needs of the EM educator workforce is a crucial first step to designing and implementing interventions that will improve the quality of education research and scholarship in EM. The purpose of this study was to illuminate the current workforce model for the academic EM educator.

METHODS
Study Setting and Participants
We identified EM residency training programs through the Society of Academic Emergency Medicine (SAEM) residency directory.15 One member of the program leadership from each program was invited to participate in the study based on available contact information, with preference for program director over assistant/associate program director over program coordinator. We collected data between April 2015 and October 2015. This study was deemed exempt by the institutional review board of Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center.

Study Design
This was a prospective mixed-methods survey. We identified contact information for potential participants through the SAEM residency directory,15 programs' individual

websites, and personal knowledge by study team members. Subjects were invited to participate by email and provided with a link to an Internet-based survey. Two follow-up email invitations were sent at weekly intervals to non-responders. Informed consent was implied by those subjects who chose to click on the survey link. Instrument To optimize content validity, the instrument was developed by our study group of EM education researchers with recommendations from members of the CORD Education Scholarship Taskforce according to established guidelines for survey research.16 The survey consisted of multiple-choice, completion, and free-response type items. All items were read aloud and discussed among members of the study group to ensure response process validity, and were then piloted with a small group of representative subjects. We made revisions for clarity based on feedback from pilot testing. In order to maximize response rates, incorporate all available data, and preclude guessing answers to unfamiliar queries, completion of all questions on the survey was not required. The survey instrument is available in Appendix A. Data Analysis We calculated and reported descriptive statistics for multiple choice items and completion items with numeric values. Two members of the study team, JJ and WC, independently performed qualitative analysis on the one freeresponse item using a thematic approach. They examined data line by line to identify recurring concepts and to assign codes, which were then further refined into themes using the constant comparative method.17 Discrepancies were resolved by discussion and negotiated consensus to establish a final coding scheme, which was applied to all data. A third analyst, LY, using the agreed-upon coding scheme, independently coded the data. The third analyst had an inter-rater agreement of 86.8% with the first two analysts, and disputes were resolved by in-depth discussion. RESULTS A total of 112/158 (71%) allopathic programs completed the survey and their responses were analyzed. Because of a low response rate from osteopathic programs 9/25 (36%), we excluded their data from analysis. Characteristics of the programs included in the analysis can be found in Table 1. The mean number of faculty full time equivalents (FTEs) whose primary role is devoted to the educational mission of the program is 6.92 ± 4.92 [5.87-7.98]. These FTEs are distributed among various roles (Table 2 and Figure 1). These faculty have a mean number of 2.34 ± 1.1 [2.13-2.61] administrative FTEs dedicated to education to support them (Table 3 and Figure 2). Few participants took the provided opportunity to write in additional faculty and administrative roles under the “Other” category. For faculty, responses


included ultrasound director, other fellowship directors, resident research director, chair, medical school course director, and remediation director. For administration, responses included assistant residency coordinator, assistant fellowship coordinator, and administrative assistant. Because of the limited number of responses, these data were not formally analyzed.
A total of 81/112 (72.3%) programs responded to the question about clinical hours; 67.9% (55/81) reported having an established standard for the number of hours that core faculty, as defined by the Residency Review Committee, work without grant funding or "buy down" from other internal or external sources. This includes faculty from all academic sections. For those who did have an established standard of clinical hours, the mean number of hours/week for core faculty from all academic sections was 26.34 ± 4.64 [25.11-27.58]. When analyzing how base clinical hours for education faculty were determined, major themes that emerged (in descending order of frequency) were determination by academic/administrative position, uniform departmental base standard, individual negotiation, and adherence to Accreditation Council for Graduate Medical Education (ACGME) guidelines. See Table 4 for results of qualitative analysis.
A total of 31/80 (38.8%) programs reported having a designated person in their department with expertise specific to education research study design and statistical analysis. Of these, 35.5% (11/31) have an education research director.

Table 1. Program characteristics in a study of emergency medicine educator workforce. Values are n unless otherwise noted (total n=112).
Location
  West: 19
  Southwest: 10
  Midwest: 29
  Southeast: 21
  Northeast: 33
Duration of training
  3 years: 79
  4 years: 33
Number of residents: mean = 38.95 ± 13.96 [36.36-41.53]
Number of core faculty: mean = 16.02 ± 7.83 [14.53-17.5]

Table 2. Mean full time equivalent (FTE) for faculty education roles.*
Role | Mean FTE ± SD [95% CI]
Vice chair for education | 0.25 ± 0.38 [0.17-0.33]
Director of medical education | 0.13 ± 0.31 [0.06-0.19]
Education fellowship director | 0.20 ± 0.44 [0.11-0.3]
Residency program director | 0.83 ± 0.28 [0.77-0.89]
Associate residency director | 0.94 ± 0.77 [0.77-1.1]
Assistant residency director | 1.1 ± 1.05 [0.87-1.32]
Medical student clerkship director | 0.8 ± 0.46 [0.7-0.89]
Assistant/associate clerkship director | 0.28 ± 0.44 [0.18-0.37]
Simulation fellowship director | 0.11 ± 0.3 [0.05-0.18]
Simulation director | 0.42 ± 0.42 [0.33-0.51]
Director of faculty development | 0.13 ± 0.31 [0.06-0.2]
Other | 0.11 ± 0.52 [0-0.22]
*Note: One individual fulfills one FTE. If an individual fulfills multiple roles, respondents were asked to estimate the portion of FTE that is dedicated to each role. FTE, full time equivalent.
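The bracketed intervals reported alongside the means in Tables 1-3 are consistent with a normal-approximation 95% confidence interval for a mean (mean ± 1.96·SD/√n). The short Python sketch below illustrates that calculation; it is not the authors' code, and the per-item sample size used here is an assumption, since respondents were not required to answer every question.

import math

def ci95(mean, sd, n):
    # Normal-approximation 95% confidence interval for a mean.
    half_width = 1.96 * sd / math.sqrt(n)
    return (round(mean - half_width, 2), round(mean + half_width, 2))

# Number of core faculty per program (Table 1): mean 16.02, SD 7.83, assuming n = 112 programs.
print(ci95(16.02, 7.83, 112))  # -> (14.57, 17.47), close to the reported [14.53-17.5]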

Figure 1. FTEs for educational faculty roles (bar chart; y-axis: mean FTE; x-axis: education role).


Table 3. Mean full time equivalent (FTE) for education administrative roles.*
Role | Mean FTE ± SD [95% CI]
Education manager | 0.17 ± 0.36 [0.09-0.25]
Residency coordinator | 1.26 ± 0.53 [1.15-1.38]
Medical student coordinator | 0.59 ± 0.44 [0.49-0.68]
Education research administrative assistant | 0.13 ± 0.36 [0.05-0.21]
Direct administrative assistant for education faculty | 0.21 ± 0.47 [0.1-0.31]
Other | 0.04 ± 0.17 [0-0.08]
*Note: One individual fulfills one FTE. If an individual fulfills multiple roles, respondents were asked to estimate the portion of FTE that is dedicated to each role.

DISCUSSION
According to our results, a significant portion (approximately 43%) of core faculty are identified by their departments as "education faculty," or faculty whose primary academic role is devoted to the educational mission of their departments. This likely represents a combination of both perceived importance and practical need. The ACGME requires that EM residency programs maintain a ratio of 1:3 between core faculty and residents.18 A critical mass of dedicated faculty is required to run a training program efficiently to develop and implement curricula, assess learners, provide mentoring and advising for trainees, participate in residency selection and clinical competency committees, provide scheduling oversight, and ensure continual quality improvement and program evaluation processes are in place. The majority of education roles described in our study had

mean FTEs of less than one, indicating that departments do not have the full spectrum of leadership positions and/or faculty are serving more than one role. This may create added strain on faculty members who strive to complete the duties of multiple key roles or fill in service gaps potentially without additional protected time or added financial benefit. With this information it is easy to see why time has been identified as a limiting factor to performing education research.10 In addition, burnout and attrition are prevalent in academic medicine.19 Administrative workload may contribute to burnout, but understanding how educators’ perceptions of the intrinsic reward garnered from their roles is crucial to guide efforts to promote wellness and career satisfaction in academic educators. Our data also suggest a similar pattern for administrative support staff dedicated to the educational mission of departments. In parallel to the many hats that some education faculty must wear, administrative support staff also appear to perform multiple jobs. It is not clear whether these tasks fall entirely within the realm of education for the administrative staff, or if the departments perceive staff members as purely

Figure 2. FTEs for education administrative roles.


Table 4. Results of qualitative analysis regarding how base clinical hours for education faculty are determined.
Question: Please describe how base clinical hours for education faculty are determined in your department.
Theme | Number of comments | Example
Determination by academic/administrative position | 20 | "based on teaching responsibilities, hours decreased depending on educational roles"
Uniform departmental base standard | 17 | "All full time faculty work 10-11 shifts per month, including education faculty. Vice Chairs work 6-7 shifts per month and the chair works 5-6 shifts per month."
Individual negotiation | 15 | "The chair sets each faculty base hours on an individual basis. There is no set standard, and no transparency about how the systems works. It can change year to year."
Accreditation Council for Graduate Medical Education (ACGME) guidelines | 14 | "ACGME maximum"

administrative support rather than an extension of the educational arm of their departments. According to our survey data, the ratio of education faculty to education administrative staff is approximately 3:1. This may represent an additional barrier that educators face in performing scholarship. Without appropriate administrative support, time that faculty could spend on education research and other scholarly endeavors may be diverted to clerical tasks that they must perform themselves. One could argue that this is not the most efficient use of resources. We found that not all programs have an established standard for clinical hours. Additionally, clinical requirements and how these requirements are determined vary among programs. This may notably contribute to variability in the time and effort that faculty can put towards research and scholarship. It would be interesting to know the degree of impact that clinical load has on scholarly productivity. The mean number of clinical hours/ week in this study is approximately 26. If faculty at programs with fewer clinical and administrative hours have a higher degree of scholarly productivity, then this would serve as evidence for allocating more protected time to those who perform education research and scholarship. We also found that many programs lack personnel with specific methodological expertise in education research, which differs from more traditional research methodology. We postulate that minimal requirements for expertise should include familiarity with qualitative study design as well as familiarity with standard educational formats of hypothesis testing, such as experimental, quasiexperimental, and survey design. This is consistent with current literature citing lack of mentorship and access to expertise in research study design and statistical analysis as barriers.10 It is conceivable that existing statistical support faculty could expand their toolkit of knowledge


to include these methodologies, or additional faculty who are specifically trained in this area could bridge this gap, especially as education scholarship fellowship graduates become increasingly prevalent. LIMITATIONS Because this was a survey study the results are subject to the limitations inherent to this type of data collection. Additionally, because of the exclusion of osteopathic programs due to poor response rate, the results are limited to allopathic programs. Since we were not able to obtain data from all programs it is possible that additional opinions were not represented. We desired to keep the survey brief to maximize response and in doing so we may have missed important information. We caution readers to consult multiple sources prior to assigning a specific number of hours for each position in their departments. Many questions are still left unanswered. How are departments funding their education mission? What proportion of EM educators conduct education research? Do educators have enough time to fulfill their academic and scholarly responsibilities and does administrative and clinical workload impact scholarly productivity? What rewards do educators receive for their work and how does this affect their wellness and career satisfaction? It will be important to follow up this study with a comprehensive needs assessment of all relevant stakeholder groups. CONCLUSION This study describes the current workforce of EM academic educators and provides further data to support previously identified barriers that educators face in performing scholarship. The results of this study may inform policies and interventions to promote education scholarship and support educators in their academic careers.


Address for Correspondence: Jaime Jordan, MD, Harbor-UCLA Medical Center, Department of Emergency Medicine, 1000 W. Carson Street, Box 21, Torrance, CA 90502. Email: jordan@emedharbor.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Jordan et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Sullivan GM, Simpson D, Cook DA, et al. Redefining Quality in Medical Education Research: A Consumer's View. J Grad Med Educ. 2014;6(3):424-429.
2. Reed DA, Cook DA, Beckman TJ, et al. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002-1009.
3. McGaghie WC, Issenberg SB, Petrusa ER, et al. A critical review of simulation-based medical education research: 2003-2009. Med Educ. 2010;44(1):50-63.
4. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: a systematic review. Med Educ. 2007;41(8):737-745.
5. Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ. 2011;45(3):227-238.
6. Reed DA, Beckman TJ, Wright SM. An assessment of the methodologic quality of medical education research studies published in The American Journal of Surgery. Am J Surg. 2009;198(3):442-444.
7. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79(10):955-960.
8. Lurie SJ. Raising the passing grade for studies of medical education. JAMA. 2003;290(9):1210-1212.
9. Cook DA, Bowen JL, Gerrity MS, et al. Proposed standards for medical education submissions to the Journal of General Internal Medicine. J Gen Intern Med. 2008;23(7):908-913.
10. Yarris LM, Juve AM, Artino AR Jr, et al. Expertise, Time, Money, Mentoring, and Reward: Systematic Barriers That Limit Education Researcher Productivity - Proceedings From the AAMC GEA Workshop. J Grad Med Educ. 2014;6(3):430-6.
11. Zibrowski EM, Weston WW, Goldszmidt MA. 'I don't have time': issues of fragmentation, prioritisation and motivation for education scholarship among medical faculty. Med Educ. 2008;42(9):872-8.
12. Goldszmidt MA, Zibrowski EM, Weston WW. Education scholarship: it's not just a question of 'degree.' Med Teach. 2008;30(1):34-9.
13. Reinhart MA, Munger BS, Rund DA. American Board of Emergency Medicine Longitudinal Study of Emergency Physicians. Ann Emerg Med. 1999;33(1):22-32.
14. Reiter M, Wen LS, Allen BW. The Emergency Medicine Workforce: Profile and Projections. J Emerg Med. 2016;50(4):690-693.
15. Society for Academic Emergency Medicine Residency Directory. Available at: http://beta.saem.org/membership/services/residency-directory. Accessed March 24, 2016.
16. Rickards G, Magee C, Artino AR Jr. You can't fix by analysis what you've spoiled by design: developing survey instruments and collecting validity evidence. J Grad Med Educ. 2012;4(4):407-410.
17. Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42:1758-1772.
18. ACGME Emergency Medicine Program Requirements and FAQs. Available at: http://www.acgme.org/Specialties/Program-Requirements-and-FAQs-and-Applications/pfcatid/7/Emergency%20Medicine. Accessed June 1, 2016.
19. Shanafelt TD, West CP, Sloan JA, et al. Career fit and burnout among academic faculty. Arch Intern Med. 2009;169(10):990-5.


Original Research

The Cost and Burden of the Residency Match in Emergency Medicine Aaron M. Blackshaw, MD Simon C. Watson, MD Jeffrey S. Bush, MD

The Medical University of South Carolina, Department of Medicine, Division of Emergency Medicine, Charleston, South Carolina

Section Editor: Sorabh Khandelwal, MD Submission history: Submitted June 15, 2016; Revision received October 3, 2016; Accepted October 27, 2016 Electronically published December 19, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.10.31277

Introduction: To obtain a residency match, medical students entering emergency medicine (EM) must complete away rotations, submit a number of lengthy applications, and travel to multiple programs to interview. The expenses incurred acquiring this residency position are burdensome, but there is little specialty-specific data estimating it. We sought to quantify the actual cost spent by medical students applying to EM residency programs by surveying students as they attended a residency interview. Methods: Researchers created a 16-item survey, which asked about the time and monetary costs associated with the entire EM residency application process. Applicants chosen to interview for an EM residency position at our institution were invited to complete the survey during their interview day. Results: In total, 66 out of a possible 81 residency applicants (an 81% response rate) completed our survey. The “average applicant” who interviewed at our residency program for the 2015-16 cycle completed 1.6 away, or “audition,” rotations, each costing an average of $1,065 to complete. This “average applicant” applied to 42.8 programs, and then attended 13.7 interviews. The cost of interviewing at our program averaged $342 and in total, an average of $8,312 would be spent in the pursuit of an EM residency. Conclusion: Due to multiple factors, the costs of securing an EM residency spot can be expensive. By understanding the components that are driving this trend, we hope that the academic EM community can explore avenues to help curtail these costs. [West J Emerg Med. 2017;18(1)169-173.]

INTRODUCTION
According to the Association of American Medical Colleges (AAMC), the median indebtedness for a U.S. medical school graduate in the class of 2015 was $183,000.1 Although the cost of tuition at institutions is obviously a tremendous burden for medical students, the expenses incurred acquiring a residency position also represent a problem for many students. For those going into the field of emergency medicine (EM) this cost can be especially onerous.
In the 2016 National Resident Matching Program (NRMP), there were 1,895 positions offered by 174 EM


programs across the country. This cycle, an estimated 2,476 candidates (1,693 U.S. seniors) applied for these positions.2 When comparing NRMP applicant survey data from 2008 and 2015 it is clear that current medical students are submitting more applications (28.7 vs 39, ERAS costs of $351 vs $601 at current rates), attending more interviews (11.4 vs 13), and ranking more programs (10.7 vs 13) in order to match.3,4 This is perhaps partially explained by the risk and consequences of not matching, with a 99.9% match rate for EM in the standard match. Unsurprisingly, applicants are thus preparing and planning for the process with even more fervor.


One area stressed by applicants is obtaining strong standardized letters of evaluation (SLOEs). The SLOE was developed by the Council of Emergency Medicine Residency Directors (CORD) to provide a global perspective on an applicant's candidacy by providing meaningful comparisons to peers applying to EM, and it has also been proven to have impressive interrater reliability.5,6 Program directors of EM programs recently reported that the top three factors in deciding which applicants to invite for an interview were SLOEs, grades from EM rotations, and the United States Medical Licensing Examination (USMLE) Step 1 scores.7 In order to obtain more SLOEs, hopeful medical students are encouraged to complete at least one away rotation, but more commonly two, in addition to their home EM rotation. Attending these away or "audition" rotations is obviously expensive, often requiring travel to and lodging in a new place for four weeks at a time. The application process through the Visiting Student Application Service (VSAS) is time-consuming and also charges per application submitted.8 In fact, a cursory search of the VSAS EM elective catalog for the 2015-16 cycle reveals that 63% (114/155) of rotations have an additional institutional processing or application fee.9
Few data exist in the literature about the student-incurred costs of the fourth-year medical student during the final stages of their medical school training, including total number of residency interviews and number of away (audition) rotations. Two surveys administered to students seeking various residency positions in 1977 and 1986 reported the average costs incurred by students attending residency interviews to be $2,802 and $2,390 respectively (costs adjusted for inflation).10,11,12 Another study conducted on the 2006 urology match revealed the median total cost of the interview process to be $4,725, with a median per-interview cost of $390 (costs again adjusted for inflation).13 A large study of fourth-year medical students from 20 U.S. allopathic schools, which included 109 EM applicants from the 2013-14 cycle, showed that 89% completed an away rotation at great personal cost. Additionally, over 43% of the EM applicants spent more than $4,000 on the interview trail.14 A similar questionnaire administered by the AAMC to 59 EM hopefuls from the 2014-15 cycle revealed the average expenses incurred on the interview trail to be $3,936, although this only represented the cost of transportation and lodging.15
In this study, we sought to investigate both the financial and time burden that VSAS and the Electronic Residency Application Service (ERAS) applications represent, as well as the financial burden incurred by participating in away rotations and attending residency interviews. We focused on obtaining a better estimation of the total costs incurred by EM applicants during the process of matching into their


desired specialty. Secondarily, we desired to gain a better understanding of the expenses incurred by interviewees at our program. METHODS The authors created a 16-item survey that asked about both the time and monetary costs of the entire EM residency application process. The questionnaire was developed and subsequently edited to be pertinent, appropriate, and easily completed in just a few minutes. By design, no identifying information of the candidate was asked or recorded. A combination of multiple-choice and free-response questions was ultimately decided upon. Revisions of the survey were agreed upon by both the associate residency director and clerkship director of the program prior to administering the questionnaire. The first six items on the survey were program specific to EM at our institution. Questions were directly related to travel and costs incurred by interviewees at our institution – a three-year residency at a quaternary care academic center in a medium-sized city located in the Southeastern United States. The rest were more generalized questions about the entire application and interview process. All medical school applicants chosen to interview for an EM residency positions at our institution were invited to complete the survey during their interview day. Participation was completely voluntary, and no incentives for participation were offered. Applicants were notified that filling out the survey would have absolutely no bearing on their application, the rest of their interview day, or the ranking process. The questionnaire was administered on paper, and then collected and securely stored by the program coordinator. After completion of all residency interviews for the 2016 cycle, we analyzed the entire collection of responses. All surveys were assigned random ID numbers and a database of all responses was created. Uncertainties uncovered during abstraction were resolved via group consensus. We converted any values reported as ranges to the exact midpoint for calculation purposes. Our institutional review board granted a waiver for this study. RESULTS In total, 66 out of a possible 81 residency applicants (an 81% response rate) partially or fully completed a survey. Sixty-one of these candidates completed the institution-specific questions, items related specifically to our unique location. It was found that roughly 54% of applicants arrived for our interview by air with the remaining 46% traveling by car. Most students chose to stay at a local hotel (67%), followed by staying with friends and family (28%), and then bed and breakfasts (5%). Our invitation email, which included information about


local lodging options, was the most-used method (39% of respondents) for finding lodging. Hotel chain websites (26%), friends/family (26%), and AirBnB (8%) represented the other sources of finding a place to stay. The total cost of attending an interview at our program averaged $342; the median cost was $325, with an interquartile range (IQR) of $185-500. Almost 60% (36/61) of applicants spent $200 or less on transportation to our program. A breakdown of lodging costs revealed that 61% of students paid $100 or less, and the rest paid between $101-200.
The remaining portion of the questionnaire related to applicants' costs and experience with other, non-institution-specific aspects of their application process. Students were asked about multiple areas of monetary and time burdens as summarized in Table 1 and Table 2. We found that fourth-year students applied to an average of 4.4 away rotations and ultimately completed 1.6 of these. Completing VSAS took 20 hours or less for 88% of respondents. The total cost of each away rotation was estimated to be $1,065, and a breakdown of this total cost revealed the most expensive portion of the process to be lodging ($526), followed by transportation at $251 (Figure). Fourth-year medical students were advised to apply to an average of 38.5 residency programs and chose to actually apply to 42.8. This application process was more arduous compared to completing VSAS, with 64% of students

taking longer than 20 hours to complete ERAS. On average, students were planning on attending 13.7 residency interviews and each one would cost applicants an average of $414. Transportation accounted for an average of $226 of this total cost, with lodging next at $122 (Figure). The final question of the survey was open ended and asked applicants about other costs not listed on the questionnaire. The most commonly listed cost was that of buying interview attire and paying for dry cleaning. Others mentioned using airline miles to pay for tickets, as well as the indirect cost that many miles of travel would have on the wear and tear of their automobiles. DISCUSSION The absolutely “average” applicant that interviewed at our residency program for the 2015-16 cycle would have applied to 4.4 programs through VSAS and subsequently completed 1.59 away rotations. VSAS fees assessed would total $86. These rotations would cost an average of $1,704 to complete. This student would then take a great deal of time to complete ERAS, applying to 42.8 programs, totaling $700 in ERAS fees. Additional costs for submitting the USMLE transcript ($80) and to register for the NRMP ($70) would also be assessed.16,17 Our average applicant would then attend 13.7 interviews at a cost of $414 per program. In total, this average applicant would have spent $8,312 in his or her pursuit of an EM residency. In context,

Table 1. Selected fourth-year medical students' survey response data regarding costs incurred for away rotations and application to residency.
Item | No. of responses | Mean (SD) | Median (IQR)
No. of away rotations applied to | 66 (100%) | 4.4 (2.8) | 4 (2-5)
No. of away rotations completed | 66 (100%) | 1.6 (0.7) | 2 (1-2)
Total cost of each away rotation | 63 (96%) | $1,065 (818) | $900 (490-1400)
No. of residency programs advised to apply to | 64 (97%) | 38.5 (13.1) | 37.5 (30-41.3)
No. of residency programs applied to via ERAS | 65 (98%) | 42.8 (14) | 40 (32.5-40)
No. of interviews applicants plan to attend | 64 (97%) | 13.7 (3.2) | 13 (11.8-15.5)
Total cost of each interview | 63 (96%) | $414 (167) | $410 (260-520)
SD, standard deviation; IQR, interquartile range; ERAS, electronic residency application service

Table 2. Total time spent completing VSAS and ERAS by medical students.
VSAS: Time (hrs) | No. of responses (%)
0-10 | 35 (54)
10-20 | 22 (34)
20-30 | 5 (8)
>30 | 3 (5)
ERAS: Time (hrs) | No. of responses (%)
0-20 | 23 (35)
20-40 | 28 (43)
40-60 | 10 (15)
>60 | 4 (6)
VSAS, visiting student application service; ERAS, electronic residency application service


Figure. Breakdown of each away rotation and residency interview costs: Applicants were asked to break down the total costs of completing away rotations and attending residency interviews into individual components. *Away rotations often charge fees that can include VSAS application fees, additional individual school fees, vaccination or titer requirements, drug screens, and malpractice insurance. VSAS, visiting student application service.

the amount spent on the residency application process represents approximately 4.4% of the average medical school debt incurred.
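As a check on the arithmetic, the line items described above for the "average applicant" can be tallied directly; the short Python sketch below is purely illustrative and uses only the dollar figures reported in this study.

# Line items for the "average applicant," taken from the Discussion above.
away_rotations    = 1.6 * 1065   # ~1.6 away rotations at ~$1,065 each (~$1,704)
vsas_fees         = 86           # VSAS application fees
eras_fees         = 700          # ERAS fees for ~42.8 programs
usmle_transcript  = 80           # USMLE transcript fee
nrmp_registration = 70           # NRMP registration fee
interviews        = 13.7 * 414   # ~13.7 interviews at ~$414 each (~$5,672)

total = (away_rotations + vsas_fees + eras_fees + usmle_transcript
         + nrmp_registration + interviews)
print(f"Estimated total: ${total:,.0f}")  # -> Estimated total: $8,312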


LIMITATIONS Limitations of this study include the small number of respondents since we only included those students applying to our program, an academic center in a medium-sized city located in the Southeastern United States. Because the medical center is not in a large city, the cost to get here may be higher since the airport is not a major hub. We would expect the cost of travel to be less if there were a denser distribution of EM programs in this region of country. For example, travel distances between programs in the New England area or other large metropolitan area would be significantly less. Furthermore, the costs reported by the medical students are not all inclusive as the recollections of the survey respondents are probably less than 100% accurate and subjective.

CONCLUSION In today’s economy, it is almost understood that going into the medical field means that the young physician will most likely incur debt due to medical school and postgraduate training. However, limited research has been done in looking at the expenses incurred while acquiring a residency position. Due to the increase in popularity of EM, current medical students entering the field are submitting more applications, attending more interviews, and ranking more programs in order to match. Furthermore, in order to obtain the SLOEs needed to make a student more competitive, they are being encouraged to complete away rotations, which add to their expenditures, and the cost of the application process through VSAS continues to climb. Due to all of these factors, the costs of securing an EM residency spot are escalating at an alarming rate. By understanding the components that are driving this trend, we hope that the academic EM community can explore avenues to help curtail these costs.

Future Directions
By understanding the costs of applying to residency programs, we hope to encourage discussions about ways to decrease the financial burden on future emergency physicians. Performing interviews online and helping to standardize the application process may be ways to accomplish this.


Address for Correspondence: Jeffrey S. Bush, MD, The Medical University of South Carolina, Department of Medicine, Division of Emergency Medicine, 169 Ashley Avenue, MSC 300, Charleston, SC 29425. Email: bushjs@musc.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Blackshaw et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/

REFERENCES
1. Medical Student Education: Debt, Costs, and Loan Repayment Fact Card. Available at: http://members.aamc.org/eweb/upload/2015%20Debt%20Fact%20Card.pdf. 2015. Accessed April 10, 2016.
2. Advance Data Tables 2016 Main Residency Match. Available at: http://www.nrmp.org/wp-content/uploads/2016/03/Advance-Data-Tables-2016_Final.pdf. 2016. Accessed April 22, 2016.
3. National Resident Matching Program, Data Release and Research Committee: Results of the 2008 NRMP Applicant Survey by Preferred Specialty and Applicant Type. Available at: http://www.nrmp.org/wp-content/uploads/2013/08/applicantresultsbyspecialty2008.pdf. 2009. Accessed April 3, 2016.
4. National Resident Matching Program, Data Release and Research Committee: Results of the 2015 NRMP Applicant Survey by Preferred Specialty and Applicant Type. Available at: http://www.nrmp.org/wp-content/uploads/2015/09/Applicant-Survey-Report-2015.pdf. 2015. Accessed April 3, 2016.
5. Keim SM, Rein JA, Chisholm C, et al. A Standardized Letter of Recommendation for Residency Application. Acad Emerg Med. 1999;6:1141-6.
6. Girzadas DV, Harwood RC, Dearie J, et al. A Comparison of Standardized and Narrative Letters of Recommendation. Acad Emerg Med. 1998;5:1101-4.
7. Love JN, Smith J, Weizberg M, et al. Council of Emergency Medicine Residency Directors' Standardized Letter of Recommendation: The Program Director's Perspective. Acad Emerg Med. 2014;21:680-7.
8. VSAS FAQ. Available at: https://students-residents.aamc.org/attending-medical-school/faq/vsas-faq/. Accessed April 3, 2016. Visiting Student Application Service. Available at: https://services.aamc.org/20/vsas/. Accessed April 3, 2016.
9. Gardner P, Herbstman B. Rites of fall: the costs and utility of the internship interview. J Med Educ. 1978;53:929-31.
10. Weinberg E, Casty F. Results of a survey concerning application for residency training. J Med Educ. 1987;62:763-5.
11. Consumer Price Index Inflation Calculator. Bureau of Labor Statistics website. Available at: http://www.bls.gov/data/inflation_calculator.htm. Accessed April 22, 2016.
12. Kerfoot BP, Asher KP, McCullough DL. Financial and Educational Costs of the Residency Interview Process for Urology Applicants. Urology. 2008;71:990-5.
13. Benson NM, Stickle TR, Raszka WV. Going "Fourth" From Medical School: Fourth-Year Medical Students' Perspective on the Fourth Year of Medical School. Acad Med. 2015;90:1386-93.
14. 2015 AAMC Annual Report: Cost of Applying to Residency Questionnaire Report. Available at: https://www.aamc.org/download/430902/data/costofapplyingtoresidency.pdf. 2015. Accessed May 6, 2016.
15. AAMC for students, applicants, and residents. Available at: https://students-residents.aamc.org/attending-medical-school/article/fees-eras-residency-applications/. Accessed April 25, 2016.
16. The Match - National Resident Matching Program. Available at: http://www.nrmp.org/match-process/match-fees/. Accessed April 25, 2016.


Original Research

Differences in Self-expression Reflect Formal Evaluation in a Fourth-year Emergency Medicine Clerkship

Michael Chary, MD, PhD* Amy Leuthauser, MD† Kevin Hu, MD‡ Braden Hexom, MD§

* New York Presbyterian/Queens, Department of Emergency Medicine, Flushing, New York † University of Otago, Dunedin Hospital, Dunedin, New Zealand ‡ Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York § Rush Medical College, Department of Emergency Medicine, Chicago, Illinois

Section Editor: David A. Wald, DO Submission history: Submitted June 15, 2016; Revision received September 21, 2016; Accepted November 29, 2016 Electronically published December 19, 2016 Full text available through open access at http://escholarship.org/uc/uciem_westjem DOI: 10.5811/westjem.2016.11.31262

Introduction: Medical schools have begun to incorporate self-reflection exercises into their curricula, with the belief that these exercises help students master the material more deeply and perform better. Reflection may be a potential learning tool for emergency medicine (EM), but there are few data supporting this hypothesis. The authors evaluated the relationship between a linguistic marker of the degree of reflection after a student’s shift in an emergency department and that student’s clerkship performance. Methods: The authors conducted a retrospective case series by analyzing the performance and reflective statements of 116 students from a single medical school who participated in a required EM clerkship at one or two of four clinical sites from 2013-14. After each shift, an attending emergency physician evaluated the student according to the RIME (Reporter-Interpreter-ManagerEducator) scheme. The authors developed software to extract the text from those comments, remove uninformative words and standardize the remaining words. The authors determined the most common words and two-word phrases that students used to describe their shift. The correlation between students’ final clerkship grades and the fraction of student comments with at least one content word was analyzed. Results: Of the 145 possible students, 116 were included for analysis. The other 29 were excluded as they were visiting students who did not receive a final numeric grade. The correlation between final grade and the number of completed self-reflections was 0.32. The correlation between final grade and the average number of words in each self-reflection was 0.21. The first correlation is significantly greater than 0 (p=0.03, t-test), but the second correlation is not (p=0.16, t-test). The median final grade of those who wrote reflections on more than half of their shifts was significantly greater than those who wrote reflections half of the time, 83.675 versus 79.23 (p=0.05, 2-sample Kolmogorov-Smirnov test). Conclusion: Students who reflected more frequently received a higher grade in an EM clerkship for fourth-year medical students. The number of words in each reflection was not significantly correlated with grade performance. The most common words and phrases students wrote were associated with learning and managing patients. [West J Emerg Med. 2017;18(1)174-180.]




INTRODUCTION Undergraduate medical education is becoming, increasingly, self-directed. Reflection is an essential aspect of self-directed learning.1 Reflection has been defined as “the process of analyzing, questioning, and reframing an experience to make assessment of it for the purpose of learning or to improve practice.”2 The hope is that forming personal and emotive connections with facts will reinforce those facts for the student. Students in clerkships often reflect upon their performance, but it is unclear if self-reflection leads to better academic performance.3 Structured reflection may be a useful teaching and learning tool in emergency medicine (EM).4 Reflection with standardized patients leads to better performance by students.5,6 Reflection may promote the development of professionalism and empathy in EM.7 We could find no study demonstrating that reflection leads to better patient care.8 The purpose of this study was to determine whether self-reflection in medical students correlates with academic performance in EM. We studied fourth-year medical students in a required EM clerkship. We used natural language processing to correlate the depth of reflection with clerkship performance in an objective and generalizable way. METHODS We conducted a retrospective analysis of all students in a required EM clerkship at a single U.S. medical school from October 2014 to October 2015. Students completed 14 shifts while rotating in 1-2 of four hospitals - one city trauma center, two community hospitals, and one academic tertiary care hospital. All didactics were held together at a central location. The objectives and evaluation processes were uniform across sites. At the end of each shift, students were encouraged to complete a written reflective exercise in a logbook. The reflective exercise asked students the following: “Reflect on your experience today. What was particularly challenging? Did you learn something unexpected? How will this change your practice for the future? Write a few sentences reflecting on your shift.” An attending physician evaluated each student after each shift according to the Reporter-Interpreter-Manager-Educator (RIME) scheme, with “Educator” replaced with “Superior” in our evaluation tool. Students were required to submit these evaluations for grading by the end of the rotation. Additional components of the final grade included participation and final examination score. Students rotating from other institutions were not included. Software All analysis was written by MC and performed in Python.9 Natural language processing was performed using the Natural Language Toolkit for Python, version 3.0.10 Bootstrapping and the calculation of Jaccard similarity were performed using NumPy/ SciPy.11 Figures were made using the matplotlib plugin.12


Multinomial Naive Bayes classification was performed using NLTK and scikit-learn.13 All code used to analyze and generate the figures as well as supporting documents are available at the following repository: https://github.com/mac389/leuthauser.
Authors transcribed the booklets into a database. All medical abbreviations and contractions were replaced with their long form, and illegible comments were ignored. A de-identified version of the database is available at the repository referenced above. The text of each student comment was processed as follows:
1. Text converted to lower case
2. Comments tokenized into words
3. Stopwords removed
4. Remaining words lemmatized
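A minimal sketch of these four steps using NLTK is shown below. It is illustrative rather than the authors' implementation (which is available in the repository above), and it uses only NLTK's built-in English stopword list rather than the amalgamated list described in the next section.

import nltk
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer

# Requires one-time downloads: punkt, stopwords, wordnet, averaged_perceptron_tagger.
STOPWORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def to_wordnet_pos(treebank_tag):
    # Map a Penn Treebank tag to the part of speech expected by the WordNet lemmatizer.
    return {"J": wordnet.ADJ, "V": wordnet.VERB, "R": wordnet.ADV}.get(treebank_tag[0], wordnet.NOUN)

def preprocess(comment):
    tokens = nltk.word_tokenize(comment.lower())                        # steps 1-2
    tokens = [t for t in tokens if t.isalpha() and t not in STOPWORDS]  # step 3
    tagged = nltk.pos_tag(tokens)                                       # POS tags guide lemmatization
    return [lemmatizer.lemmatize(t, to_wordnet_pos(tag)) for t, tag in tagged]  # step 4

print(preprocess("The patient's infections were challenging to manage"))
# e.g. ['patient', 'infection', 'challenge', 'manage'] (exact output depends on the tagger)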

Stopwords
The term "stopwords" refers to words that occur frequently in a corpus but are unlikely to be informative. The removal of stopwords is a common preprocessing step in natural language processing to increase the sensitivity and specificity of analyses.14 The list of stopwords depends on the task. We used an amalgamation of the English stopwords list in the Natural Language Toolkit 3.0 package for Python and the 10,000 most frequently occurring words in the transcript of all episodes of The Simpsons. The list is available in our GitHub repository.

Lemmatization
The term "lemmatization" refers to the mapping of all inflected forms of a word to a base form so that they can be analyzed as a single item. Words like "infect," "infected," "infection," "infections," become "infect." To lemmatize words in our study we used the WordNetLemmatizer function in NLTK 3.0. This function is a thin wrapper to WordNet's morphy function, which removes all suffixes that occur in the WordNet database. WordNetLemmatizer is more accurate if the part of speech of the word it is asked to lemmatize is known. For example, "patient" and "patients" should only be considered one item if "patient" is a noun. To identify the part of speech of each word, we used the pos_tag function in NLTK 3.0. pos_tag is trained on the treebank corpus.15

Tokenization
The term "tokenization" refers to breaking a string of words into those words. Tokenization can be difficult when abbreviations and nonstandard punctuation are used. We used the word_tokenize function in NLTK 3.0. This tokenizer uses regular expressions and is appropriate for pieces of text that do not have emoticons nor use contractions extensively.

Jaccard similarity
The "Jaccard similarity"16 quantifies the similarity between two sets of objects. The Jaccard similarity is defined as the ratio of the number of objects two sets have in common to the total number of unique objects across both sets.
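The sketch below shows the Jaccard similarity as defined here, together with one simple way to attach an empirical p-value to it by resampling word sets from the pooled vocabulary with replacement, in the spirit of the bootstrapping described in the next subsection. It is an illustration under those assumptions, not the authors' implementation, which is available in their repository; the word sets used in the example are hypothetical.

import random

def jaccard(a, b):
    # |A intersect B| / |A union B| for two collections of words.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def resampled_p(words_a, words_b, n_iter=10_000, seed=0):
    # How often two word sets resampled (with replacement) from the pooled vocabulary
    # are at least as similar as the observed pair.
    rng = random.Random(seed)
    observed = jaccard(words_a, words_b)
    pooled = list(set(words_a) | set(words_b))
    hits = 0
    for _ in range(n_iter):
        sample_a = rng.choices(pooled, k=len(set(words_a)))
        sample_b = rng.choices(pooled, k=len(set(words_b)))
        hits += jaccard(sample_a, sample_b) >= observed
    return hits / n_iter

# Hypothetical word sets for two groups of comments.
reporter = ["patient", "history", "present", "learn", "see"]
manager = ["patient", "manage", "plan", "disposition", "learn"]
print(jaccard(reporter, manager))      # 0.25 (2 shared words out of 8 unique)
print(resampled_p(reporter, manager))  # empirical p-value under random overlap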


Self-expression and Evaluation in 4th Year EM Clerkship Bootstrapping In statistics, “bootstrapping” refers to a process of resampling without replacement to generate an empirical probability density function.17 It allows the estimation of the statistical significance of a parameter when the underlying distribution is not known. In this paper we use it to estimate the statistical significance of Jaccard similarities. RESULTS Demographics Figure 1 shows the distributions of faculty ratings and final grades. Hyphenated categories refer to evaluations where the attending circled two adjacent categories evenly. Figure 2 shows the 50 most common single words (unigrams) and nine most common two-word phrases (bigrams) in all student comments that remained after processing. As an example, after processing the phrase loss of consciousness becomes (loss, consciousness). That phrase has two unigrams (loss and consciousness) and one bigram (loss consciousness). Correlation Between Reflection and Final Grade Figure 3 shows the correlation between the fraction of comments each student completed or the average lengths of each student’s comments and that student’s performance in the clerkship. We considered a comment completed if the comment had at least one legible word that was not a stopword. The correlation between the fraction of comments each student completed and clerkship grade was significant (two-tailed t-test; p=0.03). The correlation between the average length of comments and clerkship grade was not significant (two-tailed t-test; p=0.16). There are two clusters in the right panel of Figure 3. The cluster

of hollow circles corresponds to students who commented on their experience more than half of the time. The cluster of solid circles corresponds to those who commented less than half of the time. The median final grades of the completers and noncompleters, 83 +/-4 and 79 +/- 3 (median +/- interquartile range) are not significantly different, and the 95% confidence intervals for the medians overlap (Figure 4). The clusters do come from different distributions (Kolmogorov-Smirnov test; D statistic 0.5; p=.005) and the Jaccard similarity between the two was 0.06. Figure 5 shows the most common words in each RIME category. We excluded students rated as inadequate because they were too few. We also excluded hybrid categories. The Jaccard similarity between the pairs of panels in Figure 5 was the following: Reporter-Interpreter (0.989, p=0.436), Reporter-Manager (0.1494, p=0.2694), Reporter-Educator (0.1364, p=0.3166), Interpreter-Manager (0.3699, p<0.001), Interpreter-Educator (0.3158, p<0.01), Manager-Educator (0.4286, p<0.01). It suggests that those rated “Manager” used more words in common with those rated “Superior/ Educator” than did those rated “Reporter” or “Interpreter.” Those rated “Reporter” used mostly different words to describe their experiences as compared with those rated “Interpreter,” “Manager,” or “Superior/Educator.” DISCUSSION The main result of this study is that the number of comments medical students make after their shifts correlate with their endof-rotation grades in an EM clerkship. Students who wrote more reflective or complex comments scored higher, although this increase was not statistically significant. We propose several theories to explain these findings. As
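The p-values attached to the pairwise Jaccard similarities above come from the bootstrap procedure defined in the Methods. The exact resampling scheme is not spelled out in the text, so the following is only an illustrative sketch of one way to obtain such an empirical p-value; it assumes a pooled-vocabulary null model and reuses the jaccard helper from the preprocessing sketch.

import random

def bootstrap_jaccard_pvalue(words_a, words_b, n_boot=10000, seed=0):
    # Empirical p-value for the observed Jaccard similarity between two word
    # lists. Assumed null model: both groups draw their words from the same
    # pooled vocabulary. Each iteration resamples two lists of the original
    # sizes, with replacement, from the pool and records their similarity.
    rng = random.Random(seed)
    observed = jaccard(words_a, words_b)
    pooled = list(words_a) + list(words_b)
    null = []
    for _ in range(n_boot):
        resample_a = [rng.choice(pooled) for _ in range(len(words_a))]
        resample_b = [rng.choice(pooled) for _ in range(len(words_b))]
        null.append(jaccard(resample_a, resample_b))
    # Two-sided p-value: fraction of resamples at least as far from the
    # center of the null distribution as the observed similarity.
    center = sum(null) / len(null)
    p = sum(abs(s - center) >= abs(observed - center) for s in null) / n_boot
    return observed, p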

Figure 1. Study demographics, from a study of the impact of the frequency of medical students' post-shift reflective comments on their final grade in an emergency medicine clerkship. Left: Distribution of attending ratings. Hyphenated ratings indicate that an attending circled two categories. Right: Distribution of final grades.


Figure 2. Left: Most common words in all student comments. Right: Most common bigrams.

Figure 3. Correlation between clerkship grade and reflection. Left: Scatter plot of grades versus fraction of completed comments. Each point represents one student. Dashed line indicates the regression of the fraction of completed reflections against grades. Inset: Top line shows the equation of the regression line; bottom line shows the coefficient of determination and the p-value testing whether the slope of the regression line differs from zero. Right: Scatter plot of grades versus average length of comments. Each point represents one student. Dashed line and inset indicate the same as in the left panel. In both panels, solid circles represent students who completed less than half of the comments and hollow circles represent those who completed more than half.


Figure 4. Comparison of final grades between students who commented on more than half of their shifts and those who commented on less than half. Tukey boxplot. Black horizontal line denotes the median. Dimple denotes the 95% confidence interval for the median. Box denotes the interquartile range. Whiskers denote the 2nd and 97th percentiles. Dot indicates an outlier.

Figure 5. Most common words in each RIME category. Text in the upper right of each inset denotes the category. The x-axis label gives the number of evaluations and the total number of words used, before lemmatization and removal of stopwords.


As described previously, reflection has been shown to correlate with improved student performance with standardized patients.5 This may extend to the clinical setting, although no previous studies have explored this hypothesis. Because students were evaluated on their clinical performance, and the reflection instructions prompted students to reflect immediately after their shift, they may simply have been trying to demonstrate diligence, even though their reflective comments were not included in the grading scheme. This suggests that some students who obtain higher clerkship grades do so because they are simply "completionists." Final grades may still reflect the completion of a list of tasks more than the achievement of competencies, despite the broader move in medical education toward a competency-based model.

Reflection has also been described as promoting professionalism and empathy in EM.6-7 Professionalism and empathy are also explicit anchors in our faculty evaluation and therefore are qualities upon which final grades are, in part, based. Furthermore, because the reflection exercise asked students to consider their most challenging cases of the day, it may have prompted students to review clinical material more than they otherwise would have. Students prompted to engage more with the material may therefore perform better in the clerkship. Finally, students who wrote more comprehensive and profound reflective pieces may tend to have attributes (e.g., stronger work ethic, greater attention to detail) that earn them better grades; these attributes could be confounders. Thus, the correlation does not necessarily imply causation.

A secondary result is that "Reporters," "Interpreters," "Managers," and "Educators" use different words to describe their reflections.18 The word frequencies were tabulated after the attending evaluations were known. We could not determine the statistical significance of this association because the observations are not independent; each student generated 14 comments as well as multiple attending evaluations.

This study is the first to use natural language processing to quantify the relationships between how medical students perceive their performance, how faculty perceive their performance, and student performance on an objective measure, the final grade. Natural language processing has previously been used in more restricted settings, such as partially automating the detection of when medical students have met procedural requirements.19 Our findings suggest that undergraduate medical education can use natural language processing to quantify two important variables that have so far been difficult to operationalize: motivation and reflection. Quantifying the degree of reflection in medical school is challenging: what constitutes reflection changes as students progress through their education, and students may use similar words to describe different levels of reflection. These limitations notwithstanding, our study demonstrates a relationship between medical students' patterns of exposition and their academic performance.


LIMITATIONS
The association between reflection-exercise completion and summative performance may simply reflect an underlying attribute, such as diligence. Counting comments with a single legible word as "complete" may have diluted our statistical power. Students pursuing EM may perform better than students interested in other specialties, and they may schedule the clerkship earlier to obtain letters of recommendation; with only one year's data, we could not control for seasonality or specialty. Additionally, we did not control for the site at which a student rotated, which may have affected reflection, grading, or both. It is unknown whether students wrote their reflections after each shift, as instructed, or in batches.

Over 200 reflections were evaluated as "Superior." Disproportionate representation of some categories at the expense of others makes it harder to find distinguishing features of each category. We excluded hybrid categories because it was unclear whether the evaluator intended to circle both categories or to indicate performance between the two; excluding those comments may have decreased the power of the study. There were insufficient data to train a naive Bayes classifier to predict attending evaluations from student comments.

CONCLUSION
Fourth-year medical students who reflected more frequently received a higher grade in a required EM clerkship. The number of words in each reflection was not significantly correlated with grade. A more formal reflection program could help identify students struggling to master the content before the end of the clerkship. More profound reflection may augment student learning and clinical performance through increased self-engagement and awareness; however, future studies targeting these particular outcomes need to be conducted.

Address for Correspondence: Braden Hexom, MD, Rush Medical College, Department of Emergency Medicine, 1750 West Harrison Street, Suite 108, Chicago, IL 60612. Email: braden_hexom@rush.edu.

Conflicts of Interest: By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Copyright: © 2016 Chary et al. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) License. See: http://creativecommons.org/licenses/by/4.0/


REFERENCES
1. Sandars J. The use of reflection in medical education: AMEE Guide No. 44. Med Teach. 2009;31(8):685-95.
2. Aronson L. Twelve tips for teaching reflection at all levels of medical education. Med Teach. 2010;33(3):200-5.
3. Carr SE and Johnson PH. Does self reflection and insight correlate with better academic performance in medical students? BMC Med Educ. 2013;13:133.
4. Bernard AW, Gorgas D, Greenberger S, et al. The use of reflection in emergency medical education. Acad Emerg Med. 2012;19(8):978-82.
5. White CB, Ross PT, Gruppen LD. Remediating students' failed OSCE performances at one school: the effects of self-assessment, reflection, and feedback. Acad Med. 2009;84(5):651.
6. Blatt B, Plack M, Maring J, et al. Acting on reflection: the effect of reflection on students' clinical performance on a standardized patient examination. J Gen Intern Med. 2007;22(1):49-54.
7. Bernard AW, Malone M, Kman NE, et al. Medical student professionalism narratives: a thematic analysis and interdisciplinary comparative investigation. BMC Emerg Med. 2011;11:11.
8. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions: a systematic review. Adv Health Sci Educ Theory Pract. 2009;14(4):595-621.
9. Van Rossum G and Drake FL. Python language reference manual. Network Theory Ltd.; 2003.
10. Bird S, Klein E, Loper E. Natural language processing with Python. O'Reilly Media, Inc; 2009.
11. Van Der Walt S, Colbert SC, Varoquaux G. The NumPy array: a structure for efficient numerical computation. Comput Sci Eng. 2011;13(2):22-30.
12. Hunter JD. Matplotlib: a 2D graphics environment. Comput Sci Eng. 2007;9(3):90-5.
13. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825-30.
14. Rajaraman A and Ullman JD. Data mining. In: Mining of Massive Datasets. Cambridge University Press; 2011:1-17.
15. Marcus MP, Marcinkiewicz MA, Santorini B. Building a large annotated corpus of English: the Penn Treebank. Comput Ling. 1993;19(2):313-30.
16. Lapointe FJ and Legendre P. A classification of pure malt Scotch whiskies. Appl Stat. 1994;237-57.
17. Davison AC and Hinkley DV. Bootstrap methods and their application. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press; 1997.
18. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74(11):1203-7.
19. Denny JC, Bastarache L, Sastre EA, et al. Tracking medical students' clinical experiences using natural language processing. J Biomed Inform. 2009;42(5):781-9.







Important Dates to Remember

December 31, 2016: CORD Board of Directors Nominations Deadline
January 3, 2017: CPC Semi-Finals Top 72 Programs Notified of Acceptance
January 3, 2016: CORD Academy for Scholarship & Longevity Awards Nominations Deadline
January 13, 2017: CORD Academic Assembly Early Bird Registration Ends
January 27, 2017: CORD Research & Innovations Forum Abstracts Authors Notified of Acceptance
February 1, 2017: CORD Board of Directors Election Balloting Begins
February 3, 2017: Academic Announcements for CORD February Newsletter Due
February 15, 2017: CORD Board of Directors Election Closes
March 24, 2017: Deadline for CORD Coaching Requests at Academic Assembly
April 14, 2017: CORD Academic Assembly Pre-Registration Ends
April 27-30, 2017: CORD Academic Assembly, Fort Lauderdale, Florida
April 28, 2017: CPC Semi-Final Competition at Academic Assembly
May 1, 2017: CORD Spring Resident Applications to Committees Begin
June 1, 2017: CORD Spring Resident Applications to Committees Close

cordem.org | cord_em | emergencymedcord


Will your residents be ready for the new eOral Exam Format? Now Available from CORD:

Mock eOral Board Software

Cases include dynamic stimuli: video clips of patients or procedures, moving ultrasounds and rhythm strips, and dynamic vital signs. An interactive, computerized interface will facilitate examiner-candidate interactions. Twelve cases are currently available, with more being contributed soon.

Sign up today at www.cordem.org/eOrals







