Virtual learning experiences have become widely used across settings during the ongoing COVID-19 pandemic and are likely to remain part of the training experience after the pandemic ends, given their cost-effectiveness, accessibility, and flexibility. However, little is known about how to measure the effectiveness of these interventions.
A team of pulmonary, critical care, and sleep medicine researchers at the Mayo Clinic in Rochester, Minnesota, devised a longitudinal framework to evaluate critical care quality improvement and remote training programs. They shared their findings at the 2021 American Thoracic Society International Conference.
First, the authors convened a topic review group that included individuals with diversity in clinical expertise, nationality, and experience in medical education, quality improvement, implementation science, and research methodology. “We recruited individuals with experience designing and participating in various medical remote training programs, including teleconferences, tele-consults, online video/chat platforms, and virtual simulation classrooms,” they explained.
Next, the team held a series of facilitated discussions in which the group developed a conceptual framework to guide the design of remote learning programs, along with accompanying evaluation tools to measure their impact.
The resulting conceptual framework is based on the context-input-process-product (CIPP) quality evaluation model and includes three points of evaluation: before, during, and after the remote training. Their proposed evaluation tools blend the Proctor taxonomy with an expansion of the popular RE-AIM framework for categorizing implementation outcomes, covering early- (i.e., acceptability, appropriateness, feasibility), middle- (i.e., adoption, fidelity), and late-stage (i.e., sustainability) outcomes. This provides a more complete understanding of the implementation process, the team noted.
“The outcomes from these mixed-methods analyses will provide a robust toolbox to guide the design, delivery, implementation, and sustainment of remote medical educational programs,” the authors concluded.