Electronic Health Record Usability Associated with Physician Cognitive Performance Levels

Usability improvements in electronic health records (EHRs) are associated with reduced cognitive workload and improved performance among clinicians, according to a study published in JAMA.

In this quality improvement study, researchers allocated 38 participants (66% female) to perform tasks in a simulated EHR environment. From April 2016 to December 23, 2016, residents from a large academic institution were recruited and assigned to use either a baseline EHR (n=20) or an enhanced EHR (n=18). The baseline EHR (without enhanced interface usability) displayed all new abnormal test results, including critical test results for patients with a no-show status (those who did not attend or cancelled their follow-up appointment).

The EHR with enhanced usability segregated previously identified critical test results for patients who did not appear for a scheduled follow-up evaluation, clearly displayed information on patient status, and provided policy-based decision support instructions for next steps. By contrast, the baseline EHR grouped all patients with abnormal or critical test results in a general folder and provided no decision support instructions.

The researchers conducted data analyses from January 9, 2017, to March 30, 2018. Cognitive workload was quantified subjectively using the NASA Task Load Index and physiologically using blink rates. Performance was quantified as the percentage of abnormal test results that were managed appropriately.

Enhanced EHR Group Performed Better

Results of the study showed that the 20 participants allocated to the baseline EHR demonstrated statistically significantly higher cognitive workload than the 18 allocated to the enhanced EHR, as indicated by a lower blink rate (mean [SD] blinks per minute, 16 [9] vs 24 [7]; difference, –8 [95% CI, –13 to –2]; P=0.01). The baseline group also exhibited statistically significantly poorer performance, with the enhanced group appropriately managing 30% more abnormal test results (mean [SD] performance, 68% [19%] vs 98% [18%]; difference, –30% [95% CI, –40% to –20%]; P<0.001). No statistically significant difference was noted between the two groups in fatigue (mean [SD] fatigue level, 2.7 [1.4] vs 2.8 [0.9]; difference, –0.1 [95% CI, –0.8 to 0.7]; P=0.84).

Findings Support Proactive Evaluation

Authors of the study noted several limitations, including the need for a larger sample size to account for confounding factors, as well as the fact that the study was performed in a simulated environment in which participants were aware that their work was being assessed.

The authors also said that “the intervention might have manipulated both the ease of access to information through a reorganized display and learning because it provided a guide to action by clearly showing information on patient status and policy-based decision support instructions for next steps. Future research could more accurately quantify the association of usability and learning with cognitive workload and performance. Nevertheless, the intervention provided the necessary basis to conduct this study.”

They concluded by writing that the “findings from this study support the proactive evaluation of other similar usability enhancements that can be applied to other interfaces within EHRs.”