Recent research suggests that smartphones and smart speakers may be able to analyze sound to detect cardiac arrest. This approach focuses on the abnormal breathing a person exhibits after going into cardiac arrest and has the potential to prevent deaths from unwitnessed cardiac arrest. The work comes from a group of researchers at the University of Washington and was published today in npj Digital Medicine.
Cardiac arrest is one of the leading causes of death globally, with almost 300,000 fatalities occurring in North America alone each year. Prompt recognition and CPR are crucial to saving cardiac arrest patients, but unfortunately two out of every three cardiac arrests occur with no witness present. Given the high prevalence of commodity devices like the iPhone and Amazon Echo, however, researchers have begun exploring technological solutions to this problem.
After cardiac arrest, a person often exhibits abnormal breathing known as agonal breathing. This is a brainstem reflex triggered by severely low oxygen levels and is characterized by distinctive gasping. To detect cardiac arrest events when no one else is around, the researchers hypothesized that the voice recognition capabilities of smartphones and smart speakers could be used to detect this agonal breathing. See the video below for a reenactment of cardiac arrest and the ensuing agonal breathing.
Training these devices was challenging because agonal breaths are rare and cannot be reproduced in a lab. To overcome this obstacle, the scientists obtained audio from 911 calls regarding cardiac arrests, since agonal breathing is present in roughly 50% of such calls. They listened to each recording and carefully extracted the audio of agonal breaths to train an algorithm to detect the specific breathing pattern that follows cardiac arrest. The devices used were the Amazon Echo (Alexa), iPhone 5s, and Samsung Galaxy S4.
In total, 19 hours of audio from 162 calls were used. A 2.5-second clip of each agonal breath was extracted, yielding 236 clips in total. To augment this data, the researchers played the recordings back over distances of 1, 3, and 6 meters to create more samples. They also included 83 hours of regular sleep audio to train the system to differentiate agonal breathing from similar-sounding snoring and sleep apnea; 1 hour of this audio was used to train the system and the other 82 to test it.
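As a rough illustration of the clip-extraction step described above, a fixed-length 2.5-second window can be sliced out of a longer mono recording. This is only a sketch: the 16 kHz sample rate, function name, and start offset are assumptions for demonstration, not the authors' actual pipeline.

```python
# Illustrative sketch of extracting a fixed-length 2.5 s clip from mono audio.
# The 16 kHz sample rate and start offset are assumed values, not parameters
# from the study.

SAMPLE_RATE = 16_000   # samples per second (assumption)
CLIP_SECONDS = 2.5     # clip length reported in the study

def extract_clip(audio, start_sec, sample_rate=SAMPLE_RATE,
                 clip_seconds=CLIP_SECONDS):
    """Return a clip_seconds-long slice of `audio` starting at start_sec."""
    start = int(start_sec * sample_rate)
    end = start + int(clip_seconds * sample_rate)
    if end > len(audio):
        raise ValueError("clip extends past the end of the recording")
    return audio[start:end]

# A 10-second run of silence stands in for a 911-call recording here.
recording = [0.0] * (10 * SAMPLE_RATE)
clip = extract_clip(recording, start_sec=3.0)
print(len(clip))  # 40000 samples = 2.5 s at 16 kHz
```

Each such clip becomes one training example; playing the clips back at 1, 3, and 6 meters and re-recording them is what multiplies 236 extracted breaths into a larger training set.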
Overall, this approach to detecting cardiac arrest achieved a sensitivity of over 97% and a specificity of 99.51%. The false positive rate was also low, at only 0.14% over the 82 hours of sleep lab test data. To test the system outside of the sleep lab, the researchers also recruited 35 participants to record themselves sleeping using a smartphone. After retraining the classifier with this data, they found a sensitivity of 91.17% and a specificity of 99.38%, with a false positive rate of roughly 0.22%. Adding a frequency filter reduced this rate to less than 0.01%.
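For readers unfamiliar with these metrics, the quantities above reduce to simple ratios over classification counts. The sketch below uses made-up counts chosen only to illustrate the arithmetic; they are not the study's data.

```python
# How sensitivity, specificity, and false-positive rate relate to raw
# classification counts. The counts below are hypothetical, for
# demonstration only.

def sensitivity(tp, fn):
    """Fraction of true agonal-breathing clips the classifier catches."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-agonal audio correctly left unflagged."""
    return tn / (tn + fp)

def false_positive_rate(tn, fp):
    """Fraction of non-agonal audio wrongly flagged (1 - specificity)."""
    return fp / (fp + tn)

# Hypothetical counts: detected vs. missed agonal clips, and
# unflagged vs. falsely flagged non-agonal windows.
tp, fn = 97, 3
tn, fp = 9_951, 49

print(f"sensitivity:         {sensitivity(tp, fn):.2%}")          # 97.00%
print(f"specificity:         {specificity(tn, fp):.2%}")          # 99.51%
print(f"false positive rate: {false_positive_rate(tn, fp):.2%}")  # 0.49%
```

A high sensitivity means few real cardiac arrests are missed, while a near-zero false positive rate matters because a device listening all night would otherwise raise frequent false alarms.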
The authors note that privacy is a pressing concern with this approach, since it would require smart devices to monitor audio continually without intentional activation. To address this, they suggest running the system locally on the device without storing any data. They also recognize the need for more samples of agonal breathing to improve the system's accuracy, with hospice care and other end-of-life settings as potential sources. Finally, the authors feel the system would benefit from additional testing with audio from seizures, overdoses, and hypoglycemia, as well as in settings outside the bedroom.
[Embedded video — Avis Favaro (@CTV_AvisFavaro), June 19, 2019]