Researchers Develop First AI-Enabled Wearable Camera To Detect Drug Errors
Every year, roughly 1.2 million patients experience adverse outcomes associated with injectable medication errors, which are estimated to cost about $5 billion. A team including a researcher from Carnegie Mellon University has developed the first wearable camera that uses artificial intelligence to help prevent such errors.
"The goal of our system is to catch these drug administration errors in real-time, before the injection, and provide an alert so the clinician has a chance to intervene before any patient harm," said Justin Chan(opens in new window), an assistant professor in the School of Computer Science's Software and Societal Systems Department(opens in new window) and the College of Engineering's Electrical and Computer Engineering Department(opens in new window).
In addition to Chan, the team included researchers from the University of Washington, Makerere University and the Toyota Research Institute.
The error rate for all drugs given in a hospital is about 5% to 10%, and errors can happen at all levels of care. To design the wearable camera system, researchers focused on training deep learning algorithms to detect errors when a clinician transfers a drug from a vial into a syringe. These include vial swap errors, which occur when the wrong vial is used or the drug label on the syringe is incorrect, and syringe swaps, in which the label is correct but the clinician administers the wrong drug. To prevent such errors, hospitals use safeguards like requiring barcode scanning for syringes, but in high-pressure situations clinicians can forget to scan a drug's barcode or to manually record its contents.
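One way to picture the core check, purely as an illustrative sketch (the function, drug names and message below are hypothetical, not the authors' code): once the drug in the vial and the drug printed on the syringe label have each been identified, a mismatch between the two would indicate a potential vial swap and could trigger an alert.

```python
# Illustrative sketch only: the function and drug names are hypothetical,
# not taken from the published system.

def check_vial_swap(vial_drug: str, syringe_label_drug: str) -> str | None:
    """Return a warning if the drug identified in the vial does not match
    the drug printed on the syringe label (a potential vial swap)."""
    if vial_drug != syringe_label_drug:
        return (f"Warning: syringe is labeled '{syringe_label_drug}' "
                f"but the vial appears to contain '{vial_drug}'.")
    return None  # labels agree; no alert


# Hypothetical usage: the camera identifies the vial as ondansetron,
# but the syringe is labeled propofol.
alert = check_vial_swap("ondansetron", "propofol")
if alert:
    print(alert)
```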
In their study, published today in npj Digital Medicine, researchers demonstrated that the AI-enabled wearable camera system could detect vial swap errors with a sensitivity of 99.6% and a specificity of 98.8%.
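For context, sensitivity is the fraction of actual errors the system flags, and specificity is the fraction of error-free preparations it correctly leaves alone. A minimal sketch of how the two figures are computed, using made-up counts chosen only to illustrate the arithmetic (they are not the study's data):

```python
# Hypothetical counts for illustration; not taken from the study.
true_positives = 249   # actual vial swaps the system flagged
false_negatives = 1    # actual vial swaps the system missed
true_negatives = 988   # correct preparations the system left alone
false_positives = 12   # correct preparations flagged by mistake

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.1%}")  # 99.6%
print(f"specificity = {specificity:.1%}")  # 98.8%
```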
Chan said creating the deep learning system to detect errors as they happen was challenging because syringes, vials and drug labels are small, and clinicians can inadvertently obscure them while handling them. Researchers collected a large training dataset across operating room environments with varying backgrounds and lighting conditions. They captured footage with a small head-mounted camera strapped to the providers' foreheads and tilted down to film their hands. Over 55 days, they collected 4K video of drug preparation events from 13 anesthesiology providers in 17 operating rooms at two hospitals.
"We designed the algorithm so that instead of reading the label text, which can be obscured, it only needs to catch a glimpse of visual cues like vial and syringe shape, label color, and font size for a short period of time to classify what the drug is," Chan said.
Now that researchers have shown the accuracy of the system, their next step is to incorporate it into smart eyewear that can provide visual or auditory warnings to clinicians before a drug is delivered to a patient.
"This work demonstrates how AI-enabled systems can serve as a second set of 'eyes' to improve health care practices and patient safety," Chan said. "When integrated into an electronic medical system, our system also opens up opportunities for automatic documentation of drug information and can reduce the overhead of manual record-keeping."