Continuous EEG (cEEG) is an important tool in the critical care setting, as it allows for the detection of seizure activity and other neurological anomalies even when a patient is comatose or anesthetized. However, the limited manpower available to monitor cEEG limits its utility. Whether a clinician, a neurodiagnostic technologist (NDT), or a nurse is doing the monitoring, there are only so many hours in the day to watch the cEEG recording. A computerized algorithm that identifies abnormalities would be an ideal solution to this problem. But how do we create an algorithm accurate enough for the task?

As is often the case, solutions come with their own set of problems. In this case, if the solution is to create an algorithm trained on annotations from those who regularly read EEGs, how can we be certain the annotators are reading the cEEG correctly? This is the problem Andrew Nguyen, assistant professor and director of the Master of Science in Health Informatics program at the University of San Francisco, has been trying to solve. He presented his solution to an enthusiastic group of NDTs at the 2018 ASET Annual Conference and elaborated on it in his 2019 presentation, “How Blockchain Could Improve Crowdsourced EEG Annotations.”

Nguyen’s solution is a modern answer to this modern problem: crowdsourcing. His project, also presented in abstract form to the American Medical Informatics Association (AMIA), turns this problem into not one but two solutions:

  1. Create training opportunities for those who read EEGs, and
  2. Improve accuracy of datasets for interpretation of cEEG.

His prototype is a system that EEG professionals can access securely to annotate multichannel tracings of randomly assigned EEG records. They can zoom in or scroll to examine the full record. Users are supplied with a list of standardized terminology from the American Clinical Neurophysiology Society (ACNS), which they use to annotate the record once they have finished reviewing it.
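The presentation does not describe the prototype’s underlying data model, but a minimal sketch of what a single crowdsourced annotation might look like, assuming a Python backend and a few illustrative ACNS-style labels, could be:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative ACNS-style labels; the actual controlled vocabulary used in
# Nguyen's prototype is not published here.
ACNS_TERMS = [
    "Seizure",
    "Generalized periodic discharges",
    "Lateralized periodic discharges",
    "Generalized rhythmic delta activity",
    "No epileptiform abnormality",
]

@dataclass
class EEGAnnotation:
    """One annotator's label for one randomly assigned EEG record."""
    record_id: str     # identifier of the multichannel EEG record
    annotator_id: str  # authenticated EEG professional
    label: str         # term chosen from the standardized list
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def __post_init__(self) -> None:
        # Enforce that annotations come from the standardized terminology.
        if self.label not in ACNS_TERMS:
            raise ValueError(f"Label must come from the standardized list: {self.label!r}")
```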

To fulfill the individual’s training needs, annotators receive feedback showing how others responded to the same record. The system also calculates a “consensus vote” from the pool of annotators, along with an estimate of its reliability based on the expertise and accuracy of those contributors. Some records may already have a gold-standard annotation; when one exists, it is also shown to the trainee. The consensus vote is recalculated with every new annotation. Those using the system for training can compare their own performance to other annotators, to the consensus vote, and to existing gold standards, as well as track their performance over time.
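Nguyen’s exact weighting scheme is not spelled out in the presentation, but the idea of a consensus vote weighted by annotator reliability can be illustrated with a short sketch. In the hypothetical `consensus_vote` function below, each annotator’s label is weighted by an assumed reliability score, and the winning label’s share of the total weight stands in for the reliability estimate; all names and values are invented for illustration.

```python
from collections import defaultdict

def consensus_vote(annotations, annotator_weights):
    """Weighted-majority consensus over annotations for a single EEG record.

    `annotations` maps annotator_id -> chosen label; `annotator_weights` maps
    annotator_id -> a reliability weight (e.g., based on past agreement with
    gold standards). Returns the winning label and its share of total weight,
    a rough stand-in for the reliability estimate described above.
    """
    totals = defaultdict(float)
    for annotator_id, label in annotations.items():
        totals[label] += annotator_weights.get(annotator_id, 1.0)

    total_weight = sum(totals.values())
    winner, winner_weight = max(totals.items(), key=lambda kv: kv[1])
    return winner, winner_weight / total_weight

# Example: the consensus would be recalculated each time a new annotation
# arrives for the record.
votes = {"ndt_01": "Seizure", "ndt_02": "Seizure",
         "md_07": "Generalized periodic discharges"}
weights = {"ndt_01": 0.8, "ndt_02": 0.6, "md_07": 1.0}
label, confidence = consensus_vote(votes, weights)  # -> ("Seizure", ~0.58)
```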

This training exercise then produces a crowdsourced, annotated dataset that may be used by researchers in a number of settings. The dataset may eventually prove robust enough to create that elusive algorithm for cEEG monitoring.
