
Deep learning for automated seizure localization

Current automated seizure detection software is slow, inaccurate, and rarely precise enough for clinicians to rely upon, while commercial algorithms for automated seizure localization, a key component of diagnosis and treatment planning, are effectively nonexistent. In the first year of this project, we leveraged unique resources in Stanford Neurology and Biomedical Informatics to create an automated seizure detection algorithm, built with cutting-edge AI methods, that surpasses the current industry leader. Our success to date rests on an approach combining deep neural networks, electroencephalography (EEG) data, and text records created in the normal clinical workflow to enable rapid and efficient machine learning. Concurrently, we have constructed what we believe to be the largest existing video-EEG dataset in academic medicine within a PHI-compliant computing infrastructure and have begun integrating valuable video data into our promising automated detection model. In the second year of this project, we propose to take the next step toward commercialization by (a) leveraging recent advances in weak supervision to build models that incorporate video and EEG data and (b) building a software platform for machine-assisted video-EEG annotation and interpretation that puts our algorithms in users’ hands. This platform will speed algorithm development and allow for crucial interactions and feedback from clinicians, who represent a major segment of our target market.
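To illustrate the weak-supervision idea mentioned above, the sketch below shows one common pattern: heuristic labeling functions scan clinical report text for seizure mentions and vote to produce a noisy training label for the corresponding EEG segment. This is a minimal hypothetical example, not the project's actual pipeline; the function names, keyword lists, and tie-breaking rule are all assumptions for illustration.

```python
# Hypothetical weak-supervision sketch: labeling functions over clinical
# text produce noisy labels for EEG segments. Names and heuristics are
# illustrative assumptions, not the project's actual method.

SEIZURE, NO_SEIZURE, ABSTAIN = 1, 0, -1

def lf_keyword(report):
    """Vote SEIZURE if the report mentions a seizure-related term."""
    terms = ("seizure", "ictal", "convulsion")
    return SEIZURE if any(t in report.lower() for t in terms) else ABSTAIN

def lf_negation(report):
    """Vote NO_SEIZURE if seizure findings are explicitly negated."""
    text = report.lower()
    if "no seizure" in text or "no epileptiform" in text:
        return NO_SEIZURE
    return ABSTAIN

def weak_label(report, lfs=(lf_negation, lf_keyword)):
    """Combine labeling-function votes: the first non-abstaining vote
    wins (negation is checked first); otherwise ABSTAIN."""
    for lf in lfs:
        vote = lf(report)
        if vote != ABSTAIN:
            return vote
    return ABSTAIN

reports = [
    "EEG shows ictal rhythmic activity over the left temporal region.",
    "No epileptiform discharges; normal awake study.",
    "Patient slept through the recording.",
]
print([weak_label(r) for r in reports])  # [1, 0, -1]
```

In practice, frameworks such as Snorkel replace the simple first-vote rule with a learned label model that weights each labeling function by its estimated accuracy, but the sketch captures the core idea: text written in the normal clinical workflow supplies training labels without manual annotation.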

This project is a renewal of a 2019 Neuroscience Translate Award.

Participants

Team members:

Emel Alkim (Programmer)

Funding Type: 
Neuroscience:Translate
Round: 
2
Award Year: 
2020