Multilevel Data Fusion for atrial fibrillation detection in wearables

Arlene John, Barry Cardiff, Deepu John

Research output: Contribution to conference › Poster › Academic

Abstract

Background: Atrial fibrillation (AF), a common arrhythmia, can lead to serious health issues if undetected. Electrocardiogram (ECG) and photoplethysmogram (PPG) signals can be used to detect AF events, and fusing these signals can improve detection performance. Traditional fusion methods rely on designer input and struggle to optimize the abstraction level at which information is fused under varying data quality. Integrating signal quality indicators (SQIs) into fusion models enhances reliability, yet adaptive SQI-based fusion for AF detection in wearable applications remains largely unexplored.

Methods: A data-driven, multi-level fusion model that self-learns the optimal fusion stage for AF detection from ECG and PPG signals using 1-dimensional convolutional neural networks (1D-CNNs) was created. The fusion model incorporated an SQI per signal sample to prioritize cleaner signals during fusion. Using a subset of the MIMIC-III database, 20-second non-overlapping windows of ECG and PPG data with associated SQIs were utilized. Four streams of 1D-CNNs were employed: two streams each for the ECG and PPG signals, with the individual SQI inputs driving weighted feature fusion. A central fusion network fused the signals at varying stages of the feed-forward path based on the SQIs. The model was trained with two loss functions: the combined network losses (average loss) and the central network loss alone. Simulated noisy signals were also used to assess the fusion architecture.
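As a rough illustration of the SQI-weighted fusion step described above, the sketch below weights each modality's features by a softmax over its SQI, so the cleaner signal dominates the fused representation. This is a minimal NumPy sketch with a toy convolutional feature extractor and invented SQI values and kernel weights; it is not the authors' actual 1D-CNN architecture or their learned fusion stage.

```python
import numpy as np

def conv1d_features(x, kernel):
    """Toy 1-D convolution + ReLU as a stand-in for one 1D-CNN stream."""
    n, k = len(x), len(kernel)
    out = np.array([np.dot(x[i:i + k], kernel) for i in range(n - k + 1)])
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def sqi_weighted_fusion(ecg_feat, ppg_feat, sqi_ecg, sqi_ppg):
    """Fuse two feature streams with softmax weights over their SQIs,
    so the higher-quality signal contributes more to the fused features."""
    w = np.exp([sqi_ecg, sqi_ppg])
    w = w / w.sum()  # softmax over the two SQI values
    # Pad to a common length, then take the weighted sum of the streams.
    L = max(len(ecg_feat), len(ppg_feat))
    e = np.pad(ecg_feat, (0, L - len(ecg_feat)))
    p = np.pad(ppg_feat, (0, L - len(ppg_feat)))
    return w[0] * e + w[1] * p

rng = np.random.default_rng(0)
ecg = rng.standard_normal(2500)  # one 20-second window (illustrative length)
ppg = rng.standard_normal(2500)
smooth = np.array([0.25, 0.5, 0.25])  # illustrative fixed kernel
ecg_feat = conv1d_features(ecg, smooth)
ppg_feat = conv1d_features(ppg, smooth)
# High ECG SQI, low PPG SQI: the fused features lean toward the ECG stream.
fused = sqi_weighted_fusion(ecg_feat, ppg_feat, sqi_ecg=0.9, sqi_ppg=0.3)
```

In the full model, the fusion weights and the stage at which fusion occurs are learned from data rather than fixed as in this sketch.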

Results: The model trained with the average loss outperformed the central-loss-only model, achieving an accuracy of 99.33% and a sensitivity of 99.74% under clean conditions. In simulated noisy conditions, including SQI information improved accuracy by 3.80% compared to models without SQIs, demonstrating enhanced robustness. Compared to single-signal models, the multi-level fusion approach showed significant improvements, with accuracy gains of 3.51% over the ECG-only model and 14.55% over the PPG-only model.

Conclusions: The fusion model provides a robust and flexible approach to AF detection in wearable devices by optimizing fusion levels through a data-driven approach and integrating SQIs for noise resilience. This self-learning, multi-level fusion architecture shows superior performance over existing fusion methods and holds promise for real-world applications in multimodal AF detection, potentially setting a new standard for AI-driven health monitoring systems.
Original language: English
Publication status: Published - 30 Jan 2025
Event: 10th Dutch Biomedical Engineering Conference, BME 2025 - Hotel Zuiderduin, Egmond aan Zee, Netherlands
Duration: 30 Jan 2025 – 31 Jan 2025
Conference number: 10
https://www.bme2025.nl/

Conference

Conference: 10th Dutch Biomedical Engineering Conference, BME 2025
Abbreviated title: BME 2025
Country/Territory: Netherlands
City: Egmond aan Zee
Period: 30/01/25 – 31/01/25
Internet address: https://www.bme2025.nl/

