Multimodal fusion for robust respiratory rate estimation in wearable sensing

Arlene John*, He Wang, Barry Cardiff, Keshab K Parhi, Deepu John

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Respiratory rate is recognized as an important physiological marker in many healthcare scenarios, including COVID-19. Accurate, real-time respiratory rate estimation on wearable IoT devices is challenging because ambulatory motion introduces substantial noise into the acquired signals. This paper proposes a framework to overcome this limitation by fusing data from multiple sensors. The proposed fusion technique uses the discrete wavelet transform (DWT) to extract relevant time-frequency features from electrocardiogram (ECG) and photoplethysmogram (PPG) signals and fuses these extracted features in real time to improve respiratory rate estimation accuracy. The instantaneous signal quality of the ECG and PPG signals is estimated and used as weights to achieve real-time signal fusion and obtain respiration rate estimates. The proposed fusion technique achieved performance better than or comparable to that of current state-of-the-art methods. The framework was tested on the CapnoBase TBME RR benchmark dataset, and the median absolute error was 0.34 breaths per minute (bpm), with a maximum error spread of 1.72 bpm in the -50 dB to 50 dB signal-to-noise ratio (SNR) range for all noise scenarios considered on a single channel. These results exceed current state-of-the-art performance and make the framework well-suited for wearables operating in noisy, real-world environments.
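To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of quality-weighted fusion of ECG- and PPG-derived respiratory rate estimates. The wavelet choice, decomposition level, spectral-peak rate estimator, and the variance-based quality proxy are all illustrative assumptions; the paper's actual feature-level fusion and instantaneous signal-quality index are not reproduced here.

```python
# Minimal sketch of signal-quality-weighted fusion of respiratory-rate
# estimates from ECG and PPG channels. Wavelet, level, and the quality
# metric are illustrative assumptions, not taken from the paper.
import numpy as np
import pywt
from scipy.signal import welch

def respiratory_band_signal(x, fs, wavelet="db4", level=6):
    """Isolate the low-frequency (respiratory) component with a DWT.

    Keeps the approximation coefficients at `level` and zeros all
    detail coefficients before reconstruction.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def rr_estimate(resp, fs):
    """Respiratory rate (bpm) from the dominant spectral peak in 0.1-0.7 Hz."""
    f, pxx = welch(resp, fs=fs, nperseg=min(len(resp), int(60 * fs)))
    band = (f >= 0.1) & (f <= 0.7)
    return 60.0 * f[band][np.argmax(pxx[band])]

def signal_quality(x):
    """Crude quality proxy: total power over high-frequency residual power.

    Stands in for the paper's instantaneous signal-quality index (assumption).
    """
    hf = x - np.convolve(x, np.ones(5) / 5, mode="same")
    return np.var(x) / (np.var(hf) + 1e-12)

def fused_rr(ecg, ppg, fs):
    """Quality-weighted fusion of ECG- and PPG-derived respiratory rates."""
    rr_ecg = rr_estimate(respiratory_band_signal(ecg, fs), fs)
    rr_ppg = rr_estimate(respiratory_band_signal(ppg, fs), fs)
    w_ecg, w_ppg = signal_quality(ecg), signal_quality(ppg)
    return (w_ecg * rr_ecg + w_ppg * rr_ppg) / (w_ecg + w_ppg)
```

In this sketch the channel with the higher estimated quality dominates the fused estimate, which is the general behaviour the abstract attributes to the proposed weighting scheme.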
Original language: English
Article number: 103253
Journal: Information Fusion
Volume: 123
DOIs
Publication status: Published - Nov 2025

Keywords

  • UT-Hybrid-D

