Abstract
Respiratory rate is recognized as an important physiological marker in many healthcare scenarios, including COVID-19. Accurate, real-time respiratory rate estimation on wearable IoT devices is challenging because ambulatory motion introduces substantial noise into the acquired signals. This paper proposes a framework that overcomes this limitation by fusing data from multiple sensors. The proposed fusion technique uses the discrete wavelet transform (DWT) to extract relevant time-frequency features from electrocardiogram (ECG) and photoplethysmogram (PPG) signals and fuses these features in real time to improve respiratory rate estimation accuracy. The instantaneous signal quality of the ECG and PPG signals is estimated and used to weight each channel's contribution to the fused respiration rate estimate. The proposed fusion technique achieved performance better than or comparable to current state-of-the-art methods. Tested on the CapnoBase TBME RR benchmark dataset, the framework achieved a median absolute error of 0.34 breaths per minute (bpm), with an error spread of at most 1.72 bpm across the -50 dB to 50 dB signal-to-noise ratio (SNR) range for all single-channel noise scenarios considered. These results exceed current state-of-the-art performance and make the framework well suited for wearables operating in noisy, real-world environments.
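The abstract describes quality-weighted fusion of DWT-derived respiratory components but gives no implementation detail. The Python sketch below illustrates the general idea under simple assumptions: the wavelet (db4), decomposition level, spectral-power quality index, and weighted-average fusion rule are all illustrative choices, not the paper's actual algorithm.

```python
# Minimal sketch of quality-weighted fusion of per-channel respiratory-rate
# estimates. Illustrative only: wavelet, level, quality index, and fusion
# rule are generic assumptions, not the method from the paper.
import numpy as np
import pywt

FS = 100.0  # sampling rate in Hz (assumed)

def respiratory_band(signal, wavelet="db4", level=6):
    """Isolate a low-frequency (respiratory) component via DWT.

    With fs = 100 Hz and level = 6, the approximation covers roughly
    0-0.78 Hz, which spans typical respiration rates (assumption).
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Keep only the approximation; zero out all detail coefficients.
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def estimate_rr_bpm(resp, fs=FS):
    """Estimate respiratory rate as the dominant spectral peak (bpm)."""
    resp = resp - np.mean(resp)
    spectrum = np.abs(np.fft.rfft(resp))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)  # ~6-42 bpm search band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

def quality_weight(resp, fs=FS):
    """Crude quality index: fraction of power in the respiratory band."""
    spectrum = np.abs(np.fft.rfft(resp - np.mean(resp))) ** 2
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return spectrum[band].sum() / (spectrum.sum() + 1e-12)

def fuse_rr(ecg, ppg, fs=FS):
    """Quality-weighted average of the ECG- and PPG-derived RR estimates."""
    estimates, weights = [], []
    for sig in (ecg, ppg):
        resp = respiratory_band(sig)
        estimates.append(estimate_rr_bpm(resp, fs))
        weights.append(quality_weight(resp, fs))
    weights = np.asarray(weights)
    return float(np.dot(weights, estimates) / weights.sum())

if __name__ == "__main__":
    t = np.arange(0, 60, 1.0 / FS)
    resp_wave = np.sin(2 * np.pi * 0.25 * t)          # 15 bpm respiration
    ecg = resp_wave + 0.1 * np.random.randn(len(t))   # cleaner channel
    ppg = resp_wave + 1.0 * np.random.randn(len(t))   # noisier channel
    print(f"Fused RR estimate: {fuse_rr(ecg, ppg):.1f} bpm")
```

Weighting each channel by its instantaneous quality lets the cleaner signal dominate the fused estimate when the other is corrupted by motion noise, which is the behavior the abstract attributes to the proposed framework.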
| Original language | English |
|---|---|
| Article number | 103253 |
| Journal | Information Fusion |
| Volume | 123 |
| DOIs | |
| Publication status | Published - Nov 2025 |
Keywords
- UT-Hybrid-D