Abstract
Sharing breathing signals can offer insights into hidden experiences and enhance interpersonal communication. However, it remains unclear how the modality of shared breath signals (visual, haptic) is socially interpreted during collaborative tasks. In this mixed-methods study, we design and evaluate BreatheWithMe, a prototype for sharing and receiving real-time breathing signals through visual, vibrotactile, or combined visual-vibrotactile modalities. In a within-subjects study (15 pairs), we investigated the effects of modality on breathing synchrony, social presence, and overall user experience. Key findings showed: (a) feedback modality had no significant effect on breathing synchrony, only on deliberate music-driven synchronization; (b) the visual modality was preferred over vibrotactile feedback, despite no differences across social presence dimensions; (c) BreatheWithMe was perceived as an insightful window into others, but raised concerns about data exposure and social acceptability. We contribute insights into the design of multi-modal real-time breathing visualization systems for colocated, collaborative tasks.
Original language | English |
---|---|
Title of host publication | CHI EA '23: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems |
Place of Publication | New York, NY |
Publisher | Association for Computing Machinery |
Pages | 1-8 |
Number of pages | 8 |
DOIs | |
Publication status | Published - 19 Apr 2023 |
Event | CHI Conference on Human Factors in Computing Systems, CHI 2023 - Hamburg, Germany (23 Apr 2023 → 28 Apr 2023) |
Conference
Conference | CHI Conference on Human Factors in Computing Systems, CHI 2023 |
---|---|
Abbreviated title | CHI 2023 |
Country/Territory | Germany |
City | Hamburg |
Period | 23/04/23 → 28/04/23 |