TY - JOUR
T1 - Hearing Status Affects Children's Emotion Understanding in Dynamic Social Situations
T2 - An Eye-Tracking Study
AU - Tsou, Yung Ting
AU - Li, Boya
AU - Kret, Mariska E.
AU - Frijns, Johan H.M.
AU - Rieffe, Carolien
N1 - Publisher Copyright:
© 2021 Lippincott Williams and Wilkins. All rights reserved.
PY - 2021/7
Y1 - 2021/7
AB - Objectives: According to the Social Information Processing model, the first two steps in understanding the emotional behavior of others are emotion encoding and emotion interpreting. Access to daily social interactions is a prerequisite for children to acquire these skills, and communication barriers such as hearing loss impede this access. It could therefore be challenging for children with hearing loss to develop these two skills. The present study aimed to understand the effect of prelingual hearing loss on children's emotion understanding by examining how they encode and interpret nonverbal emotional cues in dynamic social situations. Design: Sixty deaf or hard-of-hearing (DHH) children and 71 typically hearing (TH) children (3-10 years old, mean age 6.2 years, 54% girls) watched videos of prototypical social interactions between a target person and an interaction partner. At the end of each video, the target person did not face the camera, rendering their facial expressions out of the participants' view. Afterward, participants were asked to interpret the emotion they thought the target person felt at the end of the video. As participants watched the videos, their encoding patterns were recorded by an eye tracker, which measured the amount of time they spent looking at the target person's head and body and at the interaction partner's head and body. These regions were preselected for analyses because they had been found to provide cues for interpreting people's emotions and intentions. Results: When encoding emotional cues, both the DHH and TH children spent more time looking at the head of the target person and at the head of the interaction partner than at the body or actions of either person. Yet, compared with the TH children, the DHH children looked at the target person's head for a shorter time (b = -0.03, p = 0.030), and at the target person's body (b = 0.04, p = 0.006) and the interaction partner's head (b = 0.03, p = 0.048) for a longer time. The DHH children were also less accurate in interpreting emotions than their TH peers (b = -0.13, p = 0.005), and their lower scores were associated with their distinctive encoding pattern. Conclusions: The findings suggest that children with limited auditory access to the social environment tend to collect visually observable information to compensate for ambiguous emotional cues in social situations. These children may have developed this strategy to support their daily communication. Yet, to fully benefit from such a strategy, they may need extra support in gaining better social-emotional knowledge.
KW - Child development
KW - Deaf and hard of hearing
KW - Dynamic social scenes
KW - Emotion understanding
KW - Eye tracking
KW - Hearing loss
KW - Sensorineural
KW - Social information processing
UR - http://www.scopus.com/inward/record.url?scp=85108940531&partnerID=8YFLogxK
U2 - 10.1097/AUD.0000000000000994
DO - 10.1097/AUD.0000000000000994
M3 - Article
C2 - 33369943
AN - SCOPUS:85108940531
SN - 0196-0202
VL - 42
SP - 1024
EP - 1033
JO - Ear and Hearing
JF - Ear and Hearing
IS - 4
ER -