Psychology Research and Behavior Management, Volume 17

A Comparative Study Recognizing the Expression of Information Between Elderly Individuals and Young Individuals

Authors Ma J, Liu X, Li Y 

Received 27 May 2024

Accepted for publication 26 August 2024

Published 5 September 2024, Volume 2024:17, Pages 3111–3120

DOI https://doi.org/10.2147/PRBM.S471196

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Bao-Liang Zhong



Jialin Ma, Xiaojing Liu, Yongxin Li

Faculty of Education, Henan University, Kaifeng, Henan Province, People’s Republic of China

Correspondence: Yongxin Li, Faculty of Education, Henan University, Kaifeng, Henan Province, 475000, People’s Republic of China, Tel +8613503780519, Email [email protected]

Background: Studies have shown that elderly individuals have significantly worse facial expression recognition scores than young adults. Some have suggested that this difference is due to perceptual degradation, while others suggest it is due to decreased attention of elderly individuals to the most informative regions of the face.
Methods: To resolve this controversy, this study recruited 85 participants and combined a behavioral task with eye tracking (EyeLink 1000 Plus eye tracker). Using the “study-recognition” paradigm, a 3 (facial expression: positive, neutral, negative) × 2 (age group: young, elderly) × 3 (facial area of interest: eyes, nose, mouth) mixed experimental design was adopted to test whether elderly people’s attention to facial expressions shows perceptual degradation and to examine differences in diagnostic areas between young and elderly people.
Results: The behavioral results revealed that young participants had significantly higher facial expression recognition scores than elderly participants. The eye-tracking results revealed that young people generally fixated on faces significantly more than elderly people did, indicating perceptual degradation in elderly people. When examining facial expressions, young people gazed primarily at the eyes, followed by the nose and, finally, the mouth, whereas elderly people gazed primarily at the eyes, followed by the mouth and then the nose.
Conclusion: The findings confirm that young participants recognize facial expressions better than elderly participants do, a difference that appears to be related more to perceptual degradation than to decreased attention to informative areas of the face. When recognizing faces, elderly people should therefore increase the duration of their gaze toward facial diagnostic areas (such as the eyes) to compensate for the decline in recognition performance caused by perceptual aging.

Keywords: facial expressions, age, areas of interest, eye movement

Introduction

Recognizing facial expressions is important and indispensable in daily social activities. By interpreting their facial expressions, we can easily understand individuals’ attitudes and the information they convey, thus facilitating communication. However, across the human life span, the ability to recognize facial expressions is not static but rather evolves with age; there is a large difference in expression recognition ability between older and younger individuals.1–6

Researchers have often used behavioral tasks and eye-tracking techniques to explore the reasons underlying differences in how well younger (18–30 years) and elderly people recognize facial expressions and have suggested a role of perceptual degradation,7–9 differences in processing patterns,10,11 and variability in emotional diagnostic strategies.12

First, perceptual degradation has been shown to decrease facial expression recognition task scores in elderly adults.7,13 For example, researchers utilized an emotion classification task and asked both young and elderly participants to categorize faces into six basic emotions (happiness, surprise, disgust, sadness, fear, and anger).12 Compared with young participants, elderly participants were less likely to correctly identify the emotion of a face and had slower reaction times. The researchers attributed this difference to elderly participants processing less visual information related to facial expressions, as elderly participants gazed at the faces significantly less than younger (18–30 years) participants did.12 Other researchers used a study-recognition paradigm in which participants first learned facial expressions and were subsequently tested; elderly participants had significantly lower facial recognition scores than younger (18–30 years) participants did, and the authors proposed perceptual degradation as the main reason.8 As people age, their perception, attention, and memory abilities decline. Perceptual degradation is therefore an important candidate explanation for the difference in facial expression recognition ability between elderly and young individuals.

Variability in the diagnostic region used for expression recognition is likely another important reason for the difference in facial recognition ability between young and elderly participants. Researchers utilized eye-tracking techniques to examine how young and elderly individuals scanned the features of different facial expressions.12 The results revealed that elderly participants focused less on the whole face and more on the mouth region than young participants did. This result is supported by another study,13 which compared the attention given to facial features by elderly and young (20–38 years) participants during facial expression recognition; elderly participants had significantly worse recognition performance than young participants and typically spent more time gazing at the mouth region and less time gazing at the eye region. The authors suggested that elderly participants’ increased gazing at the mouth was the main reason for the decline in their expression recognition scores.12 However, several limitations exist in these studies,12,13 as described below:

  1. These studies12,13 compared only the differences in gaze between elderly and young participants at the eyes and mouth and did not consider the whole set of facial features (eyes, nose, and mouth); thus, they cannot comprehensively reveal the differences in the scanning patterns of elderly and young participants when recognizing facial expressions.
  2. These studies12,13 attributed the decrease in elderly participants’ expression recognition scores to their gazing at more uninformative areas (ie, the mouth); however, many studies have shown that, in addition to the eyes, ie, the primary area providing information about facial expressions, the mouth is also an important area for expression recognition,14,15 as demonstrated for happy,16,17 disgusted,16,18,19 fearful,20,21 sad,22,23 and angry faces.21,24 Because expression recognition is closely related to the mouth, the view13 that elderly people’s increased gaze at uninformative areas causes a decrease in their ability to recognize expressions is unconvincing and contradicts real-life observations that elderly people are sensitive to a variety of emotional cues.25 That study13 may also have overlooked the possibility that perceptual degradation itself reduces emotion recognition performance in elderly people: even if elderly people gaze accurately at the regions that provide the most expression information, perceptual degradation may nonetheless reduce their recognition scores. Therefore, the current study sought to distinguish whether the reduced expression recognition performance of elderly participants reflects perceptual degradation or a bias toward gazing at nondiagnostic areas of the face.

To address the above limitations, this study adopted the experimental paradigm of an earlier study.8 The facial expression recognition performance of elderly and young participants was compared via behavioral tasks and eye-tracking techniques, and the two age groups’ gaze on each facial feature (the eyes, nose, and mouth) was analyzed to reveal their scanning patterns during expression recognition. First, we explored whether perceptual degradation occurred in elderly participants by comparing the two age groups’ attention to facial features. If, given the same stimulus presentation duration, elderly people attend to facial features significantly less than young people do (in fixation duration and counts), this would suggest perceptual degradation in elderly people.12 Second, we explored whether elderly participants perform worse at expression recognition because they focus on more uninformative areas of the face. A previous study showed that young participants tend to process more horizontal facial information, whereas elderly participants tend to process more vertical facial information.10 Therefore, we hypothesized that young participants process more of the horizontal information conveyed by facial expressions, treating the eyes and nose rather than the mouth as the informative regions and paying significantly more attention to the eyes and nose than to the mouth. Elderly participants, in addition to looking at the eyes, may allocate more attention to the mouth region than young participants because of their greater processing of vertical information; that is, elderly participants may attend primarily to the eyes, followed by the mouth and then the nose. The eyes and mouth are important regions for conveying information that discriminates both positive and negative emotions.14 Therefore, in the present study, we speculated that facial expression recognition performance would be worse in elderly participants than in young participants and that this effect would be caused primarily by perceptual degradation rather than by incorrect diagnostic areas.

Methods

Participants

Eighty-five Han Chinese participants were recruited, including 43 young participants (21.30 ± 3.27 years old; 19 males, 24 females) and 42 elderly participants (64.52 ± 6.30 years old; 24 males, 18 females). None of the participants had a history of mental illness or insomnia, and none had experienced anxiety or other symptoms in the past month. All the participants were right-handed, had normal or corrected-to-normal vision, and provided written informed consent. This research was approved by the ethics committee of Henan University.

Stimuli

A total of 192 facial expression images used in the experiment were selected from the “facial-expression database of Chinese Han, Hui and Tibetan people”.26 In the learning stage, there were 96 images depicting positive, neutral and negative facial expressions (32 of each type). The negative facial expression images comprised 4 types, namely, disgust, sadness, anger and fear, with 8 faces of each type. In addition, 96 new facial images were presented in the recognition stage; these similarly included 32 images each of positive, neutral and negative facial expressions. Hair, ears and other irrelevant features were removed from the facial images via Photoshop CS6. The images had a resolution of 640 × 480 pixels and were converted to grayscale. As shown in Figure 1, each facial image was divided into three areas of interest (AOIs): the eyes (approximately 1.7 cm²), the nose (approximately 1.8 cm²) and the mouth (approximately 1.6 cm²). The total area of each face was approximately 8 cm².

Figure 1 Areas of interest.
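For clarity, the sketch below shows how rectangular AOIs of this kind can be defined and how individual fixations can be assigned to them when computing per-AOI fixation counts and durations. The bounding-box coordinates, function names and data layout are illustrative assumptions, not the authors’ actual region definitions or analysis code.

```python
# Minimal sketch: classify fixations into eye / nose / mouth AOIs on a 640 x 480 px face.
# AOI bounding boxes (x_min, y_min, x_max, y_max) are hypothetical placeholders.
AOIS = {
    "eyes":  (140, 120, 500, 220),
    "nose":  (250, 220, 390, 330),
    "mouth": (230, 330, 410, 420),
}

def classify_fixation(x, y):
    """Return the name of the AOI containing fixation point (x, y), or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def aoi_measures(fixations):
    """Aggregate fixation counts and total durations (ms) per AOI for one trial.

    `fixations` is a list of (x, y, duration_ms) tuples, as exported from an
    eye tracker's fixation report.
    """
    counts = {name: 0 for name in AOIS}
    durations = {name: 0.0 for name in AOIS}
    for x, y, dur in fixations:
        aoi = classify_fixation(x, y)
        if aoi is not None:
            counts[aoi] += 1
            durations[aoi] += dur
    return counts, durations

# Example: three fixations within one 5 s learning trial.
print(aoi_measures([(320, 170, 350.0), (300, 280, 220.0), (320, 370, 180.0)]))
```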

Equipment

Experiment Builder 1.4.0 software was used to run the experiment, and eye movement data were recorded with an EyeLink® 1000 Plus eye tracker at a sampling rate of 1000 Hz, with line-of-sight error within 5°. The screen resolution was 1024 × 768 pixels, and the participants sat 65 cm from the display. The screen subtended a visual angle of 15.5° × 11.7°, and each facial image subtended 9.85° × 7.5°.
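As a sanity check on these viewing-geometry figures, the visual angle subtended by a stimulus can be computed from its physical size and the viewing distance with the standard formula θ = 2·arctan(size / (2 × distance)); the short sketch below applies it to the reported 65 cm viewing distance (the function name is our own).

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by a stimulus of a given physical size."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# At 65 cm, a stimulus must span roughly 11.2 cm horizontally to subtend the
# reported 9.85 deg facial-image width: 2 * 65 * tan(9.85/2 deg) ~= 11.2 cm.
width_cm = 2 * 65 * math.tan(math.radians(9.85 / 2))
print(round(width_cm, 2), round(visual_angle_deg(width_cm, 65), 2))  # -> 11.2 9.85
```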

Procedure

All the participants completed the experiment independently in a quiet environment, with the same experimenter delivering unified instructions. The experiment adopted the “study-recognition” paradigm. In the learning stage, instructions were first presented to the participants; they were instructed to carefully study the 96 randomly presented face pictures and were told that their learning would be tested later. After carefully reading the instructions, the participants pressed any key on the keyboard to start learning the faces. In this phase, a facial expression image was presented on the screen for 5 s, followed by a fixation point for 1 s14 and then the next facial expression image, for a total of 96 trials. To ensure the accuracy of the data, a single-point calibration was added to each trial, with the calibration dot randomly positioned at the top, middle, bottom, left or right of the screen. The EyeLink 1000 Plus eye tracker recorded the participants’ eye movement data (fixation counts and fixation durations) during the learning stage.

In the recognition stage, the participants were required to recognize 192 faces (96 were learned, and 96 were new). The participants were instructed to press the F key if the face was a learned face and to press the J key otherwise. The participants were instructed to press the key as quickly as possible while ensuring accuracy, and their recognition accuracy and response time were recorded.
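To make the trial structure concrete, the sketch below reproduces the learning and recognition trial logic in PsychoPy. The authors programmed the task in Experiment Builder, so this is only an illustrative approximation; the image paths, window settings and stimulus sizes are hypothetical, and eye-tracker calibration and recording are omitted.

```python
# Illustrative PsychoPy approximation of the study-recognition trial logic.
# The original task was built in SR Research Experiment Builder; paths,
# sizes and window settings here are assumptions.
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="gray", units="pix")
fixation = visual.TextStim(win, text="+", height=40)

def learning_trial(image_path):
    # Face for 5 s, then a 1 s fixation point before the next face.
    face = visual.ImageStim(win, image=image_path, size=(640, 480))
    face.draw(); win.flip(); core.wait(5.0)
    fixation.draw(); win.flip(); core.wait(1.0)

def recognition_trial(image_path):
    # Face stays on screen until the participant presses F (learned) or J (new).
    face = visual.ImageStim(win, image=image_path, size=(640, 480))
    face.draw(); win.flip()
    clock = core.Clock()
    key, rt = event.waitKeys(keyList=["f", "j"], timeStamped=clock)[0]
    return key == "f", rt  # (judged "learned", response time in seconds)

learning_trial("faces/learn_001.png")               # hypothetical stimulus file
judged_learned, rt = recognition_trial("faces/test_001.png")
win.close(); core.quit()
```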

Experimental Design

The experiment had a 3 (emotional valence: positive, neutral, negative) × 2 (age group: young, elderly) × 3 (facial area of interest: eyes, nose, mouth) mixed design, with age as the between-participants variable and emotional valence and area of interest as the within-participants variables (Figure 1). The dependent variables were the recognition accuracy rate and response time, as well as the fixation count and fixation duration for each facial AOI.

Results

Behavioral Data

A 3 (emotional valence: positive, neutral, negative) × 2 (age group: young, elderly) repeated-measures ANOVA was conducted on recognition accuracy rates. The main effect of emotional valence was significant (F(2, 83)=3.27, p<0.05, η²=0.04): positive facial expressions (M=0.61, SD=0.08) yielded significantly greater recognition scores than neutral facial expressions did (M=0.59, SD=0.07; p<0.05). The main effect of age group was also significant (F(1, 84)=8.98, p<0.01, η²=0.10), with the recognition scores of the young participants (M=0.62, SD=0.07) being significantly greater than those of the elderly participants (M=0.57, SD=0.08). The interaction between age and emotional valence was not significant (F(2, 166)=2.05, p=0.13, η²=0.02).
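For readers who wish to reproduce this kind of analysis, a mixed ANOVA with one within-participants factor (valence) and one between-participants factor (age group) can be run as sketched below with the pingouin package. The column names and long-format data file are assumptions; the paper does not state which statistical software was used.

```python
# Sketch of the 3 (valence) x 2 (age group) mixed ANOVA on recognition accuracy.
# Data layout and column names are hypothetical.
import pandas as pd
import pingouin as pg

# Long format: one accuracy score per participant per valence condition.
df = pd.read_csv("accuracy_long.csv")  # columns: participant, age_group, valence, accuracy

aov = pg.mixed_anova(data=df, dv="accuracy", within="valence",
                     subject="participant", between="age_group")
print(aov.round(3))

# Bonferroni-corrected pairwise follow-ups (eg, positive vs neutral):
post = pg.pairwise_tests(data=df, dv="accuracy", within="valence",
                         subject="participant", between="age_group",
                         padjust="bonf")
print(post.round(3))
```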

A 3 (emotional valence) × 2 (age group) repeated-measures ANOVA was conducted on response times. The main effect of emotional valence was not significant (F(2, 83)=0.70, p=0.50, η²=0.01). The main effect of age group was significant (F(1, 84)=21.52, p<0.001, η²=0.82), with the young group (M=1920.68, SD=944.34) responding significantly faster than the elderly group (M=3200.72, SD=1521.47). See Table 1 for details.

Table 1 Means and Standard Deviations (M ± SD) of the Recognition Accuracy and Response Times at Different Ages

Eye Movement Data

Fixation for a Whole Face

A 3 (emotional valence) × 2 (age group) repeated-measures ANOVA was conducted on the number of fixations on the whole face. The main effect of age was significant (F(1, 84)=54.63, p<0.001, η²=0.41): the number of fixations on the whole face was significantly greater in the young group (M=17.39, SD=2.30) than in the elderly group. There was no main effect of emotional valence (F(2, 83)=0.01, p=0.99, η²=0.001), and the interaction between age and emotional valence was not significant (F(2, 166)=0.08, p=0.93, η²=0.001).

A parallel repeated-measures ANOVA was conducted on the duration of fixation on the whole face. The main effect of age was significant (F(1, 84)=16.36, p<0.001, η²=0.17), with young participants (M=4460.72, SD=312.18) showing significantly longer whole-face fixation durations than elderly participants (M=4196.47, SD=377.13). There was no main effect of emotional valence (F(2, 83)=1.23, p=0.29, η²=0.016), and the interaction between age and emotional valence was not significant (F(2, 166)=0.04, p=0.96, η²=0.001).

Fixation for Facial AOIs

Fixation Counts

A 3 (emotional valence) × 2 (age group) × 3 (AOI: eyes, nose, mouth) repeated-measures ANOVA was conducted on the fixation counts for the facial AOIs. A main effect of AOI was found (F(2, 83)=94.77, p<0.001, η²=0.53). The number of fixations on the eyes (M=5.64, SD=1.97) was significantly greater than that on the nose (M=3.98, SD=0.81; p<0.001) and mouth (M=3.56, SD=0.72; p<0.001), and the number of fixations on the nose was significantly greater than that on the mouth (p<0.001).

The interaction between age and AOI was significant (F(2, 166)=6.38, p<0.01, η²=0.07). Simple effects analysis revealed that when young participants recognized facial expressions (F(2, 82)=119.57, p<0.001, η²=0.75), the fixation counts for the eyes were significantly greater than those for the nose (p<0.001) and mouth (p<0.001), and the fixation counts for the nose were significantly greater than those for the mouth (p<0.001). When elderly participants recognized facial expressions (F(2, 82)=19.29, p<0.001, η²=0.32), the fixation counts for the eyes were likewise significantly greater than those for the nose (p<0.001) and mouth (p<0.001), but there was no significant difference between the nose and mouth (p=0.57).

The interaction between emotional valence and AOI was significant (F(4, 332)=39.59, p<0.001, η²=0.32). Simple effects analysis revealed that the fixation count for the nose was significantly greater than that for the mouth when young participants recognized positive, neutral or negative facial expressions. The elderly participants showed a different pattern. When they recognized positive facial expressions (F(2, 82)=35.88, p<0.001, η²=0.47), the fixation count for the eyes was significantly greater than that for the nose (p<0.001) and mouth (p<0.001), and the fixation counts for the nose and mouth did not differ significantly (p=0.48). When they recognized neutral facial expressions (F(2, 82)=6.67, p<0.001, η²=0.14), the fixation count for the eyes was significantly greater than that for the nose and mouth, and the fixation count for the mouth was significantly greater than that for the nose (p<0.01). When they recognized negative facial expressions (F(2, 82)=24.23, p<0.001, η²=0.37), the fixation counts for the eyes were significantly greater than those for the nose (p<0.001) and mouth (p<0.001), with no significant difference between the nose and mouth (p=0.23). See Table 2 for details.

Table 2 Means and Standard Deviations (M ± SD) of Fixation Counts of AOIs for Different Facial Expressions

Fixation Duration

A 3 (emotional valence) × 2 (age group) × 3 (AOI: eyes, nose, mouth) repeated-measures ANOVA was conducted on the fixation durations for the facial AOIs. The main effect of AOI was significant (F(2, 83)=94.77, p<0.001, η²=0.53): the duration of fixation on the eyes was significantly longer than that on the nose (p<0.001) and mouth (p<0.001), and the duration of fixation on the nose was significantly longer than that on the mouth (see Table 3).

The interaction between age and AOI was marginally significant (F(2, 166)=2.91, p=0.057, η²=0.03). Simple effects analysis showed that when young participants recognized facial expressions (F(2, 82)=65.65, p<0.001, η²=0.62), the duration of fixation on the eyes was significantly longer than that on the nose (p<0.001) and mouth (p<0.001), and the duration of fixation on the nose was significantly longer than that on the mouth. When elderly participants recognized facial expressions, the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the durations of fixation on the nose and mouth did not differ significantly.

The interaction between emotional valence and AOI was significant (F(4, 332)=41.10, p<0.001, η²=0.33). Simple effects analysis showed that when participants recognized positive facial expressions, the duration of fixation on the eyes was significantly longer than that on the nose (p<0.001) and mouth (p<0.001), and the duration of fixation on the nose was significantly longer than that on the mouth. When participants recognized neutral facial expressions, the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the durations of fixation on the nose and mouth did not differ significantly. When participants recognized negative facial expressions (F(2, 82)=95.59, p<0.001, η²=0.70), the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the duration of fixation on the nose was significantly longer than that on the mouth.

The three-way interaction of emotional valence, age and AOI was significant (F(4, 332)=3.26, p<0.01, η²=0.04). Simple effects analysis showed that when young participants recognized positive, neutral and negative facial expressions, the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the duration of fixation on the nose was significantly longer than that on the mouth. When elderly participants recognized positive facial expressions (F(2, 82)=40.49, p<0.001, η²=0.50), the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the durations of fixation on the mouth and nose did not differ significantly. When elderly participants recognized neutral facial expressions (F(2, 82)=7.49, p<0.001, η²=0.15), the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the duration of fixation on the mouth was significantly longer than that on the nose. When elderly participants recognized negative facial expressions (F(2, 82)=26.17, p<0.001, η²=0.39), the duration of fixation on the eyes was significantly longer than that on the nose and mouth, and the durations of fixation on the mouth and nose did not differ significantly. See Table 3 for details.

Table 3 Means and Standard Deviations (M ± SD) of the Fixation Durations of AOIs for Different Facial Expressions

Discussion

This study utilized behavioral tasks and eye-tracking techniques to explore differences in the scanning patterns of elderly and young participants as they recognized facial expressions, analyzed by emotional valence, AOI and age group. The behavioral results revealed that young participants were significantly more accurate and faster than elderly participants. The eye-tracking results revealed that fixation counts and durations were significantly greater in young participants than in elderly participants, indicating perceptual degradation in elderly participants. Both elderly and young participants fixated most often on the eyes when recognizing facial expressions of different valences, followed by the nose and mouth; the eyes are the most critical diagnostic area for facial recognition. Thus, young and elderly participants used similar diagnostic areas when recognizing facial expressions. Together, these results indicate that the decrease in facial recognition scores among elderly participants is due to perceptual aging rather than to differences in facial diagnostic areas.

Relationship Between Facial Recognition Performance and Perceptual Degradation in Elderly Participants

Is the worse facial recognition performance of elderly participants relative to young participants the result of perceptual degradation? The eye-tracking data from the present study support this view. Specifically, each face was presented for 5 s, and the young participants gazed at the faces significantly more than the elderly participants did over this same presentation time, demonstrating perceptual degradation in the elderly participants. With aging, especially in old age, individuals’ perceptual organs decline to different degrees; in particular, individuals experience visual aging (eg, presbyopia). Visual aging forces elderly people to expend more effort to gaze clearly at facial features, with the result that they gaze at faces less than young individuals do over the same presentation time. This decreased attention to faces reduces facial recognition performance in elderly participants. Consistent with this, researchers have suggested that perceptual degradation reduces facial recognition scores in elderly people,8 and others have reported that elderly people pay less attention to faces than young people do, interpreting this change as a sign of perceptual degradation.12

Notably, the difference in overall gaze between elderly and young participants was not affected by the emotional valence of the faces. These results are consistent with classical accounts of face processing, which describe six stages, with stages 1–3 proceeding simultaneously: 1. early visual analysis (eg, size, length, direction); 2. grouping of feature units (eyes, nose, mouth); 3. facial recognition units, directed visual processing, expression analysis, and facial speech analysis; 4. person identity nodes; 5. the phonological lexicon; and 6. hearing one’s name or name production.27 In these six stages, individuals must first process faces perceptually (stages 1 and 2) and only then recognize information such as emotions and facial context on the basis of the perceived information. In this study, owing to visual perceptual aging, elderly participants’ perceptual processing of faces was impaired and the facial information they perceived was probably incomplete, limiting their ability to extract emotional information from faces.12 Therefore, the difference in overall gaze between elderly and young participants was not affected by the emotional valence of the faces.

Relationship Between Facial Recognition Performance and Scanning Patterns in Elderly Participants

When recognizing facial expressions, young participants gazed most at the eyes, followed by the nose and, finally, the mouth. Elderly participants also gazed mostly at the eye region, but relative to young participants, their gaze on the nose decreased and their gaze on the mouth increased. Additionally, for elderly participants, the fixation counts for the nose and mouth did not differ significantly for positive and negative faces, and the fixation duration for the mouth was longer than that for the nose for neutral faces. We believe there are two reasons for this difference: (1) elderly and young participants differ in their visual processing patterns for faces, and (2) elderly and young participants use different regions to recognize facial expressions.

The differences in visual facial processing patterns between elderly and young participants are as follows. First, young and elderly participants probably differed in their processing of information from the upper and lower parts of the face. Murphy and Isaacowitz’s eye-tracking study divided faces into areas of interest corresponding to the upper and lower parts of the face.11 Young participants gazed at the upper half of the face more than elderly participants did, whereas elderly participants gazed at the lower half more than young participants did. Elderly participants’ tendency to process the lower half of the face was probably responsible for their increased gaze on the mouth, whereas young participants gazed more at the upper half of the face and thus focused more on the eyes and the upper bridge of the nose. The findings of a related study also support this view.8 That study suggested that young Chinese participants preferentially process horizontal facial information (conveyed by the eyes and the upper bridge of the nose between the eyes) and process vertical facial information (from the lower part of the nose and the mouth area) less. In contrast, elderly participants process more vertical facial information than young participants do, which leads them to gaze more at the lower regions of the face, such as the mouth.

Elderly and young participants not only have different patterns of facial processing but also likely rely on different facial regions when recognizing facial expressions. This study revealed that both elderly and young participants gazed most at the eyes when identifying facial expressions, regardless of expression type. Research has shown that individuals of various ages pay significantly more attention to the eyes than to other facial features when recognizing facial expressions, as the eyes are the predominant diagnostic area for all expression recognition.28 The variation between elderly and young participants in their attention to facial expressions of different valences was related mainly to the nose and mouth. Young participants gazed at the nose significantly more than at the mouth for all facial expressions, which is consistent with a scanning pattern that proceeds from the center of the face to the surrounding area. Researchers have reported that young Chinese participants tend to adopt such a center-outward scanning pattern when identifying Chinese faces, with the center of the Chinese face located at the upper bridge of the nose between the eyes.29 Because individuals focus more on the eyes and the upper bridge of the nose, which lie at the center of the face, young participants treated the eyes and the nose, rather than the mouth, as the main informative areas. Elderly participants drew on a more comprehensive set of informative facial regions, recognizing emotions primarily by gazing at the eye and mouth areas for both positive and negative faces.23

Notably, the informative regions used by elderly participants correspond to the optimal informative regions for facial recognition. A facial action coding system (FACS) study revealed that eye and mouth muscle movements are the main drivers of facial expressions,30 and research on expression recognition has also identified the eyes and mouth as the primary informative regions for recognizing different categories of emotions.14 It follows that elderly participants adopted an arguably more optimal approach to face analysis than young participants did; nonetheless, their facial expression recognition scores were significantly lower. The present results therefore indicate that the better recognition performance of young participants was not caused by elderly participants focusing on uninformative areas of facial expressions but was probably due to inadequate perceptual processing caused by perceptual degradation in the elderly participants.

Although the results of this study revealed that young and elderly people use similar diagnostic regions to recognize faces, perceptual aging nonetheless reduces older individuals’ facial recognition performance. Moreover, the significantly shorter gaze durations toward facial diagnostic areas (such as the eyes) among elderly individuals relative to younger individuals are consistent with perceptual aging. Therefore, when recognizing faces, elderly people should increase the duration of their gaze toward facial diagnostic areas (such as the eyes) to compensate for the decline in facial recognition performance caused by perceptual aging.

Conclusion

In the present study, elderly participants performed worse at recognizing facial expressions than young participants did, probably as a result of perceptual degradation rather than of gazing more at uninformative areas. When recognizing faces, elderly people should therefore increase the duration of their gaze toward facial diagnostic areas (such as the eyes) to compensate for the decline in facial recognition performance caused by perceptual aging.

Data Sharing Statement

The datasets generated and analyzed during the current study are available at https://pan.baidu.com/s/1CpvjtpxuEdyHJpa716nl_A?pwd=lcgi, Extract code: lcgi.

Ethics Statement

This research was approved by the ethics committee of Henan University [20231011015], and all the subjects provided informed consent prior to their inclusion in the study. The study was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Funding

This study was supported by Science and Technology Projects in Henan Province [242102320165].

Disclosure

The authors declare that they have no conflicts of interest in this work.

References

1. Fölster M, Werheid K. ERP evidence for own-age effects on late stages of processing sad faces. Cogn Affect Behav Neurosci. 2016;16(4):635–645. doi:10.3758/s13415-016-0420-9

2. Ebner NC, He Y, Johnson MK. Age and emotion affect how we look at a face: visual scan patterns differ for own-age versus other-age emotional faces. Cogn Emot. 2011;25(6):983–997. doi:10.1080/02699931.2010.540817

3. Fernandes C, Gonçalves AR, Pasion R, et al. Age-related changes in social decision-making: an electrophysiological analysis of unfairness evaluation in the ultimatum game. Neurosci Lett. 2019;692:122–126. doi:10.1016/j.neulet.2018.10.061

4. Phillips LH, Slessor G. Moving beyond basic emotions in aging research. J Nonverbal Behav. 2011;35(4):279–286. doi:10.1007/s10919-011-0114-5

5. Ruffman T, Henry JD, Livingstone V, Phillips LH. A meta-analytic review of emotion recognition and aging: implications for neuropsychological models of aging. Neurosci Biobehav Rev. 2008;32(4):863–881. doi:10.1016/j.neubiorev.2008.01.001

6. Gonçalves AR, Fernandes C, Pasion R, Ferreira-Santos F, Barbosa F, Marques-Teixeira J. Effects of age on the identification of emotions in facial expressions: a meta-analysis. PeerJ. 2018;6:e5278. doi:10.7717/peerj.5278

7. Boutet I, Shah DK, Collin CA, Berti S, Persike M, Meinhardt-Injac B. Age-related changes in amplitude, latency and specialization of ERP responses to faces and watches. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn. 2021;28(1):37–64. doi:10.1080/13825585.2019.1708253

8. Ma J, Zhang R, Li Y. Age weakens the other-race effect among Han subjects in recognizing own- and other-ethnicity faces. Behav Sci. 2023;13(8):675. doi:10.3390/bs13080675

9. Monge ZA, Madden DJ. Linking cognitive and visual perceptual decline in healthy aging: the information degradation hypothesis. Neurosci Biobehav Rev. 2016;69:166–173. doi:10.1016/j.neubiorev.2016.07.031

10. Qian H, Zhu M, Gao X. Configural processing of faces in old adulthood. Adv Psychol Sci. 2017;25(2):230–236. doi:10.3724/SP.J.1042.2017.00230

11. Murphy NA, Isaacowitz DM. Age effects and gaze patterns in recognising emotional expressions: an in-depth look at gaze measures and covariates. Cogn Emot. 2010;24(3):436–452. doi:10.1080/02699930802664623

12. Wong B, Cronin-Golomb A, Neargarder S. Patterns of visual scanning as predictors of emotion identification in normal aging. Neuropsychology. 2005;19(6):739–749. doi:10.1037/0894-4105.19.6.739

13. Sullivan S, Ruffman T. Emotion recognition deficits in the elderly. Int J Neurosci. 2004;114(3):403–432. doi:10.1080/00207450490270901

14. Cangz B, Altun A, Akar P, Baran Z, Mazman SG. Examining the visual screening patterns of emotional facial expressions with gender, age and lateralization. J Eye Mov Res. 2013;6(4):1–15. doi:10.16910/jemr.6.4.3

15. Calvo MG, Fernández-Martín A. Can the eyes reveal a person’s emotions? Biasing role of the mouth expression. Motiv Emot. 2013;37(1):202–211. doi:10.1007/s11031-012-9298-1

16. Calder AJ, Young AW, Keane J, Dean M. Configural information in facial expression perception. J Exp Psychol Hum Percept Perform. 2000;26(2):527–551. doi:10.1037//0096-1523.26.2.527

17. Eisenbarth H, Alpers GW. Happy mouth and sad eyes: scanning emotional facial expressions. Emotion. 2011;11(4):860–865. doi:10.1037/a0022758

18. Aviezer H, Hassin RR, Ryan J, et al. Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol Sci. 2008;19(7):724–732. doi:10.1111/j.1467-9280.2008.02148.x

19. Smith ML, Cottrell GW, Gosselin F, Schyns PG. Transmitting and decoding facial expressions. Psychol Sci. 2005;16(3):184–189. doi:10.1111/j.0956-7976.2005.00801.x

20. Roy-Charland A, Perron M, Beaudry O, Eady K. Confusion of fear and surprise: a test of the perceptual-attentional limitation hypothesis with eye movement monitoring. Cogn Emot. 2014;28(7):1214–1222. doi:10.1080/02699931.2013.878687

21. Schurgin MW, Nelson J, Iida S, Ohira H, Chiao JY, Franconeri SL. Eye movements during emotion recognition in faces. J Vis. 2014;14(13):14. doi:10.1167/14.13.14

22. Poulin-Dubois D, Hastings PD, Chiarella SS, et al. The eyes know it: toddlers’ visual scanning of sad faces is predicted by their theory of mind skills. PLoS One. 2018;13(12):e0208524. doi:10.1371/journal.pone.0208524

23. Yuki M, Maddux WW, Masuda T. Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States. J Exp Soc Psychol. 2007;43(2):303–311. doi:10.1016/j.jesp.2006.02.004

24. Horstmann G, Lipp OV, Becker SI. Of toothy grins and angry snarls--open mouth displays contribute to efficiency gains in search for emotional faces. J Vis. 2012;12(5):7. doi:10.1167/12.5.7

25. Yu L, Li S, Liu S, Pan W, Xu Q, Zhang L. The positivity effect of ambiguous facial expressions recognition and its mechanism in older adults. Psychol Dev Educ. 2024;40(2):196–206. doi:10.16187/j.cnki.issn1001-4918.2024.02.06

26. Ma J, Yang B, Luo R, Ding X. Development of a facial-expression database of Chinese Han, Hui and Tibetan people. Int J Psychol. 2020;55(3):456–464. doi:10.1002/ijop.12602

27. Brunsdon R, Coltheart M, Nickels L, Joy P. Developmental prosopagnosia: a case analysis and treatment study. Cogn Neuropsychol. 2006;23(6):822–840. doi:10.1080/02643290500441841

28. Yitzhak N, Pertzov Y, Aviezer H. The elusive link between eye-movement patterns and facial expression recognition. Soc Personal Psychol Compass. 2021;15(7):e12621. doi:10.1111/spc3.12621

29. Ma J, Yang B, Li Y. The left side of the face may be fixated on more often than the right side: visual lateralization in recognizing own- and other-race faces. Heliyon. 2022;8(12):e11934. doi:10.1016/j.heliyon.2022.e11934

30. Cohn JF, Ambadar Z, Ekman P. Observer-based measurement of facial expression with the facial action coding system. In: The Handbook of Emotion Elicitation and Assessment. 2007:203–221.
