History and development of facial recognition

Facial recognition is the process by which the brain recognizes, understands and interprets the human face (Face Recognition, n.d.). The face is essential for the identification of others and conveys significant social information, such as intention, attentiveness, and communication. Goldstein (1983) (as cited in Chung & Thomson, 1995) stated that, “The face is the most important visual stimulus in our lives probably from the first few hours after birth, definitely after the first few weeks”. The loss of the ability to recognize faces, as in prosopagnosia, greatly affects an individual’s life. The primary focus of this review is to provide an overview of the development of facial recognition, gender and age differences, facial identity and expression, memory, prosopagnosia, and hemispheric advantages in facial recognition. It is also my intention to review past and contemporary theories of the development and understanding of facial recognition.

The Birth of Facial Recognition


The human face has sparked interest across disciplines in the arts and sciences for centuries (Darwin, 1872, as cited in Nelson, 2001). This fascination with the human face may reflect the psychological significance of the face and of recognizing other faces. Cognitive psychologists, neuroscientists and developmental psychologists are interested in facial recognition because of evidence that faces are perceived differently than other patterned objects, that the ability is controlled by a distinct neural circuit, and that faces provide an early means of communication between infants and caretakers. Despite this wide-ranging and continued interest, it remains unclear how facial recognition becomes specialized and which neurological systems are involved in the developmental process (Nelson, 2001).

The amount of research using faces as stimuli has increased dramatically over the past decades (Chung & Thomson, 1995). This may be a result of a shift in cognitive studies from fragmented verbal materials to more meaningful nonverbal memory. It is also noteworthy that the majority of research on facial recognition has focused on infants and adults, giving little attention to the developmental changes of childhood (two through five years of age).

Studies of Development
Studies in Newborns

In the early stages of facial recognition research (the 1960s) there were contrasting results as to whether newborns had any preference for faces over other patterned stimuli. Over the next few decades of research, the view that newborns are capable of recognizing faces and discriminating between their mothers and unfamiliar faces gained support (Nelson, 2001). Although findings suggest that newborns can distinguish between faces and may show preferences, evidence for the ability to recognize faces earlier than 1 to 2 months of age is extremely weak and not regularly replicated. Newborns possess poor visual acuity and contrast sensitivity, and cannot resolve the high spatial frequencies that make up the fine details of faces (de Schonen & Mathivet, 1990; Simion et al., 1998, as cited in Nelson, 2001). Another criticism of newborn studies is that they have used schematized stimuli (eye sockets and openings for a mouth and nose serving as a model of a real face), which calls into question the validity of such stimuli as stand-ins for real faces.

In more recent work, Gava, Valenza, Turati and de Schonen (2008) found evidence that newborns may have the ability to detect and recognize partially occluded faces. They believe their findings highlight the importance of the eyes in newborns’ facial detection and recognition. Newborns detected faces even when some low-information portions were missing from the face. The only exception was the eyes: once the eyes were removed, detection and recognition of the stimuli were impaired, in newborns and adults alike. The findings of the study were in line with Morton and Johnson’s structural hypothesis (Gava, Valenza, Turati & de Schonen, 2008), which states that “faces are special for newborns because human infants possess a device that contains structural information concerning the visual characteristics of conspecifics; hiding the eyes implies that the typical face pattern (three high contrast blobs in the correct positions of the eyes and the mouth) would be disrupted”.

Gava, Valenza, Turati and de Schonen (2008) offer two hypotheses for how newborns discriminated the non-obstructed from the obstructed faces. On the first, “newborns might have filled in the partly hidden surface, thus perceiving the obstructed stimulus as connected behind the obstructers”; on the second, they “might have simply perceived only what is immediately visible of the obstructed face”. The results do not settle which perceptual operations underlie newborns’ ability to detect and recognize occluded faces. Nonetheless, they demonstrate that the salience of the occluded information strongly affects newborns’ competence with it.

Both past and present literature shows a difference of opinion when it comes to newborns and facial recognition. In recent literature the main consensus is that newborns can recognize faces, but the perceptual operations underlying their ability to detect and recognize faces are still unknown.

Studies in Infants

In 1972, Fagan (as cited in Nelson, 2001) demonstrated that infants around 4 months old recognize upright faces far better than upside-down faces. This finding suggests that by around 4 months infants have developed a face schema and treat faces as a special class of stimuli (Nelson, 2001). Infants between the ages of 3 and 7 months can distinguish their mothers from strangers and recognize faces by gender and facial expression. These findings demonstrate development over the first 6 months, in which infants come not only to identify but also to discriminate faces.

Carlsson, Lagercrantz, Olson, Printz & Bartocci (2008) measured the cortical response in the right fronto-temporal and right occipital areas of healthy 6- to 9-month-old infants shown an image of their mother’s face and that of an unknown face. A double-channel NIRS (near-infrared spectroscopy) device monitored concentration changes of oxygenated and deoxygenated hemoglobin. Mothers were asked not to talk to their children during the trials. The children were exposed to four visual stimuli in sequence: a grey background, a photograph of the mother, a second grey background and a photograph of an unknown female face. Eight children (Group A) were shown the picture of their mother before that of the unknown female face; the 11 children of Group B were shown the pictures in the reverse order. Each stimulus lasted 15 seconds.

The results showed that in Group A (mother’s image first) the mother’s face elicited an increase in the right fronto-temporal area that was statistically different from the response to the unknown image. In Group B (unknown female’s face first) there was an insignificant increase in cortical response in the right fronto-temporal area to the unknown female, which spiked when the maternal facial image was presented. Overall, there was a greater increase in the right fronto-temporal region for the picture of the mother than for the unknown female’s photo. This hemoglobin change is most likely due to a discriminatory and recognition process.

In addition to the right fronto-temporal region, the stimuli also activated the right occipitotemporal pathway, part of the right prefrontal cortex, the right medial temporal lobe and the right fusiform area, all of which have been identified as specific target areas involved in face recognition. Activation in the right occipitotemporal pathway is suspected to accurately reflect viewing of the mother’s facial image. Difficulties in face recognition among infants born prematurely may be caused by a change or delay in the development of this pathway. The results show that the connectivity between the occipital cortex and the right prefrontal area is present and functional at 6 to 9 months of age. These findings are extremely valuable for understanding the developmental mechanisms of infant social adaptation.

Studies in Children

Accuracy in facial recognition very likely increases with age, but the evidence for the processes underlying these age differences is less certain. One technique has been to show inverted pictures of faces to both adults and children; inversion disproportionately impairs the recognition of faces more than that of other objects (Tanaka, Kay, Grinnell, Stansfield & Szechter, 1998). Carey and Diamond (1977) found that children aged 8 and 10 years, like adults, recognized a face more accurately in the upright position than in the inverted position. However, 6-year-olds recognized inverted faces as well as upright faces. These findings led to the hypothesis that children around age 6 use a “featural encoding strategy” for processing faces. This is the “encoding switch hypothesis”: children 6 and under encode upright faces according to features such as the nose, mouth and eyes, and at around 8 to 10 years they begin to process faces holistically.

In a second experiment testing their encoding hypothesis, Carey and Diamond (1977) found that 6-year-olds were misled more by changes in clothing, hairstyle, eyeglasses and facial expression than 8- and 10-year-olds. These results suggest that younger children process faces by their parts until about the age of 10, when they switch to a holistic approach.

Carey and Diamond were criticized by Flin, who believed their results reflected the difficulty of the task for 6-year-olds, whose poor overall performance might have obscured possible inversion effects. Flin (1985) (as cited in Tanaka, Kay, Grinnell, Stansfield & Szechter, 1998) found that the 6-year-olds’ recognition was below that of the older age groups overall, and argued that there is little evidence to support the encoding switch hypothesis once age-related performance differences are taken into account.

In more recent research, Tanaka, Kay, Grinnell, Stansfield & Szechter (1998) noted that although face inversion may reveal performance differences, it provides little insight into the cognitive operations behind them. Tanaka reasoned that if upright faces are encoded holistically, a whole-face test item should serve as a better retrieval cue than isolated-part test items, and if inverted faces are encoded only in terms of their parts, there should be no difference between the isolated-part and whole-face test conditions. Across three experiments, their findings failed to support the predictions of Carey and Diamond’s (1977) encoding switch hypothesis. If young children relied on featural information to encode faces, their part and whole performances should differ from those of older children, which was not found. Their results suggest that by age 6 children already use a holistic approach to facial recognition, and that this approach remains relatively stable from ages 6 to 10.

Research by Baenninger (1994) and Carey & Diamond (1994) (as cited in Tanaka, Kay, Grinnell, Stansfield & Szechter, 1998) also supports the idea that children do not encode faces based on features and then switch to a more configural encoding strategy, but instead encode normal faces holistically from the beginning. In fact, Carey and Diamond (1994) suggest that the Age x Inversion interaction may be attributed to a norm-based coding scheme (relational properties of the face encoded relative to the norm face in the population), which may explain how experimental factors change the absolute level of holistic processing. Norm-based coding predicts that facial recognition improves as one ages, whereas a purely holistic account predicts that it should remain constant. The inversion task used by Carey and Diamond (1977, 1994) eliminated this advantage by blocking norm-based encoding of relational properties, which could account for the lack of evidence for the holistic model. That configural and featural information are encoded together in a single process supports the holistic approach to face recognition (Tanaka, Kay, Grinnell, Stansfield & Szechter, 1998).


Prosopagnosia

A large amount of facial recognition research comes from the assessment of patients with prosopagnosia. Prosopagnosia is “[a] visual agnosia that is largely restricted to face recognition, but leaves intact recognition of personal identity from other identifying cues, such as voices and names” (Calder & Young, 2005). Regardless of whom they are looking at, patients’ face recognition can be severely impaired. Patients typically recognize people by paraphernalia (a voice, or distinct features such as a mole). Patients often cannot distinguish men from women, though hair length is a good retrieval cue for recognition. Areas related to prosopagnosia have been found in the left frontal lobe, the bilateral occipital lobes, the bilateral parieto-occipital regions, and the parieto-temporo-occipital junction (Ellis, 1975). Several areas of damage can produce the specific deficit, but most occur in the right hemisphere.

Gloning et al. (1970) (as cited in Ellis, 1975) found that patients commonly exhibit symptoms of other agnosias, such as foods looking the same, difficulty identifying animals, and an inability to locate themselves in space and time. Other, less common deficits include visual field defects, constructional apraxia, dressing dyspraxia, and metamorphopsia (Ellis, 1975).

The symptoms associated with identifying faces are described as overall blurring, difficulty interpreting shades and forms, and an inability to infer emotions from the face. Gloning et al. (1966) (as cited in Ellis, 1975) report that some patients have the most difficulty with the eye region while others find the eyes the easiest to recognize. Whatever the symptoms, an interesting aspect of prosopagnosia is that patients can always detect a face but are unable to recognize it. This suggests a two-part process in facial recognition: faces are first detected, and then undergo further analysis in which information such as age and sex is extracted and compared against long-term memory.

Comparing the left and right posterior hemispheres, Yin (1970) (as cited in Ellis, 1975) found that patients with damage on the right side were poorer at face memory tasks than those with left-side damage, and suggested that such visual categories may all be difficult to recognize because they have a high degree of inter-item similarity. De Renzi & Spinnler (1966) (as cited in Young, 2001) found similar evidence: patients with right-hemisphere damage were worse at recognizing faces and other abstract figures than those with left-hemisphere damage. These findings led them to believe that right-hemisphere damage limits high-level integration of visual data, and to the hypothesis that prosopagnosia patients have lost the ability to recognize the individual members of categories whose items are similar in appearance (Young, 2001).

The finding of covert recognition (Bauer, 1984, as cited in Ellis, Lewis, Moselhy & Young, 2000) strengthened the case for prosopagnosia as a domain-specific impairment of facial memory, showing parallels to priming effects. Bauer tested his patient LF by measuring skin conductance while LF viewed a familiar face and listened to a list of five names. Skin conductance was greater when the name belonged to the face LF was looking at; yet when asked to choose the correct name for the face, LF was unable to do so. These results showed a significant dissociation between the inability to overtly identify the face and the raised skin conductance of covert recognition.

Bauer proposed two routes for the recognition of faces, both beginning in the visual cortex and ending in the limbic system, but each taking a different pathway (Bauer, 1984, as cited in Ellis, Lewis, Moselhy & Young, 2000). Although Bauer’s neurological hypothesis was dismissed shortly after, his psychological hypothesis of a separation between overt recognition and orienting responses has been generally accepted (Ellis, Lewis, Moselhy & Young, 2000).

Models of Facial Recognition
Bruce & Young Functional Model

Bruce and Young (1986) proposed a functional model in which structural codes for faces are stored in memory and then connected with the identity and name of the matching face. The model mainly addresses how individuals recognize familiar faces, and it remains one of the most influential models of face recognition. It is outlined in a box-and-arrow format, in which face recognition proceeds in stages. In the first stage, structural encoding, visual information from a face is converted into representations usable by the later stages of the face recognition system. Structural encoding comprises two processes, ‘view-centred descriptions’ and ‘expression-independent descriptions’, arranged serially: the expression-independent descriptions take their input from the view-centred descriptions. Together they allow facial features to be identified when viewed from various angles.

The next stages run in parallel after structural encoding. The ‘expression analysis’ stage takes its input from the view-centred descriptions, allowing facial expression to be analyzed. Next is ‘facial speech analysis’, which supports speech perception from lip movements. The last branch, ‘directed visual processing’, targets more general facial processing such as distinguishing between faces. These parallel processes take input from both structural encoding processes, and all four feed into the general cognitive system through bidirectional links, receiving some input back from it (Bruce & Young, 1986).

The last three stages of Bruce and Young’s (1986) model are recognition, identification and naming. The recognition stage involves face recognition units (FRUs), individual nodes associated with familiar faces. When facial features are detected, nodes are activated and fed into the FRU system; whichever node reaches the threshold activation level corresponds to the face being observed, which is then recognized. The face recognition units interact with person identity nodes (PINs). FRUs and PINs share input bidirectionally, so activation of a person’s PIN can create some activation in the corresponding FRU, speeding recognition of that face. Last is the name generation process. Both the PINs and name retrieval interact with the cognitive system, but only the PINs have a two-way interaction; the name retrieval process solely sends input to the cognitive system.

IAC Model

Burton, Bruce and Johnston’s (1990) adaptation of McClelland’s Interactive Activation and Competition (IAC) model of concept learning is a basic connectionist model consisting of pools of simple processing units. The model aims to explain repetition priming, associative priming, distinctiveness and face naming. All units within a pool inhibit each other, while excitatory links connect individual units across different pools; activation passes along these links, all of which are bidirectional. Each FRU is paired with a known face, and any form of recognition activates the appropriate FRU. The second level of classification occurs at the Person Identity Nodes (PINs), where one unit is paired with each known person.

Familiarity is signaled when any PIN reaches a common activation threshold. This implies a single decision mechanism for all person familiarity judgments, whether prompted by faces or by other kinds of information. The third level of classification is the pool of Semantic Information Units (SIUs), where information about known individuals is coded as links between a person’s PIN and SIUs. The fourth level is a pool of units labeled “lexical output”, which captures the first stage of the processes involved in speech and other output modalities. The fifth and final level is a pool of Word Recognition Units (WRUs), among which name codes link directly to a pool of Name Recognition Units (NRUs). Finally, all Word Recognition Units connect directly to the lexical output units, so the model contains the elements of a “dual route” model of reading.
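The core dynamics described above (within-pool inhibition, bidirectional cross-pool excitation, decay toward rest, and a familiarity threshold at the PIN level) can be sketched in a few lines of code. The following is a minimal illustration, not the published model: it simulates only the FRU and PIN pools for three hypothetical known faces, and the pool size, weights, decay rate and threshold are all assumed values chosen for demonstration.

```python
import numpy as np

N = 3            # three hypothetical known people (assumed pool size)
ALPHA = 0.1      # cross-pool excitation strength (assumed)
BETA = 0.1       # within-pool inhibition strength (assumed)
DECAY = 0.05     # decay toward the resting activation of 0 (assumed)
THRESHOLD = 0.5  # a PIN crossing this signals familiarity (assumed)

def step(fru, pin, face_input):
    """One synchronous update of the FRU and PIN pools."""
    def update(act, net):
        # IAC-style update: positive net input pushes activation toward
        # the ceiling (+1), negative net input toward the floor (-1),
        # and every unit decays toward its resting level of 0.
        new = act + np.where(net > 0, net * (1 - act), net * (act + 1)) - DECAY * act
        return np.clip(new, -1.0, 1.0)

    # FRUs are excited by external face evidence and by PIN feedback
    # (links are bidirectional); each unit is inhibited by the rest of
    # its own pool (sum of the pool minus the unit itself).
    fru_net = ALPHA * (face_input + pin) - BETA * (fru.sum() - fru)
    pin_net = ALPHA * fru - BETA * (pin.sum() - pin)
    return update(fru, fru_net), update(pin, pin_net)

def recognize(face_input, n_steps=100):
    """Run the network to (approximate) equilibrium; return PIN activations."""
    fru = np.zeros(N)
    pin = np.zeros(N)
    for _ in range(n_steps):
        fru, pin = step(fru, pin, face_input)
    return pin

# Present clear evidence for face 0: its PIN should win the competition
# and cross the familiarity threshold, while the others are suppressed.
pins = recognize(np.array([1.0, 0.0, 0.0]))
print("PIN activations:", pins.round(2))
```

Running the sketch, the PIN paired with the presented face rises above the threshold while within-pool inhibition drives the competing PINs below rest, which is the sense in which a single threshold mechanism can serve all familiarity decisions.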

The IAC model differs from the functional model in that FRUs themselves signal face familiarity, PINs are modality-free gateways to semantic information, and the details and spread of activation are more clearly specified. The model has had success in simulating phenomena such as the relative timing of familiarity, repetition priming, and semantic and cross-modal semantic priming. Both the Bruce & Young (1986) and Burton, Bruce and Johnston (1990) models show how activation levels drive recognition processes, helping us theorize what is happening in the mind as we analyze and recognize facial features and faces as wholes. A central idea of both is that facial identity and expression are recognized by functionally and neurologically independent systems. These models have spurred great advances in facial recognition research.

Memory Load on Facial Recognition

Memory in facial recognition has received limited research attention, which is surprising considering its importance to understanding facial recognition and its potential impact on research design. Goldstein and Chance (1981) (as cited in Lamont, Williams & Podd, 2005) identified two critical variables that have received little attention in laboratory settings: memory load, the number of faces shown in the study phase, and delay, the interval between the study and recognition phases.

Researchers have found that increasing age is associated with a decline in facial recognition ability, although which variables interact with age is still unknown, and evidence on whether the age of the face itself affects elderly participants remains mixed. Shapiro & Penrod (1986) (as cited in Lamont, Williams & Podd, 2005) showed that as memory load increases, face recognition performance decreases.

Given the limited research on the subject, Podd (1990) examined the possible effects of memory load and delay on facial recognition. Subjects were tested in small groups and asked to look carefully at a series of faces they would be asked to identify later; in the recognition phase, they had to discriminate faces they had seen previously from faces they had not.

The results showed that increases in both memory load and delay were associated with decreases in recognition accuracy. Podd suggests that increased memory load decreases accuracy by decreasing the proportion of targets correctly identified, while delay decreases accuracy by increasing the likelihood that a distractor will be called a target. The more similar a target is to a distractor, the fewer attributes there are for differentiating between them.

In more current literature, Lamont, Williams & Podd (2005) tested both aging effects and memory load on face recognition, examining two interacting variables: the age of the target face and memory load. They asked whether memory load has a greater impact on the elderly than on younger individuals. They also examined recognition load, the total number of target and distractor faces seen in the recognition phase, with the main objective of determining whether the effects of memory load could be teased apart from those of recognition load.

As expected, they found that older age was associated with a decrease in facial recognition accuracy. Surprisingly, older people showed a decrease in accuracy for younger faces but not for older faces. This was inconsistent with past research, which found that recognition accuracy in younger groups was higher for younger faces than for older faces; the current study showed the opposite. One possible explanation is that with increasing age, memories of facial features fade more quickly; with longer retention intervals there is more time for memories of the target to fade, and the least salient features fade fastest (Podd, 1990). The authors believe the elderly have fewer distinctive facial features available in memory for making the judgment, which increases judgment time. These findings are also consistent with Podd’s earlier work (1990) showing that increased memory load is associated with a reliable decrease in recognition accuracy; here, recognition load produced the decrease, independent of age.

Another important finding is that recognition load is the true source of the association between increased memory load and decreased face recognition. Lamont, Williams & Podd (2005) state that “[f]ew studies dealing with memory load have taken account of this potential confound, and our results challenge the interpretation of all such research”. Crook & Larrabee (1992) (as cited in Lamont, Williams & Podd, 2005) suggest that the study’s implications are of considerable value to future research, since some authors do not report the age of their target faces; the results are therefore crucial for the proper interpretation of facial recognition research.

Sex Differences & Hemispheric Advantages in Facial Processing

Extensive research has examined hemispheric advantages in facial recognition, but little has been concluded because the evidence is contradictory. Patterson and Bradshaw (1975) (as cited in Turkewitz & Ross, 1984) found that when drawings of faces varied by only one feature, participants showed a left-hemisphere advantage; when all features varied, there was a right-hemisphere advantage. Prior studies have shown that hemispheric advantages are contingent on the conditions used, and even when conditions are held constant, conflicting results emerge, with individuals showing both right- and left-hemisphere advantages. Ross and Turkewitz (1981) (as cited in Turkewitz & Ross, 1984) found that hemispheric advantages were associated with the information-processing strategy used by the participant. Those with a right-hemisphere advantage showed a decline when faces were inverted, whereas those with a left-hemisphere advantage showed a decline when selected facial features were omitted. They suggest these results show that those with a right-hemisphere advantage recognize faces by gestalt (whole) qualities, while those with a left-hemisphere advantage recognize faces by more individual and distinctive features.

Turkewitz and Ross (1984) investigated age-related changes in hemispheric advantages in the recognition of presented faces, asking whether a dual mode of right-hemisphere processing exists and whether it is associated with differences of age and gender. The participants were students aged 8, 11 and 13 years. Participants were seated in front of a screen on which facial stimuli were presented, and on each trial pointed on a response sheet to the face that had been presented.

The data suggest that there are age- and gender-related differences in the hemispheric advantages shown when identifying unfamiliar faces. The findings also support the hypothesis of processing stages, with different hemispheric advantages associated with different stages. Both adults and older girls exhibited a right-hemisphere advantage, suggesting an age-related shift toward responding to the undifferentiated and global characteristics of faces. Younger girls showed no advantage, which suggests they use right- and left-hemisphere strategies equally well. Older girls thus appear to use more advanced and integrated right-hemisphere modes of functioning, which tend to be more effective in facial recognition.

Everhart, Shucard, Quatrin & Shucard (2001) tested 35 prepubertal children on facial recognition and facial affect processing. They sought to replicate previous findings that males show higher levels of activation in the right hemisphere while females tend to show higher levels in the left, to see whether this difference develops before puberty as in adults, and to determine whether gender-related differences would be present in cortical processing during face recognition. Auditory probes were used to gather ERPs during a facial recognition memory task, and a facial identification task provided data on the matching and recognition of facial affect, reaction time and accuracy.

Their results showed that boys had greater ERP amplitude in the right hemisphere, whereas girls showed greater activation in the left hemisphere. The findings also suggest that boys may process faces at a global level, associated with the right hemisphere, while girls may process faces at a more local level, associated with the left. The authors note potential clinical implications: if boys use more right-hemisphere resources and girls more left-hemisphere resources, sex-related differences should be evident following lesions to the right hemisphere, suggesting that males may be at greater risk of prosopagnosia.


Conclusion

Facial recognition has interested humans for centuries. Although all the evidence on the subject is useful and important, I have selected the findings I believe to be the most important. Based on the research on the development of facial recognition, we can conclude that humans can identify faces from the newborn period through adulthood, and that by the age of 6 months infants can discriminate between faces. It has also been found that children do not encode faces based on features and then switch to a more configural model, but rather encode faces holistically from the start. Other aspects examined were prosopagnosia and models of face recognition; some of the most important research on facial recognition comes from comparing prosopagnosia patients to normal adults. The last two topics examined in this review were memory load and hemispheric advantages, which help us understand where facial information is processed and how our memory stores faces. Although the neural basis of facial recognition has been narrowed down to specific areas and pathways of the brain, further research must be done to establish how it truly works. Overall, facial recognition remains a field in which much has been learned and much is still to be discovered.
