The Development of Multisensory Integration in Humans

Understanding how multisensory integration develops in children, and how it helps them interpret ambiguous information in the environment, is now a question at the forefront of science. This study examined and compared three age groups of children at a local primary school: 4-5 year olds attending Reception, 6-7 year olds attending Year 2, and 8-9 year olds attending Year 4. The aim was to see whether they benefit from multisensory information (e.g. the integration of sound and vision) when disambiguating ambiguous figures, that is, figures that have more than one interpretation, presented in an experiment. The results revealed that children in Year 4 (8-9 years old) showed a reaction time advantage across congruent trials (where the sound helped participants see the interpretation of the ambiguous figure facing the direction of the target), incongruent trials (where the ambiguous figure was presented with a simultaneous acoustic cue that was incongruent with the subsequent target), and neutral trials (where the auditory cue was unrelated to the ambiguous figure). These results indicate that increasing age has an important, positive impact on the processing of multisensory information, enhancing children's ability to recognise ambiguous figures more effectively.

Key words: Multisensory integration, Ambiguous figures, Visual attention.

Introduction:
The development of multisensory integration in human beings:

Humans and animals are equipped with a sophisticated multisensory system which enhances their understanding of the environment they live in (Stein et al., 1996; Gillmeister & Eimer, 2007) and allows the integration of information across the senses: touch, hearing, vision, smell, taste and self-motion. These senses not only provide information about our surroundings individually (e.g. hearing, seeing), but also support more complex understanding that cannot always be achieved through a single modality and instead requires multimodal integration. Multisensory, or multimodal, integration refers to the idea that the senses interact with one another to provide a coherent representation of objects, events and situations, promoting better understanding of our perceptual environment. We tend to recognise an object or event better when it is represented through more than one modality (Gondal et al., 2005; Molholm, Ritter, Murray, Javitt, Schroeder & Foxe, 2002). Scientists and psychologists have studied how the senses combine to help us make sense of the complexity of our environment for centuries. In the early years the senses were studied independently (e.g. Berkeley, 1709; Locke, 1690). In the 1980s scientists began to study in depth how the senses interact at the level of the single neuron, and recent research has contributed substantially to our understanding of the processes involved in multisensory integration (e.g. Campbell, 1987; Stein & Meredith, 1994; Naumer & Kaisar, 2010). New and improved methods such as functional imaging and transcranial magnetic stimulation have enabled us to better grasp the underlying processes of multisensory integration in the human brain, and researchers are now applying these methodologies to questions about the development of multisensory integration at the neural level (Wallace, Meredith & Stein, 1998). Empirical studies have shown how multisensory stimuli benefit adults, but there is a gap in the research regarding when and how this ability develops in children. To date there has been no research exploring the role of multisensory information in recognising ambiguous figures in children.

As human adults, our multisensory system integrates signals from the various senses into unified functional representations. Electrophysiological, behavioural and neuroimaging studies have made it evident that signals from different senses that relate to the same event and are congruent in time and space are encoded far more accurately and effectively than individual signals. Ernst and Banks (2002) and Alais and Burr (2004) have suggested that human adults integrate sensory information in a statistically optimal manner. A fundamental question is whether optimal multimodal integration is present in children at birth or develops during childhood, and when children start to use multimodal integration to understand their ambiguous environment. It is also important to learn whether early multisensory development could benefit the developing brain.
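
To make "statistically optimal" concrete, the standard maximum-likelihood cue-combination account that these studies test (stated here for reference, not as part of the present experiment) says that if the auditory and visual estimates of the same property are unbiased with variances σ_A² and σ_V², the optimal combined estimate is a weighted average,

    Ŝ_AV = w_A·Ŝ_A + w_V·Ŝ_V,   with   w_A = (1/σ_A²) / (1/σ_A² + 1/σ_V²)   and   w_V = 1 − w_A,

and the variance of the combined estimate,

    σ_AV² = (σ_A² · σ_V²) / (σ_A² + σ_V²),

is never larger than that of either single modality. This reduction in uncertainty is what is meant by the multisensory advantage in adults.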

The human sensory system is immature at birth and is substantially refined as it develops. Paus (2005) pointed out that the mapping between sensory and motor representations in the brain is updated frequently, and that this is a continuous process in which neural reorganisation and cognitive change occur up until early adolescence. Neil et al. (2006) and Barutchu, Danaher et al. (2009) pointed out that if adults benefit from multisensory inputs, children would naturally be expected to benefit as well. Numerous behavioural studies report that human infants can identify relationships between various multisensory inputs (Bahrick and Lickliter 2000, 2004; Bahrick et al. 2002; Lewkowicz 1988a, 1996; Neil et al. 2006). Research has shown that multisensory binding is formed very early in development (Kuhl and Meltzoff 1982). During gestation, at around 6-7 months, touching the foetus's lips elicits a reflexive response (Humphrey 1964). Streri and Gentaz (2004) suggested that even though infants are able to transfer multisensory information across the senses at birth, the advantage of multisensory integration is generally not observed until after birth (Gogate and Bahrick, 1998; Hollich et al., 2005; Bahrick et al., 2002; Walker-Andrews, 1997). At the age of 8 months an infant shows multisensory facilitation of reflexive head and eye movements during spatial localisation, which is consistent with co-activation models (Lewkowicz & Shimojo, 2006). Patterson and Werker (2003), using a preferential looking paradigm with 2-month-old infants, observed that infants were able to match voices with faces, showing that they integrate at least some multisensory information. Lewkowicz (1992) studied the development of multisensory processing in infants of 4, 6, 8 and 10 months of age by presenting audiovisual stimuli (e.g. an object bouncing on a monitor); the results revealed that infants were sensitive to the temporal associations between the visual and auditory stimuli.

Processes involving multisensory facilitation tend to develop with postnatal experience in humans and other species (Jaime & Lickliter, 2006; Lickliter et al., 2006; Wallace & Stein, 1997; Wallace & Stein, 2001). On the other hand, studies using the McGurk effect have shown that speech perception is not influenced by vision as much in infants or young primary-school-age children as it is in adults (Massaro, 1984; McGurk & MacDonald, 1976). The leading question is when children start combining multisensory information to understand their complex environment. Two classical theories shed light on this area. The first, the developmental integration view, states that in newborns the ability to perceive multisensory coherence develops gradually through the child's exploratory experience of the world (Piaget 1952). The second, the developmental differentiation view, states that some multisensory perceptual abilities are present in infants at birth, while other more complex abilities emerge later in life through perceptual learning (Gibson 1969, 1984). Recent research has provided evidence that "neural and behavioural limitations and the relative experience play a central role in the typical development of multisensory processing" (Walker 1997).

A further complexity in humans is that the different senses develop at different rates. For example, the tactile, vestibular, chemical and auditory senses begin to function before birth, with vision developing last (Gottlieb 1971). These differential developmental rates, together with ongoing physical changes such as eye length, interocular distance and growing limbs, add to the challenge of calibration and cross-modal integration. Moreover, some perceptual skills do not develop early in life, for example auditory frequency discrimination (Olsho 1984; Olsho et al. 1988). Brown et al. (1987) suggested that projective size and shape are not understood until children are about 7 years of age, and research has shown that contrast sensitivity and visual acuity continue to develop until 5-6 years of age. Rentschler et al. (2004) suggested that the understanding of object manipulation continues to develop until 8-14 years of age, and Morrongiello et al. (1994) suggested that tactile object recognition in sighted and blind children does not mature until 5-6 years of age. Various other, more complicated capacities that depend on experience, such as the facilitation of speech perception in noise, remain immature throughout childhood (Elliott 1979; Johnson 2000).

The developmental time frame over which audiovisual integration emerges in children is still unclear. Hearing and vision are two of the most important sensory modalities in humans, and audiovisual integration plays a vital role in many tasks, e.g. understanding speech in noisy environments or orienting towards a novel stimulus. The auditory system begins to develop before vision, but it is not certain when these two senses begin to integrate in humans. When auditory and visual stimuli are presented together, they can be perceived as the same unitary event or as two separate unimodal events (Radeau & Bertelson, 1977). The binding or segregation of unimodal stimuli depends on low-level structural factors (e.g. the temporal and spatial co-occurrence of the stimuli) as well as more cognitive factors (e.g. whether the stimuli are semantically congruent and whether the observer assumes that the two stimuli should go together). Numerous studies have shown that auditory stimuli can be mislocalised towards visual stimuli when the two are presented at the same time (Welch & Warren, 1980; Bertelson & de Gelder, 2004). It has been argued that when two or more sensory inputs are highly consistent, observers tend to treat them as a single audiovisual event (Welch & Warren, 1980; Jackson, 1953): they are more likely to assume that the inputs share a common spatiotemporal origin, and consequently more likely to bind them into a single multisensory event. The binding of a specific pair of visual and auditory stimuli depends on various factors. Spatiotemporal coincidence plays a vital role in different forms of audiovisual integration (Slutsky & Recanzone 2001; Zampini, Guest, Shore & Spence 2005), but research has also shown that there are exceptions (Vroomen & Keetels, 2006).

Neil et al. (2006) examined reflexive orienting in infants of 8-10 months and found that the infants showed a reaction time advantage for single visual or auditory cues over combined cues. By contrast, Barutchu et al. (2009), testing young children in a manual button-pressing task, revealed that most children do not show adult-like multisensory advantages until about the age of 7 years. It was proposed that these differences in the development of audiovisual integration reflect the possibility of "differential development of reflexive orienting, which depends on the superior colliculus and sensory decision making, is dependent on cortical integration of sensory evidence". Barutchu et al. (2009) performed a similar study to examine the development of multisensory orienting and button pressing for the same audiovisual stimuli, recording the eye movements of children aged 4-13 years (N = 19) in response to auditory beeps and visual flashes presented at 20° eccentricity. The results showed that the total mean "AV saccadic latencies were significantly shorter than either Audio or Video", with a trend towards shorter audiovisual latencies than those predicted by statistical facilitation (Miller, 1982). These results show that children as young as 4 years old, when examined in a saccadic orienting task, are capable of showing a reaction time advantage consistent with cue integration, and that this ability depends on the early development of subcortical multisensory processing (Wallace & Stein, 1997).
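
For reference (this benchmark comes from Miller, 1982, and is not an analysis carried out in the present study), the race model inequality states that if audiovisual facilitation arose purely from statistical facilitation, i.e. a race between the two modalities, then for every response time t the cumulative reaction time distributions would have to satisfy

    P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t).

Audiovisual latencies that violate this bound are taken as evidence of genuine cue integration rather than a mere race between independent unimodal processes.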

Research on children in later childhood, around 6 years and older, has shown the influence of multisensory information on speech percepts, balance and size judgements (Gori et al., 2008). In the brain, multisensory integration occurs at various levels, involving "sub cortical areas like the superior colliculus, early cortical areas like the primary auditory and visual cortices and higher cortical areas like the superior temporal sulcus and intraparietal areas". For example, the freezing effect (Vroomen & de Gelder, 2000) and the pip-and-pop effect (Van der Burg et al., 2008), in which auditory temporal information is needed to form illusory visual onsets, tend to arise in the primary visual areas, whereas illusions such as the McGurk effect (McGurk & MacDonald, 1976) arise in higher cortical areas owing to the complexity of the information involved. The brain areas that facilitate audiovisual integration in humans are shown in Figure 1.


Figure 1: Brain areas involved in audiovisual attention.

Studying the development of the sensory system and multimodal integration matters because it plays an important role in cognitive processes. Numerous anecdotal reports from clinicians and parents state that a significant proportion of children and adults with autism spectrum disorder show sensory impairments and atypicalities (Cesaroni & Graber, 1991; Grandin 1992; O'Neill & Jones 1997). In the 1970s scientists devoted a large amount of research to sensory processing while exploring ASD and found evidence of impaired sensory modulation (Stroh & Buick, 1964); this work provided initial support for anecdotal and clinical reports of problems in multisensory integration among individuals with ASD. Multisensory processes support children in numerous cognitive processes that are important for learning. Fifer et al. (2011) tested the link between background auditory noise, multisensory integration and children's general cognitive abilities. Eighty-eight children with a mean age of 9 years and 7 months participated, and a simple audiovisual detection paradigm was used. The results showed that children with an enhanced ability for multisensory integration in both quiet and noisy conditions (about 45% of the sample) were more likely to score above average on the Full Scale IQ of the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV). Children with low verbal and non-verbal ability showed reduced multisensory integration in either the quiet or the noisy condition, and about 20% showed better multisensory integration when background noise was present. These findings provide evidence that consistent multisensory integration in quiet and noisy conditions is in some way related to the development of general cognitive abilities.

Ambiguous figure recognition:

Ambiguous figures are figures that can be perceived in more than one way. Over the past decades the phenomenon of ambiguous figure reversal has been studied meticulously by psychologists. The earliest examples of pictorial ambiguity may be as old as prehistoric cave art (Melcher & Wade, 2006). A famous example is the Necker cube, introduced by the Swiss naturalist Necker (1832); other ambiguous figures followed, e.g. the duck/rabbit (Jastrow 1900) and the vase/faces figure (Rubin 1958). Psychologists have been very interested in ambiguous figures because they provide insight into cognitive and sensory processing through the visual system. Toppino (2004) conducted a thorough review of ambiguous figure research and stated that ambiguous figures open a wide window onto the fundamental mechanisms of visual processing, including sensory, cognitive, motor and physiological processes. Perceptual ambiguity is in fact the norm: particular features of an object, for example its distance or size, cannot be recovered from retinal input alone. Our experience drives our perception and informs us about the environment we live in; in other words, knowledge derived from past experience helps us disambiguate percepts. These experiences can be visual or involve other senses, for example taste, smell, hearing, temperature or pain (Gregory, 1966).

Historically there have been two main theories of how the alternative representations of bistable figures reverse: satiation theory and cognitive theory (Toppino et al., 2005). Satiation theory states that switching between the two representations of an ambiguous figure happens through a process analogous to the neural fatigue that produces colour after-effects (Kohler, 1940; Long & Toppino, 1981). When participants stare at a green patch and then shift their gaze to a white patch, they eventually see red: staring at the green patch fatigues the neurons signalling green, and when attention shifts to the white patch the unfatigued neurons signalling red dominate. By the same logic, perceiving the duck in an ambiguous figure weakens the neurons representing the duck, after which the rabbit representation is perceived. Cognitive theory, in contrast, states that reversal of an ambiguous figure can only happen if the observer is consciously aware that the figure is ambiguous (Girgus, Rock, & Egatz, 1977; Rock & Mitchener, 1992; Rock, Gopnik, & Hall, 1994; Rock, Hall, & Davis, 1994). Satiation theory and cognitive theory map onto the bottom-up versus top-down processing debate. In a study by Girgus et al. (1977), high school students were shown ambiguous figures and were told beforehand that the figures were reversible, but not what the possible alternatives were; about half of the students made spontaneous reversals. In another study, by Rock and Mitchener (1992), about one third of participants were able to reverse spontaneously.

Cognitive development in children is a complex process. Martin J. Doherty and Marina C. Wimmer examined which cognitive processes and developments are necessary for children to experience reversal and understand ambiguous figures. One hundred and thirty-eight 3- to 5-year-old children participated in two studies testing the idea that an explicit understanding of ambiguity is needed to perceive bistable stimuli such as the duck/rabbit (Gopnik et al. 2001). In the first experiment a novel Production task measured the ability to recognise the ambiguity of the figures. Children found this task easier than the Droodle task; performance was at a similar level to, and significantly correlated with, the False Belief task. The second study replicated these findings and showed that perceiving the reversal of ambiguous figures was considerably more difficult than the Production or False Belief tasks. The results revealed that children only try to reverse the figures once they understand the representational relationship between the figure and its ambiguity. The process underlying reversal is difficult and most probably requires developments in areas such as executive functioning and imagery abilities.

Studies of ambiguous figure reversal have also been found useful as indicators of autistic traits in the wider population. A study by Best, Owens, Moffat, Power and Johnstone (2008) showed that adolescents' performance in reversing ambiguous figures predicted the likelihood of participants having characteristics of autism, poorer mental abilities and superior visuo-spatial attributes. Best et al. emphasised that ambiguous figure studies are an important paradigm for understanding autism; on the other hand, there is also evidence that autistic children who are initially unable to reverse ambiguous figures develop the ability to reverse later in life (Ropar et al., 2003). Capps, Gopnik and Sobel (2005) performed a study on young children to examine ambiguous figure perception and theory of mind. They observed that about one third of typically developing 5-9 year olds were able to reverse the ambiguous figures spontaneously, whereas autistic children did not reverse the figures as successfully. It is surprising that, given how important ambiguous figures and multisensory integration are for understanding cognition and visual processing, there has not been extensive research on children's understanding and perception of ambiguous figures. Gopnik, Rock and Hall (1994) studied ambiguous figure perception in children and suggested that figure reversal is much more complex than a low-level perceptual process. They found that even when children were informed of the ambiguity of the figures, 3 year olds still failed to reverse and only 50 percent of 4 year olds were able to reverse; the main result is that children under 5 are generally unable to reverse ambiguous figures (Gopnik and Rosati 2001; Rock, Gopnik and Hall 1994).

A long history of research with adult participants suggests that both bottom-up (lower-level) processing and higher-level cognitive (top-down) processes play a fundamental role in helping us disambiguate ambiguous figures. Top-down accounts suggest that there is voluntary control over the ability to reverse: knowing that we are dealing with an ambiguous figure that has more than one interpretation, and being willing to reverse it, are important elements. By contrast, the bottom-up processes that help us disambiguate ambiguous figures are associated with neural fatigue or satiation, as predicted by the Gestalt psychologists. Wimmer and colleagues (2005) performed four studies with 63 children aged 3, 4 and 5 years; the results showed that in young children the concept of a figure having more than one interpretation develops around the age of 4, but the perception of ambiguity develops around the age of 5.

The role of visual attention in processing multisensory information in humans:

Visual attention plays an important role in processing multisensory information, helping humans select information across the visual field. Genes are thought to be partly responsible for the development of the brain's attentional networks, but other factors, such as the particular experiences provided by caregivers and the culture we live in, also play a vital role. We attend to visual information in our surroundings simply by looking at various locations. The central portion of the retina, the fovea, has the highest acuity, so directing it at a location gives us an advantage when inspecting that location. There are two types of attention: overt and covert. Attending by looking directly at different locations, e.g. finding your motorcycle in the car park or your friend in a restaurant, is called overt attention and can easily be observed from a person's eye movements; the ability to attend to locations without moving the eyes is called covert attention. According to John Colombo (2001), "rudimentary forms of various attention functions are present at birth, but each of the functions exhibits different and apparently dissociable periods of postnatal change during the first years of life". Susan E. Bryson (2010) suggested that the ability to move attention around space effectively plays a vital role in our ever-changing world: "From very early in life, our ability to selectively orient or redirect attention allows us to connect with key others, to learn about and make sense of the world, and to regulate our emotional reactions".

Functional anatomy reveals that the orienting system is linked to areas of the parietal and frontal lobes. Posner (1980) showed that orienting can be studied by presenting a cue at the location where the participant's attention is to be drawn, giving the participant the opportunity to attend to the cued position either with or without moving their eyes. fMRI studies have shown that the superior parietal lobe is engaged in orienting after the presentation of a cue (Corbetta et al. 2000). The alerting mechanism is also associated with parietal and frontal regions: tasks requiring ongoing vigilance and sustained performance engage specific levels of alertness and activate parietal and frontal areas of the right hemisphere (Coull et al. 1996; Marrocco et al. 1994). Neuropsychological experiments in animals have shown that an unexpected sound can enhance the perceptual processing of subsequent visual stimuli, and recent studies (Nadia et al. 2002) have shown that this perceptual enhancement also exists in humans, with auditory stimuli enhancing performance in a visual detection task; the phenomenon can be explained in terms of cross-modal interaction effects. Michael Posner has conducted influential research on attention in humans and on the three attentional networks using the Attention Network Test (ANT), a flanker task that is an effective tool for testing voluntary and involuntary attention and for studying how the brain allocates attention to events (Fan et al. 2002; Posner and Peterson 1990). In this paradigm, subjects are asked to keep their eyes fixated on a point while flanking stimuli are presented to the right or left of the fixation point; Posner (1994) noted that flanking stimuli can be detected easily even while the eyes remain fixated on the cross-hairs.

In conclusion, multisensory facilitation starts at a very early age and continues to develop throughout childhood. Nardini et al. (2006) suggested that children combine auditory and visual information automatically and that this multimodal integration matures around the age of 9-10 years. One domain that remains unexplored is whether children can use combined auditory and visual information to disambiguate ambiguous figures, and how multisensory integration assists young children in doing so. This study therefore employed an experimental design similar to Posner's (1994) flanker task.

Understanding the role of multisensory integration in disambiguating ambiguous figures could be very useful for young children, and also for atypically developing children (e.g. those with autism spectrum disorder or dyslexia). I therefore proposed a study to examine what role multisensory integration, specifically audiovisual integration, plays in disambiguating ambiguous figures in young children. It was decided to investigate the possibility that participants' performance would be faster in congruent trials, where an ambiguous figure is shown with a simultaneous sound and the auditory cue is congruent with the subsequent target. For example, for an ambiguous figure showing a duck and a rabbit, the accompanying sound was a quack, representing the duck, which faces to the left, and the target (a star) then appeared on the left side as well. In incongruent trials the auditory cue was not congruent with the subsequent target, and in neutral trials the ambiguous figure was presented with an unrelated simultaneous sound (e.g. a motorcycle racing) before the target appeared on the left or right of the screen.

Method:
Participants:

After ethical approval was obtained from the Department of Psychological Sciences, Birkbeck, University of London, and consent was obtained from all parents, 45 healthy male and female children from a local primary school were recruited to participate in this experiment. Three participants (one girl from Reception, one boy from Year 2 and one girl from Year 4) did not complete the study, so their incomplete data were excluded. Six children with learning disabilities also took part, but their data were discarded for ethical reasons: testing children with disabilities was not an aim of this study, which was designed for typically developing children so that the results could be generalised to a healthy population. The experiment was performed with three age groups: the first group consisted of 4-5 year olds attending Reception, the second of 6-7 year olds attending Year 2, and the third of 8-9 year olds attending Year 4 at a local primary school. The study was completed in three separate early-morning sessions.

Stimuli:

The stimuli (ambiguous figures) were displayed on a laptop screen using an E-Prime programme developed by Dr Denis Mareschal. The ambiguous figures were black on a white background, as shown in Figure 2. The target was presented on either the right or the left side of the screen, and participants responded according to the side on which it appeared by pressing the corresponding right or left mouse button. On incongruent trials the target appeared on the side opposite to the direction cued by the sound, on congruent trials it appeared on the same side, and on neutral trials the ambiguous figure was accompanied by an unrelated sound. Participants viewed the screen from approximately 64 cm. The target (star) used in this study can be seen in Figure 3.

Figure 2: Ambiguous figures used in this study, each supporting more than one interpretation.

Figure 3: The target (star), presented on either the right or the left side of the ambiguous figure.

Design:

This novel experiment was based on the flanker task of Michael Posner (1994) and Eriksen and Eriksen (1974). A computer-based programme, "Ambiguous Figures", written in E-Prime (a commercial experiment-generation application) and running on Windows XP, was presented on a 12-inch monitor to study the role of multisensory information in disambiguating ambiguous (bistable) figures in children. Trials were divided into 6 blocks, each consisting of 45 trials: 15 congruent, 15 incongruent and 15 neutral. At the start, the programme asks for the session number, the participant's gender and his or her date of birth; once this information has been entered, the experimenter presses OK. Instructions then appear on the screen: "Look for the star"; participants click the right mouse button if the star is displayed on the right side of the screen and the left button if it is displayed on the left side of the screen.
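
The task itself was implemented in E-Prime; purely as an illustration of the design described above, the trial structure can be sketched in plain Python. The figure labels, cue mapping and scoring function below are assumptions made for this sketch, not the actual stimuli or code used in the experiment.

    import random

    # Illustrative sketch only: the real task ran in E-Prime.
    # Builds the 6-block design with 45 trials per block
    # (15 congruent, 15 incongruent, 15 neutral).
    CONDITIONS = ["congruent"] * 15 + ["incongruent"] * 15 + ["neutral"] * 15
    FIGURES = ["duck_rabbit", "vase_faces"]   # hypothetical stimulus labels
    SIDES = ["left", "right"]                 # possible target positions

    def make_block():
        """Return one shuffled block of 45 trial descriptions."""
        trials = []
        for condition in CONDITIONS:
            target_side = random.choice(SIDES)
            if condition == "congruent":
                # sound cues the interpretation facing the target side
                cue_side = target_side
            elif condition == "incongruent":
                # sound cues the interpretation facing away from the target
                cue_side = "left" if target_side == "right" else "right"
            else:
                # unrelated sound (e.g. motorcycle) on neutral trials
                cue_side = "none"
            trials.append({"figure": random.choice(FIGURES),
                           "condition": condition,
                           "cue_side": cue_side,
                           "target_side": target_side})
        random.shuffle(trials)
        return trials

    # Full session: 6 blocks x 45 trials = 270 trials.
    session = [make_block() for _ in range(6)]

    # A response is scored correct when the mouse button matches the target side.
    def is_correct(trial, button):
        return button == trial["target_side"]

Each block is shuffled so that congruent, incongruent and neutral trials are interleaved, matching the mixed structure of the 45 trials within each block described above.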
