How much information can short-term memory hold?

Encoding involves changing the information presented into a different form. Since words and other items in the short-term store are rehearsed (repeated), we might assume that they are encoded in terms of their sound (acoustic coding). In contrast, the information we have stored in long-term memory nearly always seems to be stored in terms of its meaning (semantic coding).

Encoding takes many different forms: visual, auditory, semantic, taste, and smell.


Capacity

The short-term store has a very limited capacity of about seven items. In contrast, the capacity of the long-term store is assumed to be so large that it cannot be filled; it is said to have unlimited capacity, and information in it can potentially last forever.

Duration

Information lasts longer in the long-term store than in the short-term store. There is evidence that, if not rehearsed, information in the short-term store will disappear within about 18–20 seconds; in contrast, there is evidence that elderly people can recognise the names of fellow students from 48 years previously.

Storage

As a result of encoding, the information is stored in the memory system; it can remain stored for a very long time, maybe an entire lifetime.

Retrieval

Retrieval is the recovery of information from the memory system; it is also known as recall or remembering.

Short term Memory
Definition

Short-term memory – A temporary store for information. It has a very limited capacity and short duration, unless the information within it is maintained through rehearsal.

Capacity in STM (Jacobs):
AIMS:

To investigate how much information can be held in short term memory.

To do this, Jacobs needed an accurate measure of STM capacity – he devised a technique called the serial digit span.

His research was the first systematic study of STM

PROCEDURES:

This was a laboratory study using the digit span technique

P’s were presented with a sequence of letters or digits

This was followed by a serial recall (repeating back the letters or digits in the same order they were presented)

The pace of item presentation was controlled at half-second intervals using a metronome.

Initially, the sequence was 3 items – it was then increased by a single item until the participant consistently failed to reproduce the sequence correctly

This was repeated over a number of trials to establish the participants’ digit span.

The longest sequence length that was recalled correctly on at least 50% of the trials was taken to be the P’s STM digit span
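
To make the procedure concrete, here is a minimal Python sketch of a digit-span test along the lines described above. It is an illustration only: the function names are invented, the metronome-paced half-second presentation is not modelled (the whole sequence is shown at once), and the 50% rule is applied by stopping at the first failing length.

```python
import random

def serial_recall_trial(length):
    """One trial: present a random digit sequence, then ask the
    participant to repeat it back in the same order (serial recall)."""
    sequence = [str(random.randint(0, 9)) for _ in range(length)]
    print("Sequence:", " ".join(sequence))
    response = input("Type the digits back in order (no spaces): ").strip()
    return list(response) == sequence

def digit_span(start_length=3, trials_per_length=4):
    """Start at 3 items and add one item at a time, as Jacobs did.
    The span is the longest length recalled correctly on at least
    50% of trials; we stop at the first length that falls below that."""
    length = start_length
    span = 0
    while True:
        correct = sum(serial_recall_trial(length) for _ in range(trials_per_length))
        if correct / trials_per_length >= 0.5:
            span = length
            length += 1      # passed at this length: try a longer sequence
        else:
            return span      # consistent failure: the previous length is the span

if __name__ == "__main__":
    print("Estimated STM digit span:", digit_span())
```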

FINDINGS:

Jacobs found that the average STM span (number of items recalled) was between 5 and 9 items

Digits were recalled better (9.3 items) than letters (7.3 items)

Individual differences were found, explaining the range of 5-9

STM span increased with age – in one sample he found an average of 6.6 for 8-year-old children compared with 8.6 for 19-year-olds

CONCLUSIONS:

The findings show that STM has a limited storage capacity of between 5 and 9 items

The capacity of STM is not determined much by the nature of the information to be learned but by the size of the STM span, which is fairly constant across individuals of a given age

The increase of STM span with age may be due to increasing brain capacity or to improved memory techniques, such as chunking

EVALUATION:

+ The study has great historical importance because it represents the first systematic attempt to assess the capacity of STM

– The research lacks mundane realism, as the digit-span task is not representative of everyday memory demands – the artificiality of the task may have biased the results. Letters and numbers are not very meaningful, so may not be remembered as well as meaningful information.

– This means that the capacity of STM may be greater for everyday memory.

– Jacobs’ findings cannot be generalised to real-life memory – so the study may have low ecological validity

– However, it could be argued that using more meaningful information would produce a less pure measure of STM capacity, because participants could make use of LTM to improve performance

+ The findings have been usefully applied to improve memory (remembering phone numbers etc.). Memory improvement techniques are based on the finding that digit span cannot be increased, but the size of the bits of information can be – this is what happens in chunking, illustrated below.
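
The idea is easy to illustrate in code. In this hypothetical Python snippet, an 11-digit number exceeds the 5–9 item span when treated as single digits, but fits comfortably once grouped into larger chunks (the number itself is made up):

```python
number = "07700900123"          # 11 separate digits: beyond the 5-9 item span

def chunk(digits, size):
    """Group a digit string into chunks of the given size."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

chunks = chunk(number, 4)       # -> ['0770', '0900', '123']
print(len(number), "digits ->", len(chunks), "chunks:", chunks)
```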

Encoding in STM (Conrad)
AIMS:

To test the hypothesis that short term memory encodes information acoustically

PROCEDURES:

Conrad (1964) compared performance with acoustically and visually presented data.

P’s were presented with 6 letters at a time, each shown for 0.75 seconds

P’s had to recall the letters in the order they were shown

FINDINGS:

Although the letters were presented visually, ones which sounded alike were confused with one another (e.g. S was recalled instead of X), suggesting that STM encodes acoustically

EVALUATION:

– Later research showed that visual codes do sometimes exist in STM

– In a different experiment, Posner found that reaction time was longer for ‘Aa’ than for ‘AA’ – suggesting visual processing rather than acoustic

Key Study: PETERSON AND PETERSON – DURATION IN SHORT TERM MEMORY.
Aims:

They aimed to study how long information remains in short term memory, using simple stimuli and not allowing the participants to rehearse the material presented to them

They wanted to test the hypothesis that information not rehearsed is lost rapidly from short-term memory.

Procedures:

They used the ‘Brown-Peterson’ technique.

On each trial participants were presented with a trigram consisting of 3 consonants e.g. BVM, CTG which they knew they would have to recall in the correct order.

Recall was required after a delay of 3, 6, 9, 12, 15, or 18 seconds.

Between the initial presentation of the trigram and the recall cue, participants were told to count backwards in threes from a random 3-digit number (e.g. 866, 863, 860…); this was done to prevent rehearsal.

Participants were tested repeatedly with the various time delays and the effect of the time delay on memory was assessed in terms of the number of trigrams recalled.
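
A rough Python sketch of a single Brown-Peterson trial as described above (all names and details are illustrative assumptions, not the Petersons’ actual materials; a real experiment would also check the participant’s backward counting rather than just waiting):

```python
import random
import string
import time

CONSONANTS = [c for c in string.ascii_uppercase if c not in "AEIOU"]

def brown_peterson_trial(delay_seconds):
    """Show a consonant trigram, fill the retention interval with
    counting backwards in threes (to prevent rehearsal), then test
    serial recall of the trigram."""
    trigram = "".join(random.sample(CONSONANTS, 3))        # e.g. 'BVM'
    print("Remember this trigram:", trigram)
    start = random.randint(100, 999)
    print(f"Now count backwards in threes from {start}, out loud...")
    time.sleep(delay_seconds)                              # distractor interval
    return input("What was the trigram? ").strip().upper() == trigram

# The delays used by Peterson and Peterson:
results = {d: brown_peterson_trial(d) for d in (3, 6, 9, 12, 15, 18)}
print(results)
```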

Findings:

There was a rapid increase in forgetting from the STM as the time delay increased.

After 3 seconds 80% of the trigrams were recalled.

After 6 seconds 50% were recalled

After 18 seconds fewer than 10% of the trigrams were recalled.

Therefore very little information remained in the STM for more than 18 seconds.

Conclusions:

The findings suggest strongly that information held in the STM is rapidly lost when there is little or no opportunity for rehearsal.

Thus information in the STM is fragile and easily forgotten

Evaluation:

– They used artificial stimuli (i.e. trigrams), which have very little meaning and therefore the experiment lacks mundane realism and external validity.

– The participants were given many trials with different trigrams, so may have become confused.

– Peterson and Peterson only considered STM duration for one type of stimulus, and did not provide information about the duration of STM for other kinds of stimuli, e.g. pictures, smells, melodies.

+ It was a well controlled lab experiment, which allows a cause and effect relationship to be established.

+ A repeated measures design was used, so individual differences between participants were controlled.

Long term Memory
Definition

Long-term memory – A relatively permanent store, which has unlimited capacity and duration. Different kinds of long-term memory have been identified: episodic (memory for personal events), semantic (memory for facts and information), and procedural (memory for actions and skills).

Key Study: BAHRICK ET AL – DURATION IN LONG TERM MEMORY.
Aims:

Bahrick et al. aimed to investigate the duration of very long-term memory (VLTM), to see whether memories could last over several decades and thus support the assumption that long-term memories can last a lifetime.

They aimed to test VLTM in a way that showed external validity by testing memory for real-life information.

Procedure:

329 American ex-high-school students aged 17–74 were used – it was an opportunity sample.

They were tested in a number of ways:

Free recall of the names of as many former class mates as possible

A photo recognition test, where they were asked to identify former classmates in a set of 50 photographs, only some of which were classmates.

A name recognition test

A name and photo matching test

Participants’ accuracy (and thus the duration of their memory) was assessed by comparing their responses with high-school yearbooks containing pictures and names of all the students in that year.

Findings:

90% accuracy in face and name recognition (even with participants who had left high school 34 years previously)

48 years after leaving, accuracy declined to 80% for name recognition and 40% for face recognition

Free recall was considerably less accurate: 60% accurate after 15 years and only 30% accurate after 48 years.

Conclusions:

The findings show that classmates were rarely forgotten once participants were given recognition cues. Thus the existence of very long-term memory was supported.

The research demonstrates VLTM for a particular type of information; it cannot be concluded that VLTM exists for all types of information.

The finding that free recall was only 30% after 48 years indicates that memories were fairly weak.

Evaluation

+ This study provides evidence for the assumption that information can remain in the LTM for very long periods of time.

– Classmates’ faces and names are a very particular type of information. They might have emotional significance, and there was a great deal of opportunity for rehearsal, given the daily contact participants would have experienced. The same is not true for other types of information, and therefore the findings cannot be generalised to other types of information.

+ Bahrick’s research has high mundane realism, as he asked participants to recall real-life memories; the research is therefore more representative of natural behaviour, has high external validity, and its findings may generalise to other settings.

Models of Memory

The Multi-store Model of Memory – Atkinson and Shiffrin

Atkinson and Shiffrin argued that there are three memory stores:

sensory store

short-term store

long-term store

According to the theory, information from the environment is initially received by the sensory stores.

(There is a sensory store for each sense.)

Some information in the sensory stores is attended to and processed further by the short-term store.

In turn, some information processed in the short-term store is transferred to the long-term store through rehearsal (verbally repeating it). The more something is rehearsed, the stronger the memory trace in the long-term memory.

[Diagram: the multi-store model – data from the environment enters sensory memory, passes via attention into short-term memory, and is transferred to long-term memory through rehearsal; forgetting can occur from each store.]

The main emphasis of this model is on the structure of memory and on rehearsal.
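
As a purely illustrative toy, the flow can be sketched in Python. The class below is a paraphrase of the diagram, not Atkinson and Shiffrin’s own formalism; the capacity of 7 comes from the capacity section above, and all names are invented.

```python
from collections import deque

class MultiStoreModel:
    """Toy version of the multi-store model: data enters sensory
    memory, attention passes it to a capacity-limited STM, and
    rehearsal transfers it to an effectively unlimited LTM."""

    def __init__(self, stm_capacity=7):
        self.sensory = []                       # one store per sense, simplified to one
        self.stm = deque(maxlen=stm_capacity)   # oldest item displaced when full
        self.ltm = set()                        # unlimited capacity

    def perceive(self, item):
        self.sensory.append(item)               # decays quickly unless attended to

    def attend(self, item):
        if item in self.sensory:
            self.stm.append(item)               # attention -> short-term store

    def rehearse(self, item):
        if item in self.stm:
            self.ltm.add(item)                  # rehearsal -> long-term store

model = MultiStoreModel()
model.perceive("a phone number")
model.attend("a phone number")
model.rehearse("a phone number")
print("In LTM:", model.ltm)                     # {'a phone number'}
```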

Evaluation of the Multi-store Model

+ Case studies of brain damaged patients lend support to the multi-store model; they support the view that there are two different memory stores.

+ Glanzer and Cunitz found that when rehearsal is prevented, the recency effect disappears – supporting the idea of a separate short-term store.

+ There is evidence that encoding is different in short-term and long-term memory. For example, Baddeley found that encoding in short-term memory is mainly acoustic (by sound), while encoding in long-term memory is mainly semantic (by meaning).

+ There are huge differences in the duration of information in short-term and long-term memory. Unrehearsed information in the short-term memory vanishes after about 20 seconds (Peterson & Peterson). In contrast, some information in the long-term memory is still there 48 years after learning (Bahrick et al.).

– The model argues that the transfer of information from short-term to long-term memory occurs through rehearsal. However, in daily life people devote little time to active rehearsal, yet they are constantly storing new information in long-term memory. Rehearsal may describe what happens in laboratories but is not true to real life.

Craik & Lockhart

They suggest that it is the level at which we process information that determines how well we remember it; rehearsal represents a fairly shallow level of processing.

– This model is oversimplified. It assumes that there is a single short-term store and a single long-term store. These assumptions have been disproved by evidence such as that from studies of brain-damaged patients.

KF had a motorcycle accident that left him with a severely impaired STM, but he could still make new long-term memories. Also, Clive Wearing, another brain-damaged patient, could still play the piano, speak, and walk.

Therefore it makes sense to identify several long-term memory stores: episodic memory, semantic memory, declarative knowledge, and procedural knowledge. Atkinson and Shiffrin focused exclusively on declarative knowledge and had practically nothing to say about procedural knowledge, e.g. skills and learning.

Levels of Processing Theory – Craik and Lockhart

Craik and Lockhart put forward an alternative to the multi-store model of memory, called the levels of processing theory. This approach focuses its attention on how information is encoded.

Craik and Lockhart argued that rehearsal is not sufficient to account for LTM; they proposed that it is the level at which information is processed at the time that determines whether something is stored in the LTM.

They stated that if something is processed deeply then it will be stored, but if something is processed only superficially then it won’t be stored as effectively.

Shallow processing was physical (what it looked like)

Intermediate processing was auditory (what it sounds like)

Deep processing was semantic (what it means) – this was the level of processing that they argued was needed to best store information in the LTM.

Deep processing includes

Semantic processing

Elaboration

Organisation

Distinctiveness

Evaluation of the Levels of Processing Theory

+ Research by Hyde and Jenkins has supported this theory.

+ It deals with some of the flaws of the multi-store model; it does not rely on rehearsal, it sees memory as a more active process.

+ The levels of processing theory offered a model that could be applied to improving memory. For example, if you’re finding it hard to remember something, don’t just repeat it – elaborate on it and make the memory distinctive.

– We cannot control what goes on in people’s minds; just because participants are asked to process a word in a particular way, there is nothing to stop them engaging in other levels of processing.

– The model has been criticised for being too vague; sometimes it is not clear what level of processing is necessary. It doesn’t really elaborate on what counts as deep processing and what does not.

– There is some evidence that doesn’t support Craik and Lockhart’s theory. Morris, Bransford, and Franks found that stored information is remembered well only if it is relevant to the memory test.

They conducted a study in which participants processed several words; when the subsequent memory test was based on sound (a rhyming test), participants remembered words that had been processed in terms of their sound (shallow processing) better than those that had been processed for meaning (deep processing). This disproves the claim that deep processing is ALWAYS better than shallow processing.

– Tulving suggests that retrieval cues are important in remembering, and Craik and Lockhart didn’t take this into account.

– Craik and Lockhart did not explain the effectiveness of different kinds of processing; they did not state why deep, elaborative, or distinctive processing leads to better LTM.

Theories of Forgetting
Explanations for Forgetting in STM

There are two different explanations for forgetting in short-term memory:

Decay

Displacement

Decay

This is based on the idea that memories have a physical basis (a trace) that will decay over time unless the information is passed on to the LTM through rehearsal.

Evaluation of Decay

– It is hard to test

Reitman attempted to measure decay in an experiment where she presented a list of words and then gave participants a tone-detection task, to prevent rehearsal but also to prevent new learning. After 15 seconds, participants could only remember 24% of the words, supporting the view that the information may have decayed.

BUT we cannot control what goes on in people’s heads, so it is impossible to know that no new information was taken in.

Also, it lacks mundane realism, as it is not representative of real life and only uses free recall.

Displacement

This theory argues that when the capacity of the STM (about 7 items) is full, old information gets ‘knocked out’ by new information.
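
Displacement behaves like a fixed-size queue: once the store is full, each new item pushes out the oldest one. A minimal Python sketch (the capacity of 7 is taken from the notes above):

```python
from collections import deque

stm = deque(maxlen=7)        # STM modelled as a store of ~7 items

for item in range(1, 11):    # present 10 items, one at a time
    stm.append(item)         # once full, the oldest item is 'knocked out'

print(list(stm))             # -> [4, 5, 6, 7, 8, 9, 10]: items 1-3 were displaced
```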

Evaluation of Displacement

+ Waugh and Norman demonstrated support for displacement using a serial probe task: after a list of digits, a ‘probe’ digit from the list was presented and participants had to recall the digit that followed it. Recall was better when the probe came near the end of the list (few later digits had arrived to displace the answer) than when it came near the beginning.

– HOWEVER, Waugh and Norman’s study lacked external validity, as it was an artificial task and didn’t necessarily tell us anything about memory in everyday life.

– Also, Shallice found that when the rate of presentation was speeded up, the number of items remembered increased. This suggests that STM works on a time-based rather than a capacity-based system, supporting the theory of decay rather than displacement.

Waugh and Norman’s study didn’t rule out the possibility that information might be decaying rather than being displaced.

Theories of Forgetting in LTM

There are two different explanations for forgetting in long-term memory:

Interference

Cue-dependent forgetting

Interference Theory

There are two types of interference: proactive interference and retro-active interference.

Proactive interference is when old information prevents the learning of new information.

Retro-active interference is when new information interferes with old information and corrupts it so that it is no longer available.

The more similar the information, the more likely it is to be affected by interference.

Evaluation of the Interference Theory

+ Results of paired associate learning tasks support the view that both pro-active and retro-active interference play a part in forgetting.

+ Jenkins and Dallenbach’s study supports the view that interference is a factor in forgetting.

– Groups of participants were given a list of nonsense syllables to learn.

– One group stayed awake during the retention period and the other group went to sleep.

– Both groups memory for the information was tested at various intervals.

– It was found that the awake group forgot far more than the group that was asleep, therefore supporting interference, as the awake group were more likely to encounter new information than the asleep group.

BUT

– The time of day may have been a confounding variable: the sleeping group learned before going to bed, while the awake group learned during the day.

– It is artificial and lacks mundane realism

– There is also no control over what the sleeping group were doing during the retention period.

– Tulving and Psotka found that when a cued-recall task was given, the effects of interference disappeared. This suggests that the original memory trace is not corrupted, as suggested by interference theory, but that free recall is not sufficient to access it.

– The explanation describes the effects but doesn’t explain why they happen.

Cue Dependent Forgetting

Tulving suggests that a lot of forgetting is simply retrieval failure, and that with the right cues we will be able to access the information.

He also suggests that the closer the retrieval cue is to the stored information, the more likely it is that the cue will succeed in retrieving the memory. This is supported by the tip-of-the-tongue phenomenon.

Evaluation of Cue Dependent Forgetting

+ Tulving and Pearlstone gave participants lists of words to remember, arranged under category headings. Half the participants had to free-recall the words (they were given a blank sheet of paper); the other half were given the category headings. Those with the category headings remembered far more words, which supports the cue-dependent theory, as sketched below.
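
The difference between availability and accessibility can be sketched in code: the words are all stored, but without the right cue the retrieval path fails. This is a loose illustration with invented data, not Tulving and Pearlstone’s actual materials.

```python
# Words stored under category headings, as in Tulving and Pearlstone.
memory = {
    "four-footed animals": ["dog", "otter", "badger"],
    "tools":               ["hammer", "chisel", "plane"],
}

def cued_recall(cue):
    """With a matching retrieval cue, the stored words are accessible;
    without one, retrieval fails even though the words are still stored."""
    return memory.get(cue, [])

print(cued_recall("tools"))              # ['hammer', 'chisel', 'plane']
print(cued_recall("things in a shed"))   # [] - retrieval failure, not memory loss
```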

+ Tulving and Psotka gave participants lists of words to remember: some had one list, some had two, and some had six. They found that the more lists a participant had to learn, the poorer their recall when tested using free recall. HOWEVER, when cues were given, the interference effect disappeared.

– Both of Tulving’s studies are artificial and lack mundane realism, so they don’t necessarily tell us much about how we use memory in real-life situations.

+ The model has practical applications as it can help people to improve their memories.

+ Research into State and Context dependency supports the cue dependency theory

State dependency – where the internal physiological or psychological state of the individual at the time of acquisition acts as a cue for remembering. Goodwin found that when heavy drinkers hid keys or money while drunk, they could only find them when they were drunk again.

This study is good as it is more realistic than a lab experiment. Although there is a lack of control as it is a natural experiment.

Context dependency – where the external physical environment at the point of acquisition acts as a cue for remembering. Baddeley found that when deep-sea divers learned information underwater, they were more likely to recall that information underwater than were those who had learned it on dry land.

HOWEVER, he only looked at one specific group of people, although the study did have practical applications.

Emotional Factors in Memory
Repression

A further reason for being unable to retrieve a memory that causes negative emotions is that it may have been repressed, or held back from conscious awareness. Freud stated that repression is one of the ego defence mechanisms. Repression can explain forgetting in terms of the anxiety caused by a memory pushing it out of conscious thought.

Evaluation of Repression

– Freud’s theory is hard to test, so it cannot be proved or disproved.

– This theory doesn’t explain why forgetting increases over time, or why we forget good things as well as bad.

+ The theory of repression is supported by Williams’ study.

Williams found that female victims of child abuse were likely to repress their memories of that event. HOWEVER, there is a chance that they simply may not have wanted to talk about it with the interviewer.

– Very young children may not be able to lay down stable memories and therefore forgetting may be due to decay rather than repression. BUT the fact that some victims recovered the information after a period of time supports the idea that their forgetting was due to repression.

– Williams used a biased sample and only dealt with a specific situation, so the results cannot necessarily be generalised and the study lacks external validity.

– There is no way to know whether or not the initial reports of abuse were real, so the results may be compromised. Alternatively, the recovered memories may be false (demand characteristics).

+ Post-traumatic stress disorder supports the idea of repression: victims often lose their memory of a very traumatic event, or of the details surrounding it, and may recover these memories later, either spontaneously or through therapy.

HOWEVER, Loftus’s research suggests that many ‘recovered’ memories may be false, which goes against the idea of repression, as such memories were never genuinely repressed and recovered.

Flashbulb Memories – Brown and Kulik

A flashbulb memory is a long-lasting, detailed, and vivid memory of a specific event and the context in which it occurred. The event is important and emotionally significant, e.g. a national or personal event. It is as if a flash photograph were taken at the very moment of the event, with every detail indelibly printed in memory. Flashbulb events don’t have to be negative or to concern international events; however, nearly all studies of flashbulb memories have focused on dramatic world events. Brown and Kulik suggested that flashbulb memories were distinctive because they were both enduring and accurate.

Evaluation of Flashbulb Memories

– Brown and Kulik had no way of knowing whether the participants’ flashbulb memories were accurate or reliable.

McCloskey, Wible, and Cohen wanted to test the reliability of flashbulb memories; they interviewed people shortly after the explosion of the space shuttle Challenger and then re-interviewed the same people 9 months later.

They found that participants did forget elements of the event, and showed some inaccuracies in their recall. This suggests that flashbulb memories are subject to forgetting in the same way that other memories are.

Conway et al. disagreed with McCloskey et al.; they stated that the Challenger explosion was not a very good example of a flashbulb memory, as it did not have important consequences in the lives of those interviewed and therefore lacked one of the central criteria for flashbulb memory.

Critical Issue – Eyewitness Testimony
Reconstructive Memory
Reconstructive Memory (Bartlett’s)
AIMS:

Bartlett aimed to investigate the effects of schemas (packets of knowledge about the world) on participants’ recall.

Schemas include prior expectations, attitudes, prejudices, and stereotypes. The study was based on Bartlett’s schema theory, which states that memory involves active reconstruction.

According to this theory, what we remember depends on two factors – The information presented to us, and distortions created by our reliance on schemas. These distortions would be most likely to occur when the P’s schemas were of little relevance to the material being learned.

PROCEDURES:

Twenty English P’s took part in this natural experiment

P’s were presented with a range of stimuli, including different stories and line drawings

A repeated production method was used as P’s were asked to reproduce the stimulus they had seen repeatedly at different time intervals.

The time interval varied between days, months, and even years.

The story called ‘The War of the Ghosts’ is the best known example of Bartlett’s materials.

The story was selected because it was from a different culture (North American Indian), so would conflict with the participants’ prior knowledge contained in their schemas

The P’s story reproductions were analysed in order to assess the distortions

FINDINGS:

Bartlett found substantial distortions in the P’s recollections

The distortions increased over successive recalls and most of these reflected the P’s attempts to make the story more like a story from their own culture.

Changes from the original included rationalisations, which made the story more logical; the story was shortened and its language changed to be more similar to the P’s own.

There was also flattening – a failure to recall unfamiliar details, such as the ghosts.

And sharpening – the elaboration of certain content and alteration of its importance.

These changes made the story easier to remember

CONCLUSIONS:

Bartlett concluded that the accuracy of memory is low.

The changes to the story on recall showed that the P’s were actively reconstructing the story to fit their existing schemas, so his schema theory was supported

He believed that schemas affect retrieval rather than encoding or storage.

He also concluded that memory was forever being reconstructed because each successive reproduction showed more changes, which contradicted Bartlett’s original expectation that the reproductions would eventually become fixed.

The research has important implications for the reporting of events requiring great accuracy, such as eyewitness testimony.

EVALUATION:

+ Bartlett’s research is important, because it provided some of the first evidence that what we remember depends on our prior knowledge in the form of schemas

+ It also has more ecological validity than most memory research, because schemas play a major role in everyday memory.

– Bartlett assumed that the distortions in recall produced by his P’s were due to genuine problems with memory. However, his instructions were very vague, and it is likely that many of the distortions were actually guesses made by P’s who were trying to make their recall seem logical and complete.

– Bartlett assumed that schemas influence what happens at the time of retrieval, but have no effect at the time of the initial understanding of a story. Other evidence suggests that schemas influence both understanding (encoding and storage) and retrieval.

– Another criticism of Bartlett’s work is that it lacked objectivity. Some psychologists believe that well-controlled experiments are the only way to produce objective data, and his methods were quite casual: he simply asked his P’s to recall the story at various intervals, with no special conditions for this recall.

Eyewitness Testimony (Loftus and Palmer):
AIMS:

To test their hypothesis that eyewitness testimony is fragile and can easily be distorted.

Loftus and Palmer aimed to show that leading questions could distort eyewitness testimony accounts via the cues provided in the question.

To test their hypothesis, Loftus and Palmer asked people to estimate the speed of motor vehicles using different forms of questions after they observed a car accident. The estimation of vehicle speed is something people are generally quite bad at, so they may be more open to suggestion by leading questions.

PROCEDURES:

45 American students formed an opportunity sample

This was a laboratory experiment with 5 conditions. Each participant only experienced one condition (an independent measures design)

P’s were shown a brief film of a car accident involving a number of cars.
