There are both implicit and explicit theories of intelligence. Implicit theories are people’s everyday ideas about a particular topic (Jonsson, Beach, Korp & Erlandson, 2012); thus, implicit theories of intelligence are everyday ideas about intelligence. A layperson is a non-expert in a particular field, and intelligence researchers have examined laypersons’ theories of intelligence. Sternberg, Conway, Ketron and Bernstein (1981) conducted a study to investigate individuals’ conceptions of intelligence. A sample of participants was asked to list behaviours characteristic of ‘intelligence’, ‘academic intelligence’, ‘everyday intelligence’ or ‘unintelligence’. A second sample of participants was then asked to rate how well each of the listed behaviours reflected aspects of intelligence. From these results, Sternberg et al. (1981) identified three dimensions of intelligence: practical problem solving, verbal ability and social competence. A further study of laypersons’ theories of intelligence was conducted by Sternberg (1985). In this study, 47 participants were asked to suggest behaviours characteristic of an intelligent person, enabling Sternberg (1985) to produce 40 descriptors of intelligent behaviours. Forty college students were then asked to sort these descriptors according to which were likely to be found together in a person. The results of this task echoed the findings of the 1981 study, but also revealed six aspects of intelligence: practical problem-solving ability, verbal ability, intellectual balance and integration, goal orientation and attainment, contextual intelligence and fluid thought. In order to assess the differences between the ways laypersons and experts view intelligence, Sternberg (1985) also asked experts in the field of intelligence to complete two questionnaires.
Sternberg (1985) found a high correlation between the experts’ and the laypersons’ implicit views of intelligence. Laypersons put more emphasis on the social and cultural aspects of intelligence, whereas experts emphasised the role of motivation. It is also noteworthy that lay conceptions of intelligence are much broader than psychologists’ professional conceptions (Sternberg & Kaufman, 1998). Implicit theories of intelligence do have limitations. Rather than providing an account of intelligence itself, they provide a framework that people use to organise and interpret their views of intelligence (Blackwell, Trzesniewski & Dweck, 2007). They also lack specificity, in that the types of behaviour that emerge from analysis of the theories are very general (Sternberg, 1985). These theories also have strengths. Implicit theories have contextual relevance, as they tend to place importance on the context in which intelligence is displayed. They are also easily falsified, despite being vague in their formulations. Finally, they have high ecological validity and, unlike explicit theories, focus on typical rather than maximal performance (Sternberg, 1985). Importantly, implicit theories also provide a basis for the explicit theories, which will now be discussed (Sternberg, 1985).
The classical, explicit theories that today have the most influence on our understanding of intelligence are now considered. Explicit theories of intelligence are based on data collected from research presumed to measure people’s intelligent functioning (Sternberg, 1985), and can be divided into differential and cognitive theories (Sternberg, 1985). A classic example of a differential theory was developed by Spearman (1927), who argued that intelligence comprises two kinds of factor: a general factor and specific factors. Spearman (1904) found that children’s scores on a range of mental tests were positively correlated, and so concluded that one general factor, ‘g’, underlies all cognitive activity. Alongside ‘g’, Spearman (1904) identified specific abilities, or ‘s’: the particular ability needed to perform well on a given intelligence task. Support for the existence of ‘g’ has been provided by Galsworthy, Paya-Cano, Liu, Monleon, Gregoryan, Fernandes, Schalkwyk and Plomin (2005), who found that mice appear to exhibit a form of ‘g’. Explicit theories of intelligence have several notable strengths. They provide a detailed specification of the mental structures and processes which may be involved in intelligent performance, and they have made it possible for researchers to go beyond the trivial operational definition of intelligence as what intelligence tests measure (Jensen, 1969). However, explicit theories also have notable weaknesses. Many have proven difficult to falsify as a result of intrinsic characteristics that make falsification almost impossible (Sternberg, 1985). Explicit theories also neglect the contexts in which intelligent behaviour occurs; indeed, many psychologists have questioned whether it is even possible to fully understand the concept of intelligence without paying attention to the context in which it is exercised (Neisser, 1979).
They have also failed to provide an explicit rationale for selecting the tasks used to study intelligence (Sternberg, 1985).
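Spearman’s core observation, that scores on diverse mental tests all correlate positively (the ‘positive manifold’), can be illustrated with a small simulation. The sample size, factor loadings and noise level below are purely illustrative assumptions, not data from any of the studies cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores for 200 children on four mental tests, each drawing on a
# shared latent factor ('g') plus test-specific noise ('s'), in the spirit of
# Spearman's two-factor account. All numbers are illustrative.
n = 200
g = rng.normal(size=n)                      # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])   # how strongly each test draws on g
scores = np.outer(g, loadings) + rng.normal(scale=0.5, size=(n, 4))

# Spearman's observation: every pair of tests correlates positively.
corr = np.corrcoef(scores, rowvar=False)
off_diagonal = corr[~np.eye(4, dtype=bool)]
print(off_diagonal.min() > 0)
```

Because every test shares the same latent factor, all off-diagonal correlations come out positive; it was this pattern in real test batteries that led Spearman to posit ‘g’.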
There are many problems associated with intelligence testing, but three occur most frequently in the intelligence literature: the reliability of intelligence measures, the validity of intelligence measures, and whether the usefulness of intelligence measures is exaggerated (Maltby, Day & Macaskill, 2010). The reliability of a test can refer to internal reliability or to test-retest reliability. Internal reliability refers to an intelligence measure including a number of items that correlate positively with one another, suggesting that they are measuring the same construct. Test-retest reliability refers to how stable a test’s results remain over time. A good intelligence test will show a high level of reliability over time, as intelligence is considered by many psychologists to be stable (Brown & Campione, 1996, as cited in Schauble & Glaser, 1996; Perkins, 1995; Resnick, 1983; Sternberg, 1985). It can therefore be expected that if an individual were to take an intelligence test on one occasion and the same test some time later, they would obtain similar IQ scores. The main argument against the reliability of intelligence tests concerns whether general intelligence scores (IQ) can vary and fluctuate. Intelligence researchers such as Benson (2003) have estimated that this fluctuation may be as much as 15 IQ points, and it is recognised that factors such as educational background and mood can contribute to it (Passer, Smith, Holt, Bremner, Sutherland & Vliek, 2009). Assessing this fluctuation is complicated by the fact that intelligence tests are designed to be taken only once and should not be repeated; this restriction reduces the likelihood of an individual memorising the test questions and subsequently performing better, producing an inflated IQ score.
This concern over fluctuating intelligence scores is considered so important that a great deal of research has been devoted to it by intelligence psychologists. The primary finding of this research is that, although intelligence scores do vary to a degree, they are relatively stable. Jones and Bayley (1941) established the Berkeley Growth Study, in which a sample of 128 children was tested on intelligence yearly throughout childhood and their IQ scores recorded. The findings showed that the children’s IQ scores at the age of 18 were positively correlated with the IQ scores recorded when they were 12 years old, supporting the argument that intelligence scores are stable to a degree. However, the Berkeley Growth Study does have some problems as an account of the correlation between IQ scores at different ages. The participants were all American children, so one may question the generalisability of the results, that is, whether they can be applied to children from other cultures. Additionally, although all participants were tested as infants, only 70 were followed up later in life, and the findings were based on only 29 children. Support for Jones and Bayley’s (1941) study does exist in the form of a follow-up study of the correlation between IQ scores conducted by Deary, Whalley, Lemmon, Crawford and Starr (2000). The Mental Survey Committee in Scotland had measured intelligence in Scottish children who were born in 1921 and attended school in 1932. When these individuals were followed up at age 77, the results showed that IQ scores from childhood to late life were relatively stable. The correlations between IQ scores on intelligence tests are thus striking, though not perfect.
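The pattern these longitudinal studies report, high but imperfect test-retest correlations, falls out of a simple model in which each person has a stable underlying ability and each testing occasion adds some fluctuation (mood, practice, and so on). The sketch below makes that assumption explicit; the noise level and scores are illustrative, not data from the Berkeley or Scottish studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each person has a stable underlying ability; each testing occasion adds
# independent fluctuation. Sample size of 128 echoes Jones and Bayley's
# cohort, but all numbers here are illustrative assumptions.
n = 128
true_iq = rng.normal(100, 15, size=n)      # stable underlying ability
fluctuation = 7.5                          # per-occasion noise (assumed)
iq_time_1 = true_iq + rng.normal(0, fluctuation, size=n)
iq_time_2 = true_iq + rng.normal(0, fluctuation, size=n)

# Test-retest reliability is the correlation between the two occasions:
# high, because the stable component dominates, but not perfect.
r = np.corrcoef(iq_time_1, iq_time_2)[0, 1]
print(round(r, 2))
```

Under these assumptions the expected correlation is roughly the ratio of stable variance to total variance (here about 0.8), which is why longitudinal IQ correlations are striking but never reach 1.0.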
Internal consistency is another form of reliability and concerns the consistency of measurement within the intelligence test itself. An example is given by Gregory (1998), who argues that if a Wechsler subtest has internal consistency, all of the items in that subtest measure the same skill; evidence of this would come from high correlations among the items.
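A standard numerical index of this idea is Cronbach’s alpha, which is high when a test’s items correlate with one another. The sketch below computes alpha from its usual formula on simulated item data; the number of items, respondents and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (respondents, items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 6 items that all tap one shared skill, as an internally
# consistent subtest would; the correlated items yield a high alpha.
skill = rng.normal(size=300)
items = skill[:, None] + rng.normal(scale=0.6, size=(300, 6))
print(round(cronbach_alpha(items), 2))
```

If the items instead measured unrelated skills, the inter-item correlations, and hence alpha, would fall towards zero, which is exactly the diagnostic Gregory describes.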
The second question that emerges from the study of intelligence is whether or not a test measures what it is designed to measure. This is known as validity; the question put forth is whether an intelligence test does in fact measure one’s intelligence. It must be acknowledged that there are different types of validity, namely construct validity, content validity and criterion-related validity (Passer et al., 2009). As intelligence is viewed as a mental construct, an intelligence test is considered to have construct validity when it successfully assesses the psychological concept it is designed to measure, namely intelligence (Peter, 1981). This would be indicated by links between IQ scores and the other behaviours to which intelligence should be related. Thus, one may argue that if intelligence tests had high construct validity, individual differences in IQ scores would be the sole result of differences in intelligence. Perfect construct validity can never be attained, since, as previously mentioned, other factors such as motivation and educational background also influence IQ scores. Content validity is also seen to contribute to construct validity. Content validity is the degree to which the items on a test measure all the skills seen to underpin the construct of interest (Haynes, Richard & Kubany, 1995). If an intelligence test such as the Wechsler Adult Intelligence Scale were analysed, it would show that its subtests measure the different intellectual abilities they were designed to assess. Additionally, researchers who develop intelligence tests strive for high validity in all aspects of a test, so if validity were lacking during development, the researchers would be obliged to improve the measure until it showed acceptable validity.
If an intelligence test is in fact measuring what it is designed to measure, the IQ scores it produces should allow behaviours considered to be affected by intelligence to be predicted. These predicted outcomes are known as criterion measures, and criterion-related validity is the degree to which IQ scores correlate with such criterion measures. One criticism of intelligence measures concerns the degree to which they predict the kinds of outcomes we expect intelligence to influence, such as educational and career attainment.
The final question which emerges from the measurement of intelligence is whether its usefulness is exaggerated. Intelligence tests are further criticised on the grounds that their ability to predict intellectual performance across different cultures is overemphasised (Benson, 2003). It is widely recognised that intelligence test scores are strong predictors of academic and work performance, but opponents of intelligence testing argue that the predictive strength of these tests fluctuates, for example when situations or tasks change, or when demographic factors such as age, race or gender are considered. Benson (2003) argues that special education, which serves people with special educational needs, is one area in which these concerns are apparent. The concern arises from the use of intelligence tests to classify learning disabilities via the IQ-achievement discrepancy model, which compares a child’s academic achievement to their IQ score: if a child’s achievement score is a standard deviation or more below their IQ score, they are considered to have a learning difficulty. Two studies, by Hoskyn and Swanson (2000) and Stuebing, Fletcher, LeDoux, Lyon, Shaywitz and Shaywitz (2002), highlight concerns about the usefulness of the IQ-achievement discrepancy model. Both found non-significant overall effect-size differences between IQ-discrepant and non-discrepant poor readers, with non-significant differences on most measures of phonological ability and reading (Fletcher & Vaughn, 2009). Other studies comparing people with poor reading skills with and without significant IQ-achievement discrepancies found no difference in prognosis (Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher, 1996; Share, McGee, & Silva, 2000) or response to instruction (Vellutino, Scanlon, & Lyon, 2000).
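The discrepancy rule described above amounts to a simple threshold comparison, which the sketch below makes explicit. The 15-point cut-off assumes the usual IQ-style scale (standard deviation of 15); real diagnostic practice varies and uses far more than this single rule.

```python
# A minimal sketch of the IQ-achievement discrepancy rule: a child is
# flagged when achievement falls one standard deviation (assumed here to
# be 15 points, the conventional IQ scale) or more below their IQ score.
SD = 15

def discrepancy_flag(iq: float, achievement: float, sd: float = SD) -> bool:
    """True if achievement is at least one standard deviation below IQ."""
    return (iq - achievement) >= sd

print(discrepancy_flag(110, 90))   # 20-point gap: flagged
print(discrepancy_flag(100, 95))   # 5-point gap: not flagged
```

Note that the rule compares only the gap between the two scores, not reading ability itself, which is why two equally poor readers can receive different classifications, the pattern the studies above found to be of little prognostic value.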
These findings suggest that the usefulness of intelligence measures is indeed exaggerated, and they do not support the practice of identifying learning disabilities on the basis of a discrepancy between achievement and IQ scores (Donovan & Cross, 2002). Furthermore, Benson (2003) claims that identifying individuals as having learning disabilities using the IQ-achievement discrepancy model neither helps others understand what they must do in order to help the individual learn, nor gives any indication of the educational programme an individual might enrol in to improve. Perhaps a better indicator would be to assess the individual’s behaviour in home or social settings. Intelligence researchers have considered ways to reduce the problems associated with measuring intelligence using an IQ-achievement discrepancy model. Kaufman and Kaufman (2001) suggested that intelligence tests should be administered by trained educational practitioners with expertise in child learning; rather than simply producing an IQ score, these practitioners would work closely with the child and make specific recommendations. Kaufman and Kaufman (2001) argue that, in this context, there is no reason to abolish intelligence tests; rather than being used alone, they should be used in conjunction with other educational tools.
In conclusion, both implicit and explicit theories have been developed in an attempt to define the concept of intelligence; however, all of these theories have limitations which must be considered. The two types of theory are linked, in that implicit theories provide a basis for explicit theories. The three questions discussed in relation to the measurement of intelligence – reliability, validity and the usefulness of such measures – identify aspects which must be addressed in order for the measurement of intelligence to be accurate.