Philosophy And Cognitive Neuroscience Psychology Essay

Trends in psychological theory and practice often parallel advances in technology. For example, Robins et al. have documented, within scientific psychology alone, the successive "virtual death" of psychoanalysis, the rise and steep decline of behaviourism, the emergence and rapid ascent of cognitive psychology (the "cognitive revolution") and the beginning rise of neuroscience over the last century – a trajectory that can be seen to mirror technological advances. Specifically, as Tracy et al. (2003) acknowledge, the cognitive revolution of the 1970s was driven largely by the computer revolution, transferring its foundational theories of successive/linear information processing and functional modularity from machine to brain. According to Bechtel et al. (2001), the metaphor of the brain as an information processor first arose in the 1950s and eventually developed into cognitive science; then, with the emergence of neuroscience and the advent of neuroimaging technologies, came the predicted cross-cutting of the taxonomies of neuroscience and cognitive science (Fodor, 1975), giving rise in the 1980s to cognitive neuroscience – concerned with how cerebral processes underlie cognitive programmes. The 1990s brought the "decade of the brain", and since the start of the 21st century there would certainly seem to be a preoccupation with neuroanatomy, with the majority of cognition-based research emerging coupled with some form of neuroanatomical reference (Lake, 2007). Indeed, this can be seen to mirror the advent of neuroimaging technologies such as fMRI, arguably the most prominent tool in cognitive neuroscience, invented in 1990. Since then, fMRI studies alone have expanded explosively, from just 15 published in 1991 to 2,224 published in 2003 alone, with at least 30-40 fMRI studies now emerging weekly (Poldrack, 2008; Illes & Racine, 2005).

Indeed, where Kuhn (1970) outlined how scientific paradigms compete with one another until the limited capabilities of one succumb to the riper possibilities of another, Rand & Ilardi (2005) have emphasised that technological advancements serve as fortifications for the growth and maturation of particular paradigms and their replacement of others. For example, Galileo's implementation of the telescope facilitated the advent of a new science, in the same way the development of the computer functioned in cognitive science's overtaking of behaviourism as the predominant paradigm in psychology. Likewise, the advent of neuroimaging and its coupling with cognitive science can be seen as providing the force which inaugurated the currently predominant cognitive neuroscience paradigm.

'Cognitivism' arose in response to the insufficiencies of behaviourism. Behaviourism – itself a response to earlier schools which employed introspection (deemed too unreliable due to its subjectivity) – treated the mind as a 'black box', accounting only for input stimuli and output, observable behaviour. The cognitive sciences emerged to deal with this 'black box' issue through the implementation of a computational approach (Dartnall, 1995). The roots of this approach, say some authors (e.g. MacCormack, 1984), lie in the 18th-century work of La Mettrie ('Man a Machine'), from which the fundamental characterisation of man as akin to a clockwork mechanism was drawn. Despite the numerous other metaphors La Mettrie used, it was man-as-mechanism that stuck, providing impetus for the Enlightenment-era ideal that underlying human behaviour were timeless laws and rules (Eichner). While taking up metaphors rooted in technological advances can be beneficial, facilitating new insight, an inherent danger lies in taking such metaphors literally. Specifically, there is a twofold danger: reification (taking the abstract to be concrete fact) and assuming isomorphism between the two referents of the metaphor (i.e. that all qualities of each referent are equal, e.g. man = machine). Indeed, with this comes the concern that should we class the computer as a thinking machine (i.e. artificially intelligent), we indirectly infer thinking man to be a computer – dehumanising man while personifying machine. Rather astoundingly, however, some authors have actually advocated taking the computational metaphor literally (e.g. Pylyshyn), claiming that treating it as a simple heuristic metaphor permits too many perspectives and thereby obstructs progress. What Pylyshyn overlooks here is that treating a metaphorical association literally can also impede progress, as being overly zealous about one particular paradigmatic approach can close one off to new possibilities or avenues of investigation (MacCormack, 1984). Arguably, however, the computer metaphor is taken literally in many practical instances (e.g. biological psychiatry), precluding many intrinsically important facets of what having a mind entails, such as subjectivity and intentionality.

As such, we can see that, due to such inadequacies of the staunch cognitive-scientific approach, and as it has branched into cognitive neuroscience and become increasingly enmeshed with neuroanatomical studies, several authors are calling for a return to phenomenology (e.g. Shear, 1996) in order to address the issues the cognitive neurosciences have resurrected. Indeed, as Mishara et al. (1998) have pointed out, debates within the philosophy of mind and the cognitive sciences have begun to appeal to theories of consciousness as a means of highlighting the insufficiencies of the more popular models of mind (i.e. mind as computer). 'Embodied' and/or 'enactive' cognition in particular has become an important perspective in this reconciliation of consciousness and cognition. As such, we can outline three major approaches to cognition – symbolicism (or classical computationalism), connectionism and dynamicism (Eliasmith, 2005) – which are informing neuroscientific studies. The aim of the remainder of this essay is to outline how several criticisms, fuelled by issues in the philosophy of mind and science, highlight the caution that must be taken when using the computational metaphor and further integrating experimental results from cognitive neuroscience into theories of mind. In the first instance, criticisms of the predominant computational model of mind will be sampled. For reasons of space, discussion of connectionist and dynamicist approaches will not be undertaken. The preferential treatment of computationalism is justified, I believe, insomuch as it is still one of the more predominantly used models of cognition (Poldrack, 2008). Following this, the problematic issues regarding the implementation of neuroimaging 'evidence' to espouse paradigms in the field of science will be highlighted. Consequently, it will be concluded that while cognitive neuroscience has great potential for unifying the field of psychology, it can also be implemented to propound specific viewpoints, running the risk of undermining others and the lived experience of the human individual.

The 'classical' computational approach involved the convergence of three major fields of research: artificial intelligence (AI), cognitive psychology and linguistics. Namely, the fundamental principles of AI research (claiming that intelligence was the handling/manipulation of symbol strings), cognitive psychology (modelling human cognitive processes on the operation of computers) and linguistics (which held that sets of rules govern semantic operations in the brain) were assimilated to develop a "computer between ears" or "representational-computational" view of the mind (Dartnall, 1995). This 'classical' view of the mind was summarised by Fodor (1975), who presented the Language of Thought hypothesis: the claim that we think in a 'language of thought' – a system of symbols with semantic and syntactic properties, manifested in accordance with the structural design of the brain. Such symbols are intrinsically representational neural events (Robinson, 1995). While the computational approach isn't explicitly reductionistic – insomuch as thinking and brain function combine to instantiate computation, as hardware and software do – as I will presently allude to, this approach re-introduces a Cartesian dualism which itself causes problems (MacCormack, 1984; Kim, 1993).

The computational system is characterised as constituted by sets of tokens, with sets of rules governing their arrangement and/or transfiguration into other sets of tokens (Haugeland, 1981). Two of the primary criticisms levelled at the classical approach were the symbol grounding problem (Harnad, 1990) and the Chinese Room argument (Searle, 1980). The essence of both arguments is the inherent problem of the particular levels of reductionism in operation within the computational paradigm, which throw away intentionality yet in doing so re-introduce the Cartesian dichotomy and the 'hard' problem of consciousness (Damiano & Canamero, 19**). Explicitly, it is difficult to ascertain how the computational system understands and learns at an extrinsic level. The input-process-output system, says Searle, simply cannot be "empirically conscious" (Glennan, 1995). Other criticisms of computationalism are that its processing is based on linear mechanisms – a dramatic limitation considering the complexity of certain tasks – and that malfunction at the symbol level entails complete, abrupt disruption of processing, unlike the "graceful degradation" that actually seems to occur in the brain. There is also the inadequacy of advanced AI in acquiring and retaining "common-sense" knowledge (i.e. the frame problem). Finally, its emphasis on rule-governed symbol manipulation simply isn't biologically plausible (Varela et al., 1991).
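
To make Haugeland's characterisation concrete, the following is a minimal sketch of such a token-and-rule system; the tokens and rules are invented for illustration and do not correspond to any particular cognitive model. Note that the system transforms token strings purely by their form: nothing in it "knows" what the tokens mean, which is precisely the gap Harnad's symbol grounding problem points to.

```python
# A minimal 'classical' symbol system in Haugeland's (1981) sense:
# sets of tokens plus rules that transfigure one token string into
# another. Tokens and rules here are illustrative inventions.

RULES = {
    ("HUNGRY", "SEE_FOOD"): ("EAT",),  # condition tokens -> result tokens
    ("TIRED",): ("SLEEP",),
}

def step(tokens: tuple) -> tuple:
    """Apply the first rule whose condition tokens are all present."""
    for condition, result in RULES.items():
        if all(c in tokens for c in condition):
            return result
    return tokens  # no rule matched; tokens pass through unchanged

print(step(("HUNGRY", "SEE_FOOD")))  # -> ('EAT',)
```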

Therefore, while paradigmatic applications of computational principles often "fit" with the brain (explaining their continued popularity despite criticism), they inevitably disregard those components of mental life which are "non-computable". It is these components which pose the largest challenge to (cognitive-)neuroscientific theory and research today (Fuchs, 2002). The "hard" problem of mind/consciousness plagues the rigid cognitive (neuro-)sciences insomuch as there exists a subjective element of consciousness which goes beyond any materialist explanation of functions (Chalmers, 1997). Although this has been seriously debated elsewhere (e.g. Dennett, 2001), it would seem that any attempt to exclude subjectivity simply leads to its surreptitious re-entrance through the back door (Fuchs, 2002). For example, those accounts which approach mind and brain in purely reductionistic/eliminativist terms (i.e. where subjective experience is a mere byproduct of brain mechanisms in action) in a sense personalise cerebral components by granting them qualities normally indicative of complete human abilities, i.e. "learning", "perceiving" etc. Stapp (2008) comprehensively decomposes the problem, showing that the distinction between intrinsic and extrinsic description renders the computer model insufficient. Specifically, should the brain be a system of parallel computer processors (or the aggregate total of a number of uniquely functional biological components), an intrinsic description details each processor as generating a specific unit of information (like a television set generating pixels). The extrinsic description, conversely, is likened to that which an external viewer of the television set sees – not individual pixels but a sum-total picture. The computer model cannot account for both intrinsic and extrinsic levels of description without implicit appeal to meta-representational "ghosts in the machine", or homunculi, and therefore implicitly re-introduces a Cartesian split. One of the strongest re-interpretations of this latter problem, in my opinion, has been provided by Dennett (1978), who espouses an "army of homunculi" approach, in which a hierarchy of homunculi of decreasing complexity/intelligence exists, with the most basic processes at the bottom. This is akin to a connectionist or 'Parallel Distributed Processing' approach and, suffice it to say, has been criticised itself (Daugman, 2001).

Further to these charges of reductionism and/or dualism, with cognitive psychology's convergence with neuroscience to become cognitive neuroscience, a secondary source of contention arises: specifically, the tendency within cognitive neuroscience to map cognitive processes straight onto brain states or, even more problematically, to localise them to specific brain regions (Hardcastle & Stewart, 2000). The latter authors attribute this "taking [of] phrenology inside the head" (p*) primarily to three movements within psychology: 1) the impetus that evolutionary psychology and its concept of modular functioning provided for the neurosciences to find modules of specific function in the brain; 2) renewed interest in mechanistic explanations of mind and brain; and 3) the emergence of cognitive neuroscience and the implementation of neuroimaging techniques. The biggest problem with the localisation of processes, say the authors, is brain plasticity and the concomitant multi-functionality it affords neural components. Indeed, searching for the function of any one cerebral area seems pointless considering that the same area might perform a number of functions depending on what else is taking place in the brain.

On a related point, the computational approach has been criticised insomuch as it can operate only in terms of static, discrete entities (i.e. symbols), which can only reliably be applied to static, discrete realities and, by extension, static, discrete cerebral 'hardware' (Torrance, 1995). While it deals well with systematicity and productivity, its 'crude' input-process-output formulation has difficulty accounting for the unendingly interactive and reciprocal nature of the relationship between cognition, the body and the environment. Further, as referenced above, the brain is anything but a static organ; rather, it is dynamic, with multitudes of overlapping neural networks and high levels of plasticity.

Despite the criticisms that have been levelled at these approaches to cognitive processes, results from neuroimaging technologies have the propensity to be used as support for reductionistic-computational accounts of free will, autonomy, the self and everything such concepts entail (i.e. consciousness, lived experience etc.). Using neuroimaging technologies as evidence for localisation or reductionism, however, is not as valid as might first be thought.

Indeed, results from neuroimaging are often treated as if they were "photographs of the brain" – objective sources of evidence regarding how the brain operates during performance of cognitive tasks (Roskies, 2007). McCabe and Castel (2008) have demonstrated that brain images have a significantly powerful effect on judgements of the validity of scientific evidence. Specifically, they found that the presence of a brain image (versus, say, a bar chart) significantly increased participants' willingness to believe the conclusions of an article. In this sense we can get a hint of how such a technique might be employed, perhaps unintentionally, to lend apparently objective validation to a particular paradigm, often undermining the totality of lived experience. To explore this point further, I will now briefly outline some criticisms of the notion that neuroimaging can provide objective evidence.

Considering neuroimages as "photographs of the brain" is a rather inapt analogy, as Roskies (2007) discusses. Specifically, treating neuroimages as an 'evidential medium' in the way we treat photographs is misleading in that they only indirectly measure brain activity: they measure "the timescale of the de-phasing of water molecules in the brain, [and not neural activity]". The final images implemented in studies and articles, which display parts of the brain "lighting up" or activating, are in fact computer reconstructions of the actual data generated by the fMRI scan and of the associated null hypothesis significance tests (NHSTs) performed on each region of the brain (voxels), which facilitate the generation of "statistical parametric maps" (SPMs) superimposed on the acquired images of the brain (Klein, 2009). Because the technique is such an indirect measure, epistemic challenges can be posed (Bechtel & Stufflebeam, 2001).
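
The logic of that reconstruction can be made concrete with a short sketch: one significance test per voxel, with the thresholded results forming the 'map' that gets superimposed on an anatomical image. The data, sample sizes and effect here are invented for illustration; real pipelines involve many additional preprocessing and modelling steps.

```python
# Minimal sketch of the voxel-wise NHST procedure behind a
# "statistical parametric map": illustrative data, not a real pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 10_000
task = rng.normal(size=(n_voxels, 20))  # signal during task condition
rest = rng.normal(size=(n_voxels, 20))  # signal during rest condition
task[:50] += 1.0                        # a few genuinely responsive voxels

t, p = stats.ttest_ind(task, rest, axis=1)  # one NHST per voxel
spm = p < 0.05                              # thresholded map of 'activation'
print(f"{spm.sum()} of {n_voxels} voxels 'activate'")
# Only this boolean map, not the underlying signal, is what ends up
# coloured in on the familiar brain image.
```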

At the practical level, as Bechtel & Stufflebeam (2001) outline, some of the epistemic challenges (i.e. are results artefacts of the technique itself, or genuine pieces of freestanding, reliable information?) can be summarised as a lack of ecological validity and an assumption of faith in the cognitive task decomposition. The former concerns the novelty of the scanning situation: lying horizontally and remaining motionless in a darkened, confined space. This implies a difficulty in reliably judging the relevance of neuroimaging data to real-life cognition, where it is highly improbable that such restrictions would exist (Hardcastle & Stewart, 2003).

The latter of Bechtel & Stufflebeam's challenges concerns the faith the experimenter places in the cognitive task implemented in the experimental setting. Specifically, it must be ensured that the task actually draws upon the processes one wishes to investigate. Indeed, the modelling of many such processes and tasks comes from cognitive psychology and its predominant computational model, which is itself widely debated and contested (as previously outlined), especially by those models of cognition which draw on dynamic systems theory and non-linear processing paradigms. As such, the authors contend, any imaging study is only as good as the assumptions it is based upon.

While questions have also been raised regarding the poor reliability of neuroimaging across studies (e.g. Poeppel, 1996), more problematic are those theoretical and conceptual issues which lie at the core of the technology itself and not merely in its practical application. Klein's (2009) exegesis of how fMRI technology operates is a clear expression of how problematic using neuroimages as evidential information inherently is. Specifically, fMRIs do not display readily interpretable pictures of signal differences in the brain; rather, they display an amalgamation of the regions for which a statistically significant difference in signal between task conditions was found. In particular, inasmuch as neuroimaging data are the result of thousands of simultaneous null hypothesis tests, they inherit the many conceptual issues inherent in the NHST methodology, as the sketch below illustrates.
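
One such inherited issue is the sheer scale of multiple comparisons. As a hedged illustration: run the same per-voxel test on pure noise, with no task effect anywhere, and a conventional threshold still flags hundreds of voxels. The numbers are invented, but the arithmetic (alpha multiplied by the number of tests) is the point.

```python
# Multiple-comparisons illustration: per-voxel t-tests on pure noise.
# With 10,000 tests at alpha = 0.05, ~500 false positives are expected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 10_000
cond_a = rng.normal(size=(n_voxels, 20))  # no real effect anywhere
cond_b = rng.normal(size=(n_voxels, 20))

_, p = stats.ttest_ind(cond_a, cond_b, axis=1)
print((p < 0.05).sum(), "voxels 'activate' from noise alone")
```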

Indeed, one of the fundamental issues with neuroimaging results stems from the 'causal density' of the brain (Klein, 2009). Any and every task will have widespread effects on the brain, but fMRI will not reveal these since, for the most part, they will be small and insignificant (in the statistical sense). Nonetheless, the fact that they occur is an important point: they might be necessary for the instantiation of the cognitive process. This is somewhat similar to the problems of arbitrary thresholds and vague alternatives that Klein (2009) further describes. With regard to the former, the "activation" of cerebral locations that we see in neuroimages is based on the choice of an alpha level of significance, but this choice is rather arbitrary: should we be conservative (e.g. alpha = 0.01), very few regions will appear to "activate", while a more liberal choice (e.g. alpha = 0.1) will result in much greater apparent activation across the brain. A much-cited study by Haxby et al. (2001), for example, has shown that the class of object being perceived (e.g. a house or a face) can be reliably predicted from patterns of activation occurring below the threshold for significance, even when the regions displaying significant activation (e.g. the Fusiform Face Area in face perception) are excluded. As such, many regions not revealed by neuroimages may be playing an important functional role in the processes under investigation.
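
Klein's arbitrary-threshold point can be shown on the same kind of toy data: a single dataset with a weak, distributed effect yields strikingly different "activation" maps depending solely on the alpha chosen. Again, all numbers here are illustrative assumptions.

```python
# Threshold-sensitivity illustration: one dataset, three alpha levels,
# three very different "activation" counts. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_voxels = 10_000
task = rng.normal(size=(n_voxels, 20))
rest = rng.normal(size=(n_voxels, 20))
task[:500] += 0.5  # weak but real effect spread over many voxels

_, p = stats.ttest_ind(task, rest, axis=1)
for alpha in (0.01, 0.05, 0.1):
    print(f"alpha={alpha}: {(p < alpha).sum()} voxels 'activate'")
```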

Further to this point, high activation in an area does not equate to an important functional role (Poldrack, 2008); it might simply be a by-product of the cognitive process and not indicative of a necessary and sufficient condition for that specific process. For example, studies have shown that while significant hippocampal activation occurs during delay classical conditioning procedures, hippocampal lesions do not disrupt this function (Gabrieli et al., 1995). Furthermore, and most basically, imaging studies do not establish causal connections; rather, they may only reveal probabilistic covariance (Poldrack, 2006) – an echo of the well-worn adage that correlation does not mean causation.

As such, while cognitive neuroscience has great potential to reconcile the many factions of a disjointed psychology, linking biology to theory, it can conversely be used, inappropriately, to propound certain paradigms over others. The techniques of neuroimaging are not infallible sources of objective, final evidence; rather, they involve a degree of interpretation and have numerous practical and conceptual limitations.

Therefore, in conclusion, where the computational model drives cognitive neuroscientific research, we must be careful in our use of the metaphor of mind-as-computer lest it be reified and constrain progress (a sort of "prison house of perspective"). Further, while reductionism, i.e. a bottom-up approach, is a necessity for progress (Brendel, 2003), we must remain wary of eliminativism: just because data from cognitive neuroscience (i.e. neuroimages) cannot account for the totality of lived experience (i.e. intentionality, subjectivity) does not necessarily imply that these are mere, insubstantial by-products of neural activity. Most important is that caution is exercised in the interpretation and assimilation of results, particularly when it comes to applying them practically (i.e. in the clinical treatment of psychopathology). In this manner, as Rand & Ilardi (2005) attest, while cognitive neuroscience can reconcile the science of psychology with lower-order natural science domains, its other applications must be approached prudently.
