
AN ENACTIVE APPROACH

TO ANIMAL CONSCIOUSNESS

M.A. Thesis Philosophy Bart Borghols, June 25, 2018

Auguste Vimar (1851-1916) - Le Boy de Marius Bouillabès

Supervisor: dr. Christian Skirke


Contents

Introduction

1 Animal minds in context
Historical overview
Consciousness and behaviour
The neural correlate argument

2 Representationalism
Tye’s PANIC-theory
Carruthers’ higher-order thought theory
Against a representationalist take on animal consciousness
The upshot

3 The sensorimotor approach
The sensorimotor account of vision
Extending the account
Noë’s conceptualism
Prinz’ criticism
The missing link

4 An enactive perspective on animal consciousness
Consciousness and cognition: O’Regan vs. Noë
Thompson’s enactivist picture
Consciousness in the animal kingdom

Epilogue


Introduction

For ages philosophers, biologists and laymen have thought about the similarities and differences between humans and other animals. Whereas for a long time it was common to think of non-human animals as far simpler creatures – René Descartes, for instance, regarded them as mere mindless machines – nowadays it is more common to ascribe a mental life to them as well. We talk about many creatures as if they have thoughts, fears and desires; we even talk about them as if they can feel, as if they experience the world much as we do. Our experience, however, is laden with sophisticated elements such as language, concepts and self-awareness, for which most other animals seem to lack the cognitive capacities. Moreover, it is known that many behaviours can be accounted for by principles of associative learning, and that the behaviours we take to be motivated by thoughts, fears and desires can easily be mimicked by a computer. And thus the question arises whether our ascription of experience to animals is mere anthropomorphism, or whether we humans are not alone as experiencing subjects. Another problem is that while most of us are willing to ascribe experience to animals that are, evolutionarily speaking, ‘close’ to us, we become more doubtful as the creature in question becomes more ‘simple’. Hence many will assert that we should draw a line between the conscious and the non-conscious at some point – but where, and why?

Besides being of pure philosophical interest, an answer to this question has direct practical significance as well. Whether animals can, for instance, feel pain has implications for how we ought to treat them. A lot has been written about the latter topic, but it is all based on assumptions about animals’ mental lives which cannot (yet) be properly substantiated. The matter thus calls for a proper investigation. What do popular theories of consciousness say about the possibility of conscious, phenomenal1 experience in non-human animals? How broadly is experience distributed? And if an animal is conscious according to a particular theory, then what is its conscious experience like? These are the main questions that motivate this thesis.

Besides establishing whether some creature is conscious on the basis of different theories of consciousness, we can also argue in the opposite direction. Perhaps some theory makes predictions that are way out of line with our intuitive ascriptions of consciousness to other animals, or with scientific findings, such as similar neural structures which are known to correlate with consciousness. If the predictions of a particular theory do not align with empirical findings or basic intuition, this is an argument against that theory of consciousness. A critical evaluation of animal consciousness must therefore work in both directions: theories of consciousness should motivate us to rethink our ascriptions of mental lives to non-human animals, and these ascriptions should make us doubt some of the ideas those theories are based on.

1I do not want to digress on subtle distinctions between phenomenal and access consciousness. When I say that


Following this two-way approach, I will compare the application of two ‘kinds’ of theories of consciousness to animals, and argue that one of them, called sensorimotor enactivism, provides a more plausible account of animal consciousness than the most popular strand: representationalist theories (which, as we shall see, come in many different flavours). I will give many different reasons for this, but the main argument that the reader can keep in the back of their head is that the enactivist theory is better suited to deal with the diversity of life and the properties we share with other animals than representationalist theories, which seem to be based too much on what experience is for us humans, rather than on what it can be for other animals. I will argue that skeptics about animal minds often look in both the wrong place and the wrong way: the focus is on cognitive abilities that are like those of humans, illustrated by behaviour similar to human behaviour. But it is only when we appreciate that different animals have different bodies, different interests, and (consequently) different worlds that we can (start to) understand their subjectivity.

The structure of the argument, and consequently of the thesis, is as follows. In chapter one I will sketch the philosophical and scientific context that envelops questions about the animal mind, and critically discuss the ‘quick’ argument that seeks to answer such questions solely in terms of neurobiological evidence. I will argue specifically that arguments pro and contra animal consciousness only make sense in relation to the theory of consciousness that is adopted in the background. In chapter two I will discuss representationalist theories and argue that they are unable to say anything convincing about animal consciousness, because they neglect the importance of how different creatures relate to their environments in different ways. In chapter three I introduce ‘sensorimotor theory,’ which adopts a radically different approach to consciousness, treating perception as an activity rather than a mere registration of external stimuli. This paves the way for the exposition in the last chapter, which presents an alternative account of animal consciousness that aspires to explain how consciousness comes in degrees, and becomes more sophisticated as animals and their worlds become more complex.


Chapter 1

Animal minds in context

The starting point of this enquiry into the animal mind is to situate the debate within its philosophical and scientific context. My aim is to show how the advance of science has shaped the debate around animal consciousness, and to discuss the connection between the observation of intelligent behaviour and consciousness. I will also discuss arguments concerning consciousness that rely on neurobiological evidence, and argue that these are of little significance without a broader theory of consciousness. In this chapter I draw heavily on The Animal Mind (2014) by Kristin Andrews, an excellent introduction to the philosophy of animal cognition.

Historical overview

For as long as there has been philosophy, the relation between humans and non-human animals has been a point of debate. For centuries the commonly held idea (in Europe1) was that humans are fundamentally different from other animals, since only they engage in rational thought. For instance, Aristotle “asserted that only humans had rational souls, while the locomotive souls shared by all animals, human and nonhuman, endowed animals with instincts suited to their successful reproduction and survival” (Allen and Trestman, 2017). According to Thomas Aquinas, (except for the heavenly creatures) “humans alone are rational thinking beings who are able to make decisions and choose their own actions”, and Immanuel Kant thought the same (Andrews, 2014, p. 7). But best known is probably the position of René Descartes, who saw animals as mere automata,2 or reflex-driven machines, and argued for this position by claiming that thought presupposes the ability to use language.3

Prominent philosophers who argued against Descartes were Voltaire and David Hume. Voltaire responded to Descartes’ argument by pointing out that we observe plenty of animal behaviour that indicates that animals possess thought (Andrews, 2014, p. 8). And Hume simply writes (as quoted in Andrews (2014)):

1Eastern traditions such as Buddhism have always stressed the “continuity of consciousness across life” (Tononi and Koch, 2015).

2Note, however, that he did not see perception and/or sensation as involving thought – and thus saw these as explainable in mechanistic terms (Allen and Trestman, 2017).

3Many contemporary philosophers have responded to this argument, for instance by claiming that “dispositional thinking is not dependent upon occurrent thought,” that “the best explanation for the absence of speech in animals is not the absence of occurrent thought but the absence of the capacity for recursion,” or “that there are behaviors other than declarative speech, such as insight learning, that can reasonably be taken as evidence of occurrent thought” (Allen and Trestman, 2017).


Next to the ridicule of denying an evident truth, is that of taking much pains to defend it; and no truth appears to me more evident than that beasts are endowed with thought and reason as well as man. The arguments are in this case so obvious, that they never escape the most stupid and ignorant. (Hume, 2000, p. 118)

Before Descartes, a notable exception to the common idea that humans as rational beings were superior to all other creatures was Michel de Montaigne’s Apology (1569). An interesting aspect of his particular line of thought is that it is based on elaborate accounts of animal skill, such as the behaviour of migratory birds and fishes (Kenny, 2010, p. 513).

Perhaps it is no coincidence that philosophers in general took a different position as Europe entered the Enlightenment. The view that only humans have minds is what a theocentric world-view demanded, whereas a more modern point of view is that both humans and non-human animals belong to the same ‘nature’. This idea was only further reinforced when Charles Darwin introduced the theory of evolution in On the Origin of Species (1859), which changed the way in which we see the human-animal relationship, and which makes it, in some sense, much more plausible that ‘having a mind’ is something that evolved prior to humankind. Darwin thought so as well and put forward what is dubbed ‘Darwin’s continuity thesis,’ which says that on the basis of their many similarities, man and the higher mammals probably share the same mental faculties (Andrews, 2014, p. 25).4 At the same time, certain hierarchical interpretations of Darwin’s theory may explain why many still regard humans as fundamentally different from other animals, despite the advances of contemporary biology.5

The science of animal minds: interpreting behaviour

Although biology as a science dates back to Aristotle, animal research only matured – in methodology and in the extent to which it was practiced – in the nineteenth century. One of the first ways to study animal behaviour, called ‘anecdotal anthropomorphism’, was developed by George Romanes. It basically consisted of explaining anecdotes of animal behaviour using analogies with human behaviour (Andrews, 2014, p. 26).6

4This can be identified as an argument ‘from evolutionary parsimony,’ an argument also propagated by contemporary scholars such as primatologist Frans de Waal: considering the phylogenetic proximity of these animals, it is unlikely that an entirely different mechanism is responsible for similar behaviour; see (Andrews, 2014, p. 10) and (de Waal, 1999).

5It is easy to contend that the principle of ‘survival of the fittest’ indicates that there is a single fittest species, which turns out to be homo sapiens. Darwin himself also seems to endorse a certain kind of hierarchy. In the Descent of Man (1871), he writes:

There can be no doubt that the difference between the mind of the lowest man and that of the highest animal is immense. An anthropomorphous ape, if he could take a dispassionate view of his own case, would admit that though he could form an artful plan to plunder a garden – though he could use stones for fighting or for breaking open nuts, yet that the thought of fashioning a stone into a tool was quite beyond his scope. Still less, as he would admit, could he follow out a train of metaphysical reasoning, or solve a mathematical problem, or reflect on God, or admire a grand natural scene. (Darwin, 1871, p. 104).

However, it is worth noting that the general idea of a single fittest species is unfounded (consider for instance that there are also many animal skills that humans lack), and that the biological evidence and theory on which Darwin makes hierarchical distinctions between humans and other animals – and also between different human races – is outdated.

6Note that this somewhat resembles the practice of telling fables (such as The Tortoise and the Hare), though the mental states attributed here were more modest than the character traits attributed to animals in fables, and of course the anecdotes were based on real events rather than fictive stories.


To overcome the obvious shortcomings of this approach,7 the methodology changed in two principal ways. To make the research more scientific (falsifiable, repeatable), more research was done in controlled research environments. And to avoid the problems of mental state attribution, C. Lloyd Morgan formulated ‘Morgan’s canon’ (as quoted in Andrews (2014)): “in no case is an animal activity to be interpreted in terms of higher psychological processes, if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development” (Morgan, 1930, p. 249). Whereas Morgan only thought that we should minimize mental state attribution, it became standard practice in the first half of the twentieth century to make mental concepts completely redundant – a position known as behaviourism.

Initially behaviourism was quite a successful approach, managing to explain many types of behaviour solely by principles of associative learning,8 such as classical (Pavlov) and operant (Skinner) conditioning. But eventually behaviourism went out of fashion, mainly because the relation between associative learning and higher-order9 cognition is far from understood, and because the claim that all behaviour can be explained by associative learning mechanisms turned out to be unfounded (Andrews, 2014, p. 38).

Two opposed tendencies

Interestingly, the advance of science thus brought forth two opposed tendencies. On the one hand, the theory of evolution and the loosened grip of religion on society and knowledge made it logical to reconsider the relationship between humans and non-human animals. This means that every aspect of being human, including having conscious experience, must ultimately be explained in terms of humans’ animality. On the other hand, advances in biology and psychology made it possible to talk about behaviour in terms of conditioning, which often made it unnecessary to consider conscious experience when explaining behaviour.

These tendencies in fact still dominate the contemporary debate. The ‘charitable’ side argues that the many similarities between humans and other (higher) animals, and the complexity of these animals’ behaviour, show beyond doubt that they have conscious experience. The ‘skeptic’ side rejects such an argument from analogy by being skeptical about what these similarities really show: there seem to be ways in which complex behaviour could arise other than as a result of conscious thought. Moreover, the skeptics commonly argue that most animals lack one or more of the necessary conditions for conscious experience to occur.

To get a firmer grip on the issue, and to determine how skeptical the skeptical position really is, we need to examine what kind of behaviour is taken to be an indication of consciousness, what role consciousness plays in the explanation of behaviour, and which arguments are commonly invoked to argue against the attribution of consciousness to most species other than humans.


7The principal methodological problem with anecdotal anthropomorphism is that in any case, we cannot properly determine whether there is another explanation, for the anecdotes miss a lot of information. More precisely, “[they] don’t allow for statistical analysis about the frequency of the behavior, and hence make it much more difficult to eliminate alternative explanations for the behavior,” and “lack information about the contexts in which the animal doesn’t act similarly” (Andrews, 2014, p. 28). The principal philosophical problem is whether we are justified in introspecting our own mental states and attributing them to an animal to explain its behaviour.

8“Learning resulting from the procedures involving contingencies among events” (Andrews, 2014, p. 35).

9“In contemporary practice ‘lower’ usually means associative learning, that is, classical and instrumental conditioning or untrained species-specific responses. ‘Higher’ is reasoning, planning, insight, in short any cognitive process other than associative learning” (Shettleworth, 2010, pp. 17-18).


Consciousness and behaviour

Before discussing the relationship between consciousness and behaviour directly, I want to give a short overview of what is often taken to be behavioural evidence for consciousness. I will focus on perceptual experience, but the interested reader should note that concerning (conscious) pain experience, a number of comparative studies spanning all classes of animals have been performed.10

The species we should keep an eye on in testing our theories of consciousness are especially those that show remarkably intelligent behaviour. It is widely agreed that the cognitive capacities of many bird species – whether memory, tool use, vocal learning, etc. – rival those of most mammals (Edelman and Seth, 2009). In fact, the animal regarded by many as coming closest to humans qua cognitive abilities is the African grey parrot. One famous African grey parrot, Alex,11 was able to communicate with people and discriminate colours, shapes and numbers at the level of three-year-old children.12

Quite serious cognitive abilities have also been observed in some fish, bees and cephalopods. Some fish can learn indirect routes to food, discriminate features of their environment and use them for navigation, find their way through mazes, and recognize certain patterns (Tye, 1997, pp. 304-305). Several fish species also show one-trial learning, may memorize things for over a year, exhibit social learning, and are capable of tool use (Brown, 2015). Honey bees show “an impressive ability to learn (and re-learn) colors, odors, shapes, and routes quickly and accurately” (Srinivasan, 2010). They are able to navigate through labyrinths, perform well on delayed match-to-sample tasks, are capable of learning the distance from their nest to a food source and can report this distance to fellow bees via dancing, count to four, and recognize abstract signs (Srinivasan, 2010). Finally, octopuses can discriminate objects based on size, shape and intensity, find their way through mazes, retrieve objects from transparent, plugged bottles, have a good memory and even appear to be capable of observational learning (Edelman and Seth, 2009).

And the list goes on; remarkable abilities are also observed in for instance hermit crabs and spiders (Allen and Trestman, 2017). The point is that there are a lot of creatures – not of one particular class but throughout the animal kingdom – that show complex behaviour. The hard task is now to distinguish between the behaviours that have resulted from unconscious conditioning and those that involve conscious thought.

Evidence of self-consciousness

Perhaps the best-known experiment regarding self-consciousness is mirror self-recognition. Various animals (great apes,13 elephants, dolphins (Andrews, 2014, p. 71)) seem to be able to recognize themselves in a mirror, which would suggest that they have some concept of self: in order to recognize the mirror image as being an image of yourself, you need a concept of who or what ‘yourself’ is to begin with. There are, however, problems with inferring self-consciousness

10See for instance Varner (2012), Sneddon et al. (2014) and Elwood (2011). On the basis of neuroanatomical, physiological and behavioural evidence the aforementioned studies agree that it is very plausible that at least the bony vertebrates and cephalopods can experience pain.

11See Alex in action: https://youtu.be/p0E1Wny5kCk.

12There is even evidence that these parrots observe the Müller-Lyer illusion (Pepperberg et al., 2008), which perhaps makes subjective experience even more plausible.


from mirror self-recognition. First, it is difficult for many species to establish whether they in fact pass the test (Andrews, 2014, p. 71). Besides this practical issue, it may simply be the case that self-consciousness is not necessary.14 A different problem with mirror self-recognition is that it is only an appropriate test for species with sophisticated visual abilities. For instance, “fish seldom (if ever) see their reflection so they are unlikely to have evolved visual self-recognition. (...) [But] there is compelling evidence that fish are capable of self-recognition using chemical cues” (Brown, 2015, p. 14), as are dogs (Horowitz, 2017).

Another possible indication of self-consciousness comes from mental monitoring tasks, which try to establish to what extent an animal is aware of its own mental states. An example of such a mental monitoring task is given by research that psychologist Robert Hampton performed with macaque monkeys. He found that these monkeys seem to be able to assess how well they remember a certain sample (Andrews, 2014, p. 73).15 An analogous experiment with dolphins had similar results (Gennaro, 2009).

If experiments like these indeed show that animals are involved in mental monitoring, they provide direct evidence for the ability of metacognition. But there is heavy debate about the interpretation of such experiments, as there are alternative explanations for the observed behaviours in lower-order cognitive terms (Andrews, 2014, p. 75). However, the fact that many behaviours can be explained in terms of lower-order cognitive mechanisms does not imply that the behaviour is in fact caused by these simpler mechanisms.

A third way to approach self-awareness is via episodic memory tasks. Episodic memory is roughly the capacity to remember past experiences or imagine future experiences. It is difficult to assess whether any animal has this capacity, but since episodic memory presupposes a concept of self, finding evidence of episodic memory would be strong evidence in favour of self-consciousness. By observing how they store and retrieve food, it has been shown that scrub jays (a type of bird) are able to recall ‘what,’ ‘where’ and ‘when’ information (Andrews, 2014, p. 76); similar results have been found for several species of primates, birds, and mammals such as dolphins, mice and rats (Gennaro, 2009; Dere et al., 2006). It has also been suggested that the fact that apes carry tools to places where they are later needed indicates episodic foresight (Andrews, 2014, p. 77). However, the same remark as for mental monitoring tasks holds: these kinds of evidence look promising, but may be explained without episodic memory or foresight.

What kind of behaviour is necessarily conscious?

It is difficult to distinguish conscious behaviour – that is, behaviour that involves conscious experience or thought – from hard-wired behaviour, but several authors (see e.g. Allen (2013,

14Alternative explanations are, for instance, that “having the belief that two objects match doesn’t mean that one is self-conscious, even when one of those objects is oneself,” that “passing the mirror self-recognition task only indicates that one has the ability to recognize one’s own body, not one’s own self,” or that “the test involves the ability to generate and compare two different representations of the same thing” (Andrews, 2014, pp. 71-72).

15More elaborately: “knowing that monkeys can perform a simple delayed match to sample task, Hampton added one feature: he allowed monkeys to decide whether to take the test, or to choose not to take the test. If they took the test and passed, they received a valuable treat, but if they failed the task they received nothing. However, if they decided not to take the test they were given a lesser value food reward. (...) Hampton found that the frequency with which the monkey chose not to take the test increased with the duration of the delay since the original sample was presented.” (Andrews, 2014, p. 73).


pp. 35-36) and Varner (2012, p. 124)) have listed a number of learning or behavioural strategies that indicate conscious experience. It has been argued that ‘trace conditioning’ and full operant conditioning involve conscious thought whereas ‘delay conditioning’ does not,16 although evidence of operant conditioning in rats mediated by the spinal cord alone raises some doubts about it necessarily being conscious.17 The reason that these types of conditioning are linked to consciousness is that in humans, trace and operant conditioning only occur if the experimental subject reports being conscious of the association it has learned, whereas delay conditioning does not depend on whether the subject is conscious of the association. There is evidence of trace conditioning in for example Atlantic cod and rainbow trout (Allen, 2013, p. 35).

Varner gives three examples of the ‘full-fledged’ operant conditioning that excludes cases like the instrumental learning in rats’ spinal cords: multiple reversal trials,18 probability learning19 and the formation of learning sets.20 What is interesting about these experiments is that while mammals, birds, amphibians, reptiles and octopuses perform well on one or more of these tasks (Varner, 2012, p. 129), fish do not – although Allen notes that this might also be because operant conditioning in fish has not really been investigated (Allen, 2013, p. 36).

It must be noted that although good performance on these learning tasks is a strong indication of consciousness, poor performance does not imply that the animal in question has no conscious experience. Not only are cognition and conscious thought different from conscious experience; it may also be the case that these learning tasks are more natural to the animals that perform them well than to the animals that perform poorly.

The neural correlate argument

We have seen that a wide variety of species often exhibits complex and intelligent behaviour. While this behaviour does not directly show that a species is conscious, it does lay the burden of proof with the skeptics: they need to argue convincingly how plausible it is that such behaviour can arise without consciousness, given that the human analogues of such behaviour are accompanied by conscious thought and experience. An extra complicating factor is consciousness in human infants and people with certain mental disabilities. Their behaviour is less sophisticated than that of some other animals, and if one wants to argue in favour of, say, infant consciousness but against consciousness in non-human animals, one must give a compelling argument that is not based on cognitive abilities.

16In ‘delay conditioning’ the conditioned stimulus overlaps temporally with the other stimulus that already produces the response. In ‘trace conditioning’ the two stimuli are separated in time, making this kind of conditioning dependent on memory. In ‘operant conditioning,’ what is conditioned is not a hard-wired response but a goal-directed response (Allen, 2013, p. 35) – see Varner (2012, p. 128) for a more elaborate definition.

17Both Allen and Varner note, however, that the instrumental learning exhibited by the rat’s spinal cord needs to be distinguished from full-fledged operant conditioning.

18Here the experimental subject must respond in one out of two ways. After some time, the reward pattern changes, meaning that the subject must from then on respond in the other way. The pattern is subsequently reversed multiple times, and performance is measured basically by whether and how quickly the subject learns how the game works.

19In a probability learning task, the subject gets rewarded a certain percentage of the time for choosing option A and the remaining time for choosing option B. Performance is measured by whether the animal doing the task chooses the same kind of strategy as a human would.

20Here the subject needs to learn that it has to consistently choose one out of two objects, and that when the two objects are changed, the rule that one out of the two objects is the correct one remains unaltered.


One such argument is what I call the neural correlate argument. It goes like this: animal X cannot have conscious experience because it lacks brain structure Y. It is now easy to defend infant consciousness: infants have the same brain structures (albeit perhaps not yet fully developed) as adult humans, and therefore they can have conscious experience despite their lack of cognitive abilities. I want to examine this argument in a little more detail and then argue that it can never be an argument against animal consciousness in isolation – that is, it can only be part of a larger argument that is grounded in a specific theory of consciousness.

The search for ‘neural correlates of consciousness,’ that is, the “minimal neuronal mechanisms jointly sufficient for any one specific conscious percept” (Tononi and Koch, 2008, p. 239), is a research paradigm initiated by Crick and Koch (1990). Results are obtained by clever experiments such as those involving the phenomenon of ‘binocular rivalry,’ in which one image is shown to the left eye and another image to the right, and where despite the constant stimulus, the observer consciously sees either only the right or only the left image (in alternating fashion) (Tononi and Koch, 2008). By letting the experimental subjects report when they see which image consciously, the experimenters can observe how this correlates with brain activity.

The difficulty with this approach is to dissociate the actual correlates of consciousness from the correlates of cognitive processes that accompany the percept, such as attention, the brain activity needed to report the observation, and the processing of the images’ contents. But because these parameters can be varied or studied independently, the problem is not (completely) insurmountable. The latest update in the neural correlate search is that the neuroanatomical basis of (visual) consciousness is “primarily localized to a more restricted temporo-parietal-occipital hot zone with additional contributions from some anterior regions” (Koch et al., 2016, p. 315).

Since most animals lack a (well-developed) neo-cortex, it is now perhaps tempting to conclude that these animals cannot have conscious experience. Similarly, some scholars argue that fish cannot feel pain, or suffer, because they lack the limbic system that is responsible for affective responses in humans. But it also appears that for example “goldfish have areas of the brain functionally equivalent to the hippocampus and amygdala [and] that goldfish with lesions in the amygdala-like area cannot learn to avoid an electric shock, while typical goldfish can” (Andrews, 2014, p. 64). This raises doubt about whether the lack of a specific brain region is really a good argument against the animal in question having conscious experience.

Not everyone agrees about the (neo-)cortex fulfilling a principal role. Merker (2007) for instance argues on the basis of its “functional role revolving around integration for action” that “key mechanisms of consciousness are implemented in the midbrain and basal diencephalon, while the telencephalon [or cerebrum; including the cerebral cortex] serves as a medium for the increasingly sophisticated elaboration of conscious contents” (p. 64) – and hence that “the primary function of consciousness [vastly] antedates the invention of neocortex by mammals, and may in fact have an implementation in the upper brainstem without it” (p. 80).

With this disagreement in mind, we must conclude that it is not (yet) possible to draw definitive conclusions about animals’ minds on the basis of their neuroanatomy – except perhaps for the observation that all mammal brains are very similar to human brains (Koch et al., 2016) and the common-sensical point that probably some degree of brain complexity must be attained. In fact, even if there were univocal agreement about the necessity of a neocortex for consciousness in humans, this would still not show that its functional role is not performed by a different brain structure in other types of animals (that is, it does not imply anything if one holds that mental states are multiply realizable). In other words, if neural correlates are – or can be – found, then this still leaves open what it is about this correlate that constitutes consciousness. And if it is only its functional role, then what counts as ‘similar enough’ if we try to find analogical structures in other creatures?

What we do know is that for instance birds, fish,21 bees and cephalopods have complex brains (Tononi and Koch, 2015). While fish do not have a well-developed forebrain, their midbrain is quite developed (in mammals it is more or less the other way around) (Allen, 2013). Bird brains contain regions both structurally and functionally similar to thalamocortical regions of the mammal brain (Edelman and Seth, 2009). For bees and cephalopods it is even more difficult to make a comparison, for whereas birds, fish and mammals are of common lineage, the brains of these invertebrates have developed independently. But we know some things, such as that in cephalopod brains, EEG patterns have been detected which resemble those of awake vertebrates and are distinct from those observed in other invertebrates (Edelman and Seth, 2009).

Towards a broad theory of consciousness

At this point, it is again a matter of deciding what is plausible. Would we really deny a fish pain experience on the basis of it lacking a limbic system, when experiments show that we can manipulate its behaviour in the presence of nociceptive stimuli by lesioning a specific brain region? We would not. The only good reason for doing so would be a solipsistic kind of skepticism that no one would uphold in practice, or a broader theory of consciousness which already excludes fish consciousness for other reasons. In other words: the lack of a limbic system is in isolation never enough of an argument to deny pain experience in fish. It would only be a good argument if it were complemented by a theory of consciousness that explains why a limbic system specifically is a necessary requirement for pain experience.

This is then what we need to do: find a ‘broad’ theory of consciousness, which not only provides an explanation for the character of human experience, but also indicates which bodily features are a necessary requirement for conscious experience and why. The theory needs to take into account the fact that the bodies and capacities of different species are very diverse, and see these differences with respect to humans not necessarily as deficiencies, but potentially also as a way to understand consciousness from another perspective.

In the next section I will discuss one popular branch of theories, called representationalist theories. I will investigate how animals are incorporated in these theories of consciousness, and argue that the accounts of animal consciousness that result are rather poor. This paves the way for an exposition of the theory that I advocate.

21Planet earth is inhabited by an enormous variety of species, which are all (taken to be) evolutionarily related in some way, and different classes of animals are given different names to distinguish different ‘branches’ of this ‘evolutionary tree’. On this basis, there are some relevant distinctions to keep in mind. First, the distinction between invertebrates and vertebrates. Interesting subphyla in the invertebrate phylum are the arthropods, containing classes of animals such as insects, arachnids (spiders), and crustaceans (lobster, shrimp, crab), and the mollusks, which contain classes such as cephalopods (squid, octopus) and snails. In the vertebrate phylum, a relevant distinction is between the classes having a bony skeleton and (simple) lungs – namely mammals, amphibians, reptiles, birds (actually birds and reptiles belong to the same branch, as they have a common ancestor) and the bony fish – and the remaining classes. Note in particular that vertebrates such as sharks, rays, and sawfish do not belong to the vertebrates with a bony skeleton. This reveals that it will be dangerous to speak of ‘fish’ later on, as for instance tuna are more closely related to humans than to sharks (Allen, 2013).


Chapter 2

Representationalism

As was discussed in the last chapter, it is hard to consider arguments in favour of, or against animal consciousness independently from a background theory of consciousness. In this chapter I will discuss the most popular branch of contemporary theories, representationalist theories, and argue that for several reasons these theories are unable to provide a satisfactory account of animal consciousness.

In broad outline, representationalism is the view that the qualitative aspects of an experience are represented aspects of one’s surroundings (or of one’s own body). Besides some technical arguments,1 the main philosophical motivation for this view comes from introspection: whenever we have a conscious experience, this experience asserts something about either the external world or the state of our own body. A contemporary reason to adopt representationalism is the computationalism that has dominated the cognitive sciences since the 1970s: the idea that the mind is nothing more (or less) than a sophisticated computer manipulating representations. Proponents of representationalism will likely hold that it is the only viable version of materialism. That is, they will hold that given the modular structure of the human brain, and how we broadly conceive of its workings, there is no conceivable alternative explanation for consciousness.

Two important distinctions have to be made to make the representationalist thesis more precise.2 We can distinguish between strong and weak representationalism. The strong version makes a supervenience claim: there can be no qualitative difference between experiences without a representational difference. The weaker version just claims that experiential states have a representational aspect – a claim which, as we shall see in a bit, is much less contested. From now on, when I talk about ‘representationalism’ I mean the strong version. A second distinction is between reductive and non-reductive representationalism. The former sees the representationalist thesis as ‘reducing’ sensory qualities to representational properties, the latter does not.3 I will discuss the reductive version.

1These arguments claim that the ontology of representationalism gives a natural explanation for some of the main features of perception, such as its (apparent) transparency, (non-)veridicality and the fact that we can have non-actual experiences such as hallucinations or optical illusions. See the encyclopedia article by Lycan (2015) for an extensive discussion of these arguments.

2A distinction that falls outside the scope of this thesis is that between narrow and wide representationalism (see Lycan (2015)). In the latter version, the representationalist thesis is taken to imply that the content of experiences is to some extent external (because qualities are represented properties, qualities do not only supervene on brain states) whereas the former version argues against this inference.

3One can for instance argue that in order to represent properties of the external world, sensory qualities are needed.


All (strong) representationalist theories agree that ‘conscious states’ are representational. But representation alone is not enough – there are a lot of things that represent features of the outside world, such as photographs, that do not seem to be the kind of things that are conscious – hence some additional aspect has to be identified. Usually, this additional aspect is some functional role of the representation within the larger cognitive system. In ‘higher-order’ theories the additional conditions involve some capacity for metacognition, while in ‘first-order’ theories they do not.

In this chapter, I discuss representationalism and its relation to animal consciousness from both perspectives. First, I discuss a first-order theory by Michael Tye, and thereafter a higher-order theory by Peter Carruthers. Then some drawbacks of representationalist theories – especially in relation to animal consciousness – are discussed.

Tye’s PANIC-theory

One example of a first-order theory is Michael Tye’s ‘PANIC’-theory. According to Tye, “a mental state is phenomenally conscious just in case it has a PANIC – a Poised, Abstract, Nonconceptual, Intentional Content” (Tye, 1997, p. 292). Let us first see why Tye thinks that specifically these properties are what distinguish conscious mental states. Then we can move on to discuss what his PANIC criterion implies for the distribution of consciousness throughout the animal kingdom.

To understand the criterion that mental states need to be poised, we must first appreciate that according to Tye, we must make the distinction between “basic perceptual experiences or sensations” and “beliefs or other conceptual states” (Tye, 1997, p. 293). These basic perceptual experiences must be seen as an intermediate layer between raw sense-data and conceptual mental states, or in Tye’s words, they “form the outputs of specialized sensory modules, and the inputs to one or another higher-level cognitive system” (Tye, 1997, p. 294). It is in this sense that mental states can be poised: they may (that is they have the disposition to) influence the formation of, or alter, beliefs.

As is evident from his definition of basic perceptual experiences as an intermediate state between sense-data and conceptual states, Tye takes phenomenal states to be (possibly) non-conceptual. He gives some examples that support this view, for instance that our colour sensations “subjectively vary in ways that far outstrip our color concepts” (Tye, 1997, p. 295). He also stresses that although perceptual experiences need not be conceptual, they may be conceptual or be influenced by the concepts one has.

The condition that a phenomenally conscious state needs to have intentional content seems to mean at once that experience is transparent (in the sense discussed in the previous section) and that the experience has representational contents and nothing else (that is, his account is a version of strong representationalism): “introspecting a visual experience is not like viewing a picture. (...) one seems inevitably to end up focusing on external features one’s experience represents the object as having, to the [aspects of the perceived object] as out there in the world” (Tye, 1997, pp. 296-297). And ‘internal’ experiences, such as the experience of pain or proprioception, “represent changes in the body much as visual experiences represent changes in the external environment” (Tye, 1997, p. 298).

The final property of phenomenal states is that they may be abstract, which means that the experience need not contain concrete objects. According to Tye, “what is crucial to phenomenal character is the representation of general features or properties. Visual experiences nonconceptually represent that there is a surface having so-and-so features at such-and-such locations” (Tye, 1997, p. 298). Again, he does not argue that all content is abstract, but rather that some content may be abstract, and that in some particular cases (such as hallucination) experience may be purely abstract.

PANIC and animals

Before we apply the PANIC criterion to animals, Tye notes that although his criterion allows us to determine which creatures have phenomenal consciousness, it does not tell us what it is like to be a certain creature:

What gets outputted depends upon what gets inputted and how the modules operate. Contents that are poised for us may not be for other creatures and vice-versa. This is why we cannot know what it is like to be a bat, for example. Given how we are built, we cannot undergo sensory representations of the sort bats undergo. And this is why experiences and feelings are perspectivally subjective: knowing what it is like to undergo them requires the right experiential perspective. (Tye, 1997, p. 301)

What remains is the application of the theory to different organisms. It is clear that plants do not have phenomenal consciousness, Tye argues, for their states are not poised – in fact, plants don’t even have beliefs or desires (Tye, 1997, p. 302). Similarly, caterpillars “have a very limited range of behaviors available to them, each of which is automatically triggered at the appropriate time by the appropriate stimulus” – for instance, they have two eyes and always move in the direction where the light intensity is strongest; remove one eye and it will just keep on walking in a circle (Tye, 1997, pp. 302-303). Therefore “there seems no more reason intuitively to attribute phenomenal consciousness to a caterpillar on the basis of how it moves than to an automatic door” (Tye, 1997, p. 303).
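Tye’s caterpillar example is, in effect, a description of a fixed sensorimotor rule, and it can be made vivid with a toy simulation. The sketch below is purely illustrative and not drawn from Tye: the point-light world, the `step` and `intensity` functions, and all parameter values are my own assumptions. It hard-wires the rule ‘turn toward the eye that receives more light’: with both eyes the simulated agent homes in on the light source, while with one eye ‘removed’ the same rule makes it walk in a circle, just as Tye describes – and at no point does anything resembling experience enter the loop.

```python
import math

def step(x, y, heading, light=(0.0, 0.0), eyes=("left", "right"),
         speed=0.1, turn=0.3):
    """One update of a purely reactive agent: compare the light reaching
    each eye, turn a fixed amount toward the brighter side, move forward.
    There is no internal state and no learning."""
    def intensity(px, py):
        # Brightness falls off with squared distance from the source.
        d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
        return 1.0 / (1.0 + d2)

    readings = {"left": 0.0, "right": 0.0}  # a removed eye reads nothing
    for eye, offset in (("left", 0.5), ("right", -0.5)):
        if eye in eyes:
            ex = x + 0.2 * math.cos(heading + offset)
            ey = y + 0.2 * math.sin(heading + offset)
            readings[eye] = intensity(ex, ey)

    # The hard-wired rule: always steer toward the stronger reading.
    heading += turn if readings["left"] > readings["right"] else -turn
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

# With both eyes the agent homes in on the light at the origin.
x, y, h = 2.0, 2.0, 0.0
for _ in range(200):
    x, y, h = step(x, y, h)

# With one eye 'removed', the left reading always wins, so the agent
# turns by the same amount every step and walks in a circle.
cx, cy, ch = 2.0, 2.0, 0.0
for _ in range(200):
    cx, cy, ch = step(cx, cy, ch, eyes=("left",))
```

The behaviour is entirely fixed by the comparison near the end of `step`; nothing about the agent invites the attribution of phenomenal consciousness, which is precisely Tye’s point about the caterpillar.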

Things change when we move to creatures that exhibit flexible behaviour. It appears that some fish, for instance, have quite sophisticated cognitive abilities (see chapter 1). They can change their behaviour on the basis of new sensory information, and this points in the direction of mental states with poised content.

But do fish have beliefs? According to Tye, we must first recognize that fish have concepts – although these are probably very different from and much more simple than ours: “possessing a perceptual concept, in my view, is (roughly) a matter of having a stored memory representation that has been acquired through the use of sense-organs and that is available for retrieval, thereby enabling a range of discriminations to take place” (Tye, 1997, p. 305). And then “perceptual beliefs are (roughly) representational states that bring to bear such concepts upon stimuli and that interact in rational ways, however simple” (Tye, 1997, p. 305).

On this interpretation of concepts and beliefs, Tye argues, the observed behaviour is good evidence for the hypothesis that fish have mental states with PANIC. Tye is also aware of the fact that honey bees exhibit remarkable learning capacities and flexible behaviour. And so he concludes that even honey bees are phenomenally conscious.

It thus seems that, on the PANIC account, even some invertebrates can feel things. But according to Tye, we must not confuse phenomenal consciousness with what he calls ‘awareness’:


Honey bees and fish behave intelligently and they are the subject of phenomenally conscious experiences, but they have no higher-order consciousness. In the higher-order sense, they are unconscious automata – they have no cognitive awareness of their sensory states. They do not bring their own experiences under concepts. Unlike you and me, they function perpetually in a state like that of the distracted driver who is lost in thought for several miles as he drives along. (Tye, 1997, p. 310)

That is, these creatures (and the humans in cases like the distracted driver) can ‘consciously’ experience something without being able to establish, as a matter of fact, that they are undergoing the experience. On the basis of this distinction, he argues that fish and honeybees cannot suffer because they lack the capacity to be cognitively aware of the pain they ‘feel’ (Tye, 1997, p. 310). But is it really possible to disentangle phenomenal consciousness and awareness? What is it like to unawarely undergo experiences? Is it not like not experiencing anything?

According to Tye there is no such problem. To him any experience has a phenomenal character, for this character is needed to perceive:

Consciousness of the sort the driver lacks is not phenomenal consciousness. His blindness is cognitive. He is oblivious to the phenomenal character of his visual states. But those states still have such a character. Things do not lose their looks to him while he is distracted. If they did, how could he keep the car on the road? (Tye, 1997, p. 310)

This is also what should have been expected from Tye’s description of the perceptual system. The driver has poised basic perceptual experiences, ergo he has phenomenally conscious experience, despite his lack of awareness. But again, is Tye really justified in separating phenomenal consciousness and awareness? Consider as an example that you are playing a video game, and are fully immersed in what you are doing. Like the distracted driver, you are not bringing your experiences under concepts. But contrary to the distracted driver, your experiences are most vivid, and you are, as it seems, in a state of full awareness: you are fully attuned to any visual or auditory cue that may need to trigger a response.

Perhaps the example of playing a video game is more comparable to animal perception than that of distractedly driving a car is.4 This seems to be precisely the catch of the distracted-driver comparison: because distracted driving ‘works’ to some extent, it seems conceivable that animals are like distracted drivers; but it is obvious that animals do not always fail to attend to what they perceive, and since the driver also consciously perceives his surroundings once his attention is ‘on the road,’ the comparison seems misguided. This raises further questions: is consciousness properly conceived of as the ability to think about one’s own mental states? Does the fact that a fish is not aware of the fact that it has an experience of pain imply that it is not aware of the pain? And to what extent are animals like distracted drivers? Because these are essentially questions about a higher-order theory, let us turn to one such theory now.

4Perhaps the distinction should be made differently: is animal perception like distractedly playing a video game or driving, or is it like doing these things while fully attending to them? But the reason not to phrase it this way is that it is harder to conceive of animal perception as like distractedly playing a video game, because doing so generally does not work so well.


Carruthers’ higher-order thought theory

The best known higher-order account of consciousness is presumably Peter Carruthers’ – especially amongst ethicists, since Carruthers uses his account to argue that (most) non-human animals have no moral status because, he argues, they cannot suffer. Carruthers proposes that “a conscious, as opposed to a nonconscious, mental state is one that is available to5 conscious thought – where a conscious act of thinking is itself an event that is available to be thought about in turn” (Carruthers, 1989, p. 262). For example, my experience of the colour red is conscious because I have the disposition to think, for instance: I am now experiencing red(ness).

According to Carruthers, this explains the examples of unconscious experience:6 the experience contains nothing that we can spontaneously think about. In for instance the case of the unconscious/distracted driver, visual information is being processed, but somehow this information is not made available for conscious thought – as if not stored in the right box.

The remainder of his argument follows more or less immediately. It seems implausible to ascribe the ability for metacognition (holding propositional attitudes towards one’s own mental states) to most non-human animals, hence they are probably unconscious:

If it is implausible to ascribe second-order beliefs to mice or fish, it is even more unlikely that they should be thinking things consciously to themselves – that is, that they should engage in acts of thinking which are themselves made available for the organism to think about. Indeed, it seems highly implausible to ascribe such activities to any but the higher primates; and, even then, many of us would entertain serious doubt. (Carruthers, 1989, p. 265)

Carruthers’ account has its problems, however, both regarding the definition of conscious states and the inference that non-human animals lack consciousness. Regarding his definition I want to address two points. First, his definition seems to end up in some form of infinite regress, and begs the question of what makes a mental state conscious. If a mental state is conscious in virtue of it being available for conscious thought, and this thought is conscious in virtue of being available “to be thought about in turn,” then where does the chain of enabling conditions stop? If the latter thought must be conscious as well, then it is still not fully clear what makes a state conscious, but if the latter thought need not be conscious, then there is a strange kind of asymmetry in the definition: why would a second order thought be conscious in virtue of non-conscious thought, whereas a first order thought is conscious only in virtue of conscious thought? A way out of this would be to say that the higher-order thought need not be conscious, which would ‘lower the bar’ for having conscious experiences.7

Second, one can attack his (lack of a) conception of what consciousness is. It appears that he makes no distinction between being “conscious of (the fact that)” and phenomenal consciousness. That is, to him consciously seeing a red door is the same as having the disposition to be conscious of the fact that you see a red door. On a strong representationalist interpretation this makes sense: the experienced red quality of the door is nothing else than the represented fact that the door is red. But if one does not take the representationalist thesis for granted, the higher-order thought requirement may seem too stringent. For while there can almost be no doubt that in order to consciously experience something, one must have some sort of cognitive access to the experience,8 this does not imply that conscious experience presupposes the ability to reflect upon or report about it.

5Some authors (e.g. Gennaro, 2004) prefer an ‘actualist higher-order thought’-theory over a ‘dispositionalist higher-order thought’-theory such as Carruthers’, where in general in the ‘actualist’ accounts “what makes a mental state conscious is the presence of an actual (i.e. occurrent) higher-order thought directed at the mental state” (Gennaro, 2004, p. 1). For reasons of brevity, I will not discuss this distinction.

6Carruthers’ conception of the unconscious driver is not the same as Tye’s: Tye thinks that the unconscious driver is phenomenally conscious, but that he is not higher-order aware of the fact that he is (phenomenally conscious), whereas Carruthers thinks that the unconscious driver lacks both awareness and phenomenal consciousness.

7It is, however, a question whether one can have higher-order propositional attitudes if these attitudes do not have the disposition to become conscious. If this is a valid point, it does not make sense to distinguish between conscious and non-conscious higher-order thoughts.

Conceptual sophistication

Next to these definitional issues, we must also doubt whether Carruthers is justified in asserting so easily that non-human animals lack the capacity for metacognition. That (some) animals are capable of higher-order thought is argued for instance by Rocco Gennaro (2004; 2009). We could say that two things are required for being able to think that I am in (mental state) M: an I-concept and an M-concept.

A common argument is that many non-human animals lack these concepts because there are experiments that show that these animals cannot read minds; and if they cannot attribute mental states to other animals, why would they be able to attribute mental states to themselves? Gennaro responds to this in two ways. First, he notes that there is evidence indicating that some animals in fact can attribute mental states to fellow creatures (Gennaro, 2009, p. 192).

Second, he argues that the argument is based on the false assumption that having a concept requires being able to apply this concept in any context.9 Gennaro argues that all that is required is a partial understanding of the concepts involved (Gennaro, 2009, p. 196). For instance, a child may not be able to distinguish between a bush and a tree, but can still distinguish a tree from a flower and thus has some concept of it. From this perspective, it is clear that animals can have concepts without being able to apply them to other creatures, and also that an animal does not need to possess a sophisticated concept like ‘visual experience’ in order to distinguish seeing from hearing. Moreover, one can argue that it is easier to attribute mental states to oneself than to others (Gennaro, 2009, p. 197). Indeed, I know what I am feeling all the time, but I sometimes do not have a clue about what my girlfriend is feeling.

Regarding I-concepts in particular, a common argument is that experiments such as the mirror test show that many animals lack self-consciousness. However, as was discussed in section 1, passing/failing the mirror test need not be conclusive evidence regarding the possession of self-consciousness; and for many creatures there is episodic memory and self-monitoring evidence supporting the claim that they have a self-concept. Moreover, we should keep in mind that there are multiple interpretations of what it is to have a concept of self. Gennaro distinguishes “I qua this body,” “I qua experiencer of mental states,” “I qua enduring thinking thing” and “I qua thinker among other thinkers” (Gennaro, 2009, p. 189). He argues that most higher-order thoughts only require the first, simplest, I-concept, and that there is no good reason to suppose that it is a concept too sophisticated for many animals to grasp (Gennaro, 2009, p. 190).

Finally, one could also argue that an I-concept is not required at all. Like saying that ‘it is raining,’ we can think about experiences in the third person: “an animal can be aware that it hurts or thinks that p, where the ‘it’ here does not express a concept of a thing or a subject that is thought to possess pain or to think that p” (Lurz, 2018). So perhaps an animal cannot think “I see red” but can still think “it sees red,” or even “redness,” without being able to infer that the experience relates in a special way to the creature itself as an experiencing subject.

8What would it mean to experience something and not have any access to it? Who would be its subject?

9

First-order or higher-order?

We can summarize the discussion of Tye’s and Carruthers’ theories and how they relate to one another as follows. Whereas Tye seems to be on the right track in identifying conscious mental states as those having poised, abstract, non-conceptual, intentional content, he lacks a clear account of unconscious mental states which seem to have all these properties. Hence something extra (presumably some further cognitive requirement) must be added, but it is not immediately clear what this is. Some form of a higher-order thought theory may work, but as concerns the application to animals, much depends on how conceptually sophisticated one thinks a creature needs to be to be capable of having higher-order thoughts.

Another interesting observation is that what both Tye and Carruthers seem to be doing, is comparing the experience of animals with that of the distracted driver – while, as has already been noted, this comparison is at least questionable:

There isn’t any evidence that other animals are limited in the way we are when we are engaged in some automatic behaviors; we may be unable to remember some things we did when engaged in inattentional driving. If animal action were like inattentional driving, we should expect animals to lack some memory, but we don’t find this to be the case. (Andrews, 2014, p. 61)

Perhaps, then, what needs to be added to representationalist theories is an adequate account of attention and how it is involved in consciousness, rather than some higher-order thought or awareness requirement which renders animal consciousness unlikely. Next to this, there are many other drawbacks of representationalist theories in general. Let us now turn to these.

Against a representationalist take on animal consciousness

I take the foregoing discussion to have shown that within representationalism, a first-order account is the more plausible. I therefore take such a theory – Tye’s theory, to be specific – as exemplifying the representationalism that I reject, for three reasons. First, I will argue that representationalism cannot adequately account for the qualitative character of experience. Second, drawing on phenomenological arguments,10 I will argue that representationalism (wrongly) neglects the fact that experience is inherently meaningful. And finally, I will contend that Tye does not manage to adequately address issues about both animal experience and conceptual content at the same time. For more technical arguments against representationalism, such as objections to its property realism or ‘inverted spectrum’ thought experiments, the reader is referred to Lycan (2015).

10A word of caution is in place. I use phenomenological characterizations of perception to make a case against representationalism, but this does not mean that I reject naturalism. This thesis is not concerned with, and remains ‘neutral’ on, the ontological and metaphysical questions posed by the phenomenological tradition.


Figure 2.1: The Müller-Lyer illusion. The top line appears longer than the bottom line, whereas they are in fact of equal length.

The mystery of what it is like

The biggest contemporary objection to representationalism is that it gives no clue about the ‘what it is like’ of experience in at least two ways: it does not explain why experiences are like something, and it does not explain what certain experiences are like. This is also relevant in the context of animal consciousness. For what we would like to have is a theory that sheds some light on what it could possibly be like to be a certain (conscious) creature.

Representationalists themselves need not agree with this objection. Tye, for instance, does not distinguish between representing the sensory quality ‘red’ and what red is like. There are, however, also representationalist theories that do distinguish between sensory qualities and what experiences of them are like (Lycan, 2015). These theories view the process of introspecting what one’s experiences are like as a form of higher-order representation. What an experience is like, then, depends on how the experience is represented, which could again depend on one’s ‘phenomenal concepts’ (Lycan, 2015).

It is not hard to indicate the disadvantages of this approach. While it makes sense of the fact that there is a difference between intentional experience and attending to the subjective experience itself, it is not at all clear how this approach helps to understand the what and why of experience. Also, like Carruthers’ higher-order thought theory, the solution seems to beg the question and seems to lead to an infinite regress.

A phenomenological critique

According to Merleau-Ponty, the greatest drawback of Tye’s theory would undoubtedly be his flawed characterization of what it is to perceive. Recall that according to Tye, perception basically consists of three ‘stages’: first there are the raw sense data, then there are ‘basic perceptual experiences,’ or phenomenal consciousness, and then there are conceptual states, beliefs, judgements and awareness. The middle stage is, according to Tye, (possibly) non-conceptual and possibly ‘unaware’. This construal has multiple problems.

A first problem comes to light when we consider an optical illusion such as Müller-Lyer's (figure 2.1). The problem, for empiricist as well as representationalist theories, is to explain why we have an ambiguous perception of something that is in itself – that is, 'on the paper,' or 'in the world' – unambiguous. A common response, also given by Tye (1997, p. 294), is to say that we believe or judge, rather than experience, that the lines in Müller-Lyer's illusion are of different length. Merleau-Ponty argues that on this conception, perception "becomes an 'hypothesis' made by the mind in order to 'explain to itself its own impressions' " (Merleau-Ponty, 2012, p. 35). This is, however, far from what ordinary experience teaches us: judgement is the act of taking a position, whereas sense experience is "the giving of oneself over to the appearance


without seeking to possess it or to know its truth" (Merleau-Ponty, 2012, p. 36). Moreover, if the ambiguity results from judgement, then why do we still see the ambiguity despite the fact that we know that the lines are of equal length? And if ordinary perception is already made up of judgement, then how can we still judge that some experiences are unveridical?

The discussion of Müller-Lyer's illusion illustrates that it is hard to make Tye's distinction between basic perceptual experience and conceptual awareness intelligible. Another way in which this becomes manifest is when we consider Merleau-Ponty's observation that whatever we perceive is already invested with some sense (Merleau-Ponty, 2012, p. 5). We hear a melody not just as a collection of sounds of different pitch and duration, but as just that: a melody, which may be joyful or sad, logical or strange, beautiful or difficult to listen to – and all this depends on the person who listens. Consequently, what we experience is not just what is represented. It involves a relational component that depends on how the object of experience figures in our world. This is a fundamental problem for representationalism: it revolves around the objective contents of perception and neglects the signification that somehow transcends them. So far, this analysis of perception has been, in some sense, neutral. All it says is that in ordinary experience we do not – and sometimes cannot – disentangle the objective contents and their signification. But I want to go one step further: the analysis indicates that the primary function of consciousness is not to provide access to these objective contents (i.e. having poised representations) but to provide this signification; to situate the agent in a meaningful world.

In this primary signification, the role of the body is crucial. What external objects signify depends on what possible actions they afford, the possibilities of the agent depend on his ability to move around, and what can be perceived depends on a creature's sense organs. The role of the body is, however, only one aspect. Full-blown experience is shaped by all aspects of how the self relates to its world. From beliefs to sexual drive, from past experience to imagination, and from hopes to fears: all these things together constitute the intentional arc.

The way in which this criticism is phrased already gives away how it relates to the study of animal minds. If one adopts a representational theory of mind, one starts to look for indications of animal consciousness such as the creature's ability to represent its surroundings and the extent to which it can utilize its 'basic perceptions' in cognitive processes related to belief formation, knowledge and thus thought. But to adopt this approach is to look both in the wrong place and in the wrong way. In the wrong place, because consciousness consists not in what objective external features the animal can represent, nor in the extent to which the animal can reflect on its experience, but in the extent to which the creature inhabits or 'creates' a meaningful world. In the wrong way, because research is often modelled on human behaviour and human capacities, instead of being geared to the animal's own world.

In other words, if we wonder whether some animal can consciously see colours, we must not ask whether this animal has the appropriate colour concepts to make its beliefs poised – we must simply say: of course many animals can see colours, for these colours have practical significance for them every day.11 A bird has to distinguish the blue sea from the green land (for navigational purposes). It has to distinguish between a ripe red fruit and a green one that isn't. Can the bird also think: the sea is blue? It doesn't matter, for consciousness primarily discloses a world of significance, and not a world of objects and properties.

11 One might ask to what extent this practical significance is similar to the manner in which things can be meaningful to humans – or object that this 'meaning' is just a metaphor for the merely descriptive statement that the environment of an animal contains salient features which are relevant to its behaviour. But, although it is very plausible that the human world is in some sense invested with more sophisticated meanings, I hold that this objection is mistaken; for how can we understand these 'sophisticated' meanings if not in terms of more primitive ways in which things can acquire significance? This will be discussed at greater length in the last chapter.

One may object that this criticism is but a rephrasing of the same conditions on conscious experience as Tye gave, for it seems that an experience can be significant for a creature if and only if its contents are poised. In the example of the bird, this would mean that the bird has an implicit belief that green fruits cannot yet be eaten, and that the visual experience of such a green fruit would be poised to inform the (practical) belief that it must keep on looking for something different to eat. It appears that such an interpretation would indeed amount to the same. And we have indeed seen that Tye endorses an undemanding interpretation, as he thinks of (perceptual) concepts as 'stored memory representations that enable a range of discriminations' and of beliefs as states that conceptualize sensation and can interact rationally.

But such a 'friendly' interpretation runs into other problems. We can easily conceive of a robot that has stored memory representations enabling a range of discriminations which allow it to make 'rational' choices. But we would not say that such a robot has conscious experience. We would be much more tempted to say so if the robot were the kind of being to which it mattered, in one way or another, what it perceives. Therefore, we must not think of the criticism in this section as a mere reformulation of representational theories of perception, for what is principally wrong with representationalism is that it does not situate perception within the broader context of how a creature inhabits its environment.

Concepts

As was explained in section 2, Tye insists that experiential states can be non-conceptual, that is, that the subjects of basic perceptual experiences "need not have concepts that match what [these experiences] represent" (Tye, 1997, p. 295). Whether this is a weakness or a virtue of his theory depends on whether one believes that all experience needs to be conceptual.

A well-known proponent of conceptualism is John McDowell. He argued that "experiences must be able to form reason-giving relations with thoughts through their contents," and that in order to do this they "must be conceptually structured" (Mineki, 2009, p. 90). It is unclear whether Tye's position would fit in this framework. On the one hand, he contends that perceptual states need to be poised, which implies that they are reason-giving. On the other hand, he says that experience may be nonconceptual, and that "sensory experience is the basis for many beliefs or judgements, but [is] far, far richer" (Tye, 1997, p. 296).

What seems to be an internal contradiction in Tye's view might be traced back to Tye having a different interpretation of 'concept' than McDowell has. Tye, for example, contends that one's experience of an inkblot will be nonconceptual (Tye, 1997, p. 296), since we do not have a concept for every particular shape. McDowell, however, would argue that one does conceptualize these inkblots to the extent that one can visually distinguish their shapes. So whereas Tye's idea of being conceptual is that one has, for instance, a word for it,12 McDowell's idea of being conceptual is the ability to form reason-giving relations. Since Tye holds that contents must be poised, it appears that his theory is conceptual on McDowell's construal.

12 Of course, it could be the case that one has the concepts of, for instance, 'circle' and 'square' but does not
