
Predictive Processing and Classical Computation Theory of the Mind

Jakub Vít, University of Amsterdam, 2017

Abstract

Alan Turing used the term ‘computer’ for a human operator who combined calculations carried out by calculating machines capable only of finite state operations. With the development of universal programmable machines capable of seemingly any kind of operation, the term expanded beyond human operators. These machines were believed to copy human thought: computers could deliver results in the areas of thought that follow the rules of logic, and programming languages were designed to solve different problems on the new universally programmable machines. The mind was thought to be simply a program running on the brain's wetware. Despite the stunning advances in the field of computation, the complexity of any brain structure still exceeds the complexity of any computational device developed by humans.

There seem to be processes running in the mind that cannot be so readily mapped. Following the optimistic views of the 1950s and 1960s, questions and issues were raised in cross-disciplinary research spanning neuroscience, cognitive science and artificial intelligence. Connectionism seemed to be a more accurate model of the brain's hardware than the classical memory-and-processor computational view. Recently, the Predictive Processing theory has built on the connectionist model of neural networks: it uses the computational idea of neural networks that actively try to predict any incoming sensory data. Predictive Processing can be used to explain low-level cognitive processes as computational, and it shows how we learn about the world and exploit it in a way that seems intuitively natural.

In this thesis, I discuss one specific approach to the computational mind: predictive processing. In particular, I ask two questions. First, how does the predictive processing theory fit with higher human cognitive processes? I argue that it is well capable of explaining the sources of our beliefs about the world, but it does not help to explain the fact that we question those beliefs and even think about them in the most intuitive way: in language, in dialogue with ourselves. Second, can Predictive Processing coexist with other computational views of the human mind that are better at explaining the language of thought? The thesis explores the possibility of a hierarchical organization of different computational views and argues for a structure involving both the classical model and Predictive Processing.


Contents

Abstract
1. The Computational Mind, beginning of the idea
1.1. Turing machines and the Church-Turing Thesis
1.2. Von Neumann’s computer architecture and the metaphor of computational mind
1.3. Functionalism, Jerry Fodor and The Language of Thought hypothesis
1.4. Early AI issues: inability to crack the problems of being in the world
1.5. The Frame problem
1.6. Neuroscience, brain computational modeling, connectionism
2. Defining computation
2.1. Von Neumann: Simple mapping state
2.2. Chalmers: Causal and counterfactual
2.3. Fodor: Semantic definition
2.4. Fodor: Syntactical definition
2.5. Piccinini: Mechanistic computation definition
2.6. Mind to brain and extension of computational terms to this problem
3. Embeddedness and Embodiment of the mind: the predictive processing approach
3.1. Neuroscience, neural networks and new phenomenological perspectives
3.2. Thrown machines: the Heideggerian view of embeddedness of mind and skillful acquisition
3.3. Embedded robotics and views of SPAC
3.4. Predictive processing
3.5. How do Predictive Processing and Embedded Robotics work together?
4. The strength of the Predictive Processing hypothesis
4.1. Explanation of agents taking actions in the world
4.2. Sensory input processing by neural networks as a solution to the Frame Problem
4.3. Reasoning as an embodied process
5. What do we miss in the picture of embedded mind?
5.1. Learning process again
5.2. From analog to discrete states
5.4. Self-reflection, consciousness, inner dialogue
6. Are there any concurrent computational theories that could explain the missing parts?
6.1. Language of Thought, again
6.2. Folk psychology concept and behavioral explanation advantages of LoT
6.3. Mechanistic account versus hierarchical
6.4. Dynamical system theories as an alternative to computation
7. Does connectionism allow for computation?
7.1. Data processing by neural networks
7.2. Is there a place where we could join Predictive Processing and Classical Computational Theory?
7.3. Self and computation
Bibliography

1. The Computational Mind, beginning of the idea

This chapter presents a selective overview of the history of studies of the computational mind (and the philosophy thereof). My interpretation follows specific critical points to which I will react in later chapters. It is not my goal to reference the history of computation thoroughly, nor to follow closely the development of these relatively new scientific fields.

I intend to introduce the origins of computer technology, which are closely linked to reductionist theories about the mind, the brain and their relation. Section 1.1. introduces computer science in its infant theoretical stage. Section 1.2. introduces how the father of modern digital computers related his ideas on computers to human intelligence. Section 1.3. discusses more complex views of the human mind in relation to the metaphor of the computational mind. Section 1.4. shows problems that early artificial intelligence researchers ran into, suggesting that the computational mind might be too simple a view of human intellectual capacities. Section 1.5. presents the Frame problem that grew out of those difficulties. Section 1.6. points to new stimuli in theoretical thought about the computational mind, coming from the neurosciences and cognitive sciences.


1.1. Turing machines and the Church-Turing Thesis

Turing Machines are theoretical machines invented to settle the computability of mathematical and, subsequently, other symbolic operations. Following the work of Alan Turing and Alonzo Church in the 1930s, any task that can be processed by a program defined for a Turing Machine is a computable task.

The theoretical machine itself consists of a tape (a paper tape, magnetic tape, any kind of tape) divided into sections, each of which can hold a symbol (token). The concept of a token is important further in the thesis: sometimes I will discuss symbol operations, where the output can be linked to symbolic operations or syntactic processing; but in the sense of a TM, a symbol may be just a placeholder without any symbolic reference to the machine's output, in which case I will use 'token'. The tape can in theory have infinite length. It moves in either direction, cell by cell, past a head that performs operations on the current tape section (state). The actions a Turing Machine can perform are reading, moving to a different section of the tape, and writing on the tape. The last part of the machine is a set of instructions telling the head which action to undertake. This set is finite, but can be looped; instructions are organized into a linear sequence performed in succession (Turing 1936, pp. 233–248).

The idea behind this theoretical machine is that any mathematical or other symbolic operation that can be written as a set of instructions for the reading head is a computable task. This means that if we can find a way to get from a state A to a state B via some causal chain of simple actions (read, move, write), then we can think of this operation as computational. This causal chain is a description of what a program is.

The Turing Machine, though, is only a finite state automaton unless we include the possibility of changing the head's instructions. Without it, the machine is primed to do only one operation and then stop. This operation could also be a loop, which means it could run infinitely long, but it would not be able to change its type of output.
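The read-move-write cycle described above can be sketched in a few lines of code. The following simulator is only an illustrative sketch, not anything from Turing's paper: the state names and the binary-increment transition table are invented for this example.

```python
# A minimal Turing machine: a tape of tokens, a head that reads, writes and
# moves one cell at a time, and a finite transition table mapping
# (state, symbol) -> (write, move, next_state).

def run_tm(tape, transitions, state="right", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example program: binary increment. Walk right to the end of the number,
# then carry 1s leftward until the carry is absorbed.
INC = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "R", "halt"),    # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),    # the number was all 1s
}

print(run_tm("1011", INC))  # 1100, i.e. 11 + 1 = 12
```

Note that the machine is exactly the finite automaton the text describes: it runs this one program and halts; to make it do anything else, the transition table itself must be replaced.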

A Universal Turing Machine (further UTM) is the idea behind modern computer languages as well as behind the Classical Computational Theory of Mind. A UTM is a theoretical device that uses the output of any number of Turing machines on its tape. It performs computation one level higher than the classical Turing machine: it operates on the resulting states of many TMs, and thus combines small parts of a computational task into a higher level of computational processing. This, in fact, is a model for the first compiler languages, which would translate specific symbols of a higher-level language into machine instructions, enabling human operators to understand more easily what a computer is doing and how to put together instructions for the processing unit.

The Turing machine aims to represent the fundamental concept of computation. There are other ways to represent computational processes using mathematical expressions, such as the lambda calculus or logic, but the TM was the first theoretical view and is the one used most commonly in the philosophical literature. A TM operates on the simplest, most basic chunks of a process: arithmetical operations like addition, subtraction and multiplication, or syntactical ones like the replacement of one symbol by another. This is what happens with the 0s and 1s in a modern digital computer, in what we call machine code. Modern computers are equipped with assemblers and compilers: translating programs that turn the higher-level languages typically used for input by humans into this basic machine code.

The Church-Turing Thesis (CTT) states that anything effectively computable can be computed by a Turing Machine, and hence by a UTM. The thesis was coined by unifying Turing Machines and Church's lambda calculus into a comprehensive view of computability, i.e. of what can be computed by a Universal Turing Machine: the outputs of the individual machines can serve as input for the universal machine. Thus any operation that satisfies the CTT is a computational operation. This has commonly been mistaken for the claim that any process that can be written down as a set of instructions is a computational process.

Turing himself opened the debate on the prospects of creating intelligent machine designs in several of his papers, mainly Turing (1950), where he proposes his famous view on intelligence itself: what we perceive as intelligent behavior can be deemed intelligent only through the one method available to us, comparison with our own intelligence. That is where Turing develops the setting of his famous guessing game, used as a benchmark to this day and known as the Turing Test.

1.2. Von Neumann’s computer architecture and the metaphor of computational mind

Von Neumann’s architecture brought to life the concept of a truly universal computational device: programs could be written onto it and also erased or modified, giving the computer a great variety of uses. The architecture consists of three main parts: memory, a system bus (the device responsible for routing instructions and data between computer parts) and a central processing unit (CPU). This is the architecture of universally programmable machines, in which memory stores program instructions as well as their input data and, possibly, output results.
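The interplay of the three parts can be sketched as a toy stored-program machine. This is only an illustrative sketch: the three-instruction set and the memory layout are invented here and are far simpler than any real CPU, but the key Von Neumann feature is visible, namely that instructions and data sit in one and the same memory.

```python
# A toy stored-program machine in the Von Neumann style: one memory array
# holds both instructions and data; the CPU loop fetches, decodes and
# executes instructions one at a time.

def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc]             # fetch (over the "system bus")
        pc += 1
        if op == "LOAD":                 # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program occupies cells 0-3; data lives in cells 4-6 of the same memory,
# so the program could in principle be rewritten like any other data.
memory = [
    ("LOAD", 4),    # acc <- mem[4]
    ("ADD", 5),     # acc <- acc + mem[5]
    ("STORE", 6),   # mem[6] <- acc
    ("HALT", None),
    2, 3, 0,        # data: two operands and a result cell
]
print(run(memory)[6])  # 5
```

Because the program is itself data in memory, reprogramming the machine means overwriting a few cells rather than rewiring hardware, which is exactly the flexibility the text attributes to this architecture.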


Von Neumann formulated a hypothesis based on a metaphor that is discussed to this day: the idea of mind as a program (Gigerenzer and Goldstein, p. 134). The human brain is viewed as a computational device in itself. Turing Machines have shown that the range of tasks that can be translated into an algorithm and carried out by a mechanical device is astoundingly wide. Requirements on the physical implementation are not part of the TM definition: as long as the machine or object performs computational tasks (produces expected output based on measured input; the mapping issues are discussed in more detail in chapter 2), it can be said to be a computer. This idea sparked enormous research in a whole new field, the cognitive sciences, which cover anything from AI and philosophy to neuroscience and psychology.

The idea of the brain as a computer was encouraged by several technological breakthroughs in computer development. One of the main ones was the invention of the rewritable magnetic memory disc, continuously developed from the 1950s to the present day, with IBM leading its development through the 1950s, 1960s and 1970s. With magnetic disc storage, only a basic set of instructions had to be hard-wired into the circuitry of the processing unit's chip. Lengthy tapes or cards with instructions and input data did not allow storing very complex procedures; now all the instructions could be saved onto the memory disc together with the data the operations were performed on. No computer had to be created with specific designated tasks hard-wired into it. Computers could now be programmed and reprogrammed, and used for many different applications across fields, the only limit being the speed at which a machine performs basic operations, or as we now call it, its processing power.

1.3. Functionalism, Jerry Fodor and The Language of Thought hypothesis

A further expansion of Von Neumann's belief in the power of the universally programmable computer is presented in functionalist attempts to capture computable functions in the natural world. Jerry Fodor is the main proponent of functionalism in the computational debate, his main works spanning the 1970s to the 2000s. Functionalism has applications within philosophy beyond the debate on the computational mind, such as the example below from Ruth Millikan, whose focus is mainly on functional explanations of biological phenomena.

In the functionalist view, the materialist reduction of mind to matter is based on the purpose that specific parts of an organism serve: it describes their functions as teleological causes of their existence. For example, the reason we have hands is that we use them to grab, lift, gesture, etc. (Millikan 1989).

Representationalism approaches the mind with the same idea of breaking it up into different parts: representations, which are operated on and processed by the brain. Representations can range from symbols completely unrelated to the external world up to spatial and temporal models. Representations are tokens of external objects, and captured as symbols they can be fitted onto the imaginary tape of the Turing Machine, which allows computational operations to be performed on them.

Fodor postulated the idea of a mind as a computational device that performs syntactical computations in its own specific code, which he calls the Language of Thought. The Language of Thought (LoT) hypothesis is the most prominent of the functionalist-based theories. It represents the most refined description of the classical Computational Theory of Mind and has spurred large debates over the nature of the human mind. The LoT is a specific programming-type language for a collection of Turing-machine-like brain processors that each run a different version of it. It does not refer in any way to human social, day-to-day language: it is a private human/machine code with which the brain manipulates concepts coming from different perceptual sources, an underlying code that possibly enables other code to be run on top of it: actual human thoughts (Fodor 1980; Fodor 2010).

According to the LoT view, at the lower level the whole brain is split into modules, each capable of a different type of computational task, i.e. each representing a separate Turing Machine. One part computes visual input, another computes auditory input, another takes care of movement, etc. This seems nicely aligned with the visible architecture of the brain, where for example the optical nerves enter a specific and visibly quite distinct area, the visual cortex. As we now know, the visual cortex is not the only part of the brain that processes visual data, and the same applies to other parts of the brain previously thought to be specialized in one and only one kind of processing: there are observations of patients with damage to specific parts of the brain where other parts took up their role (Clark 2000, Chapter 5). The modular view also closely copies the functional structure of the Von Neumann architecture: each module has a specific function, with one central module that processes input from all the others. That is allegedly where our thinking happens, as a syntactic processing operation of the main central mind module, which makes decisions based on the outputs of the specialized modules (sensory, motoric, etc.).
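The modular structure just described can be caricatured in a few lines of code. This is only an illustrative sketch of the architecture, not Fodor's own formalism: the modules, their token outputs and the central decision rule are all invented here.

```python
# Sketch of the modular LoT-style picture: each module is a separate
# processor for one kind of input, emitting a token; a central module
# makes a syntactic decision over the collected tokens.

def visual_module(scene):
    # Processes only visual input, outputs a token.
    return "obstacle" if "wall" in scene else "clear"

def auditory_module(sound):
    # Processes only auditory input, outputs a token.
    return "alarm" if sound == "siren" else "quiet"

def central_module(tokens):
    # Decides purely over the tokens' shapes, not over raw sensory data.
    if "obstacle" in tokens or "alarm" in tokens:
        return "stop"
    return "proceed"

tokens = [visual_module(["floor", "wall"]), auditory_module("birds")]
print(central_module(tokens))  # stop
```

Note how the central module never touches the raw scene or sound: it manipulates only the tokens the specialized modules hand it, which is the structural point of the modular view.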

1.4. Early AI issues: inability to crack the problems of being in the world

If conscious agents just run minds as programs on their wetware, as Von Neumann and other computer and cognitive scientists assumed, then it should be possible to replicate them by applying programs that mimic their behavior to robotic agents. It should follow that writing an algorithm which gathers all incoming sensory data, processes it and determines the best behavior is relatively simple. But as the early AI scientists found out, interpreting the real world without any restrictive framework on the incoming sensory data poses a significant problem: the amount of data, and its subsequent processing, becomes ever more demanding in terms of processing time.

In the tested scenarios, the amount of incoming sensory data ultimately exceeds the capacity of the machine to provide any output, as the machine just keeps computing the continuous flow of inputs. Modern supercomputers, for example, are most commonly used for extremely difficult computational tasks, such as modelling thermodynamic phenomena in weather forecasting, or other nonlinear systems like stock markets, that require computation over large amounts of input data. But even on the most powerful machinery, these models must be significantly downscaled by limiting the amount of computable input beforehand, so that the input data does not choke the processing power of the supercomputer. Complex weather prediction models computed by modern supercomputers are always built on a pre-selected input data set; otherwise there are too many variables to compute. It is impossible to open linear processing units (even though supercomputers have several to many CPUs) to a wide spectrum of problems and data, as CPU chips are not built to process in the massively parallel way the human brain does.

When mimicking conscious agents using computers, there are more issues than just the amount of input data. How can we properly address the ability of biological organisms to learn and adapt to their environments? How does classical computation formally handle the acquisition of new knowledge and skill (Clark 2000, pp. 84–87)? Learning processes and algorithms on classical computational devices are usually implemented by storing a set of data which is then compared to other data, either acquired previously or obtained by computing some desired model situation. This resembles the naïve idea we have of learning as memorizing specific material for a class or a speech, or any other memory-oriented activity. The stored set then determines the approach the machine should apply when it encounters a similar input/output problem. All the instructions on how to resolve situations with an expected input-output relation are preprogrammed. Early AIs were simply dull machines, completely dependent on the code imagined by their creators. Modifications to the learning algorithms were not possible, as those machines could not reprogram themselves according to what a situation required; their bag of tricks was limited in comparison to the learning capacities of living organisms.

What I have described as the problem of flooding the machine with sensory inputs to the point where it is overwhelmed is part of a story that became known as the relevance problem. It ultimately stopped the development of autonomous decision-making robots for decades. I discuss the relevance problem further in the next section.


1.5. The Frame problem

Maze experiments in AI research during the 1970s showed that creating a set of instructions for a robot that would qualify as independent is a very difficult task. Maze experiments were attempts by computer scientists to create robots capable of autonomously navigating an environment of any complexity, within the limits of their physical abilities. Modern digital computers are capable of a great variety of computational tasks: graphical, textual, mathematical, auditory, and so on. But can we combine these methods into one concept that can explore the surrounding world in the same manner as we do? This question motivated such attempts.

Even though each of these tasks is a better or worse copy of some human capacity, there may be a type of task that can hardly be replicated in machines of this kind: specifically, tasks requiring immediate decision-making, efficient sensory data processing, and choosing the most frugal course of action to achieve some goal. This was long believed to be merely a matter of good-quality programming, setting all the expectation on the programmer to anticipate as many of the situations as his or her creation is going to encounter. But we might never achieve completeness in pre-cognizing such tasks in an environment as complex as the physical world; it is virtually impossible for us to anticipate every kind of situation the machine is going to encounter.

Decision making based on sensory input seems relatively simple for a human: we simply prioritize the task given the situation, by being in it (see Dreyfus 1990, p. 92), and then define our course of action, whereas machines tend to fail at such tasks. But how do we write down a set of universally applicable decisions for a machine to take in any given situation?

The issue comes from combining an architecture that supports linear processing of tasks within a given subset of perceived data with very large datasets. Robots were getting stuck on computing tasks unrelated to their primary goals; as new sensory input appeared, the machines had no sense of how to link the situation with the proper decision to achieve their goals (Dreyfus 1990, pp. 118–120). How do we define a program that can abstract from any given situation and skillfully pick out the portion of it that is relevant to the system's goal? Human beings are often not consciously aware of such priorities themselves, as these are automatically chosen for them by subconscious processes. The Frame problem in fact became one of the big issues in the computational debate.

Writing down a set of instructions for a robot to achieve some specific goal is a tricky task. Such a goal might be, for example, to reach a point F starting from a point S. For a human being this seems a simple goal, expressed by what we call the will or the aim of the mind to do so.


In a computer program we must incorporate all the physical aspects of the task, such as the speed the machine can reach, choosing the best route that does not outstretch its physical movement abilities, or anticipating possible obstacles on the route. Another problem might be the energy the machine can afford to spend on the goal. What happens within the human mind once we adopt a decision to achieve such a goal? Why are we not overwhelmed by the fact that we need to activate each group of muscles while simultaneously talking to our friends as we walk?

We probably all think about the tasks that follow from our decision and compile some kind of program (get from A to B) as well, so some of these operations are present in conscious thought; an example is noticing an obstacle in our way and taking a different course to avoid it. But the machine's problem is that it must run actual calculations of the precise firing of its engines in time to achieve a specific kind of movement while making other kinds of decisions. To sum up: we need the machine to behave as human agents do, in the sense that it can navigate areas that have not been mapped or preprogrammed into its route plan. The machine needs to be able to choose the most efficient action, or at least one sufficient to achieve its goal, while mechanisms allow the main processing unit to take important momentary decisions.

The Central Processing Unit in this scenario must not become overloaded by calculating each and every subroutine required to generate motoric action. It must be capable of excluding actions that are not necessary for achieving the movement from A to B; it should not ponder the color and shape differences of floors and ceilings. So for a machine to live in the world, it has to be able to organize incoming sensory data using some hierarchical system, one that allows prioritization among incoming sensory data. This is because any given machine, or even a human being, has only a limited capacity of computational time if it is to be efficient from a human perspective. If the machine or subject has clogged its processing time with mundane and unimportant tasks, the goal of achieving human-like efficiency has not been reached.
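The hand-coded approach criticized in this section can be made concrete with a toy sketch: the programmer decides in advance which input categories are relevant to a goal, and everything else is discarded before it reaches the planner. The goal, the categories and the readings below are all invented for illustration; the point is precisely that the "frame" of relevance is fixed by the programmer, not discovered by the machine.

```python
# A preprogrammed relevance frame: for each goal, the programmer lists in
# advance which sensory categories matter. Anything outside the frame is
# dropped so the planner never wastes processing time on it.

RELEVANT_TO = {
    "navigate": {"obstacle", "distance", "battery"},
}

def filter_inputs(goal, sensor_readings):
    frame = RELEVANT_TO[goal]
    return {k: v for k, v in sensor_readings.items() if k in frame}

readings = {
    "obstacle": "chair ahead",
    "distance": 4.2,
    "floor_color": "grey",       # mundane detail the CPU should not ponder
    "ceiling_shape": "flat",
    "battery": 0.71,
}
print(filter_inputs("navigate", readings))
```

The sketch also shows why the approach fails in open environments: the machine can only ever be as discriminating as the table `RELEVANT_TO`, and no programmer can anticipate a frame for every situation the world will present.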

1.6. Neuroscience, brain computational modeling, connectionism

Extensive research in neuroscience led to a better picture of what the brain's inner architecture might look like. Computational theories in philosophy, artificial intelligence research and neuroscience started to reflect these findings from approximately the 1980s, and developed a whole branch of theoretical approaches that can be labeled "connectionism". The original idea comes from observation of the brain: neural cells are not directly adjacent to each other, but connected via bodily extensions called axons. Axons create a massively complex web of connections conducting weak electrical currents, which trigger the production of so-called neurotransmitters at the axon's ending, the synapse. Neurotransmitters are chemicals produced at a synapse connected to another neuron cell; they enter that neuron's cell body to transfer information to it. Information within the brain network thus travels by a two-stage (electrical and chemical) process that is relatively slow in comparison to modern digital computers, which are defined by the enormous speed at which electrical signals travel within their circuitry (see e.g. Penrose and Gardner 2002).

Connectionism uncovers a puzzle: a relatively slow neuronal transfer process leads to results that in many cases outperform the most advanced computers, in which processing happens at incredible speeds. Modern digital computers are in theory limited only by the speed of electrons inside the metal of their circuitry, considerably faster than anything the human brain is capable of. Yet modern digital computers are not as efficient as human minds at computing certain large datasets, such as sensory data. The human brain resolves many tasks at once, such as firing the proper activation signals to muscles and organs to keep the whole body upright, and along with that it carries out all the tasks connected with the traditional view of the intellect.

The key to the question of how such slow processing in the brain can be so efficient lies in massively parallel processing through a very large number of individual atomic processing units. In the Von Neumann architecture we have one processing unit, memory, a system bus, and input and output methods. The processing unit is a chip that is very fast but allows only a limited number of parallel operations; it relies mainly on its speed for tasks that are computationally complex yet do not require large input data sets. Human operators can shape the input data sets for a computer and prepare packages relevant to the required task. Within the connectionist model, by contrast, we have any number of processing units, with n times that number of connections, which are responsible for both input and output as well as for holding memory. We do not need a specifically designed routing architecture (in computer terminology, a bus), as the order of operations is given by the physical organization of the network structure. The computational structure of the brain will be discussed further in chapter 3.
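The contrast can be sketched with a minimal connectionist layer. Each unit performs only a slow, simple step (a weighted sum and a threshold), but conceptually all units work on the whole input at once, and the "memory" is nothing over and above the connection weights. The weights and inputs below are arbitrary illustrative values.

```python
# A minimal connectionist layer: many simple units, no central processor,
# no bus; what the layer "knows" is stored entirely in its weights.

def unit(weights, inputs, threshold=1.0):
    # One slow, simple processing unit: weighted sum, then threshold.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def layer(weight_rows, inputs):
    # Conceptually these units all fire in parallel; Python merely loops.
    return [unit(row, inputs) for row in weight_rows]

weights = [
    [0.9, 0.2, 0.0],
    [0.1, 0.8, 0.8],
    [0.5, 0.5, 0.5],
]
print(layer(weights, [1, 1, 0]))  # [1, 0, 1]
```

Nothing routes instructions here: the "order of operations" is fixed by which units are wired to which inputs, which is the structural point the paragraph makes against the bus-and-CPU picture.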

2. Defining computation

In the previous short historical excursion I tried to show that accounts of computation have changed along with developments in several scientific fields. Coexisting from the beginning with the debate on the computational view of the mind, there has been a debate on what computation itself actually is. How can a computational physical system be described in a way that is neither too inclusive nor too exclusive with regard to other physical objects? The main issue in this debate concerns the inclusivity of most definitions of computation. These definitions serve as reference points further in the thesis: they back up some of the concepts behind the hypotheses of the computational mind I introduce in later chapters. I have ordered them by the relative complexity of their use of specific knowledge from computer science or other fields. I start with the Simple Mapping definition, which developed into the more complex Semantic and Syntactic definitions. The Syntactic and Semantic definitions were mainly used by what is now called the Classical Computational Theory of Mind, to which I will refer as CCTM. Then I move to newer attempts to define computational objects via causal mappings or mechanistic accounts. Regarding the newer definitions, in chapters 5-7 I will weigh the Mechanistic Account against the Causal View of computation, favouring the latter. I will employ some of these definitions in the overall picture of a computational mind that I present towards the final chapter.

2.1. Von Neumann: Simple mapping state

A computational operation occurs when a physical system (S) can be described by a computational description C; any system for which this holds is computational. What is a computational description, though? In Von Neumann's original view it was a tabular instruction set, a tape input, a tape output and a writing head. This was taken directly from the concept of Turing Machines: the technical description of a computational system had to be a version of a Turing machine.

For contemporary electronic digital computers, Von Neumann had to adjust the description of actual computational implementations in machinery relative to the older Turing Machine view. Turing Machines are purely theoretical and do not reflect any actual physical machinery: a digital electronic computer does not (any more, mostly) operate with a tape or store its instructions in tabular form as suggested by Turing (1936, pp. 233–240). The revised computational description has a form in which the physical system passes through states s1, s2, … sn, which are reflected within C.

C(S): s1 → s2

That is, the computational description C has to reflect the physical state change from s1 to s2. Reflecting such a change is the simple mapping: each physical state maps onto a state of the computational description.
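As a concrete illustration, consider a minimal sketch of the simple mapping idea. The states, names and traces below are hypothetical, chosen only to make the mapping visible:

```python
# A toy illustration of the simple mapping account: a physical system S is
# "computational" under a description C if C maps each physical state to a
# computational state such that physical transitions mirror the transitions
# the computation requires. All names and states here are hypothetical.

physical_trace = ["hot", "warm", "cool"]            # observed physical states of S
C = {"hot": "s1", "warm": "s2", "cool": "s3"}       # candidate mapping to c-states
computational_steps = [("s1", "s2"), ("s2", "s3")]  # transitions the computation requires

def mirrors(trace, mapping, steps):
    """True if every physical transition maps onto the required c-state transition."""
    observed = [(mapping[a], mapping[b]) for a, b in zip(trace, trace[1:])]
    return observed == steps

print(mirrors(physical_trace, C, computational_steps))  # True: this mapping works
```

The leniency worry discussed above is visible even in this toy: for almost any observed trace one can invent some mapping that makes `mirrors` succeed.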

There are many counterarguments pointing to the fact that this definition is too lenient with the descriptions of computational states. For any physical system, for example, a description is available that cannot be mapped onto discrete states: differential equations describe a physical system as a continuous function, and such a description cannot be mapped onto computational states in the simple state-to-c-state manner. A computational description in the sense of a Turing Machine requires the possibility of discrete state mappings. It is also possible to describe a physical system in many states that cannot be mapped computationally, such as its physical attributes and their subsequent changes (heating, cooling, etc.), even when using discrete states. The definition thus opens the door to problematic concepts such as pancomputationalism, the claim that every physical system is also a computational system, which devalues the simple state mapping definition.

2.2. Chalmers: Causal and counterfactual

In response to the pancomputationalism objection, there were further attempts to define computation in a way that would eliminate it. In one example fitting a group that can be described as causal definitions, Chalmers describes computational systems as those whose "causal structure mirrors the formal structure of a computation" (Chalmers n.d., pp. 3–5).

At this point we are going to either expand or narrow the theme set by the simple mapping definition. Causal definitions add more refined criteria to the simple state definition with the intention of limiting its scope to only a relevant subset of all physical objects. The formal computational structure requirement sets boundaries on which kind of description can be used as a computational one: when using an arbitrary description for S, it is not clear whether that description can be deemed computational, even if it correctly maps onto each state of S.

Chalmers specifies two kinds of physical systems that fit causal mapping conditions:

First, and relatively less complicated, is the Finite State Automaton (FSA). In such an automaton every physical state of S has a correlative input state I and output state O. The finite state automaton is capable only of simple operations where input directly ‘causes’ output.

Second, the more complex Combinatorial State Automaton (CSA) maps I to O through a chain of inner states (s1, s2, …, sn). A CSA does not allow output at a random point of the chain; it requires output only at the point that is formally described as the result of the computational operation C. The definition can be extended to a variety of physical objects without the worry that physical descriptions will obstruct the computational descriptions, as the computational descriptions can be only a subset of a larger group of physical states. All that is required is that sufficient mappings of the states be observed for S in correlation with I and O.
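The contrast can be sketched in code. In the hedged illustration below (the state labels and transition tables are invented), an FSA maps input states directly to output states, while a CSA threads the input through a chain of inner states and yields output only at the formally designated end of the chain:

```python
# Hypothetical FSA: each input state I maps directly to an output state O.
def fsa(i):
    table = {"I1": "O1", "I2": "O2"}
    return table[i]

# Hypothetical CSA: the input fixes the first inner state; the computation
# then walks a chain of inner states s1..sn and emits output only at its end.
def csa(i):
    state = {"I1": "s1", "I2": "s2"}[i]
    chain = {"s1": "s3", "s2": "s3", "s3": "s4"}   # inner state transitions
    while state in chain:                          # no output mid-chain
        state = chain[state]
    return {"s4": "O1"}[state]                     # output at the formal end point

print(fsa("I2"), csa("I2"))  # O2 O1
```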

Chalmers’s definition is causal because it emphasizes the correlation between physical state and computational state in causal progression. It is also counterfactual, as it places no condition on the nature of the computation related to the physical state. A given physical system can compute any given computation if and only if its causal states can be traced and linked to the causal structure of specific computational states. So the system can perform any kind of computation, provided there is a causal mapping. These computations can be of any nature; they do not have to obey any rules besides their own causal structure and the causal structure of the physical states of the computer.

2.3. Fodor: Semantic definition

“There is no computation without representation” (Fodor 1981). Fodor poses a functional view of the mind as a solution to the problem arising from the contradiction between behaviorist and dualist theories of the mind. Its basic claim is that the mind itself is functional: the mind reflects both physicalist causality and causation within the mind. A semantic computational system of the mind can be described not only from the perspective of either input or output alone, as in psychological behaviorism, but from both perspectives and their interaction. Thus input S1 affects output S2, but it is also possible to define S1 or S2 functionally and separately. As I mentioned in section 1.3., this definition was developed alongside the hypothesis of the Language of Thought (LoT).

In the semantic account, the operations done by the mind are operations with semantic content. This means the processing of representations of the external world rather than just pure symbolic manipulation. The tokens with which the mind operates carry meanings as representations of some aspects of the external world. These representations are then assembled in an associative way, similar to how Hume pictured the mind creating ideas, as Fodor states. He adds, though, that this is just one part of the picture. This kind of semantic manipulation happens only at the higher level of cognition, where we speak of the mind as self, thought and reflection. This process is fueled, as Fodor believes, by syntactic operations, which would be the basic mechanistic operations of the underlying machinery, the brain.

The Semantic definition proposes that semantic computing is in fact a Turing Machine operating on content that has semantic values. The contents of such manipulation are thus governed by instructions that follow rules coming from the way meaning is interpreted and associated between terms or expressions.

2.4. Fodor: Syntactical definition

The Syntactical definition follows conceptually from the semantic account. It is more restrictive on the computational state mappings than the semantic definition. The syntactical view is based on specific symbol processing where some of the symbols can be tokens for representations, but not necessarily. The constraint on syntactical computational processes is that the processed symbols follow the formal rules dictated by the syntax of the chosen symbolic set, i.e. the chosen language. It restricts the semantic account’s view that any representation is a valid mapping of a computational state.

Jerry Fodor, a proponent of both the semantic and the syntactical type of computation, argues that these definitions paint a picture of computation in the mind as pure symbol operation, as we saw in the case of the Turing Machine (TM). The idea is that this could happen with any kind of tokens; these tokens could be, for example, representations of the external world within the mind. The point is that once it is possible to designate brain states that can be associated with different symbols, it is possible to think of a brain as a typical TM. The syntactic definition arose from a critique of the semantic definition, claiming that not everything happening in the mind has to be a representation of something else. Some operations are purely syntactic, allowing the mind to build more complex structures that only then carry semantic content, as in the common use of any language.

Both the syntactical and the semantic view were created with the idea of a mind or mental process as a computational state behind them. Accordingly, their critics claim that they inflate the idea of what computation is and expand it without sufficient theoretical and experimental background.

2.5. Piccinini: Mechanistic computation definition

In the mechanistic view, a system is computational when its computational states can be distinctly mapped to specific discrete (or possibly analog) mechanistic states. For example, we know the measurable states of the computing units we manufacture: a processing unit performs an operation O while computing task C. It can be argued prima facie that this is a circular definition, since it accepts only mappings of already known computational implementations; a system or an object can be claimed to be computational only when its design fulfills the implementation of specific computational states. The idea behind the mechanistic account is to give a less obscure definition (Piccinini 2017) in relation to actual digital computers and their theoretical counterparts, and thereby to limit the looming idea of pancomputationalism that plagues the earlier mentioned definitions.

“According to the mechanistic account, concrete computing systems are functional mechanisms of a special kind — mechanisms that perform concrete computations” (Piccinini 2017, sec. 2.5). As an example, if we are speaking about computations, we can start with an actual digital computer, or at least a calculator: in terms of section 2.2., either Chalmers’s CSA (computer) or FSA (calculator). The first would be a universal discrete-state computational machine on the Turing machine scale, equipped with a processing unit and a memory that can hold both the instructions for its operation and the data it operates on, while applying strict requirements for computational state-to-matter mappings. It operates by routing electricity through specially equipped circuits in order to receive a discrete state at a resulting output gate.

With a calculator the case is simple, as the program is already hard-wired into the device’s electrical board and it is not equipped with memory. Output is usually provided via its display and input is given via buttons with assigned values or functions. The calculator is an example of a finite state automaton, as it can perform only certain operations, such as arithmetical ones. A modern computer is capable of much more thanks to the fact that it can store any kind of programming that can be translated into its basic machine code. A program is a set of instructions that can be interpreted as machine code and run as a set of very simple actions, realized by running an electric current through transistors in the processing unit chip of a computer. These chips consist of basic parts called logical gates. A logical gate is a mechanistic model of a specific logical operation, such as conjunction or disjunction. Gates provide output in Boolean form, which is then used in the binary encoding we know as 1s and 0s.
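To make the gate level concrete, here is a minimal sketch of logical gates composed into a half adder, the smallest step from Boolean outputs to binary arithmetic. The functions are pure-Python stand-ins for the hardware circuits described above:

```python
# Logical gates as tiny functions standing in for hardware circuits.
def AND(a, b): return a & b   # conjunction
def OR(a, b):  return a | b   # disjunction
def XOR(a, b): return a ^ b   # exclusive disjunction

# A half adder wires two gates together to add two one-bit numbers,
# producing a sum bit and a carry bit in the familiar 1s-and-0s encoding.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum, carry)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```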

The mechanistic definition allows for computation in a system that fulfils its conditions: a finite number of logical operations carried out by separate mechanical parts of the system, based on a mapping between input and output. The behavior of these separate units must allow only specific states as output. If there is no method to describe specific processes as computational, they do not qualify under the mechanistic account, at least not until a proper explanation is found.

2.6. Mind to brain and extension of computational terms to this problem

This short section does not explain any additional computational definition; rather, I want to relate the previous ideas to the view of the mind. What is nowadays known as the Classical Computational Theory is an approach that tries to describe the mind as a computational construct with the aid of the views of computation developed by Jerry Fodor, that is, the Semantic and Syntactic accounts of computation. These definitions were adopted because, as Fodor believes, they can be extended from computers as machines to human brains and minds. In sections 1.4. to 1.6. I mentioned some of the issues Artificial Intelligence research has faced in trying to replicate complex human behavior using an approach that can be characterized as mind-as-software, an approach linked to the functionalist and representational views of mind.

These problems led some philosophers to part with the ideas of CCTM and to claim that the mind cannot be characterized as computational; see Piccinini and his mechanistic definition. In his view the mind is completely different from what we now call computers. There are other approaches, such as Chalmers’s, largely adopted by Andy Clark, which in general say that mappings of computational to physical states in a system such as the brain might exist. My question in what follows is whether there is a solution to this issue: whether it is possible to construct a model of the mind that is computational and yet escapes the narrow hypothesis of the mind as software of the brain.

The following chapter will present research that inspired a new field of robotics to emerge: embedded robotics, which might offer helpful solutions to some of the problems mentioned in chapter 1, as well as to some issues in chapter 2, such as what a more precise computational definition would be.

3. Embeddedness and Embodiment of the mind: the predictive processing approach

Embeddedness and embodiment of certain cognitive processes could be used to explain how these processes are possible without the need for a computational model. This idea has in fact created a whole new field in robotics that explores possibilities inspired by the bodily designs of many different species. The concept is inspired by an unusual source, given the debate’s adherence to the Analytic tradition so far: it comes from the phenomenology of Martin Heidegger as presented by Hubert L. Dreyfus, which I have shown in section 1.5. Paul Churchland, in turn, realized in his works that the computational ideas were in some part lacking a scientific foundation and sought a way to introduce the connectionist approach to the debate. On the phenomenology side Heidegger is not the only thinker involved: the work of Maurice Merleau-Ponty served to support the new research with which Dreyfus stirred the computational debate. This view potentially saves us from the idea of the conscious mind as the sole puppet master of its body, where without direct control there is no action. It brings us instead to the image of the brain as a student within its own fully alive body, learning how to achieve certain goals by collaborating with it. In sections 3.1. and 3.2. I show what was behind these changes in computational mind theory. Section 3.3. introduces Specialized Purposive Adaptive Couplings as a concept that describes the basic functional ideas of embodiment in relation to cognition. Section 3.4. introduces one of the most important embodied hypotheses I work with, predictive processing. In 3.5. I connect these concepts into one picture.

3.1. Neuroscience, neural networks and new phenomenological perspectives

All the computational models mentioned in chapter 2 are purely theoretical views of how a computational process in a meat-ware could replicate the processes that are nowadays carried out by silicon chips. What neuroscience brought into the debate is an actual reflection of what the physical systems of the brain look like. It has yet to explain, however, the exact mapping of brain states, which can be experimentally measured using for example MRI scans, onto the mental states of thought, which was the aim of philosophers engaged with the idea of the computational mind.


The core topic here will be so-called neural networks, both artificial and biological, and how they can be deemed computational in the first place. In section 1.6. I briefly spoke about neural networks in the human brain. Research into the microscopic processes of the brain has inspired the development of their artificial counterparts. Artificial Neural Networks, which have been designed since the 1970s, are capable of processing input data in a very specific way. Even though they are modelled within computational environments with computational tools, they might not be viewed as computational by some scholars in the field, because there is no strict syntax to be operated on; we lack the typical computational terminology in their description. In fact, the important factors for neural processing are not logical and syntactic relations, but rather connection weights and so-called sigmoid or corrective functions of the physical structure of the network. It is, however, possible to track this processing via the so-called backpropagation method. Backpropagation simply traces the error signal back from the output through each layer of connections toward the input. This can be done only after the network has been stimulated with input data and produced some result in the output layer.
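As a toy illustration of connection weights, a sigmoid function and backpropagation working together, the sketch below trains a minimal two-layer network on the logical OR pattern by propagating the output error back through the weights. This is a generic textbook scheme, not a model of any biological network; the network size, learning rate and epoch count are arbitrary choices:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A minimal 2-2-1 network: two hidden units and one output unit,
# each with two input weights plus a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def train(data, lr=0.5, epochs=5000):
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            # Backpropagation: trace the error from the output layer
            # back through the connection weights toward the inputs.
            d_o = (o - t) * o * (1 - o)                              # output delta
            d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]  # hidden deltas
            for i in range(2):
                w_o[i] -= lr * d_o * h[i]
            w_o[2] -= lr * d_o
            for i in range(2):
                w_h[i][0] -= lr * d_h[i] * x[0]
                w_h[i][1] -= lr * d_h[i] * x[1]
                w_h[i][2] -= lr * d_h[i]

# Logical OR as training data: input pattern -> target.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
train(data)
```

After training, the network has learned the pattern purely by weight adjustment; no syntactic rule for OR is stored anywhere in it.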

The brain itself is an incredibly complex neuronal network consisting of a huge number of connected neuron cells. These brain cells are perceived as the basic level of the brain structure. Neurons are specialized cells that do not adhere to each other in a dense tissue structure; in fact they keep quite a distance from each other, the space in between filled mostly by fatty matter. They do have connections that stretch out to their neighbors, and these connections create an astoundingly complex web. Neurons communicate with each other via outstretched extensions of their bodies called axons. Axons transmit weak electric impulses that stimulate the release of chemicals at junctions called synapses. Through the synapses these chemicals, called neurotransmitters, are transferred to the neuron cell body to which they are attached.

Here we see some minor similarities between the brain and the silicon computer structure. In a computer, wires transmit electric current into transistors, which either amplify or cut off the impulse. In the case of neurons, the response is not that simple: a neuron produces another kind of output signal as it receives an impulse from another neuron, and the response can span a whole scale of reactions, which increases the complexity of the information processed by the circuitry in comparison to a digital computer.

If we add the perspective of the mental phenomena of the mind to this view, we arrive at a picture of a complex system with many levels of conscious and subconscious decision making (such as thought and its intentionality, mixed with the processes keeping our internal organs running). This is the situation Fodor tries to describe with his Language of Thought hypothesis and modular mind theory: a sort of underlying programming language of the brain, and then the mind itself as consciousness. There appear to be many levels of activity going on simultaneously in the brain. Can we explain, in the same computational terms, the underlying level of body control as well as the higher cognitive capacities of the human mind? And how can we capture them in a computational way that reflects this best? LoT (refer to section 1.3.) offers some explanations of how certain inputs can be processed, but by itself LoT does not really connect the physical structure and computational implementation of the brain to the mind. As will be explained further in chapter 5., it is missing some crucial parts of the story.

Among those who argued against the view of the brain as a syntactic processor, Paul Churchland should be named (1986). Churchland questioned the idea of the mind as software and argued in favor of connectionism. Connectionism drops the mental-to-brain mapping at the level of representations as syntactic operations by a meat Turing Machine; it upholds the idea that the mind is encoded simply in the physical structure of neural connections. Churchland thereby tries to reflect more accurately in theory the advances within neurobiology. Another perspective that inspired the debate on the computational mind is based on the phenomenological concept, coming from the works of Martin Heidegger and promoted by Dreyfus and Dreyfus (1980) and mainly in Dreyfus (1990), of a human being thrown into the world. Phenomenology focuses on the human condition in contrast with the mechanical simplicity of the syntactical processing machine. The work of the French philosopher Merleau-Ponty proved useful in the case study of the Dreyfus brothers; more on that topic will be shown in the following section 3.2. It is useful for the notion of neural processing because it describes the sensory field in terms of pattern selection from experience and the active search for those patterns within the perceptual field, the rest of which is separated off as consciously ignored (subconsciously processed) background. More will be said about this topic in section 3.4. and chapter 4.

3.2. Thrown machines: the Heideggerian view of embeddedness of mind and skill acquisition

The Dreyfuses’ research report (1980) sums up and describes the human type of skillful learning as an argument opposing some of the ideas of hard artificial intelligence. The article, called A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition, proposes that instead of the adoption of formal rules and the creation of better algorithms, experience and memory play the more important role in the learning process:

“This striking dependence on every day, concrete, experience in problem solving seems an anomaly from the point of view of the information processing model of mental activity whose basic assumption is that all cognitive processes are produced by formal manipulation of independent bits of information abstracted from the problem domain.” (1980, p. 5)

The Dreyfuses stress the importance of pattern recognition as progressively more important as the subject acquires a higher skill level in the tested practice. Human learning in its first stage might show similarities to the machine learning process, where memory input and actual computation of possible outcomes of certain situations are involved. This might be expressed in calculating the possible destinations we can reach using a certain speed of movement, or what will happen if we apply force to a branch we want to hang on. This is the hard way of learning (1980, pp. 2–3).

Their research was oriented toward descriptions of changes in the perception of the task environment in the area of learning a second language. The perceptual differences they extracted from participants’ reactions served as the base for the later description of the five stages of skill acquisition. Following Merleau-Ponty, they argued that an artificial experimental setup could distort the subjects’ reactions; the experiment was therefore conducted on a topic as close to lived phenomenal experience as possible (1980, pp. 1–2).

Another useful example of skill acquisition mentioned by the Dreyfuses is a novice driver just learning how to control the vehicle. The driver has to spend so much concentration and effort on all the tasks connected to driving that at this stage it is hard to do any other task alongside the actual driving. Only in the later, more skilled stages does driving become a routine that no longer requires such concentration. In those stages it becomes something different: awareness of what we have learned moves to the background. Experts in their respective fields are hardly capable of describing exactly the mental processes responsible for their skillful action.

Learning becomes more of a pattern recognition process than a precise following of formal rules for the specific domain and a calculation of the required action. This has led to new research in the area of robotics focused on designs copying nature, with emphasis on their being in the world. These robots are often described as thrown machines (Clark 2000, Chapter 6). I expand on the view of thrown machines in the next section.

3.3. Embedded robotics and views of SPAC

While it seemed that robotics and hard AI attempts were facing a problem that could not be solved by classical computational methods, a new view based on the principles of embodiment emerged. I present its basic structure, which Clark calls the Special Purposive Adaptive Coupling (2000, Chapter 6).

Special Purposive Adaptive Couplings, or SPACs for short, are natural wirings of biological systems that are oriented towards certain specific aspects of their surroundings. The orientation, as many examples suggest, is mainly focused on ‘stuff’ that is external to the organism. A SPAC thus couples something that is happening outside the organism with something inside it, in a way that creates some benefit for action in the world. It serves a purpose as part of a system designed to help it achieve certain goals or become successful in a specific action.

Important factors in these systems or system circuits are mainly their simplicity and specialization. A SPAC is a bodily or material system that is ready to engage with only certain aspects of its surroundings, or of the world other than itself. A hand can be used as an example of a part of such a system. The hand serves its owner for the purpose of motoric and spatial-orientation action. There are many ways to imagine how we could use a hand, and although it is often referred to as the tool of tools, it is still a highly specialized tool. The hand cannot, for example, be mistaken for the source of an idea, nor can it be used to extract nutrients from food and deliver them directly to the blood stream as the digestive tract does. Its functions are to grab, feel, move, touch, show, etc. None of its adaptations to the purpose it serves suggests that it digests or drives imagination by any of its internal parts.

Is the hand an example of a SPAC then? Not just on its own. By itself, without any driving cognitive power, it functions merely as a statue made from bones and tissue. For an idea to come from the perception of a hand, the hand has to be viewed as a sort of background to other special purpose adaptive couplings within the organism, those capable of imagination and of giving it instructions. The hand does not possess the ability to hear sounds; even if it is sensitive to sound, this ability is so limited that we cannot say it serves its owner to any advantage. But it exists to be able to grab things that fit its size and various other specifications (not too hot, solid, not too heavy, etc.), based on an impulse to perform such an action sent to it by a certain part of the human neural wiring.

Another important aspect of a SPAC is that it is part of a system consisting of smaller parts that can be organized into structures. As these structures take form, they separate themselves from other structures and create their own identity among them. It has to be clear from observing such structures that they differ in a way that lets us identify their adaptation to their surroundings and their potential to serve some purpose. We probably cannot say that the hand by itself is a SPAC, because there is no motivation to action behind the hand itself; by itself it has no purpose other than to be, no function, and no intelligible role in the world.

Once the hand is coupled with a will that drives it in space and controls its movements, then the hand and the brain, or parts of the brain plus the neural system reaching from the brain to the hand, represent an example of a SPAC. This structure can be described by its form: an operating center, wiring leading to the operator, and the operator itself. The operator is adapted to be able to operate on certain objects in the world. There are many other examples of SPACs within each single organism. They could also correctly be described as mechanisms. A mechanism is a form of structure that allows for describing its parts and has a clear capacity for some actions. Special adaptive couplings, though, have a very simple setup and might also be viewed only as parts of a higher, more complex mechanism. In the context of my thesis, a SPAC is viewed as a mechanism as opposed to a biological organism or a fully independent system. An organism or a system of high complexity may contain many mechanical qualities, but it might be too complex to be understood as a simple mechanism when its defining processes happen on multiple levels and can only be described via different special methods.

SPACs are an ingenious idea that can show a way out of a very difficult problem. If we imagine an organism separated into a structure of logical atoms such as hand, nerves, and brain, we perceive their function only within the system, without any relation to the actual world in which they occupy space. A device like the brain is then fully responsible for being open to the world in a way that uses the rest of the body to act in it. This seems simple, but it is much more difficult to try to construct an operational system based on instructions from a fully detached-from-the-world device.

For example, once we want to develop an algorithm by which a machine with a sensory system and an operator hand manipulates objects, we can do so only by knowing whether the object to be transported is not too heavy, whether it is solid, and whether it will damage the operator hand in some way. Then we also have to incorporate knowledge of the locations where we want to use the hand, time and speed conditions, and the power our system is using, limiting it for certain scenarios and increasing it for others. All this effort goes into the single action of moving a hand.
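To see how quickly such conditions pile up, here is a hypothetical fragment of the explicit checks a detached controller would need before a single grasp. Every property name and threshold below is invented for illustration; the point is only how many conditions one movement already demands:

```python
# Hypothetical explicit-rule controller for a single grasp action.
# All property names and thresholds are invented for illustration.

def can_grasp(obj, hand, env):
    checks = [
        obj["weight_kg"] <= hand["max_load_kg"],        # not too heavy
        obj["solid"],                                   # graspable at all
        obj["temperature_c"] <= hand["max_temp_c"],     # will not damage the hand
        env["target_reachable"],                        # location within reach
        env["time_budget_s"] >= hand["travel_time_s"],  # fast enough
        hand["power_available_w"] >= hand["power_needed_w"],  # enough power
    ]
    return all(checks)

obj = {"weight_kg": 0.4, "solid": True, "temperature_c": 30}
hand = {"max_load_kg": 2.0, "max_temp_c": 60, "travel_time_s": 1.2,
        "power_available_w": 50, "power_needed_w": 20}
env = {"target_reachable": True, "time_budget_s": 3.0}

print(can_grasp(obj, hand, env))  # True, but only after six explicit checks
```

An embodied coupling sidesteps this list entirely: the relevant constraints are built into the wiring rather than enumerated in advance.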

A SPAC is an encapsulation of the embeddedness of an organism in the world: a definition, given by cognitive science, of the phenomenological perspective of a body in the world. It has proven explanatory and helpful in describing many sensory-motoric aspects of different organisms. A famous example is the SPAC for the cricket’s spatial orientation, which was the source of a successful robot model. Instead of having to use complicated computational models to describe the way the cricket orients itself in the world via a representational system operating with vectors, the cricket uses the simple mechanical wiring of its auditory receptors. These are placed, in the cricket’s case, on its legs and body at relative distances from each other and from the central neural system. Using the different positions of these receptors on its body, the cricket can determine the direction of an incoming sound. This is done by comparing the different times at which the audio signal hits its receptors, thus determining the closest point on its body to the signal source.
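The cricket’s trick can be given a computational gloss, even though in the animal it is plain wiring rather than an explicit program. The sketch below simply picks the receptor the sound wave reaches first; the positions and distances are illustrative numbers, not measured cricket data:

```python
import math

# Illustrative sketch of sound-source localization by arrival times.
# Receptors at known positions record when a sound wave reaches them;
# the earliest arrival marks the body point closest to the source.

SPEED_OF_SOUND = 343.0  # metres per second, in air

def arrival_times(source, receptors):
    """Time for sound from `source` (x, y) to reach each receptor (x, y)."""
    return [math.hypot(source[0] - rx, source[1] - ry) / SPEED_OF_SOUND
            for rx, ry in receptors]

def closest_receptor(source, receptors):
    times = arrival_times(source, receptors)
    return times.index(min(times))  # the receptor the wave hits first

# Two receptors on opposite sides of the body, sound coming from the left:
receptors = [(-0.01, 0.0), (0.01, 0.0)]
print(closest_receptor((-1.0, 0.3), receptors))  # 0: the left receptor fires first
```

In the insect, of course, no such arithmetic is performed; the timing difference is absorbed directly by the physical layout of the receptors.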

So, in the sense of computation, if we have a SPAC present as an open tool to handle certain aspects of the system-to-world relation, we can adopt the view that a SPAC represents, to a certain extent, what a set of logical gates does for a computer. It allows for one certain approach to a situation and only that, as it is not adapted to a different use, unless it changes through some evolutionary process into another kind of tool, a different SPAC.


3.4. Predictive processing

Taking off from the idea of neural networks and robust connectivist neural-to-body systems joined together by different pieces of specialized circuitry, Andy Clark continued to expand the view into a more precise description of how brain-state-to-mental-concept mappings could work. Initially, together with David Chalmers and expanding the computational views, he proposed the Extended Mind hypothesis, which I will also use later on. The extended mind proposes a fully connectivist computational mechanism that uses not only its bodily tools but also external ones. The Heideggerian being-in-the-world (in the computational debate often referred to through “thrown machines”), coming from the work of Dreyfus I introduced in section 3.2., is emphasized by Clark to the limit where it becomes the actual ontological status of the mind. Mind is not only in the body (brain, limbs, etc.) but is also offloaded onto books, phones, computers, any tools that we use to act in the world. The argument for such a view comes from counterfactual analysis of human activity: for example, if I did not put the date of a meeting into the calendar in my phone, I would most likely forget that I have any appointment. The agent’s actions depend more and more on the external tools she adapts for acting in the world; see more in Clark and Chalmers (1998). Chalmers later set aside the computational view of the mind due to his interest in consciousness, which he believes is not solvable through materialist reduction.

From the concept of the extended mind, Clark moved on to his hypothesis about the architecture of cognitive processes, building on his previous work. He pursues classical computational problems in a new way: how does the mind approach the world in a manner that complies with the materialistic view, yet without falling victim to the relevance problem explained in the first chapter? Taking into account the adaptability of our mind in picking frugal decision paths to solve whatever task it is presented with, it is necessary to develop a concept capable of explaining such an ability using the ideas I have previously mentioned from neuroscience, psychology and cognitive science.

Predictive Processing draws on deep neural learning processes, Clark (2000, Chapter 4); it tries to piece together what these networks are good at into a functional cognitive system. The premise is that neural networks are very good at recognizing patterns within any given data set. Cognitive science prior to Predictive Processing theory assumed that the brain is basically a passive unit that accepts and processes incoming sensory data and applies the resulting information to the specific tasks of our body. With the model of the modular mind presented by the classical computational theory, we have seen that this requires an incredibly sophisticated and efficient central processing unit that combines all the pieces of the puzzle together. How is this done? In theory, it seems impossible to achieve by task-specific recognition programming of the form "for case A, select from the possible set N in memory", as would happen in a typical syntactic processing unit. In Predictive Processing, by contrast, neural networks actively approach incoming sensory data by trying to guess what it will be like. This guessing method is governed by Bayesian-like principles that are somehow embedded in the structure of the neural network in the brain.
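The "guess first, correct later" scheme can be made concrete with a toy loop: a single unit predicts the next sensory sample, measures the prediction error, and nudges its internal model to reduce that error. The function name, learning rate and one-value "model" are illustrative assumptions, not anything drawn from Clark's text.

```python
# Toy sketch of learning by prediction-error minimization.
# The "model" is just a running estimate of the next sensory input;
# real predictive-processing accounts posit hierarchies of such units.

def learn_by_prediction(signal, rate=0.3):
    model = 0.0              # current best guess at the next input
    errors = []
    for sample in signal:
        error = sample - model   # prediction error (the "surprise")
        model += rate * error    # adjust the model toward the data
        errors.append(abs(error))
    return model, errors

# A repetitive environment: the same stimulus over and over.
model, errors = learn_by_prediction([1.0] * 20)
# Prediction error shrinks as the model comes to expect the input.
print(errors[0], errors[-1])  # large at first, near zero at the end
```

The qualitative point is the one made in the text: nothing looks up "case A" in memory; the system simply becomes less and less surprised by a world it has learned to anticipate.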

Through this active approach the system is able to learn and adapt to different situations and environments in a way that, according to Clark, explains more clearly several issues connected to the relation CCTM has to embodiment. The mind develops alongside its body by guessing and predicting its behavior. The brain tries to predict the next incoming sensory data on the basis of models of the situation acquired from previous experience. In this way it learns to use its body in the manner Dreyfus describes as skillful learning.

The brain is in a constant process of building and correcting its models, or hypotheses, of the world. These predictions are based on Bayesian-style approximations, Clark (2015, Chapters 1–5), weighing what is known and believed against incoming results. To be clear, we are talking about a subconscious process related to basic animal activity; the same descriptions are used in the examples Clark draws from the animal world. The predictions concern incoming sensory data; they are not abstract theoretical models of reality. They are patterns arising from the robust spectrum of impulses we gather from our senses.
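The Bayesian-style weighing of prior belief against incoming evidence can be sketched as a textbook update over two competing hypotheses. The scenario, hypothesis names and numbers are invented for illustration; nothing here claims the brain computes these values explicitly.

```python
# Minimal Bayesian update: posterior ∝ prior × likelihood,
# normalised over the competing hypotheses.

def bayes_update(priors, likelihoods):
    """Reweight hypotheses by how well each predicted the data."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Prior: the system mildly expects the rustling sound to be wind.
priors = {"wind": 0.7, "predator": 0.3}
# The observed sound pattern is far more likely under "predator".
likelihoods = {"wind": 0.1, "predator": 0.8}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # "predator" now dominates despite the lower prior
```

This is the sense in which prediction scales "the known and believed" against the evidence: a strong enough mismatch between prediction and input overturns even a confident prior.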

3.5. How do Predictive Processing and Embedded Robotics work together?

On the Embodied Mind hypothesis, the mind cannot be conceived of as simply a program running on wetware. It is a mechanism more than a computational system: it consists of many specialized purposive adaptive couplings linked together into a web of neural connections and their bodily extensions. By itself, the 'brain in a vat', which is how the model of the mind looks under the Classical Computational Theory, is not capable of any kind of learning, or of developing anything close to the abilities of a full-bodied creature.

The Embodied Mind presents a solution to the relevance problem, which is one of its biggest achievements, especially in robotics. The nature of SPACs allows only certain ways of approaching the world and thereby also defines the relevance of a large part of sensory data processing; for instance, it orders how well things are suited to manipulation by hand. We are more prone to recognize patterns in objects that rank higher on this scale of embodiment-friendliness or readiness than in objects to which our bodies are not so readily adapted. This could be due to an object's size, shape, scale, color, temperature and many other physical factors.

The Embodied Mind also provides a solution to the spatial-orientation problems faced by AI scientists in the 1970s. There is no more need for complex and hard-to-process spatial representations. These can be narrowed down in the mind to only those aspects of the surroundings that are sensed through the attuned embodied receptors. In surface-dwelling organisms, most sensory organs sit on the higher parts of the body; this allows a better angle towards the surface and a larger area of detection for possible threats, food sources, etc.

The Embodied Mind creates a framework for revised research on, and theory of, the mind. Mind and body are viewed as one and the same organism, which acquires new skills through deep neural network learning. Compared with the classical computational idea, this seems to better reflect both scientific observations of inner brain processes and human intuitions about how learning, memorizing and recognition happen within the mind.

We can now depict with more clarity an organism with some of its specific abilities, one that is not just a brain-to-body schema but one in which these processes overlap. Some operations taken up by the brain that are consciously reflected then do not have to be considered computational operations, but rather simpler mechanisms in the sense of a reflexive SPAC arrangement. It becomes easier to delineate the actual mental processes if we set aside those that are directed by body-to-world readiness. In the example of the hand as a SPAC, spatial coordination comes free of hard processing costs, as do many reflexive actions: it becomes a natural trait of the design of the bodily part.

4. The strength of Predictive Processing hypothesis

In the previous chapter I developed the idea of a mind that sits within the complex circuitry of a well-designed body, adapted to use as much as it can of what the world has to offer. Connectivism shows the commitment of the theory to the facts presented by neuroscience: the brain's structure seems to be much closer to a very complex web than to a very powerful engine running syntactic operations. But what does that tell us about the way the brain actually uses its body, and about how the mind is wired and encoded in those structures?

What I am interested in is how brain states can, in theory, be mapped onto mental states. If this is not possible, then the computational view of the mind is itself meaningless. The kind of mapping I will present is based on A. Clark's hypothesis about the structure of such an embodied mind. I try to address the points where I think predictive processing gives us a good theoretical background for further investigations. Section 4.1. discusses problems related to bodily control. Section 4.2. returns to the relevance issues and shows how the predictive processing hypothesis solves them. In section 4.3. I speculate about the nature of reasoning as a bottom-to-top processing activity. The last section shows how predictive processing uses attributes of artificial neural networks in describing possible ways learning processes work in the brain.
