
Tilburg University

Technologies on the stand

van den Berg, B.; Klaming, L.

Publication date:

2011

Document Version

Publisher's PDF, also known as Version of record

Link to publication in Tilburg University Research Portal

Citation for published version (APA):

van den Berg, B., & Klaming, L. (Eds.) (2011). Technologies on the stand: Legal and ethical questions in neuroscience and robotics. Wolf Legal Publishers (WLP).


Technologies on the Stand:
Legal and Ethical Questions in Neuroscience and Robotics

edited by

Bibi van den Berg & Laura Klaming

Bibi van den Berg, Laura Klaming (eds.)

ISBN: 978-90-5850-650-4

Published by Wolf Legal Publishers (WLP)

P.O. Box 31051

6503 CB Nijmegen

The Netherlands

Tel: +31 24 355 19 04

Fax: +31 84 837 67 00

E-Mail: info@wolfpublishers.nl

www.wolfpublishers.com

Cover design: Debbie Rovers & Ellen Knol


Foreword

At present, neurotechnologies such as functional Magnetic Resonance Imaging (fMRI) and Deep Brain Stimulation are mainly used in the health sector for research, diagnosis and therapy. But neurotechnologies could also be used for human enhancement, for instance to improve cognitive functions or to morally enhance convicted offenders. Moreover, insights from neuroscience are increasingly used for legal purposes, for instance to determine a suspectʼs responsibility for his actions, or to distinguish truthful from deceptive statements. This raises the question of whether neuroscience has a genuine contribution to make to (criminal) law at this point in time. Regardless of this concern, neuroscience has already entered the courtroom and influenced legal decisions. Using neurotechnologies for legal purposes raises a number of important ethical and legal questions that require further discussion, most importantly regarding the admissibility of neurotechnologies in court.

Similarly, the application of robotics and autonomous technologies in various (social) situations, including the home, hospital environments, traffic and in war, raises a number of ethical and legal issues. These include questions such as: what are the ethical implications of applying robots in the health sector with regard to our ideas about human dignity and autonomy? What are the consequences of using robotics in war? And can we hold robots liable if they play an ever more important role in our daily lives? The increasing autonomy and intelligence of robotics technologies, moreover, raises questions regarding the moral and legal standing of such machines: should we implement ethics into robotic soldiers or robotic nannies, is this feasible, and if so, how should we go about designing moral machines?

Technologies on the stand: Legal and ethical questions in neuroscience and robotics is a collection of papers that deal with diverse topics from the fields of law and neuroscience on the one hand, and law, ethics and robotics on the other. The book is organised as follows: the first part deals with different topics from the field of law and neuroscience, ranging from criminal responsibility to the legal implications of using neuroscientific evidence, to human enhancement and its ethical and legal implications. The second part of the book deals with diverse topics from the field of law, ethics and robotics, and includes chapters on the morality of robots, the ethical and legal status of robots, and the regulation of behaviour through the design of robots.


us to assess the responsibility of someone who becomes mentally ill subsequent to committing their crime. In addition, the chapter addresses the question whether direct brain interventions aimed at mental capacity restoration help us to make a convicted offender more responsible.

Section B of part I deals with the legal issues raised by using neurotechnologies in the courtroom. In Chapter 3, Stefan Seiterle discusses the use of fMRI for lie detection as one of the core goals of criminal procedure. The main focus of this chapter is on the question whether, and under what circumstances, neuroscience-based lie detection would be admissible in criminal courts in Germany. Chapter 4, by Jan Christoph Bublitz, deals with the ethical and legal issues of using neurotechnologies to change the minds of other people outside of a therapeutic context. Bublitz explains how neuroscience may change legal thinking about the protection of the mind. Chapter 5, by Laura Klaming, addresses one specific challenge of using neuroscience in the courtroom, i.e. the potentially overly persuasive influence of neuroscientific evidence on legal decision-making. More specifically, she emphasises the importance of presentation mode in the discussion about the admissibility of neuroscientific evidence in court. In Chapter 6, Tommaso Bruni discusses cross-cultural variability at the neural level and its consequences for the use of fMRI for lie detection, stressing that fMRI lie-detection may hinder the ascertainment of truth if research does not take cross-cultural variability into account.

Section C of part I of Technologies on the stand deals with enhancement and the various ethical and legal questions that arise with regard to human enhancement. In Chapter 7, Anna Pacholczyk discusses the use of neurotechnologies for the purpose of moral and social enhancement. Besides examining what we mean by moral enhancement and what is currently possible, she discusses the potential problems with morally enhancing interventions. Chapter 8, by Elizabeth Shaw, focuses on the possibility of employing neurotechnologies in the penal system to morally enhance offenders. Shaw argues against attempting to alter offendersʼ goals and values using neurotechnologies that wholly or largely circumvent the offenderʼs rationality, mainly for reasons of equality and moral dialogue. Chapter 9, by Bert-Jaap Koops and Ronald Leenes, deals with the possibility of using new technologies to improve our sight and vision, and outlines a number of ethical and legal issues that may arise with this as yet hypothetical form of human enhancement. In Chapter 10, Pieter Bonte asks why we should be natural, presenting five arguments against the supposed duty to ʻbe naturalʼ as grounds for outlawing human enhancement.

Part II of this book deals with law, ethics and robotics. Section A of part II addresses the foundations of roboethics. Chapter 11, by Wendell Wallach, focuses on ethics, law, and public policy in the development of robotics and neurotechnologies. Wallach argues that robotic technologies, in combination with neurotechnologies and other emerging technologies, will contribute to a transformation of human culture, which will pose important challenges that need to be addressed. In Chapter 12, Samir Chopra asks whether robots can be considered moral agents, focusing on the ascription of an appropriate set of beliefs and desires to a putative intentional entity. Chapter 13, by Steve Torrance, deals with the ethical and legal status of artificial agents. Specifically, the moral status of robots is linked to their consciousness.


Section B of part II deals with ethics and the design of robots, and with the implementation of ethics or morality in robots. In Chapter 15, Andreas Matthias analyses the concept of an ethical governor, which is supposed to effectively control and enforce the ethical use of lethal force by robots on the battlefield, and which has greatly influenced the design of war robots. He argues that the concept of an ethical governor, as favoured and already implemented by the military research community, is misleading and does not address the moral problems it is supposed to solve. Chapter 16, by Aimee van Wynsberghe, outlines a framework for the ethical evaluation of care robots. Specifically, van Wynsberghe emphasises the importance of understanding the complexity of care practices, and the consequences this may have for designing care robots. In Chapter 17, Joshua Lucas and Gary Comstock ask whether machines have prima facie duties by comparing two competing moral theories as the basis for algorithmic artificial ethical agents.

The final section of part II focuses on legal issues in robotics. Chapter 18 deals with the legal responsibility of robots under Italian and European law. Chiara Boscarato discusses whether a robot should be considered an artefact or whether it should be compared to a person, for instance to a minor or a person of unsound mind. In the final chapter, Bibi van den Berg argues that scholars in the field of Law & Technology ought to widen the scope of their research into techno-regulation, to include not only the intentional influencing of individuals through technological artefacts, but also more subtle and implicit forms thereof. She discusses examples from various robotics domains to explain how this could work.

The editors wish to thank the following persons: first and foremost, the authors of the book, whose work has turned editing this volume into a real pleasure. We also wish to thank the reviewers for their time and effort in providing feedback on all of the papers. We thank Han Somsen and Anton Vedder, who, in their roles as heads of the Tilburg Institute for Law, Technology, and Society (TILT), made it possible to organise the conference that was at the heart of this book. Thanks also to the members of the organising team who supported us in realising the conference and the book: Leonie de Jong, Femke Abousalama and Vivian Carter. We thank Debbie Rovers and Ellen Knol for the great job they did in designing promotional materials for the conference and the cover of this book. And last but not least, we thank our publisher, Simone Fennell, for a job well done.


Contents

Chapter 1

NeuroLawExuberance: A plea for neuromodesty

Stephen J. Morse 23

Chapter 2

Capacitarianism, responsibility and restored mental capacities

Nicole Vincent 41

Chapter 3

Legal admissibility of suitable fMRI based lie detection evidence in German criminal courts

Stefan Seiterle 65

Chapter 4

If manʼs true palace is his mind, what is its adequate protection? On a right to mental self-determination and limits of interventions into other minds

Jan Christoph Bublitz 89

Chapter 5

The influence of neuroscientific evidence on legal decision-making: the effect of presentation mode

Laura Klaming 115

Chapter 6

Cross-cultural variation and fMRI lie-detection

Tommaso Bruni 129

Chapter 7

Moral enhancement: What is it and do we want it?

Anna Pacholczyk 151

Chapter 8

Free will, punishment and neurotechnologies

Elizabeth Shaw 177

Chapter 9

Cheating with implants: Implications of the hidden information advantage of bionic ears and eyes

Bert-Jaap Koops, Ronald Leenes 195

Chapter 10

Why should I be natural? A fivefold challenge to the supposed duty to ʻbe naturalʼ as grounds for outlawing human enhancement

Pieter Bonte 215

Chapter 11

From robots to techno sapiens: Ethics, law, and public policy in the development of robotics and neurotechnologies


Chapter 12

Taking the moral stance:

Morality, robots, and the intentional stance

Samir Chopra 271

Chapter 13

Does an artificial agent need to be conscious to have ethical status?

Steven Torrance, Denis Roche 281

Chapter 14

Roboethics: The problem of machine responsibility

David Jablonka 307

Chapter 15

Is the concept of an ethical governor philosophically sound?

Andreas Matthias 322

Chapter 16

Understanding the complexity of care in context and its relationship to technical content: The greatest challenge for designers of care robots

Aimee van Wynsberghe 340

Chapter 17

Do machines have prima facie duties?

Joshua Lucas, Gary Comstock 361

Chapter 18

Who is responsible for a robotʼs actions? An initial examination of Italian law within a European perspective

Chiara Boscarato 377

Chapter 19

Techno-elicitation: Regulating behaviour through the design of robots


Author biographies

Pieter Bonte studied philosophy at Ghent University and law at the Free University of Brussels (VUB). In 2010 he started researching human enhancement at the Bioethics Institute Ghent. Tackling only the intrinsic arguments for and against human enhancement of oneʼs own body (including the possibility of parental enhancement of offspring in the pre-birth stages of life), and thus excluding prudential arguments as well as questions of cultural and political interference, he seeks to present a clearer picture of what may be wrong and/or right about human enhancement in itself. In the current, first stage of this project, he analyses the normativity of human nature; he will then gauge the dramatically deepened responsibilities over ourselves and our offspring, and conclude in 2013 by proposing a rational, useable conception of human dignity to handle the disruptive new liberties brought on by human enhancement technologies.

Chiara Boscarato graduated in Law from the University of Pavia in 2009. Her final thesis, in Commercial Law, was on The Responsibility of a Holding. She is a trainee lawyer in Vigevano, Italy. Since February 2010 she has been working with the Interdepartmental Research Center ECLT at the University of Pavia, Italy. On 1 December 2010 she became a Scholarship Fellow of the University of Pavia and the ECLT Centre. Her research deals with the legal implications of the application of neurotechniques in the fields of research and rehabilitation.

Tommaso Bruni is a PhD candidate in Foundations of the Life Sciences and Their Ethical Consequences at the University of Milan.

Jan Christoph Bublitz is a junior lecturer and researcher at the Faculty of Law, University of Hamburg. His research is at the intersection of law, moral philosophy and the life sciences. His PhD thesis concerns the foundations and limits of a fundamental right to mental self-determination.

Samir Chopra is an Associate Professor of Philosophy at Brooklyn College and the Graduate Center of the City University of New York. Professor Chopraʼs interests include pragmatism, the philosophical foundations of artificial intelligence, the politics of technology, and legal theory. His latest work (co-authored with Laurence White), A Legal Theory for Autonomous Artificial Agents, is forthcoming from the University of Michigan Press in April 2011. His previous work (co-authored with Scott Dexter) on the philosophical significance of free software, Decoding Liberation: The Promise of Free and Open Source Software, was published by Routledge in 2007.


David Jablonka is a PhD student at the University of Bristol (2010 to present) in the field of Philosophy of Law. He has an undergraduate degree in Law from the University of Kent (2006-2009) and an LLM (Research Master) from the same university (2009-2010). Jablonka is currently working on his PhD thesis, with the working title Roboethics and Legabotics – Can a machine be responsible for its actions?

Laura Klaming holds an MSc degree in psychology from Maastricht University (2004) and a PhD (summa cum laude) from Bremen University (2008). At the Tilburg Institute for Law, Technology, and Society (TILT), her primary research interest lies in the area of law and neuroscience. Her current research at TILT concerns the possibility of applying neurotechnologies to various problems within the field of psychology and law, including the improvement of eyewitness memory and the detection of deception, as well as the ethical and legal implications thereof. In addition, she is involved in research on the influence of neuroscientific evidence on legal decision-making.

Bert-Jaap Koops is Professor of Regulation & Technology at the Tilburg Institute for Law, Technology, and Society (TILT), the Netherlands. From 2005 to 2010, he was a member of De Jonge Akademie, a young-researcher branch of the Royal Netherlands Academy of Arts and Sciences. His research field is law & technology, in particular criminal-law issues such as cybercrime, cyber-investigation powers, and DNA forensics. He is also interested in other topics of technology regulation, including privacy, data protection, identity, digital constitutional rights, ʻcode as lawʼ, human enhancement, and the regulation of bio- and nanotechnologies. From 2004 to 2009, he co-ordinated a VIDI research programme on law, technology, and shifting power relations. Koops studied mathematics and general and comparative literature at Groningen University, and received his PhD in law from Tilburg University in 1999. He has co-edited six books in English on ICT regulation, including Starting Points for ICT Regulation (2006) and Dimensions of Technology Regulation (2010).

Ronald Leenes is Professor of Regulation by Technology at the Tilburg Institute for Law, Technology, and Society (TILT), the Netherlands.

Andreas Matthias studied philosophy, then worked as a programmer and lecturer in programming languages for almost twenty years before becoming a philosopher again. He has worked in Germany and Hong Kong, focusing on the moral aspects of computing technology, machine personhood, and the philosophical problems posed by artificial intelligence, technology, and the pursuit of happiness. He currently lives in Hong Kong and teaches at Lingnan University.

Stephen Morse, J.D., PhD, is Ferdinand Wakeman Hubbell Professor of Law and Professor of Psychology and Law in Psychiatry at the University of Pennsylvania. Trained in both law and psychology at Harvard, Dr. Morse is an expert in criminal and mental health law whose work emphasises individual responsibility and the relation of the behavioural sciences and neurosciences to responsibility and social control. Professor Morse was Co-Director of the MacArthur Foundation Law and Neuroscience Project, and he co-directed the Projectʼs Research Network on Criminal Responsibility and Prediction. He is co-editor, with Adina Roskies, of A Primer on Law and Neuroscience (forthcoming, Oxford University Press), and is currently working on a book, Desert and Disease: Responsibility and Social Control. Professor Morse is a founding director of the Neuroethics Society. Prior to joining the Penn faculty, he was the Orrin B. Evans Professor of Law, Psychiatry and the Behavioral Sciences at the University of Southern California.

Anna Pacholczyk studied for her BSc in Cognitive Science at the University of Westminster and for her MA in Health Care Law and Ethics at Manchester University. She is currently a PhD student under the supervision of John Harris and Søren Holm. Her doctoral study considers the ethics of moral and social enhancement. Her principal research interests are in the ethics of enhancement and the social and ethical implications of developments in neuroscience, including the ethics of using new technologies such as brain imaging, TMS and DBS, as well as empirical investigations of morality and the consequences of this research for moral philosophy.

Stefan Seiterle is a research and teaching assistant in criminal law and criminal procedure law at the law faculty of the Europa-Universität Viadrina Frankfurt (Oder). He studied law at the universities of Konstanz, Amiens and Berlin (FU). In 2009, he obtained his doctorate (summa cum laude). In 2009/10 he was a visiting fellow at the Zentrum für Interdisziplinäre Forschung (ZiF) in Bielefeld with the research group ʻChallenges to the Image of Humanity and Human Dignity by New Developments in Medical Technologyʼ. His research areas include medical criminal law, neurolaw, legal philosophy and bioethics.

Elizabeth Shaw holds an honours law degree (first class) from Aberdeen University (2008) and a masters


their behaviour via neurological interventions), without appealing to the ideas of ʻfree willʼ or ʻretributive desertʼ. Her research is funded by The Arts and Humanities Research Council and by the Clark Foundation for Legal Education.

Steve Torrance is Visiting Senior Research Fellow at the Centre for Research in Cognitive Science (COGS) at the University of Sussex. He also teaches part-time at Goldsmiths College, London, and is Emeritus Professor of Cognitive Science at Middlesex University. He writes on artificial ethics, on the feasibility and moral justifiability of creating artificial consciousness, and on the project to produce super-intelligent agents. He is currently a joint organiser of a workshop on Machine Consciousness this April in York, UK. He has edited a volume of the journal AI and Society on Ethics and Artificial Agents, and has contributed a chapter to a forthcoming book on machine ethics.

Aimee van Wynsberghe is currently doing her PhD in Philosophy at the University of Twente, the Netherlands. During her undergraduate degree in Cell Biology at the University of Western Ontario, Canada, she was a research assistant at CSTAR (Canadian Surgical Technologies and Advanced Robotics), working on the Telesurgery project (long-distance robotic surgery), which inspired her to continue working with robots. Following her studies in Science at UWO, she pursued a Masters in Applied Ethics at K.U. Leuven, Belgium, and an Erasmus Mundus Masters in Bioethics. This gave her the opportunity to reflect on the philosophical issues pertaining to technology in healthcare, with a particular focus on robotics. Her current work focuses on the social and ethical implications of human-robot interactions, specifically addressing the use of robots in the care of elderly persons by targeting issues of design.

Bibi van den Berg is a postdoctoral researcher at the Tilburg Institute for Law, Technology and Society (TILT) at Tilburg University in the Netherlands. Her research areas are (1) regulation and ethics in robotics, and (2) identity and privacy in online worlds. Van den Berg has a PhD in philosophy of technology, obtained from Erasmus University Rotterdam in the Netherlands in 2009.

Nicole Vincent obtained her PhD from the University of Adelaide in Australia in 2007 with a dissertation entitled Responsibility, Compensation and Accident Law Reform. She subsequently worked at Delft University of Technology in the Netherlands on a project entitled The Brain and The Law, which examined how neuroscience is relevant to legal responsibility. Since early 2011, she has been at Macquarie University in Sydney, Australia, working on a project entitled Reappraising the Capacitarian Foundation of Neurolaw, which investigates whether doubts about the restoration and enhancement of responsibility challenge capacitarianism. She is also developing an Australasian Neurolaw Database.

Wendell Wallach is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics.


Chapter 1

NeuroLawExuberance: A plea for neuromodesty

Stephen J. Morse

University of Pennsylvania

University of Pennsylvania Law School

smorse@law.upenn.edu

Abstract

This chapter suggests on conceptual and empirical grounds that, at present, neuroscience does not have a large contribution to make to criminal justice doctrine, adjudication and policy, or to law generally, despite the great advances in the science. Irrational exuberance and overclaims about its relevance should be avoided. It also explains why the new neuroscience does not present a radical challenge to current legal conceptions of agency and responsibility. Although present caution is warranted, the chapter concludes that in the near and intermediate term, as the science advances, neuroscience might well make helpful contributions to the law.

Introduction

In a 2002 editorial, the Economist warned, “Genetics may yet threaten privacy, kill autonomy, make society homogeneous, and gut the concept of human nature. But neuroscience could do all those things first.” (2002, p. 77). The genome was fully sequenced in 2001 and there has not been one resulting major advance in therapeutic medicine since. Thus, even in its most natural domain, medicine, genetics has not had the far-reaching consequences that were envisioned. The same has been true for various other sciences that were predicted to revolutionize the law, including behavioural psychology, sociology, psychodynamic psychology, and others. I believe that this will also be true of neuroscience, which is simply the newest science on the block. Neuroscience is not going to do the terrible things the Economist fears, at least not for the foreseeable future. Neuroscience has many things to say, but not nearly as much as people would hope,


especially in relation to law. At most, in the near to intermediate term, neuroscience may make modest contributions to legal policy and case adjudication. Nonetheless, there has been irrational exuberance about the potential contribution, an issue I addressed previously in an article addressing ʻBrain Overclaim Syndromeʼ (Morse, 2006). I now wish to re-examine the case for caution.

In this chapter, I shall first make some remarks about the lawʼs motivation and the motivation of some advocates to turn to science to solve the very hard normative problems that law addresses. Then I shall consider the lawʼs psychology and its concept of the person and responsibility. I then consider how neuroscience might be related to law, which I call the issue of ʻtranslationʼ. Next, I turn to various distractions that have bedeviled clear thinking about the relation of scientific, causal accounts of behaviour to responsibility. The chapter then examines the limits of neurolaw. The next section considers why neurolaw does not pose a genuinely radical challenge to the lawʼs concepts of the person and responsibility. Nonetheless, the next section makes a case for cautious optimism about the contribution neuroscience may make to law in the near and intermediate term. A brief conclusion follows.

Science and law

Everyone understands that legal issues are normative, addressing how we should regulate our lives in a complex society. How do we live together? What are the duties we owe each other? When, for violation of those duties, is the State justified in imposing the most afflictive but sometimes justified exercises of state power, criminal blame and punishment? When should we do this, to whom, and how much?

Virtually every legal issue is contested – consider criminal responsibility, for example – and there is always room for debate about policy, doctrine and adjudication. In a fine, recent book, Professor Robin Feldman (2009) has argued that law lacks the courage forthrightly to address the difficult normative issues that it faces. It therefore adopts what Feldman terms an internalizing and an externalizing strategy for using science to try to avoid the difficulties. In the former, the law adopts scientific criteria as legal criteria. A futuristic example might be using neural criteria for criminal responsibility. In the latter, the law turns to scientific or clinical experts to make the decision. An example would be using forensic clinicians to decide whether a criminal defendant is competent to stand trial and then simply rubberstamping the clinicianʼs opinion. Neither strategy is successful, because each avoids facing the hard questions, and both retard legal evolution and progress. Professor Feldman concludes, and I agree (Morse, 2011), that the law does not err by using science too little, as is commonly claimed. Rather, it errs by using it too much, because the law is so insecure about its resources and capacities to do justice.


Here is my speculative interpretation of the motivation of enthusiasts for using neuroscience in criminal justice. Many hate the concept of retributive justice, thinking it is both prescientific and harsh. Their hope is that the new neuroscience will convince the law at last that determinism is true, that no offender is genuinely responsible, and that the only logical conclusion is that the law should adopt a consequentially-based prediction/prevention system of social control guided by the knowledge of the neuroscientist-kings who will finally have supplanted the Platonic philosopher-kings (Greene & Cohen, 2006, pp. 217-218). On a more modest level, many advocates think that neuroscience may not revolutionize criminal justice, but it will demonstrate that many more offenders should be excused and do not deserve the harsh punishments United States criminal justice imposes. Four decades ago, they would have been using psychodynamic psychology for the same purpose and more recently genetics has been employed similarly. The impulse is clear, however: jettison desert, or at least mitigate judgments of desert. As we shall see below, however, these advocates often adopt an untenable theory of mitigation or excuse that quickly collapses into the nihilistic conclusion that no one is really criminally responsible.

The lawʼs psychology, concept of the person and responsibility

Criminal law presupposes a ʻfolk psychologicalʼ view of the person and behaviour. This psychological theory explains behaviour in part by mental states such as desires, beliefs, intentions, willings, and plans. Biological, other psychological and sociological variables also play a causal role, but folk psychology considers mental states fundamental to a full causal explanation and understanding of human action. Lawyers, philosophers and scientists argue about the definitions of mental states and theories of action, but that does not undermine the general claim that mental states are fundamental. Indeed, the arguments and evidence disputants use to convince others presuppose the folk psychological view of the person. Brains donʼt convince each other; people do. Folk psychology presupposes only that human action will at least be rationalisable by mental state explanations or that it will be responsive to reasons, including incentives, under the right conditions.

For example, the folk psychological explanation for why you are reading this chapter is, roughly, that you desire to understand the relation of neuroscience to criminal responsibility or to law generally, you believe that reading the chapter will help fulfil that desire, and thus you formed the intention to read it. This is a practical rather than a deductive syllogism.

Brief reflection should indicate that the lawʼs psychology must be a folk psychological theory, a view of the person as a conscious (and potentially self-conscious) creature who forms and acts on intentions that are the product of the personʼs other mental states. We are the sort of creatures that can act for and respond to reasons. The law treats persons generally as intentional creatures and not simply as mechanistic forces of nature.


be modified by means other than influencing deliberation and human beings do not always deliberate before they act. Nonetheless, the law presupposes folk psychology, even when we most habitually follow the legal rules. Unless people are capable of understanding and then using legal rules to guide their conduct, law would be powerless to affect human behaviour.

The legal view of the person does not hold that people must always reason or consistently behave rationally according to some pre-ordained, normative notion of rationality. Rather, the lawʼs view is that people are capable of acting for reasons and are capable of minimal rationality according to predominantly conventional, socially constructed standards. The type of rationality the law requires is the ordinary personʼs common sense view of rationality, not the technical notion that might be acceptable within the disciplines of economics, philosophy, psychology, computer science, and the like.

Virtually everything for which agents deserve to be praised, blamed, rewarded, or punished is the product of mental causation and, in principle, responsive to reason, including incentives. Machines may cause harm, but they cannot do wrong and they cannot violate expectations about how people ought to live together. Machines do not deserve praise, blame, reward, punishment, concern or respect because they exist or because of the results they cause. Only people, intentional agents with the potential to act, can violate expectations of what they owe each other and only people can do wrong.

Many scientists and some philosophers of mind and action consider folk psychology to be a primitive or pre-scientific view of human behaviour. For the foreseeable future, however, the law will be based on the folk psychological model of the person and behaviour described. Until and unless scientific discoveries convince us that our view of ourselves is radically wrong, the basic explanatory apparatus of folk psychology will remain central. It is vital that we not lose sight of this model lest we fall into confusion when various claims based on neuroscience are made. If any science is to have appropriate influence on current law and legal decision making, it must be relevant to and translated into the lawʼs folk psychological framework, as shall be discussed in more detail below.

All of the lawʼs doctrinal criteria for criminal responsibility are folk psychological. Begin with the definitional criteria, the ʻelementsʼ of crime. The ʻvoluntaryʼ act requirement is defined, roughly, as an intentional bodily movement (or omission in cases in which the person has a duty to act) done in a reasonably integrated state of consciousness. Other than crimes of strict liability, all crimes also require a culpable further mental state, such as purpose, knowledge or recklessness. All affirmative defences of justification and excuse involve an inquiry into the personʼs mental state, such as the belief that self-defensive force was necessary or the lack of knowledge of right from wrong.

Legally responsible agents are therefore people who have the general capacity to grasp and be guided by good reason in particular legal contexts.

In most cases of excuse, the agent who has done something wrong acts for a reason, but either is not capable of rationality generally or is incapable on the specific occasion in question. This explains, for example, why young children and some people with mental disorders are not held responsible. How much lack of rational capacity is necessary to find the agent not responsible is a moral, social, political, and ultimately legal issue. It is not a scientific, medical, psychological, or psychiatric issue.

Compulsion or coercion is also an excusing condition. Literal compulsion exists when the personʼs bodily movement is a pure mechanism that is not rationalisable by the agentʼs desires, beliefs and intentions. These cases defeat the requirement of a ʻvoluntary actʼ. For example, a tremor or spasm produced by a neurological disorder is not an action because it is not intentional, and it therefore defeats the ascription of a voluntary act. Metaphorical compulsion exists when the agent acts intentionally, but in response to some hard choice imposed on the agent through no fault of his or her own. For example, if a miscreant holds a gun to an agentʼs head and threatens to kill her unless she kills another innocent person, it would be wrong to kill under these circumstances. Nevertheless, the law may decide as a normative matter to excuse the act of intentional killing because the agent was motivated by a threat so great that it would be supremely difficult for most citizens to resist. Cases involving internal compulsive states are more difficult to conceptualize because it is difficult to define ʻloss of controlʼ (Morse, 2002). The cases that most fit this category are ʻdisorders of desireʼ, such as addictions and sexual disorders. The question is why these acting agents lack control while other people with equally strong desires do not. In any case, if the person frequently yields to his or her apparently very strong desires at great social, occupational, or legal cost, the agent will often say that she could not help herself, that she was not in control, and that an excuse or mitigation was therefore warranted.

Lost in translation? Legal relevance and the need for translation

What in principle is the possible relation of neuroscience to law? We must begin with a distinction between internal relevance and external relevance. An internal contribution or critique accepts the general coherence and legitimacy of a set of legal doctrines, practices or institutions and attempts to explain or alter them. For example, an internal critique of criminal responsibility may suggest the need for doctrinal reform of, say, the insanity defence, but it would not suggest that the notion of criminal responsibility is itself incoherent or illegitimate. By contrast, an externally relevant critique suggests that the doctrines, practices or institutions are incoherent, illegitimate or unjustified. Because a radical, external critique has little possibility of success at present, as I explain below, I will make the simplifying assumption here that the contributions of neuroscience will be internal and thus will need to be translated into the lawʼs folk psychological concepts.

The lawʼs criteria for responsibility and competence are essentially behavioural – acts and mental states. The criteria of neuroscience are mechanistic – neural structure and function. Is the apparent chasm between those two types of discourse bridgeable? This is a familiar question in the field of mental health law (Stone, 1984, pp. 95-96), but there is even greater dissonance in neurolaw. Psychiatry and psychology sometimes treat behaviour mechanistically, sometimes treat it folk psychologically, and sometimes blend the two. In many cases, the psychological sciences are quite close in approach to folk psychology. Neuroscience, in contrast, is purely mechanistic and eschews folk psychological concepts and discourse. Thus, the gap will be harder to bridge.

The brain does enable the mind, even if we do not know how this occurs. Therefore, facts we learn about brains in general or about a specific brain in principle could provide useful information about mental states and human capacities in general and in specific cases. Some believe that this conclusion is a category error (Bennett & Hacker, 2003; Pardo & Patterson, 2010). This is a plausible view and perhaps it is correct. If it is, then the whole subject of neurolaw is empty and there was no point to writing this chapter in the first place. Let us therefore bracket this pessimistic view and determine what follows from the more optimistic position that what we learn about the brain and nervous system can be potentially helpful to resolving questions of criminal responsibility if the findings are properly translated into the lawʼs psychological framework.

The question is whether the new neuroscience is legally relevant because it makes a proposition about responsibility or competence more or less likely to be true. Any legal criterion must be established independently, and biological evidence must be translated into the criminal lawʼs folk psychological criteria. That is, the expert must be able to explain precisely how the neuroevidence bears on whether the agent acted, formed a required mens rea, or met the criteria for an excusing condition. If the evidence is not directly relevant, the expert should be able to explain the chain of inference from the indirect evidence to the lawʼs criteria. At present, as the part about the limits of neurolaw explains, few such data exist, but neuroscience is advancing so rapidly that such data may exist in the near or medium term. Moreover, the argument is conceptual and does not depend on any particular neuroscience findings.

Dangerous distractions concerning neuroscience and criminal responsibility and competence

Contrary to what many people believe and what judges and others sometimes say, free will is not a legal criterion that is part of any doctrine and it is not even foundational for criminal responsibility (Morse, 2007). Criminal law doctrines are fully consistent with the truth of determinism or universal causation that allegedly undermines the foundations of responsibility. Even if determinism is true, some people act and some people do not. Some people form prohibited mental states and some do not. Some people are legally insane or act under duress when they commit crimes, but most defendants are not legally insane or acting under duress. Moreover, these distinctions matter to moral and legal theories of responsibility and fairness that we have reason to endorse. Thus, law addresses problems genuinely related to responsibility, including consciousness, the formation of mental states such as intention and knowledge, the capacity for rationality, and compulsion, but it never addresses the presence or absence of free will.

When most people use the term free will or its lack in the context of legal responsibility, they are typically using this term loosely as a synonym for the conclusion that the defendant was or was not criminally responsible. They typically have reached this conclusion for reasons that do not involve free will, such as that the defendant was legally insane or acted under duress, but such usage of free will only perpetuates misunderstanding and confusion. Once the legal criteria for excuse have been met, for example—and none includes lack of free will as a criterion—the defendant will be excused without any reference whatsoever to free will as an independent ground for excuse.

There is a genuine metaphysical problem about free will, which is whether human beings have the capacity to act uncaused by anything other than themselves and whether this capacity is a necessary foundation for holding anyone legally or morally accountable for criminal conduct. Philosophers and others have debated these issues in various forms for millennia and there is no resolution in sight. Indeed, some people think the problem is not resolvable. This is a real philosophical issue, but it is not a problem for the law, and neuroscience raises no new challenge to this conclusion. Solving the free will problem would have profound implications for responsibility doctrines and practices, such as blame and punishment, but, at present, having or lacking libertarian freedom is not a criterion of any civil or criminal law doctrine.

A related confusion is that behaviour is excused if it is caused, but causation per se is not a legal or moral mitigating or excusing condition. I have termed this confusion “the fundamental psycholegal error” (Morse, 1994, pp. 1592-1594). At most, causal explanations can only provide evidence concerning whether a genuine excusing condition, such as lack of rational capacity, was present. For example, suppose a life history marked by poverty and abuse played a predisposing causal role in a defendantʼs criminal behaviour. Or suppose that an alleged new mental syndrome played a causal role in explaining criminal conduct. The claim is often made that such causes, which are not within the actorʼs capacity to control rationally, should be an excusing or mitigating condition per se, but this claim is false.

All behaviour is the product of the necessary and sufficient causal conditions without which the behaviour would not have occurred, including brain causation, which is always part of the causal explanation for any behaviour. If causation were an excusing condition per se, then no one would be responsible for any behaviour. Some people welcome such a conclusion and believe that responsibility is impossible, but this is not the legal and moral world we inhabit. The law holds most adults responsible for most of their conduct and genuine excusing conditions are limited. Thus, unless the personʼs history or mental condition, for example, provides evidence of an existing excusing or mitigating condition, such as lack of rational capacity, there is no reason for excuse or mitigation.

Even a genuinely abnormal cause is not an excusing condition. For example, imagine a person with paranoid suspiciousness who constantly and hypervigilantly scans his environment for cues of an impending threat. Suppose our person with paranoia now spots a genuine threat that no normal person would have recognized and responds with proportionate defensive force. The paranoia played a causal role in explaining the behaviour, but no excusing condition obtained. If the paranoia produced a delusional belief that an attack was imminent, then a genuine excuse, legal insanity – an irrationality-based defence – might be appropriate. In short, a neuroscientific causal explanation for criminal conduct, like any other type of causal explanation, does not per se mitigate or excuse. It provides only evidence that might help the law resolve whether a genuine excuse existed or it may in the future provide data that might be a guide to prophylactic or rehabilitative measures.

Causal knowledge, whether from neuroscience or any other science, may enhance the accuracy of behavioural predictions, but predictability is also not per se an excusing or mitigating condition, even if the predictability of the behaviour is perfect. To understand this, just consider how many things each of us does that are perfectly predictable and for which there is no plausible excusing or mitigating condition. Even if the explanatory variables that enhance prediction are abnormal, excuse or mitigation is warranted only if a genuine excusing or mitigating condition is present. For example, recent research demonstrates that a history of childhood abuse coupled with a specific, genetically-produced enzyme abnormality that affects neurotransmitter levels vastly increases the risk that a person will behave antisocially as an adolescent or young adult (Caspi et al., 2002). A person is nine times more at risk if he has the MAOA deficiency and a childhood abuse history. Does that mean an offender with this gene-by-environment interaction is not responsible, or less responsible? No. The offender may not be fully responsible or responsible at all, but not because there is a causal explanation. What is the intermediary excusing or mitigating principle? Are these people, for instance, more impulsive? Are they lacking rationality? What is the actual excusing or mitigating condition? Again, causation is not compulsion and predictability is not an excuse. Just because an offender is caused to do something or is predictable does not mean the offender is compelled to commit the crime charged or is otherwise not responsible. Brain causation, or any other kind of causation, does not mean we are automatons and not really acting agents at all.

Causal information may be of prophylactic or rehabilitative use for people affected, but no excuse or mitigation is applicable just because these variables make antisocial behaviour far more predictable. If the variables that enhance prediction also produce a genuine excusing or mitigating condition, then excuse or mitigation is justified for the latter reason and independent of the prediction.

Most informed people are not ʻdualistsʼ about the relation between the mind and the brain. That is, they no longer think that our minds (or souls) are independent of our brains (and bodies more generally) and can somehow exert a causal influence over our bodies. It may seem, therefore, as if lawʼs emphasis on the importance of mental states as causing behaviour is based on a pre-scientific, outmoded form of dualism, but this is not the case. Although the brain enables the mind, we have no idea how this occurs and have no idea how action is possible (McHugh & Slavney, 1998, pp. 11-12). It is clear that, at the least, mental states are dependent upon or supervene on brain states, but neither neuroscience nor any other science has demonstrated that mental states play no independent and partial causal role. Indeed, the most likely explanation of complex human behaviour will be multi-field, multi-level, and will include mental states (Craver, 2007).

In conclusion, legal actors concerned with criminal law policy, doctrine and adjudication must always keep the folk psychological view present to their minds when considering claims or evidence from neuroscience, and must always ask how the science is legally relevant to the lawʼs action and mental state criteria. The truth of determinism, causation or predictability does not in itself answer any doctrinal or policy issue.

The limits of neurolaw

Most generally, the relation between brain, mind and action is one of the hardest problems in all science. We have no idea how the brain enables the mind or how action is possible (McHugh & Slavney, 1998). The brain-mind-action relation is a mystery. For example, we would like to know the difference between a neuromuscular spasm and intentionally moving oneʼs arm in exactly the same way. The former is a purely mechanical motion, whereas the latter is an action, but we cannot explain the difference between the two. We know that a functioning brain is a necessary condition for having mental states and for acting. After all, if your brain is dead, you have no mental states, are not acting, and indeed are not doing much of anything at all. Still, we do not know how mental states and action are caused.

Despite the astonishing advances in neuroimaging and other neuroscientific methods, we still do not have sophisticated causal knowledge of how the brain works generally and we have little information that is legally relevant. This is unsurprising. The scientific problems are fearsomely difficult and only in the last decade have researchers begun to accumulate much data from functional magnetic resonance imaging (fMRI), which is the technology that has generated most of the legal interest. Moreover, virtually no studies have been performed to address specifically legal questions.

Before turning to the specific reasons for neuromodesty, a few preliminary points of general applicability must be addressed. The first and most important is to repeat the message of the prior section of this chapter. Causation by biological variables, including abnormal biological variables, does not per se create an excusing or mitigating condition. Any excusing condition must be established independently. The goal is always to translate the biological evidence into the criminal lawʼs folk psychological criteria.

Assessing criminal responsibility involves a retrospective evaluation of the defendantʼs mental states at the time of the crime. No criminal wears a portable scanner or other neurodetection device that provides a measurement at the time of the crime. At least, not yet. Further, neuroscience is insufficiently developed to detect specific, legally-relevant mental content or to provide a sufficiently accurate diagnostic marker for even severe mental disorder (Frances, 2009). Nonetheless, certain aspects of neural structure and function that bear on legally relevant capacities, such as the capacity for rationality and control, may be temporally stable in general or in individual cases. If they are, neuroevidence may permit a reasonably valid retrospective inference about the defendantʼs rational and control capacities and their impact on criminal behaviour. This will of course depend on the existence of adequate science to do this. We now lack such science, but future research may remedy this.

Competence assessments, in contrast, concern the subjectʼs present mental condition, so neuroscience may be better placed to help answer such questions. The criteria for competence are functional. They ask whether the subject can perform some task, such as understanding the nature of a criminal proceeding or understanding a treatment option that is being offered, at a level the law considers normatively acceptable to warrant respecting the subjectʼs choice and autonomy.

Now, let us begin consideration of the specific grounds for neuromodesty. At present, most neuroscience studies on human beings involve very small numbers of subjects, which makes establishing statistical significance difficult. Most of the studies have been done on college and university students, who are hardly a random sample of the population generally or of criminal offenders specifically. There is also a serious question whether findings based on subjectsʼ behaviour and brain activity in a scanner would apply to real world situations. Further, most studies average the neurodata over the subjects, and the average finding may not accurately describe the brain structure or function of any actual subject in the study. Replications are few, which is especially important for law: policy and adjudication should not be influenced by findings that are insufficiently established, and replication of findings is crucial to our confidence in a result. Finally, the neuroscience of cognition and interpersonal behaviour is largely in its infancy, and what is known is quite coarse-grained and correlational rather than fine-grained and causal (Miller, 2010). What is being investigated is an association between a task in the scanner and brain activity. These studies do not demonstrate that the brain activity is a necessary, sufficient or predisposing causal condition for the behavioural task being performed in the scanner. Any language that suggests otherwise, such as claiming that some brain region is the neural substrate for the behaviour, is simply not justifiable. Moreover, activity in the same region may be associated with diametrically opposed behavioural phenomena, such as love and hate.

There are also technical and research design difficulties. It takes many mathematical transformations to get from the raw fMRI data to the images of the brain that are increasingly familiar. Explaining these transformations is beyond me, but I do understand that the likelihood that an investigator will find a statistically significant result depends on how the researcher sets the threshold for significance. There is dispute about this and the threshold levels are conventional. Change the threshold and the outcome will change. Now, I have been convinced by my neuroscience colleagues that many of such technical difficulties have been largely solved, but research design and potentially unjustified inferences from the studies are still an acute problem. It is extraordinarily difficult to control for all conceivable artifacts. Consequently, there are often problems of over-inference.

A major potential problem for the present and future collection and use of imaging evidence is whether an uncooperative subject can invalidate a scan by the intentional use of countermeasures. This is not a problem if the subject either has a right not to be scanned, such as the 5th Amendment constitutional right in the United States not to be a witness against himself, or if the subject wishes to use neuroscience evidence. But if the subject can be scanned involuntarily or if the subjectʼs purposes are served by invalidating a consensual scan, this is a difficulty. The first experimental study of this question has now been published and it discloses that in a laboratory lie-detection study, subjects could substantially undermine the accuracy of lie-detection by employing countermeasures (Ganis, Rosenfeld, Meixner, Kievit, & Schendan, 2011).

Over time, however, these problems may ease as imaging and other techniques become less expensive and more accurate, as research designs become more sophisticated, and as the sophistication of the science increases generally. It is also an open question whether accurate inferences or predictions about individuals are possible using group data for a group that includes the individual. This is a very controversial topic, but even if such inference is difficult or impossible now, it may become easier in the future.

Virtually all neuroscience studies of potential interest to the law involve some behaviour that has already been identified as of interest and the point of the study is to identify that behaviourʼs neural correlates. Neuroscientists do not go on general ʻfishingʼ expeditions. There is usually some bit of behaviour, such as addiction, schizophrenia, or impulsivity, that they would like to understand better by investigating its neural correlates. To do this properly presupposes that the researchers have already identified and validated the behaviour under neuroscientific investigation. I call this the ʻclear cutʼ problem. We typically get clear neuroscientific results only in cases in which the behavioural evidence was already clear.

On occasion, the neuroscience might suggest that the behaviour is not well-characterized or is neurally indistinguishable from other, seemingly different behaviour. In general, however, the existence of legally relevant behaviour will already be apparent. For example, some people are grossly out of touch with reality. If, as a result, they do not understand right from wrong, we excuse them because they lack such knowledge. We might learn a great deal about the neural correlates of such psychological abnormalities, but we already knew without neuroscientific data that these abnormalities existed and we had a firm view of their normative significance. In the future, however, we may learn more about the causal link between the brain and behaviour, and studies may be devised that are more directly legally relevant. I suspect that we are unlikely to make substantial progress with neural assessment of mental content, but we are likely to learn more about capacities that will bear on excuse or mitigation.

An analogy from physical medicine is instructive. Suppose a person complains of disabling back pain and an image of the spine is offered to support the claim. We know that many people with abnormal spines do not experience back pain, and many people who complain of back pain have normal spines. If the person is claiming a disability and the spine looks dreadful, evidence that the person regularly exercises on a trampoline without difficulty indicates that there is no disability caused by back pain. If there is reason to suspect malingering, however, and there is no clear behavioural evidence of lack of pain, then a completely normal spine might be of use in deciding whether the claimant is malingering. Unless the correlation between the image and the legally relevant behaviour is very powerful, however, such evidence will be of limited help. In short, actions speak louder than images.

If actions speak louder than images, however, what room is there for using neuroevidence? Let us begin with cases in which the behavioural evidence is clear and permits an equally clear inference about the defendantʼs mental state. For example, lay people may not know the technical term to apply to people who are manifestly out of touch with reality, but they will readily recognize this unfortunate condition. No further tests of any sort will be necessary to prove this. In such cases, neuroevidence will at most be convergent and increase our confidence in what we had already confidently concluded. Whether it is worth collecting the neuroevidence will then depend on whether obtaining convergent evidence is justified on cost-benefit grounds.

The most striking example of just such a case was the US Supreme Courtʼs decision in Roper v Simmons, 543 US 551 (2005), which categorically excluded the death penalty for capital murderers who killed when they were sixteen or seventeen years old because such killers did not deserve the death penalty. The amicus briefs were replete with neuroscience data showing that the brains of late adolescents are not fully biologically mature, and advocates used such data to suggest that the adolescent killers could not fairly be put to death. Now, we already knew from commonsense observation and rigorous behavioural studies that juveniles are on average less rational than adults. What did the neuroscientific evidence about the juvenile brain add? It was consistent with the undeniable behavioural data, and perhaps provided a partial causal explanation of the behavioural differences. The neuroscience data was therefore merely additive and only indirectly relevant, and the Court did not cite it, except perhaps by implication.

Whether adolescents are sufficiently less rational on average than adults to exclude them categorically from the death penalty is of course a normative legal question, not a scientific or psychological one. Advocates claimed, however, that the neuroscience confirmed that adolescents are insufficiently responsible to be executed, thus confusing the positive and the normative. The neuroscience evidence in no way independently confirms that adolescents are less responsible. If the behavioural differences between adolescents and adults were slight, it would not matter if their brains were quite different. Similarly, if the behavioural differences were sufficient for moral and constitutional differential treatment, then it would not matter if the brains were essentially indistinguishable.

If the behavioural data are not clear, then the potential contribution of neuroscience is large. Unfortunately, it is in just such cases that the neuroscience at present is not likely to be of much help. As noted, I term this the ʻclear cutʼ problem. Recall that neuroscientific studies usually start with clear cases of well-characterized behaviour. In such cases, the neural markers might be quite sensitive to the already clearly identified behaviours precisely because the behaviour is so clear. Less clear behaviour is simply not studied, or, where the behaviour is less clear, the overlap between experimental and control subjects is greater. Thus the neural markers of clear cases will provide little guidance for resolving behaviourally ambiguous cases of legally relevant behaviour. For example, suppose in an insanity defence case the question is whether the defendant suffers from a major mental disorder such as schizophrenia. In extreme cases, the behaviour will be clear and no neurodata will be necessary. Investigators have discovered various small but statistically significant differences in neural structure or function between people who are clearly suffering from schizophrenia and those who are not. Nonetheless, in a behaviourally unclear case, the overlap between data on the brains of people with schizophrenia and people without the disorder is so great that a scan is insufficiently sensitive to be used for diagnostic purposes.

Some people think that executive capacity, the congeries of cognitive and emotional capacities that help us plan and regulate our behaviour, is going to be the Holy Grail to help the law determine an offenderʼs true culpability. After all, there is an attractive moral case that people with substantial lack of these capacities are less culpable, even if their conduct satisfied the prima facie case for the crime charged. Perhaps neuroscience can provide specific data previously unavailable to identify executive capacity differences more precisely. There are two problems, however. First, significant problems with executive capacity are readily apparent without testing and the criminal law simply will not adopt fine-grained culpability criteria. Second, the correlation between neuropsychological tests of executive capacity and actual real world behaviour is not terribly high (see Barkley & Murphy, 2010). Only a small fraction of the variance is accounted for, and the scanning studies will use the types of tasks the behavioural tests use. Consequently, we are far from able to use neuroscience accurately to assess non-obvious executive capacity differences that are valid in real world contexts.

Assessing the radical claim that we are not agents


If the radical claim were true, notions of responsibility based on mental states and actions guided by mental states would be imperilled. But is the rich explanatory apparatus of intentionality simply a post-hoc rationalization the brains of hapless homo sapiens construct to explain what their brains have already done? Will the criminal justice system as we know it wither away as an outmoded relic of a prescientific and cruel age? If so, not only criminal law is in peril. What will be the fate of contracts, for example, when a biological machine that was formerly called a person claims that it should not be bound because it did not make a contract? The contract, too, is simply the outcome of various ʻneuronal circumstancesʼ.

Given how little we know about the brain-mind and brain-action connections, to claim based on neuroscience that we should radically change our picture of ourselves and our legal doctrines and practices is a form of neuroarrogance. Although I predict that we will see far more numerous attempts to introduce neuroevidence in the future, I have elsewhere argued that for conceptual and scientific reasons there is no reason at present to believe that we are not agents (Morse, 2008). It is possible that we are not agents, but the current science does not remotely demonstrate that this is true. The burden of persuasion is firmly on the proponents of the radical view.

What is more, the radical view entails no positive agenda. Suppose we were convinced by the mechanistic view that we are not intentional, rational agents after all. (Of course, the notion of being ʻconvincedʼ would be an illusion, too. Being convinced means that we are persuaded by evidence or argument, but a mechanism is not persuaded by anything. It is simply neurophysically transformed.) What should we do now? We know that it is an illusion to think that our deliberations and intentions have any causal efficacy in the world. We also know, however, that we experience sensations such as pleasure and pain and that we care about what happens to us and to the world. We cannot just sit quietly and wait for our brains to activate, for determinism to happen. We must, and will of course, deliberate and act.

If we still thought that the radical view were correct and that standard notions of genuine moral responsibility and desert were therefore impossible, we might nevertheless continue to believe that the law would not necessarily have to give up the concept of incentives. Indeed, Greene and Cohen concede that we would have to keep punishing people for practical purposes. Such an account would be consistent with ʻblack boxʼ accounts of economic incentives that simply depend on the relation between inputs and outputs without considering the mind as a mediator between the two. For those who believe that a thoroughly naturalized account of human behaviour entails complete consequentialism, such a conclusion might not be unwelcome.

On the other hand, this view seems to entail the same internal contradiction just explored. What is the nature of the ʻagentʼ that is discovering the laws governing how incentives shape behaviour? Could understanding and providing incentives via social norms and legal rules simply be epiphenomenal interpretations of what the brain has already done? How do ʻweʼ ʻdecideʼ which behaviours to reward or punish? What role does ʻreasonʼ – a property of thoughts and agents, not a property of brains – play in this ʻdecisionʼ?
