
Well-being: An Integrative Interdisciplinary Illumination

29-12-2014
Andrew Sutjahjo
Supervisor: Ruut Veenhoven
Co-assessor: Machiel Keestra
Research Master Brain and Cognitive Sciences, Cognitive Science track
University of Amsterdam


Contents

Chapter 1: The Setting
    Framework
Chapter 2: Subjective Well-being
Chapter 3: Neo-classical economics
Chapter 4: Neuroeconomics
    Choice
    Value
Chapter 5: Integration of Well-being
    Translating it to one's own discipline
    Interdisciplinary Cooperation
    Interdisciplinary Integration
    A cautionary tale
Chapter 6: General conclusion


Chapter 1: The Setting

This thesis focuses on well-being and on how it is studied and measured in the fields of Psychology and Economics. Many misunderstandings and heated discussions have been born of the discourse between these disciplines. This mostly stems from a lack of mutual understanding of the foundations upon which each discipline stands, the bodies of knowledge each has accumulated along the way, and the assumptions each has accepted. This thesis will attempt to bridge these gaps by providing a general overview of each discipline's forays into, and methodologies for, the study of well-being, in an attempt to further understanding between the disciplines and to arrive at integrated interdisciplinary insights into well-being.

It is prudent to note that even within disciplines, precise definitions of well-being may differ wildly between researchers, or are at times omitted altogether. We will thus begin with working definitions of well-being inferred to be the most prevalent within each discipline. Psychology has for the most part followed the tradition of subjectivity in its study of well-being (Diener 1984): if a person judges themselves to be happy, that indicates a high level of well-being. Psychological questionnaires have been developed to translate feelings into quantifiable metrics. These range from single questions such as "in the past three months, how satisfied have you been with life as a whole" (Veenhoven 1996) to constructs measured through multiple questions (Diener 1984; Robinson & Clore, 2002). The field has also identified that certain judgmental or reporting biases can skew the answers respondents give (Kahneman 1999; Shepperd 2002; Greifeneder, Bless & Pham 2011; Wilhelms & Reyna, 2014), and these questionnaires attempt at least to measure and report the magnitude of these biases (Kline 1991). Further development of measurement methods has attempted to control for certain cognitive biases of memory (Kahneman et al. 2004), priming, and identity-related beliefs, but these solutions introduce new problems, such as data overload (Scollon et al., 2003). The development of different constructs and measurement methods has led to subtly differing working definitions of well-being, but all have the use of subjective report in common. These measures have then been used to understand which factors influence well-being.

The study of welfare in Economics has used the terms well-being and welfare interchangeably (Sumner 1996). While neither concept is tied to a specific definition, economists use a form of preference-satisfaction account of well-being: people are better off in state X than in state Y if and only if they prefer state X over state Y (Angner, 2005).

In neo-classical economics we find the concept of utility: the satisfaction of one's wants and needs. Much like the subjective feelings used in psychology, utility cannot be measured explicitly. An assumption is thus made that agents will act "rationally" so as to maximise the degree to which their preferences are satisfied; if a person chooses state X over state Y, then the utility that person gets in state X is above that received in state Y. This choice of X over Y reveals that person's preference. The use of income as a measure of welfare also stems from this idea: "The rationale is that, given the relative prices they face, people individually or collectively are free to spend their money in whatever way maximizes their satisfactions" (Denison, 1971). Thus having more opportunities for choice, or having more money, has become synonymous with having more welfare or well-being. However, there is a problem surrounding the assumptions that welfare economics makes that has mostly been sidestepped: "Welfare economics identifies welfare with the satisfaction of preferences. This identification is so automatic and ubiquitous that economists seldom realize how controversial it is" (Hausman and McPherson 1997, 17).

A problem is that we are not always rational actors. We sometimes prefer X over Y, Y over Z, but Z over X; we show intransitive choices (Tversky, 1969). While this can be due to simple mistakes, more consistent intransitivity is also found (Kalenscher et al. 2010). We also show irrational behaviours when asked to choose from many different options (Iyengar & Lepper, 2000).

Neo-classical economics has used certain workarounds for this, including a distinction between actual (or manifest) and ideal preference satisfaction (Harsanyi, 1982). The former is open to irrationalities but is at least observable; the latter is the preference one would have if all pertinent information were available to a rational actor, but it cannot be observed. While the issue of whether actual or ideal preferences matter has been discussed, most neo-classical economists have regarded the differences between the two as matters of detail (Hausman and McPherson 1996).

This seemingly lackadaisical mentality does not stem from sloth, but from a selection of focus. There are two classes of economic theory: the positive and the normative. Positive economics focuses on prediction, whereas normative economics focuses on optimality, well-being and efficiency; it concerns what one ought to do. The latter has provided elegant models, designed to explain a limited proportion of behaviour, simple enough in their number of free parameters to be used in the optimization of policy, but complex in the mathematical formulas they employ. While it seemed implausible that the formulas of such a model were calculated every time a person faced a decision, Friedman (1953) responded that the person behaves "as if" he knew all these formulas. However, if a person shows the behaviour that the model predicts, then the model must be implemented somewhere in the brain, where the behaviour originates; there must be a mechanistic explanation that follows the model. This has given rise to the burgeoning field of Neuroeconomics, an interdisciplinary field focused on choice, which has put forth an explicit, positive theory of the choice mechanism that can be tested simultaneously at the neural, psychological, and economic levels of analysis. Glimcher (2011) has synthesized findings from behavioural economics and the cognitive sciences into a unified, two-stage, mechanistic architecture of choice, consisting of a valuation stage and a choice circuit. The choice circuit takes as input a stochastic, normalized, subjective expected value signal, which stems from the valuation stage in the medial prefrontal cortex and the striatum. This architecture of choice improves upon neo-classical economics and predicts the irrational behaviours mentioned earlier (Palmer et al. 2005; Gold et al. 2007; Bartra et al. 2013). While this example of interdisciplinarity has garnered much predictive power and is valuable in a positive sense, Glimcher (2011) has deliberately refrained from providing a normative theory. He reasons that neural measurement tools are still in their infancy, and that drawing conclusions about an (as yet) undeveloped neural, normative theory of welfare could have significant consequences. This fate is well illustrated by Ng (2013), who showcases an example of how not to do interdisciplinary research.

This leaves a gap in the study of well-being: where does this leave the neo-classical idea that increasing choice opportunities automatically and maximally increases well-being? Even with the reluctance that Glimcher shows, neuroscience could provide at least some handholds towards defining a normative theory of well-being, and point to avenues of research for homing in on what aspects such a theory would need to have.

Given the relative youth of the field of neuroeconomics, few theories of welfare on a micro-economic scale have been developed as of yet. To aid a possible integration we will review some interdisciplinary papers between psychology, economics and neuroscience to analyse the methods of integration available. Smith et al. (2014) attempt to reveal preferences using neural activity, disentangling preferences from choices, while Krajbich, Oud & Fehr (2014) attempt to measure intensity of preference through the reaction times of choices.

Furthermore, we will review economic conceptual papers on the need for SWB measures to conform to preference satisfaction (Angner, 2013), and approaches to integrating limited rationality with economics that bear on the definition of utility (Rabin, 2013a; Rabin, 2013b). Additionally, we will see integration papers that attempt to find correlations between economic factors and subjective well-being while phenomenologically integrating the two (Dolan, Peasgood & White, 2008), and papers that attempt to find ordinality within different normative theories of well-being (Benjamin et al., 2012).

It is from the synthesis of this progress in psychological definitions of well-being with the progress from neo-classical economics towards the tentative definitions of well-being in neuroeconomics that we will attempt to find integrative interdisciplinary insights into well-being, and set a path for further research to link these definitions together.

It is in this setting that this thesis is grounded. The study of well-being lends itself well to interdisciplinary integration, and can build upon the integration of choice that came before it. Like Glimcher (2011), we will follow Repko's (2008) steps for interdisciplinary research. We will strive to develop adequacy in the study of well-being in Psychology, neo-classical economics and Neuroeconomics, analysing the problems each has faced and the insights gained by resolving them. We will identify conflicts between disciplinary insights, document the common ground found, and integrate the findings into an interdisciplinary understanding and definition of well-being.

Framework:

Before we dive into the bodies of research on well-being, it is good to provide a framework of the possible definitions of well-being, as there is a wide range available. To help us classify the definitions used by researchers we look toward Veenhoven (2013), a sociologist affiliated with a happiness economics research organization. The bulk of his academic career has focused on integrating findings on well-being from different disciplines and relating them to happiness. With these findings in mind, he has put forward a phenomenological decomposition of the qualities of life.

This decomposition starts with two distinctions in what constitutes quality of life. The first is between the opportunity for a good quality of life and the attainment of it: life-chances versus life-results. The second is between internal and external qualities: qualities of the individual versus qualities of the environment. This results in a two-by-two matrix, yielding four qualities of life.


                 External                     Internal
Life chances     Livability of environment    Life-ability of person
Life results     Utility of life              Appreciation of life

Fig 1. Four qualities of life, taken from Veenhoven (2013)

The life chances are constituted of the Livability of the environment, which pertains to how suitable the environment is for a person, and the Life-ability of the person, which is a measure of how well people are equipped to cope with life. We will find later that neo-classical economic measures of well-being focus primarily on these two qualities of life, while assuming in their models that people automatically maximize their life results based on the chances they get. Neuroeconomics, however, attempts to create a biologically plausible version of this assumption; it focuses on the translation step from life chances to life results.

The life results are constituted of the Utility of life, which represents the idea that a good life must be good for something more than itself (not to be confused with the neo-classical definition of utility), and the subjective Appreciation of life, which contains most psychology researchers' definitions of well-being.

This subjective appreciation of life is further subdivided by Veenhoven (2012) into qualities of satisfaction. These are again based on two distinctions. The first pertains to the breadth of satisfaction: one can be satisfied with an aspect of life, or with life-as-a-whole. The second pertains to how long the experience of enjoyment lasts: it can be short-lived or enduring. This again results in a two-by-two matrix, yielding four qualities of satisfaction.

                 Passing           Enduring
Part of life     Pleasure          Domain satisfaction
Life as a whole  Top experience    Happiness

Fig 2. Four qualities of satisfaction, taken from Veenhoven (2009)

The part-of-life satisfactions are constituted of pleasure, which takes its parallel from hedonism, and domain satisfaction, which denotes enduring appreciation of aspects of life such as marriage or job satisfaction. The life-as-a-whole satisfactions are constituted of top experiences, which involve short bursts of intense feelings with a perception of wholeness, and what Veenhoven describes as Happiness: the "degree to which an individual judges the overall quality of his/her life as a whole favourably". As such, Veenhoven's research centres on his definition of happiness, which will be discussed later. Veenhoven (2009) draws a further distinction between theories of life satisfaction: the more affective versus the more cognitive definitions. The affective theories focus on feelings and emotions, while the cognitive theories argue that it is the evaluation and appraisal of one's life which constitutes a life well lived.

The framework he provides will be helpful in sorting the different models of well-being, as many of the models either focus on different qualities within these matrices, or attempt to find correlational and causal links between different qualities of satisfaction and well-being, and biases in translating from one quality to another.


Chapter 2: Subjective Well-being

Some of the first inquiries into subjective well-being (SWB) in the social sciences are found in two studies by Hamilton (1929) and Davis (1929), both of whom wanted to find correlations between people's sex lives and other aspects of their lives. Both used questionnaires, one of the questions being: "Do you consider your life on the whole (a) happy, satisfactory, successful; (b) unhappy, unsatisfactory, unsuccessful? In each case why?" (Davis 1929). One of the comments Hamilton wrote motivating a similar question was:

"My standpoint is that of the psychiatrist who believes that subjective phenomena, as these are experienced by the persons who report their occurrence, do not need to be translated into anything else in order to be dealt with as objectively as we deal with all other biological phenomena" (Hamilton 1929, xi).

In this section we will focus on more current models and theories of SWB, and see how the concept has changed since 1929. We will find that Hamilton's view on the objectivity of subjective reports is not shared by current researchers: there are translational steps between the objective events that happen in one's life and the experience one derives from them. Furthermore, Hamilton's method is one of self-report, and we will find that the translational step from experience to judging and reporting that experience is also rife with biases. We will briefly delve into the biases found in self-report, and examine the many ways in which scientists have tried to tackle the problems linked with self-report in an attempt to accurately measure a person's subjective well-being.

Tautologically, we find that all current proponents of SWB can agree upon one thing: subjective well-being is subjective. While this may seem a droll statement, it is important to emphasize that the study of SWB is inherently an experiential one:

"Notably absent from definitions of SWB are necessary objective conditions such as health, comfort, virtue, or wealth. Although such conditions are seen as potential influences on SWB, they are not seen as an inherent and necessary part of it." (Diener 1984)

In short, the subject’s experience of well-being, the mechanisms behind this experience, and the judgments upon this experience are what researchers of SWB are trying to measure.

The general consensus among SWB researchers is that mechanisms of SWB include all positive and negative moods and emotions that one can experience; these have been called positive and negative affect. Furthermore there needs to be some form of cognitive appraisal upon one’s positive and negative affect. Before we examine how the accounts of SWB differ in these concepts we will first look at the general framework of how psychologists have come to conclude that these three concepts constitute well-being.

A problem in the measurement of SWB is that it is abstract and not directly observable. Unlike length, which can be defined as the distance between one point and another, objectively comparable with a ruler with pre-set distances, SWB is a concept which has no outward direct measurability. In the field of Psychometrics, variables of this kind are called “constructs”, which according to Nunnally and Bernstein (1994) “reflect a hypothesis (often incompletely formed) that a variety of behaviors will correlate with one another in studies of individual differences and/or will be similarly affected by experimental manipulations”.


When it comes to the validity of these constructs, Kline says that “A test is said to be valid if it measures what it purports to measure” (1991). Nunnally and Bernstein (1994) further describe methods to confirm that the construct that has been formed is an accurate link between the abstract concept and the behaviours measured and call it construct validation:

There are three major aspects of construct validation: (1) specifying the domain of observables related to the construct; (2) determining the extent to which observables tend to measure the same thing, several different things, or many different things from empirical research and statistical analyses; and (3) performing subsequent individual differences studies and/or experiments to determine the extent to which supposed measures of the construct are consistent with “best guesses” about the construct (Nunnally and Bernstein 1994, 86-87).

Lucas, Diener and Suh (1996) used multitrait-multimethod analyses to show that affect, unpleasant affect and life satisfaction are all constructs in themselves, and that each of these constructs are valid observables for the construct of SWB. While these three constructs are a constant within the domain of SWB, there is still a discussion going as to what constitutes happiness or well-being.

Within the method of subjective report measures, there is another important characteristic besides validity that the measures need to have: Reliability. The reliability of a measurement means that it is “without variation regardless of when the measurement is made or who makes the measurement” (Kline 1991). In the case of SWB, there are two ways of defining the reliability of a measurement. Test-retest reliability is found when the test yields “the same score for each subject when he or she takes the test on another occasion, given that their status on the variable has not changed” (Kline 1991). If a questionnaire has multiple items, then the test also has internal consistency if “each item measures the same variable.”
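To make the notion concrete, test-retest reliability is typically quantified as the correlation between two administrations of the same instrument. The sketch below is a minimal illustration with invented scores; it is not data from any study cited here.

```python
# Minimal sketch: test-retest reliability as the Pearson correlation
# between two administrations of the same scale. All scores are invented.
from statistics import correlation  # available since Python 3.10

time_1 = [22, 31, 18, 27, 35, 24, 29, 20]  # hypothetical totals, occasion 1
time_2 = [24, 30, 17, 29, 33, 25, 28, 22]  # same respondents, occasion 2

r = correlation(time_1, time_2)
print(f"test-retest reliability: r = {r:.2f}")
```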

Most measurement of SWB is done through self-reports focusing on the satisfaction construct; the reports used, however, differ greatly. The most prevalent in use is the 5-item Satisfaction With Life Scale (SWLS) (Diener, 1984). This scale was developed from 48 different self-report items. After the items pertaining to positive and negative affect were dropped, a factor analysis was run, and items with loadings under 0.60 were also discarded. Of the 10 items that remained, five were removed due to semantic similarity. The remaining five items are responded to on a 1-7 scale, with 1 being "strongly disagree" and 7 being "strongly agree":

1. In most ways my life is close to my ideal.
2. The conditions of my life are excellent.
3. I am satisfied with life.
4. So far I have gotten the important things I want in life.
5. If I could live my life over, I would change almost nothing.

While these questions show a +0.80 test-retest reliability correlation over a one-month time-span (Steger, Frazier, Oishi, & Kaler, 2006), we notice that, when compared to the framework of Veenhoven (2012, 2013), the first, third and fourth items fall within the appreciation-of-life quality, while the second pertains to a life chance (though which one is yet unclear). This reflects poorly on the scale's internal consistency, as the second item differs from the others, and poorly on its validity, as it differs from what the scale purports to measure. In a similar vein, the fifth item seems invalid as well, for a happy but curious person could still opt to try life in a different way. Furthermore, the timeframes of the items are not well defined: they could be interpreted in a passing or an enduring way.


Veenhoven's approach takes a different path in that he focuses on enduring satisfaction with life as a whole, the Happiness quadrant of his appreciation-of-life matrix. As such, his items use key words that include a time frame ("as a whole", "in the past x months", "presently") together with "satisfied with life as a whole". This ensures the validity of the item, in that it asks precisely about what it is trying to measure. However, the reliability of this item is quite low: its test-retest reliability when the question is asked twice in the same interview is around +0.7, and drops to +0.6 when measured over a period of a week (Veenhoven 1996).

In all, the path Veenhoven has taken stems from a more first-principles methodology: he clearly defines what he wants to measure and derives the question asked from that definition, which gives good validity but poor reliability. Diener's fuzzier method of scale construction through factor analysis and item loadings ends up with items that correlate well together and gives a reliable measurement, but leaves a question mark as to what precisely is measured. Furthermore, the final score of the SWLS is simply the sum of all the items. This leaves some error, in that the weights an individual subjectively places on each item may not coincide with the uniform weight the scale places on them, nor may the additive nature of the scale reflect how an individual integrates those factors into satisfaction with life as a whole.
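The uniform-weight critique can be made concrete with a small sketch. Below, the SWLS total is the plain sum of the five item ratings described above; the alternative weighted score uses invented per-person weights, purely to illustrate how a respondent's own weighting could diverge from the scale's.

```python
# Minimal sketch: SWLS scoring as a plain sum of five 1-7 items, versus
# a hypothetical subjectively weighted version (weights are invented).
responses = [6, 5, 7, 6, 4]              # one respondent's five item ratings

swls_total = sum(responses)              # the scale's score, ranging 5..35
print("SWLS total (uniform weights):", swls_total)

personal_weights = [0.40, 0.10, 0.30, 0.15, 0.05]  # invented importance weights
weighted_mean = sum(w * r for w, r in zip(personal_weights, responses))
print(f"weighted mean rating (1-7): {weighted_mean:.2f}, "
      f"uniform mean: {sum(responses) / len(responses):.2f}")
```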

A further question that has gripped researchers is how these judgments of life satisfaction are formed in people: How do people integrate the affects of a certain time frame into an answer on life satisfaction? We look towards the field of affective psychology for an answer.

Robinson and Clore (2002) provide a framework of how people access knowledge when reporting on their emotions. The access to knowledge of one's own emotions is subdivided into four types, running from most specific to most general:

1.) Experiential knowledge: direct access to what one's current emotion is.

2.) Episodic memory: past emotions cannot be re-experienced directly, but can be reconstructed by recalling relevant thoughts and event-specific details.

3.) Situation-specific beliefs: people access a belief about what emotion would be elicited in a particular type of situation.

4.) Identity-related beliefs: people access beliefs about their emotions in general.

In short, they propose that an emotion cannot be re-experienced, and that as time goes on, the details of the conditions in which an emotion was felt fade away, leaving beliefs to guide the judgment of how the emotion felt. When possible, people rely on a situation-specific belief to guide a report on what emotion would be felt; when these are not available, they access more general beliefs related to their identity.

They further argue that the further away an event is from the current time frame in which a judgment must be made, the more biases accrue. Kahneman (1999) describes a pain experiment in which people's hands were placed in a cold water bath for a period of time; the episodic memory of how much pain was felt depended on the peak pain level and the pain level experienced at the end of the experiment (the peak-recency effect). Strangely enough, duration was not encoded. This was seen when people were asked to choose between two cold water bath treatments in which the second was exactly the same as the first but extended by half the length of the first, with the water warmed by one degree during the extension, unbeknownst to the participant. A majority chose to repeat the longer treatment: its milder ending made it remembered as less painful, even though it contained strictly more pain.

On a note more relevant to global, life-as-a-whole reports of satisfaction: when the time frame of a judgment is a long period of time, or life-as-a-whole, people have difficulty remembering and integrating all of the emotional moments that define their life as a whole, and instead rely on identity-related beliefs and self-representations. Robinson and Clore (2002) argue that "self-representation is intimately tied to the formation of personal semantic stories" (Conway 1990), which include abstractions such as "what I was like in college", and that people tend to organize memories by these personal semantic stories. A consequence is that events that do not fit within these personal semantic memories are forgotten or not easily retrieved. There is a general tendency for people to remember positive events better than negative events, likely because people tend to think of themselves in positive terms. Furthermore, memories that cohere with these personal semantic stories are perceived to be more accurate and are easier to retrieve. Thus, when a global report of satisfaction is asked of a person, a systematic bias in the direction of their personal semantic stories will be seen.

Further confirmation of biases towards what people want is seen in the field of comparative risk judgment, which theorizes that people focus on what they want so that they already reap some of the affective spoils of experiencing it (Shepperd 2002).

A more short-term bias can be seen in priming: current affective feelings influence judgments by influencing what sort of content comes to mind; affectively congruent concepts or memories become easier to access, as they are connected by a network structure to the experienced affective feeling (Greifeneder, Bless & Pham 2011).

In a sense, these biases do not affect the measurements of Veenhoven and Diener by lowering the validity of their satisfaction scales: both ask for a subjective judgment of the person's life satisfaction, and these biases are simply the mechanisms behind the formation of such judgments. Kahneman (1999), on the other hand, wished to find an objective measure of subjective well-being. He tries to dissociate the biases of global questionnaires of subjective well-being by accessing experiential knowledge, which can be linked back to Veenhoven's pleasure quadrant of appreciation of life, and created a more affective theory of well-being. He assumes that the brain creates an affective or hedonic commentary on, or judgment of, the current state of affairs, and that this judgment is summarizable in a single value. While he admits that the latter assumption is an oversimplification, he claims it is a tolerable one. This single value can be found with the Experience Sampling Method, which involves outfitting people with electronic devices that frequently ask them to judge their current emotional experience and ask what they are currently doing. By plotting these time points, he is able to take the integral of the curve, and he sees the area under the curve as a measure of objective happiness. While this method leaves out the long-term biases found in judging one's life as a whole, some unresolved points are the shorter-term biases, and whether the simple summation of these momentary judgments of one's emotions is a valid way to come to a conclusion about one's well-being.
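As an illustration of the integral Kahneman proposes, the sketch below applies the trapezoidal rule to a handful of invented momentary affect reports; real experience-sampling data would be far denser, and the prompt times and rating scale here are hypothetical.

```python
# Minimal sketch: "objective happiness" as the area under a curve of
# momentary affect reports (trapezoidal rule). All numbers are invented.
hours   = [9.0, 10.5, 12.0, 14.0, 16.5, 19.0, 22.0]  # times of the prompts
ratings = [4.0,  5.0,  3.0,  6.0,  5.0,  7.0,  4.0]  # momentary affect reports

area = sum((r0 + r1) / 2 * (t1 - t0)
           for (t0, r0), (t1, r1) in zip(zip(hours, ratings),
                                         zip(hours[1:], ratings[1:])))
mean_level = area / (hours[-1] - hours[0])
print(f"area under the affect curve: {area:.1f}")
print(f"duration-weighted mean affect: {mean_level:.2f}")
```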

While this approach can be seen as a good alternative methodology for studying SWB, it produces a large mass of data to be analysed (Scollon et al., 2003), and it is a demanding and intrusive method for the subject, which can lead participants to lose interest in responding carefully and truthfully.

Instead of the Experience Sampling Method, the Day Reconstruction Method (DRM) was developed (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004), which takes an intermediate position between the Experience Sampling Method and global questionnaires. Instead of having an electronic device prompt them for a report on their current emotional experience, at the end of the day people report on their experiences of the day in separate episodes, broken down by location and by interactions with different people, in a diary format. These episodes are then rated for their emotional content. This type of research has yielded a different type of data than global reports, and has linked certain activities to higher affect scores. Krueger & Schkade (2008) also tested the test-retest reliability of the affect scores of activities at a two-week interval and found a +0.96 correlation across activities.

While this is a much more manageable test for the participants, it also comes at a cost: by switching to reports at the end of the day, the DRM reintroduced the peak-recency effect into its measurements.

A similar thought of summation can be seen in life satisfaction as a whole: Feldman (2008) posits that there is an objective level of life satisfaction which is distorted by the many biases of judgment, and proposes the concept of whole life satisfaction (WLS) at a moment. Feldman operationalizes this concept by taking momentary glances, at multiple time points, at the question "how satisfied with life are you as a whole". These momentary glances at WLS are then aggregated to arrive at a concept of WLS during an interval. As with Kahneman's objective happiness, the integral of these time points would be a measure of life satisfaction as a whole.

While Kahneman's idea of an objective measure of subjective well-being paves the way to finding out how passing evaluations of life and their time frames relate to a whole-life satisfaction judgment, the same questions of validity as with Diener mark his methodology. The DRM uses 12 positive and negative affect descriptors (warm/friendly, hassled/pushed around) with affect scales from 0 (not at all) to 6 (very much), which participants use to describe each episode. These are then averaged to find a mean affect rating. In Kahneman et al.'s (2004) paper there is no mention of factor analyses of these affect scales, which makes the internal consistency of the ratings questionable, and the choice to use the average even more so. Furthermore, simply using the integral of the average affect rating over time could yield problems that have not been controlled for: is each point increase worth the same to a person? Are the weights of each affect descriptor the same for each person?

All in all, the psychological viewpoint of well-being is that it is a state of mind, and inherently subjective. Due to its abstract nature, there is a choice between starting from first principles and using the definition to arrive at a single-item questionnaire, or viewing well-being as a construct and validating it on the basis of theory and measurable correlates of that construct. The latter leaves the definition of what is being measured fuzzy, but measurable. The global questionnaire method of measuring SWB has been seen to have many biases, yet due to its subjective, judgment-based qualities, these biases do not tarnish its validity; rather, they shed light on the mechanisms of these judgments. The objective measurements of subjective well-being shed light on the more passing judgments, and together they can pave the way to understanding subjective well-being.


Chapter 3: Neo-classical economics

The study of welfare in Economics has used the terms well-being and welfare interchangeably (Sumner 1996). Even though an entire branch of economics is dedicated to the subject of welfare, welfare itself is not well defined. However, we can surmise that neo-classical economists use a form of preference-satisfaction account of well-being: people are better off in state X than in state Y if and only if they prefer state X over state Y (Angner, 2005).

We further find the concept of utility which operationalizes preference satisfaction. Utility is a hidden variable within a person which encodes how much value a certain option has for that person. Much like the subjective feelings used in psychology, utility cannot be measured explicitly.

However, there are a couple of assumptions linked with utility that are used in its operationalization. The first is that choices are transitive. A simple example: receiving €10 will increase one's utility by a certain amount. If receiving €100 is better than receiving €10, and receiving €1000 is better than €100, then by the law of transitivity receiving €1000 is better than receiving €10, and receiving €10 will never be better than €1000. In generalized form: if a > b and b > c, then a > c, and c > a will never happen.

A further consequence of the transitivity of choice is that the utility function is monotonic, meaning that having more of a certain good or experience will always either increase or always decrease one's utility. For example, it cannot be that receiving €100 increases one's utility while a further gain of another €100 slightly decreases it. The monotonicity of the utility function does allow for diminishing returns, in that the first €100 increases one's utility more than the second €100. That is to say, the current level of one's resources (in this case wealth) affects how much utility one receives from obtaining more of those resources.
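A standard textbook way to satisfy both properties at once is a concave utility function such as log utility; the thesis does not commit to any functional form, so the sketch below is only an illustration. Because each option maps to a single number, the induced preference order is automatically transitive, and log utility is strictly increasing (monotonic) while each extra €100 adds less than the one before (diminishing returns).

```python
# Minimal sketch: log utility as an example of a monotonic utility
# function with diminishing returns. The functional form is illustrative.
import math

def utility(wealth: float) -> float:
    return math.log(wealth)

base = 1000.0
gain_1 = utility(base + 100) - utility(base)        # utility from the 1st €100
gain_2 = utility(base + 200) - utility(base + 100)  # utility from the 2nd €100

print(f"first  €100 adds {gain_1:.4f} utils")
print(f"second €100 adds {gain_2:.4f} utils (still positive, but smaller)")
```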

This brings us to a further assumption: a person objectively knows how much he or she has of a certain resource, and can predict how much utility will be gained from obtaining more of it. A person can thus compare how much utility the available choices will yield, and will base their choice on this comparison. That is to say: agents will act "rationally" so as to maximise the degree to which their preferences are satisfied.

By using this operationalization, neo-classical economists can conclude that if a person chooses state X over state Y, then the utility that person gets in state X is above that received in state Y. Thus by inferring from choices, we can reveal preferences and make an ordinal ordering of which choices give people more utility.

Our earlier remarks on income follow from the idea that income can be used as a measure of welfare: "The rationale is that, given the relative prices they face, people individually or collectively are free to spend their money in whatever way maximizes their satisfactions" (Denison, 1971). Thus how many opportunities people have for choice, or how much money they have, has become synonymous with welfare or well-being.

This central assumption of the maximisation of preference satisfaction in choices can also be visualized in Veenhoven's (2013) qualities of life. It assumes that for each opportunity or choice we are faced with, we will choose that which satisfies our preferences the most. This means that the translation step from life opportunities to life results is assumed to be the best it can be, which renders this translation step uninteresting.

While intuitively this seems like a step away from reality, neo-classical welfare economics has chosen these constraints for a good reason, which lies in the two classes of economic theory: the positive and the normative. Positive economics focuses on prediction, whereas normative economics is about how things ought to be done, focusing on optimality, well-being and efficiency. The latter starts from the normative statement that preference satisfaction is good for an individual; from there have sprouted elegant models which describe, under the condition that the previous assumptions hold, how best to maximize preference satisfaction, and thus the well-being of the individual.

This is, in a sense, at the heart of what welfare economics tries to do: "Welfare economics supplies the economist – and the politician – with standards, at least with some standards, by which to appraise and on the basis of which to formulate policy" (Scitovsky 1951, 303). While welfare economists employ theories on how to increase well-being on a societal scale using limited resources through changes of policy, the underlying mechanisms stem from theories on how to increase well-being, in the sense of preference satisfaction, on an individual level.1

1 The translational step between maximizing well-being on an individual level and maximizing it on a societal level is an entire discipline in its own right, and is beyond the scope of this paper.

A problem arises from this normative definition of well-being, one which also doubles as an empirical strength: the assumptions this theory of well-being makes are well defined, and defined in such a way that they are falsifiable. While the starting point of welfare as preference satisfaction is a normative statement, the assumptions it hinges upon are positive statements. This makes it possible to dive into behavioural data to see whether the assumptions made are consistent with reality.

Neo-classical economics has already identified that we are not always rational actors. We sometimes prefer X over Y, Y over Z, but Z over X; we show intransitive choices (Tversky 1969). There are some workarounds for this, including a distinction between actual (or manifest) and ideal preference satisfaction (Harsanyi 1982). The former is open to irrationalities but is at least observable; the latter is the preference one would have if all pertinent information were available to a rational actor, but it cannot be observed. While the issue of whether actual or ideal preferences matter has been discussed, most neo-classical economists have regarded the differences between the two as matters of detail (Hausman and McPherson 1996), seeing as the differences in results between the two remained within acceptable bounds. Kahneman (1992), however, has shown that predicted utilities and actually experienced utilities can vary immensely, adding to the argument that there is a large difference between ideal and actual preferences. We will see later how neuroeconomics attempts to elucidate how this difference arises.
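Because transitivity is a positive, falsifiable statement, it can be checked directly against choice data. The sketch below scans a set of observed pairwise choices (invented here) for the cycle pattern Tversky reported: X preferred to Y, Y to Z, yet Z to X.

```python
# Minimal sketch: detecting an intransitive cycle in observed pairwise
# choices. The (winner, loser) pairs below are invented for illustration.
from itertools import permutations

observed = {("X", "Y"), ("Y", "Z"), ("Z", "X")}  # each pair: chosen over

def find_cycle(prefs):
    options = {x for pair in prefs for x in pair}
    for a, b, c in permutations(options, 3):
        if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs:
            return a, b, c   # a over b, b over c, yet c over a
    return None

print("intransitive triple:", find_cycle(observed))
```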

In short, welfare itself has not been well defined within neo-classical economics. However, all of the working definitions have a form of preference satisfaction in common. This preference-satisfaction account is a normative theory, presupposing that the more one's preferences are satisfied, the higher one's well-being. While the underlying theory of preference satisfaction equalling well-being is normative, it is operationalized upon the assumptions of the transitivity of choice, the monotonicity of the utility function, and the objective encoding of rewards. Through this operationalization, it has been able to use utility as an index of well-being, and to assume that people make choices to maximize their utility and, in doing so, maximize their well-being. Thus by looking at the choices people make, we can learn about the ordering of how much well-being people get.

Chapter 4: Neuroeconomics

A further problem within neo-classical economics arises from how complex the models of choice had become: due to the necessity of having a small number of free parameters within these models to calculate how to maximize preference satisfaction, the fixed parameters for calculating utility increased in complexity. While it seems implausible that the formulas of such a model are calculated every time a person faces a decision, Friedman (1953) responded that the person behaves "as if" he knew all these formulas. That is to say, the formulae of neo-classical models of choice offer a phenomenological explanation of choice within actors: there is an implication of what sorts of computations occur to come to a choice, but no implication that an actual mechanism is present within the actor.

The advent of neuroeconomics partially stemmed from this problem: if the behaviour described by this phenomenological explanation of choice coheres with a person's actual behaviour, then there must be a mechanistic explanation that somewhat follows the phenomenological one. Neuroeconomics is an interdisciplinary field focused on choice, which has put forth an explicit theory of the choice mechanism that can be tested simultaneously at the neural, psychological, and economic levels of analysis. In the following section we will focus mostly on Glimcher's (2011) synthesis of these disciplines into a unified, two-stage, mechanistic architecture of choice. We will find that the assumptions of monotonicity, objective encoding of rewards, and transitivity, upon which neo-classical economics' normative theory of welfare is based, are not just violated, but biologically implausible.

We will briefly describe Glimcher's (2011) findings pertinent to the subject of well-being2, namely those of a stochastic choice circuit and a subjective expected value signal, as well as further supporting evidence found after the publication of his findings.

2 This is by no means a comprehensive summary of the interdisciplinary integration, nor of the mechanism of choice.

Choice

One of the main aspects of the neo-classical mechanism of choice is its deterministic nature: if I expect that a certain choice will bring me the most utility, then it is inevitable that I will make that choice. To introduce the evidence against this, we offer a small neuroscientific fact: cortical neurons are stochastic (Tolhurst et al. 1983). While we often describe the firing rate of a neuron as "firing at 50 Hz", what this means is that the neuron fires, on average, 50 times a second. The exact times at which it fires within that second closely follow a Poisson distribution, a stochastic, random process. If we assume that decisions are made in the cortex, and that these decisions are made on the order of tens to hundreds of milliseconds, then there is a significant chance that a 50 Hz neuron will fire fewer times over those 100 milliseconds than a 40 Hz neuron, even though on average the 50 Hz neuron will fire more frequently. This already lends evidence against a purely deterministic mechanism of choice.
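This claim is easy to check by simulation. The sketch below draws Poisson spike counts for a 100 ms window, using the 50 Hz and 40 Hz rates from the example above (the simulation itself is ours and purely illustrative), and estimates how often the nominally faster neuron emits fewer spikes.

```python
# Minimal sketch: how often does a Poisson "50 Hz" neuron fire fewer
# spikes in 100 ms than a "40 Hz" neuron? (Expected counts: 5 vs 4.)
import random

random.seed(1)

def spike_count(rate_hz: float, window_s: float) -> int:
    # sum exponential inter-spike intervals until the window is exceeded
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate_hz)
        if t > window_s:
            return n
        n += 1

trials = 50_000
reversals = sum(spike_count(50, 0.1) < spike_count(40, 0.1)
                for _ in range(trials))
print(f"P(50 Hz count < 40 Hz count) ~ {reversals / trials:.2f}")
```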

But how does a group of stochastically firing neurons come to a choice? To answer this question, we take a whirlwind tour of the neural network of eye movements and how choices of eye movements arise. The key elements of this network are the lateral intraparietal cortex (LIP), the frontal eye fields (FEF) and the superior colliculus. The outputs of superior colliculus neurons lead to the generation of eye saccades, and every eye movement action or choice must flow through this path3. These neurons are organised in topographical maps: an eye movement to the right is fixed to a group of neurons at a certain location, while an eye movement to the left is fixed to a different location. These different groups of neurons are active at the same time, and the more input they receive, the faster they fire. Once the activity of so-called burst neurons of a single group reaches a fixed, absolute threshold, it dominates the activity of all other groups, and the signal it sends out determines which movement will be produced.

3 To be exhaustive about where all choice must flow through, we would add the motor cortices. An extensive literature is available on this, more applicable to humans, but it lies beyond the scope of this paper, as eye saccades suffice to explain the stochasticity of choice.

An important feature of these topographical maps is that areas compete with each other: the firing of neurons excites other neurons close to them, making them fire more, while it inhibits neurons far away from them, making them fire less. The input to this system (discussed later) is the subjective value function. What this gives rise to is a competitive accumulator: the groups of neurons stochastically accumulate evidence, from these value functions, on how much value is placed on making their movement. Evidence gained by one group of neurons decreases the evidence for competing groups, with the eventual outcome being a winner-take-all action for the group that accumulates a set threshold's worth of evidence. This is called the drift diffusion model (Palmer et al. 2005), which is based on a psychological theory of memory retrieval from the 1970s (Ratcliff 1978). An implication of the stochastic nature of choice is that there is no inherently deterministic choice: it is possible that the group of neurons accumulating evidence from a lower subjective value function receives a dense burst of input, pushing it over the threshold and allowing it to win over the other groups.
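A toy version of such a competitive accumulator makes this behaviour tangible. In the sketch below (all parameters invented), two accumulators gain noisy evidence in proportion to their subjective values, and the first to reach a fixed threshold wins; this is only a caricature of the model Glimcher describes, without the mutual inhibition of real topographical maps. Two signatures appear on running it: when the values are far apart, the better option nearly always wins quickly; when they are close, choices become inconsistent and the mean number of steps (a stand-in for reaction time) grows.

```python
# Minimal sketch: a race between two noisy evidence accumulators.
# All parameters (threshold, noise, values) are invented.
import random

random.seed(0)

def race(value_a, value_b, threshold=100.0, noise=5.0):
    a = b = 0.0
    steps = 0
    while a < threshold and b < threshold:
        a += value_a + random.gauss(0, noise)  # noisy evidence for option A
        b += value_b + random.gauss(0, noise)  # noisy evidence for option B
        steps += 1
    return ("A" if a >= threshold else "B"), steps

for label, va, vb in [("far apart", 2.0, 1.0), ("close", 1.55, 1.45)]:
    runs = [race(va, vb) for _ in range(2000)]
    p_a = sum(choice == "A" for choice, _ in runs) / len(runs)
    mean_rt = sum(steps for _, steps in runs) / len(runs)
    print(f"{label:9}: P(choose A) = {p_a:.2f}, mean steps = {mean_rt:.0f}")
```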

Furthermore, we can separate the concept of utility or subjective value from the actual choice process: people's choices do not inherently reveal their preferences, as a stochastic, random choice process stands between one's expected utility from an option and actually choosing that option.

While this stochastic choice circuit gives rise to near-deterministic choices when one of the competing options is clearly better than the other, an interesting phenomenon occurs when the values are close to each other. Nobel laureate Daniel McFadden (2005) described this phenomenon. When faced with a choice between two lotteries, the first being a 50% chance to win $5 and the second a 25% chance of winning $5, all subjects chose the first lottery. When he incremented the amount that could be won with the second lottery, people eventually switched to the second lottery; at how high a prize they switched was subject to individual differences. However, when individuals were repeatedly asked to choose between lotteries with values around this switching point, they did not always choose the same lottery: they repeatedly switched between the 25% and 50% lotteries, which is predicted by the drift diffusion model.
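For a risk-neutral chooser, the switching point is simple arithmetic: 50% of $5 has an expected value of $2.50, which a 25% lottery only matches once its prize reaches $10. The sketch below merely tabulates this; where real subjects switch differs between individuals, and near that point their choices flip back and forth, as described above.

```python
# Worked arithmetic: the indifference point between the two lotteries
# for a risk-neutral chooser (real switching points vary by person).
p1, prize1 = 0.50, 5.00   # lottery 1: 50% chance of $5 -> EV $2.50
p2 = 0.25                 # lottery 2: 25% chance of a growing prize

for prize2 in [5, 8, 10, 12, 15]:
    ev1, ev2 = p1 * prize1, p2 * prize2
    pick = "lottery 1" if ev1 > ev2 else "lottery 2" if ev2 > ev1 else "indifferent"
    print(f"prize ${prize2:>2}: EV {ev1:.2f} vs {ev2:.2f} -> {pick}")
```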


For a further implication of this stochastic choice model, we turn to the experimental room of Shadlen and Newsome (1998). Within this room a monkey looks at a screen, fixating on a cross in the middle. On this screen the monkey sees a field of flickering spots. While the locations where these spots appear are mostly random, a percentage of the spots consistently disappear and reappear slightly to the right (or, in other trials, slightly to the left). Two seconds later, the monkey is forced to make a choice: if the monkey makes an eye saccade from the cross towards the right (or, respectively, the left), it receives a small liquid reward. These monkeys were outfitted with microelectrodes, placed on the LIP, to measure neuronal activity. What Shadlen and Newsome found was that, based on the firing activity of these neurons, they could predict where the monkey would make an eye saccade a few milliseconds before it actually made it. This experiment is one of the bases upon which Palmer's (2005) drift diffusion model stands.

Shadlen later teamed up with Gold (2007) to perform a slightly altered version of this experiment: the monkey was allowed to make a choice at any time, instead of after two seconds. The monkeys made their choices quite quickly. But when the motion signal was made more ambiguous, with a smaller percentage of spots drifting in one direction, the monkeys made slower decisions: their reaction times increased. This increased reaction time illustrates another feature of the drift diffusion model: as choices get harder, evidence accumulates at a similar pace for both options, and the competitive nature of the choice mechanism means it takes longer for either group to reach threshold.

In summary, we have seen that cortical neurons are inherently stochastic, and that decision-making processes follow a drift diffusion model. This model separates the representation of value from the decision-making process itself. An inherent trait of stochastic neurons is that decision-making is not completely deterministic. When the subjective values of the options are far apart, the drift diffusion model behaves in a relatively deterministic fashion; when the subjective values are close to one another, people make inconsistent choices, often switching between options repeatedly. This is not due to irrationally computed utilities, but due to the mechanics of the choice process. A further result of this process is that the reaction times of choices increase as the choice becomes harder. While this falsifies the neo-classical axiom of the consistency of choices, it arms us with a biologically more plausible positive theory of how choices are actually made.

Value

We have glimpsed the neural architecture of the stochastic choice mechanism and some consequences that arise from it. But where does this choice mechanism get its input, and what does the value encoding system look like?

But first, what does it not look like? To recap, one of the neo-classical economists' axioms lies in the objective representation of values, and the tacit knowledge of how much of a certain resource one has. This axiom does not seem biologically plausible: objective information taken in by any of our senses is lost as soon as it is transformed into a neural signal. If this does not make intuitive sense, consider a physiological experiment by Bertino, Beauchamp and Engelman (1986), in which a certain concentration of salt was poured onto a participant's tongue. The signal that salt-specific receptor cells on the tongue send out for how salty something is depends on how much salt enters their sodium-specific ion channels. When the salt concentration in the participant's blood was increased while the poured salt concentration stayed the same, the signal sent from the tongue decreased. Information about objective properties is irrevocably transformed relative to a reference point, in this case the salt concentration in the person's blood. To be clear: there is no possible way to determine the objective salt concentration by looking at the signal coming from the tongue, which is the input for all further computation of, and decisions regarding, the salt in that person's mouth.

While it is clear that an actually objective representation of our surroundings is biologically implausible, what does a biologically plausible model of value look like? We have described in brief how relative value between two choices is computed in the short term in the choice network, but longer-lasting representations of value must come from somewhere. While the previous chapter on choice rests on a more extensive literature, allowing us to posit a mechanistic explanation of choice, the subject of value is somewhat more fragmented. What we do know, however, is the mechanism by which we learn value, and the neural correlates of the brain areas involved in medium- and long-term value encoding.

To introduce learning we need to enter the world of dopamine neurons and incentive salience. Incentive salience in its basest form holds that dopaminergic neurons do not encode reward in itself, but the reward prediction error (RPE): the difference between our expectations and any information we receive that leads us to revise those expectations (Barto 1999). Thus dopaminergic neurons fire when something unexpected happens: when we get an unexpected reward, or when we unexpectedly receive information that a reward is coming at a later time. Whenever a reward is received, it is attributed not only to the current time, but also to preceding moments, which, if repeated often enough, links the reward to a certain stimulus. When the link between a stimulus and a reward is no longer surprising, there is no dopaminergic activity at either stimulus or reward. Another mechanism in which dopamine plays a role comes from Wickens and Kotter (1995), who described how, when two (certain types of) cells fire at the same time while dopamine is present, the connection between the two cells is strengthened, increasing the effect that the firing of one cell has on the other. This phenomenon is called long-term potentiation, colloquially better known as "cells that fire together, wire together". These mechanisms allow us to learn to attribute values to events.
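The logic of the reward prediction error can be condensed into a few lines. The sketch below uses a Rescorla-Wagner-style update (the learning rate and reward size are invented) to show how the RPE, and with it the modeled dopaminergic response, shrinks to zero once a reliably rewarded stimulus is no longer surprising.

```python
# Minimal sketch: value learning from reward prediction errors.
# The learning rate and reward size are invented for illustration.
alpha = 0.2      # learning rate
value = 0.0      # learned value of the stimulus
reward = 1.0     # reward that reliably follows the stimulus

for trial in range(1, 9):
    rpe = reward - value       # surprise: received minus expected
    value += alpha * rpe       # update the stored value
    print(f"trial {trial}: RPE = {rpe:.3f}, value = {value:.3f}")
# the RPE decays toward zero: a fully predicted reward elicits no signal
```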

However, once we link certain values to certain events, where are these representations stored? It is here that we do not have a mechanistic explanation, but can only offer a phenomenological explanation in the form of neural correlates.

One of the tools that neuroscience brings to the table is functional magnetic resonance imaging (fMRI). This tool uses a large electromagnet to distinguish between oxygen-rich and oxygen-poor blood within the brain at roughly 2-second intervals. From the oxygen content of blood we can infer metabolism, and thus the activity of certain brain areas. Many studies have sought correlations between brain-area activity and the subjective value of rewards. Bartra et al. (2013) collected 206 of these studies and performed a meta-analysis. By contrasting brain activity found during positive subjective values with that found during negative subjective values, they found that the ventral striatum, the ventromedial prefrontal cortex (VMPFC) (specifically the orbitofrontal cortex (OFC)) and the posterior cingulate cortex (PCC) were more active. While there is overwhelming evidence that the central representation of positive subjective value encoding resides in these brain areas, the exact mechanistic explanation is still beyond us. The processing of negative values shares all of its neural correlates with those of positive values, implicating the dorsomedial prefrontal cortex (DMPFC), the left and right anterior insula, and the bilateral striatum and thalamus; these areas are active during both positive and negative subjective values.

Tom et al. (2007) investigated a lottery in which subjects had a 50% chance of winning $X and a 50% chance of losing $Y, with X and Y varied independently. Besides the well-known finding of loss aversion - X had to be roughly twice as large as Y before participants would accept the bet - they found that activation in the MPFC increased linearly with gains and decreased linearly with losses. According to Glimcher (2011) this suggests that the MPFC could encode an individual baseline for a subject; Litt et al (2011), however, argue that this is a saliency signal. As mentioned earlier, our knowledge base is insufficient to provide a mechanistic explanation, but we can say with certainty that a reference-dependent mechanism is at work here.
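The behavioural side of this finding is easy to make concrete. The sketch below evaluates the lottery with a textbook loss-averse value function; the loss-aversion coefficient of 2 is an illustrative assumption consistent with the roughly two-to-one ratio reported above, not a parameter estimated by Tom et al.

```python
# Illustrative sketch of the 50/50 lottery from Tom et al. (2007) with a simple
# loss-averse value function. The coefficient lam = 2.0 is an assumption chosen
# to match the roughly two-to-one acceptance ratio, not the paper's estimate.

def lottery_value(gain: float, loss: float, lam: float = 2.0) -> float:
    """Expected subjective value of a 50% chance to win `gain`, 50% to lose `loss`."""
    return 0.5 * gain - 0.5 * lam * loss

# With lam = 2, the gain must be roughly twice the loss before the bet is worth taking:
print(lottery_value(gain=20, loss=10))  # 0.0  -> indifference point
print(lottery_value(gain=25, loss=10))  # 2.5  -> accept
print(lottery_value(gain=15, loss=10))  # -2.5 -> reject
```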

One of the inputs into the MPFC is the dorsolateral prefrontal cortex (DLPFC), which has traditionally been linked to self-control. Glimcher (2011) hints that the "DLPFC may compute and represent subjective values over much longer time courses and in more abstract terms". Further research by Hare et al (2014) shows that the left DLPFC was more active when subjects chose delayed rewards, after controlling for the subjective value of those rewards, and that connectivity between the DLPFC and the VMPFC increased when subjects chose delayed rewards. In a cross-sectional design with twenty 6- to 13-year-old children, Steinbeis et al (2014) found that connectivity between the DLPFC and VMPFC accounts for the ability to overcome temptation as a function of age.

A further input to the MPFC is the orbitofrontal cortex (OFC), which is known to play a key role in valuation. Animal studies by Padoa-Schioppa & Assad (2007) have demonstrated that firing rates of neurons in the monkey OFC are correlated with the subjective values of the options in a choice set. There is evidence that the OFC stores and represents a normalized subjective value (Padoa-Schioppa, 2009) over a long timeframe, though not one spanning a lifetime (Glimcher 2011).
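Such range adaptation is often expressed compactly as below; this is a generic illustrative formalization, not the exact model of Padoa-Schioppa (2009):

$$ r \;=\; r_{\min} + \left(r_{\max} - r_{\min}\right)\,\frac{V - V_{\min}}{V_{\max} - V_{\min}} $$

where $r$ is a neuron's firing rate and $V$ the value of an option within the currently experienced value range $[V_{\min}, V_{\max}]$: the same firing rate can thus code very different absolute values depending on context.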

In this sub-chapter we’ve seen that the axiom of objective representation of value is biologically implausible, and have found a mechanism of learning through reward prediction errors and long-term potentiation. While the mechanism of value representation remains undiscovered, we have found overwhelming evidence as to where it is situated.

All in all, our foray into neuroeconomics has shown us that the axioms of the neo-classical economic theory of utility do not cohere with our biological software or hardware. What has come in its place is an interdisciplinary positive model of choice, as proposed by Glimcher (2011). But what does this mean for the neo-classical assumption that welfare and well-being coincide with preference satisfaction, now that transitivity and monotonicity of choice and the objective encoding of reward have been shown not to hold?

Within his publication, Glimcher preaches caution: his is a positive theory by choice, for Kahneman (2008), who attempted to champion neuroeconomic measurement as the basis for a new normative theory, came under "intense scrutiny". His advice is to go slow: while the mechanism of choice is well understood, the theoretical tools of value are still in their infancy, and the tools for measuring brain activations are primitive at best.


Chapter 5: Integration of Well-being.

With Glimcher’s advice in mind, we will attempt to integrate the definitions of well-being in psychology and economics, with a cautious glimpse at the possibilities that neuroeconomics holds. In this chapter, we will examine methods of integration: those that wish to hold on to the disciplinary definitions of well-being but attempt to describe correlations between the two; those that wish to change the concept of utility given the new evidence provided by neuroeconomics; and those that wish to explore the integration of welfare by using neuroeconomic techniques. We will then examine what happens when integration is done without sufficient understanding of the disciplines one is trying to integrate.

With the consequences of integrating definitions with reckless abandon in mind, we will then attempt to thread the different traditions of the study of well-being together.

Translating it to one’s own discipline:

This method of integration is one that resembles an us-versus-them mentality. While the xenophobia and heated debates over which normative theory is better reached their peak in 2008 (Kahneman), and the animosity between disciplines is declining to make room for cooperation or integration, there are still those who hold on to a distanced mentality.

Angner (2013), for example, holds on to the orthodox welfare model, in which preference satisfaction still has a one-to-one translation to welfare through the vehicle of choice. His thesis is accordingly that SWB makes sense as a measure of welfare if and only if it satisfies our preferences.

“The fact that preference-satisfaction accounts of well-being permit us to develop a principled answer to the question of the conditions under which measures of happiness can serve as proxies for welfare means that such an account lends itself to the deliberation that we must do.” (18).

While this form of integration can still prove valuable in the sense of a narrow enrichment of one of the disciplines, it is not the avenue of integration that we will be striving for.

Interdisciplinary Cooperation:

The second method of integration is characterized by disciplines working together without complete integration: while there is contact between the disciplines, each keeps its own definition of well-being, and there is a unilateral flow of methods or insights from one discipline to the other. Most of these cooperations stem from economics attempting to close the gaps left by bounded rationality or irrationality.

Rabin (2013b) attempts to use neo-classical optimization models to integrate psychological insights on the limits of rationality into economics. He tries to explain the systematic errors people make in maximizing their value function through enticement and biases. He further provides a framework (2013a) for a conservative methodology when testing a new economic model that incorporates a psychological facet, providing guidelines on what types of improvements are acceptable: improving models not only requires an increase in predictive power over previous models, but also requires the improvement to be embedded in the parameters of a previous model, and these parameters must be domain-unspecific. While these constraints would maximally benefit the incremental improvement of economic models, they forgo the possibility of developing a truly integrative interdisciplinary model. This methodology is an example of one discipline using the findings of another to improve its own.

Krajbich, Oud & Fehr (2014) is one of the few normative neuroeconomic papers. Krajbich suggests using the new insight of the increased reaction time during difficult decisions to use reaction time as a measure of strength of preference. They further go on to test an intervention in which there was an opportunity cost of time, and easy as well as difficult decisions. By prompting subjects to “Choose now” if they were taking too long on a choice, they were able to improve (monetary) welfare of subjects. While the type of welfare that is used was not based on a neuroeconomic framework, they used insights from neuroeconomics to increase welfare.

Another notable proof-of-concept is Smith et al. (2014), an economics paper which explores the possibility of revealing preferences without using choices. Smith et al. placed subjects in an MRI machine and first showed them different foodstuffs without instruction, afterwards asking them to make choices between pairs of foodstuffs not shown in the first phase, and later asking them to rate their preferences over the foodstuffs. Based on half of the choice data, they built a relatively simple model to determine, for each individual, which voxels are predictive of choice. This model was then applied to the non-choice fMRI data, and predictions were made - purely from the fMRI data - about which foodstuff would be preferred when given a choice. These predictions were then compared against the actual choice data left out of model building. The results were significantly better than chance, with a 68% overall success rate, although large individual differences were found: no significant predictions could be made for around half of the participants. This paper proved the concept of using neurological data to infer preferences between options that had not yet been presented as choices. Although the model used was rudimentary, it could be improved with processing algorithms such as Multivoxel Pattern Analysis (Norman et al. 2006), which would not just single out voxels that are predictive of choice, but find patterns of activation across voxels indicative of choice, further consolidating the predictive power of brain scans for revealing preferences. The methodology shared between Smith et al (2014) and Krajbich et al (2014) is an example of one discipline utilizing the technologies of another to gain new insights into its own.
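As a schematic of the pipeline, the sketch below trains a simple model on a set of synthetic pairwise choices and predicts held-out choices from passive-viewing activity alone; the data, the logistic model and all dimensions are illustrative assumptions, not the paper's actual specification.

```python
# Schematic reconstruction of the Smith et al. (2014) logic on synthetic data:
# learn which voxels predict choice, then predict unseen choices from brain
# activity alone. All sizes, noise levels and the model class are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_items = 50, 40

true_weights = rng.normal(size=n_voxels)               # voxel-to-value mapping
item_activity = rng.normal(size=(n_items, n_voxels))   # passive-viewing patterns
item_value = item_activity @ true_weights + rng.normal(scale=2.0, size=n_items)

# Training phase: observed pairwise choices between items.
pairs = rng.choice(n_items, size=(200, 2))
X_train = item_activity[pairs[:, 0]] - item_activity[pairs[:, 1]]
y_train = (item_value[pairs[:, 0]] > item_value[pairs[:, 1]]).astype(int)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test phase: predict choices never actually made, purely from activity patterns.
test_pairs = rng.choice(n_items, size=(100, 2))
X_test = item_activity[test_pairs[:, 0]] - item_activity[test_pairs[:, 1]]
y_test = (item_value[test_pairs[:, 0]] > item_value[test_pairs[:, 1]]).astype(int)
print("held-out accuracy:", model.score(X_test, y_test))  # well above chance
```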

Interdisciplinary Integration:

The third method of integration is characterized by work that pools the resources of multiple disciplines in an attempt to find an interdisciplinary, integrative account of well-being.

Dolan et al. (2008) attempted to operationalize subjective well-being by modelling reported SWB as a reporting function of true SWB, which in turn is described as an additive function of a range of social, economic and environmental factors. They then review all surveys that examine SWB and its determinants in the single-question satisfied-with-life-as-a-whole tradition. Dolan et al. (2008) correctly recognize the judgmental, reporting nature of subjective reports, and model the biases that come with judgments as an error term.
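In schematic form, the model can be written as follows (an illustrative rendering, not Dolan et al.'s exact notation):

$$ \text{reported SWB}_i \;=\; r\Big(\alpha + \sum_{k} \beta_k x_{ik}\Big) + \varepsilon_i $$

where the $x_{ik}$ are the social, economic and environmental factors, $r(\cdot)$ is the reporting function, and the error term $\varepsilon_i$ absorbs the judgmental and reporting biases.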


This methodology is an example of a preliminary phenomenological integration: it attempts simply to find correlations between one discipline’s normative theory and factors from many different disciplines, without providing a mechanistic explanation.

Benjamin et al (2012) created a large, 136-item stated-preference survey; they accommodate measures from normative theories of judgmental and affective SWB as well as objective indicators from economics, attempting a mutually exclusive and collectively exhaustive set of measures from different disciplines. A central assumption is that what a person states as a preference, as well as the magnitude of how much they prefer it to the other option, is an unbiased measure of their “true” preference. This is used as the operationalization for integrating the 136 items into a single, personal well-being index: by finding the order of preference over these items, they are able to weight the marginal utilities of each item. This ordering is not done exhaustively - a full pairwise ranking would be a herculean endeavour, and there are 136! ≈ 3 × 10^232 possible orderings of the items - but through a set of 30 randomly chosen comparisons, from which marginal utility and the ordering within the set of 136 are estimated. Furthermore, they chose to use hypothetical choices (which have known biases) and argue, without much substantiation, that hypothetical choices are more relevant for evaluating welfare than actual behaviour. While the effort to integrate an exhaustive set of measures is to be applauded, a leap of faith is made in adding a form of cardinality to the preference magnitudes and using it to infer the preferences among all 136 items. From our neuroeconomics section we have seen that - especially when the subjective values of options are close - the revealed preference does not consistently follow the choice with the greater subjective value. Combined with only 30 comparisons, from which the rest of the preferences are estimated, a small stochastic flip of the coin on close comparisons can have large consequences for the total ordering of the items. While the revealed-choice methodology is somewhat lacking, the advent of neuroeconomics could give an objective view of the subjective value placed on either option, either with a choice paradigm that uses reaction time to detect when values are relatively close, or in the future by measuring value directly from the brain. Even in its current form, however, this methodology could improve upon SWB’s weighting of construct factors. We have seen that a problem with SWB constructs is that each of the items making up a construct is weighted equally, even though a person may find certain items more or less important. By utilizing a revealed-preference paradigm, the ranking of the importance of items can be measured now, while later on some form of magnitude may also be inferred.
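To make concrete how fragile a total ordering estimated from 30 comparisons is, the sketch below fits a Bradley-Terry-style logistic model to sparse pairwise choices; this model is an illustrative stand-in for Benjamin et al.'s estimator, and all sizes and parameters are assumptions.

```python
# Illustrative Bradley-Terry-style estimation of 136 item utilities from only 30
# noisy pairwise choices. A stand-in for Benjamin et al.'s estimator; all sizes,
# the choice-noise model and the ridge penalty are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_comparisons = 136, 30
true_utility = rng.normal(size=n_items)

# Each observation: the first item is chosen with logistic probability in the
# utility difference, so near-indifferent pairs flip stochastically.
pairs = rng.choice(n_items, size=(n_comparisons, 2))
p_first = 1 / (1 + np.exp(-(true_utility[pairs[:, 0]] - true_utility[pairs[:, 1]])))
chose_first = (rng.random(n_comparisons) < p_first).astype(float)

# Gradient ascent on the log-likelihood with a mild ridge penalty, which keeps
# the many never-compared items pinned near zero.
utility = np.zeros(n_items)
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(utility[pairs[:, 0]] - utility[pairs[:, 1]])))
    grad = np.zeros(n_items)
    np.add.at(grad, pairs[:, 0], chose_first - pred)
    np.add.at(grad, pairs[:, 1], pred - chose_first)
    utility += 0.05 * (grad - 0.01 * utility)

# With 30 comparisons among 136 items, most utilities never leave the prior:
print("items that appeared in no comparison:", n_items - len(np.unique(pairs)))
```

Rerunning this with a different random seed reorders the items whose comparisons happened to be close, which is precisely the fragility noted above.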

This methodology represents a more intermediate phenomenological integration, attempting to find ordinality across the different normative theories of well-being using a preference framework, and it even advances methodologies within other disciplines.

While we have found that many different methods of contact between disciplines are possible - from translating the other discipline into one’s own, to cooperation, to integration - we find that the examples used to illustrate these methods do not always take into account the biases of SWB, the falsified axioms of neo-classical economics, or neuroeconomics. Nevertheless, each interdisciplinary work adds to the wealth of knowledge, and brings us closer to the truth or helps us optimize our normative theories and increase our well-being.

A cautionary tale:

Repko (2008) posited guidelines to aid interdisciplinary research, the first of which is to develop adequacy in the disciplines to be integrated. An example of why this matters can be found in a paper by Ng (2013), who attempts to argue for an absolute, universal, unidimensional, cardinally measurable, and interpersonally comparable measurement of happiness, based on the evolutionary biology of happiness.

His thesis proclaims that “a precondition for affective feelings (enjoyment and suffering) is consciousness. One cannot enjoy or suffer if one is not conscious” (4). He furthers the argument that “though many of our brain functions are at the sub-conscious or non-conscious levels, it is clear that consciousness must also be energy-requiring if not more so. We lose consciousness when our brain is not sufficiently supplied with blood” (4). While technically correct, I fail to grasp how the second sentence provides evidence for the first. He then argues from an evolutionary standpoint that this energy cost must be offset by an evolutionary advantage, surmised to be flexible choice guided by consciousness, which is made consistent with fitness by “endowing the conscious species with a reward and punishment centre” (6); net happiness is operationalized as the “excess of pleasure over pain” (6). Such a reward and punishment centre, however, is seen in all animal species - it is reinforcement learning through and through - and is not exclusive to “conscious species”; nor is consciousness a precondition for affective feelings, a fact known since Darwin (1872/1998) and still supported in the present day (Fraser, 2009).

He then equates maximization of net happiness with fitness maximization, and from there concludes: “When risk is involved, rationality requires the maximization of expected utility/welfare (Ng, 1984). Such choices cannot be consistent with the maximization of expected fitness if the cardinality of the welfare values involved is not the same as (or at least cardinally equivalent to) that of fitness. As fitness is cardinally measurable, pleasure and pain or welfare values must also be cardinally measurable” (6-7).

We have found in the previous two chapters that the consistent rational maximization of expected utility has been disproven, meaning that the deductive steps linking the cardinality of expected utility with fitness are moot. However, the cardinality of excess pleasure over pain can also be disputed. We have previously seen Kahneman’s experiential sampling method (1999) and Day Reconstruction Method (2004), and noted that issues such as whether the value of a single unit of affect is independent of the current state of affect, or how each affect type is weighted in its contribution to total affect, are still unresolved.

We have further seen from neuroeconomics that every experience (affect included) is subject to a subjective transform the moment the objective experience enters our nervous system. This transform occurs not only at a basic, first-order level - where the stimulus is transformed based on the current environment - but also at the level of the experienced value of an event, which undoubtedly affects the amounts of positive and negative affect.
