ANCHORING HEURISTIC IN HOTEL CHOICE


Anchoring: How Does the First Positioned Offer Affect a Purchase Decision in Hotel Choice.

Herman Blöte

29-01-2020

Master Thesis MSc Marketing

MSc Marketing Intelligence

Faculty of Economics and Business Rijksuniversiteit Groningen


Abstract

Companies such as Google or Expedia use sophisticated algorithms to rank millions of choice alternatives to aid consumers in making a decision. These rankings affect consumer choice and influence businesses dependent on a high position for certain searches. As a result, online firms use these sorting algorithms as a source of competitive advantage by providing the most relevant options to a given query. Despite the importance of this positioning, there has been relatively little research into how positioning affects biases in the decision-making process. This research examines how the offer in the first position affects the perception and purchase probability of offers in later positions, better known as the anchoring effect. To this end, a large dataset provided by Expedia containing 399.344 hotel search sessions is analysed. The main method of analysis is a hierarchical Bayesian model, estimated using a Markov chain Monte Carlo (MCMC) algorithm. This method allows the level of analysis to be individual while analysing choice conditional on the other observed alternatives. Although no significant effect of anchoring on the perception of later alternatives was found, both the direction and the probabilistic uncertainty suggest this would be an interesting path for future research. This paper did find an important moderating effect of error cost on the importance of positioning, suggesting a higher position is not always as important.


Table of Contents

Introduction ... 5
Literature Review ... 7
    Choosing a hotel ... 7
    Anchoring Effect ... 8
    Search and Error Cost ... 10
Method ... 12
    Data Description ... 12
    Data Cleaning ... 13
    Intrinsic Anchor Effect ... 14
    Measures ... 16
    Error Cost ... 18
    Control Variables ... 19
    Model Selection ... 20
    Bayesian Inference ... 21
    Speeding Convergence ... 23
    Logit Model ... 24
Results ... 24
    Model Robustness ... 24
    Logit Model ... 25


Discussion ... 28

Conclusion and Implications ... 31

Limitations and Future Research ... 33

Bibliography ... 35


Introduction

People are faced with thousands of choices each day. To this end, our brains have developed a multitude of ways to make choices. Research in many fields, such as psychology, economics, medicine, and marketing, has studied processes ranging from cognitive biases to purely rational choice as a means to explain choice. Cognitive biases, or heuristics, are the brain's ‘rules of thumb’: they reach decisions quickly and without expending much effort (Cho, et al., 2017). Rational choice requires more effort, since the brain has to collect relevant information and make a calculation to achieve an outcome aligned with the interest of the individual (Coleman & Fararo, 1992). This rich body of research can be applied to explain or predict choice in a wide variety of subjects. One clear and well-researched cognitive bias influencing choice is the anchoring and adjustment heuristic (Tversky & Kahneman, 1974). The anchoring heuristic is the overreliance on the first piece of information someone is presented with. This overreliance on the anchor can lead to a skewed evaluation of all subsequent options, resulting in a choice inconsistent with stable preference.

The goal of this paper is to research the anchoring heuristic within hotel choice data. The anchoring heuristic has already been demonstrated in a wide range of studies (Furnham & Boo, 2011). However, these studies were mostly done using an experimental design and focused on explanatory validity rather than being useful for predictive purposes. This study analyses a large body of real-world hotel choice data to examine the anchoring heuristic in practice and uses methods useful for both explanatory and predictive purposes.


Online travel agencies (OTAs) take a consumer's search criteria and give a list of applicable choices for that query. The shown alternatives are then sorted based on some metric of relevancy (Ursu, 2018). Showing the most relevant alternatives to a specific query is how OTAs reduce search cost for consumers. In turn, online stores that provide consumers with lower search cost receive higher loyalty rates (Wolfinbarger & Gilly, 2003). Therefore, understanding consumer choice is a vital part of the sorting algorithm behind OTAs. Increasing understanding of how consumer choice is influenced by cognitive biases is not only important for OTAs; it is also important for policymakers and consumer associations. The positioning and context of displayed offers could be arranged in such a way as to nudge consumers towards the most profitable product. Pushing consumers towards more profitable offers, instead of aiding in the purchase, is bad in terms of customer value and welfare. Moreover, the customer's perception of feeling manipulated can be harmful to OTAs as well (Bart, Shankar, Sultan, & Urban, 2005; Palmer, 2019).

Besides the anchoring heuristic, this paper also uses two other cognitive biases to better explain consumer choice. One type of cognitive bias influencing choice is centred around context (Huber, Payne & Puto, 1982; Tversky, 1972). Offers are not compared in a vacuum; they are compared to each other. For example, some restaurants have a deliberately high-cost menu item to make the rest seem cheaper. The context in which an offer is compared matters in the evaluation of that offer (Prelec, Wernerfelt, & Zettelmeyer, 1997). Later research found that including these context effects in models improves the prediction of choice (Berkowitsch, Scheibehenne, & Rieskamp, 2014). Hence, the estimates of our model will be conditional on the seen alternatives. To improve our analysis of consumer choice further, we also need to grasp that not all choices are made equal. Some choices are more consequential and are therefore processed differently. This effect is captured in the term ‘error cost’, and its effect on choice has been researched in a wide range of settings. Error management theory (Haselton & Buss, 2000; Haselton & Nettle, 2006) states that error cost can create biases for entire organizations, while on the other end of the spectrum, even the choices rats make are influenced by error cost (Magalhães, White, Stewart, Beeby, & Vliet, 2012). To give an extreme example: most people make the choice to buy a house in a different manner than they would decide between coffee or tea. Adding this error cost of a choice to our model gives a representation of how a consumer processes the given choice.


In marketing, the notion that every consumer is different is fundamental. To take individual differences in preferences into account, the model created is hierarchical at the individual level. The method used implements a Markov chain Monte Carlo (MCMC) algorithm to estimate a hierarchical binomial model representing individual conditional utilities for hotel attributes.
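The estimation idea can be sketched in miniature. The following is a toy random-walk Metropolis sampler for a single binomial (logit) utility weight on simulated choice data, assuming a flat prior; it is not the hierarchical thesis model, and all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated choices: one attribute with true utility weight 1.5
n = 500
x = rng.normal(size=n)
beta_true = 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-beta_true * x))).astype(int)

def log_lik(beta):
    # Binomial (logit) log-likelihood of the observed choices
    p = 1 / (1 + np.exp(-beta * x))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Random-walk Metropolis: propose a step, accept with the likelihood ratio
# (flat prior, so the posterior ratio equals the likelihood ratio)
draws = []
beta = 0.0
ll = log_lik(beta)
for _ in range(5000):
    prop = beta + rng.normal(scale=0.3)
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:
        beta, ll = prop, ll_prop
    draws.append(beta)

posterior = np.array(draws[1000:])  # discard burn-in
print(round(float(posterior.mean()), 2))
```

The chain's retained draws approximate the posterior of the utility weight; the thesis model repeats this idea per individual, with a population-level distribution tying the individual weights together.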

By conducting this analysis on a large number of search sessions, this research provides important contributions to the literature. First, the effect positioning has on purchase probability is central to the practice of Search Engine Optimization (SEO). Here, the goal is to get an offer as high on the result page as possible. The underlying logic is that consumers will only consider the first few alternatives, and being one of those is critical (Cutrell & Guan, 2007). This research finds a moderating effect of error cost on the importance of positioning, raising questions about the justification of spending resources on securing a high position on the Expedia landing page for some offers. Secondly, our findings suggest that the anchoring and adjustment heuristic results from an underlying confirmation bias, rather than from an insufficient adjustment after observing an initial offer. Though our findings could not prove anchoring to be a significant predictor of choice, the direction found, given some of the limitations of studying a psychological effect in noisy data, does leave room for further study. Beyond theoretical contributions, this paper makes suggestions for using Bayesian updating as a means of website morphing based on individual estimates of attribute importance.

Literature review

Choosing a hotel

To model the effect anchoring has on purchase probability of hotels, we first need to model how consumers choose hotels without anchoring. Increased competition and


Prior research asked respondents what product attributes provided value to them, and found this was highly personal and idiosyncratic. To capture this wide interpretation, there are multiple definitions of customer perceived value in use (Gale, 1994; Lin, Sher, & Shih, 2005; Roig, Garcia, & Tena, 2006). However, these definitions follow one general theme: they refer to a trade-off between what benefit a customer receives (utility, quality) and what a consumer sacrifices (time, cost). This rational trade-off between the positive and negative aspects of an offer, to arrive at an estimation of perceived value and the resulting purchase probability of an offer, fits within the scope of the economic theory of rational choice. Here we gain understanding of perceived value and subsequent choice through the insight that consumers infer benefits by subjectively evaluating all product attributes (Kashyap & Bojanic, 2000). This insight lies at the basis of means-end theory, which suggests that individuals are goal-directed and use product attributes as a means to reach a desired outcome (Gardial, et al., 1994).

In the case of choosing a hotel, this implies that the perceived value of a hotel offer is determined by the extent to which the hotel helps the consumer achieve a desired end goal, and by the importance of that goal. Imagine a poor traveller who is looking for a place in the middle of a city. This person will value hotel offers on how well their attributes fit the criteria; hence, a hotel with a central location will result in a higher perceived value. However, the attribute of price will also influence this decision, given limited funds. This traveller will pick the hotel whose combination of attributes results in the highest perceived value. This means that in means-end theory, a hotel is seen as the summation of all the attributes the hotel has, rather than as a single entity. The customer will then pick the hotel whose attribute levels optimize the perceived value for that customer. This approach works well for the scale of our data, by combining means-end theory with the economic principle that consumers seek to maximize utility (Fishburn, 1968). Utility is a latent economic measure of satisfaction, which is derived by observing choice. By using this economic principle, we can create a model that predicts choice, assuming a purely rational consumer. However, a consumer does not rely on rational choice alone; cognitive biases affect purchase decisions as well.

Anchoring effect

To reduce the computational load and to cope with limited information about choices, our brains have created shortcuts. These shortcuts are called heuristics, and there are many.


The heuristic used in this paper is the anchoring heuristic, “a ubiquitous phenomenon in human judgement” (Furnham & Boo, 2011). The anchoring heuristic, as introduced by Tversky and Kahneman (1974), is the disproportionate influence of an initially presented value, which biases decision-makers' judgements towards it. For example, Galinsky and Mussweiler (2001) found that in negotiations, the first offer made correlates strongly with the outcome. This effect is also prevalent among holiday choices, where Oppewal, Huybers, and Crouch (2015) suggest the anchoring effect as a moderator for holiday choices. In the case of choosing a hotel on Expedia, the anchoring effect suggests that consumers do not make choices purely rationally, but are influenced by the first alternative they see.

For this research, the assumption is made that the first option shown will be the first alternative seen (Lieder, Griffiths, Huys, & Goodman, 2017). This assumption seems logical, since most people scroll from top to bottom and are therefore more likely to see the first-positioned offer first. Research into eye movement confirms that people view internet pages from top to bottom (Simola, Salojärvi, & Kojo, 2008). Additionally, Mussweiler and Englich (2005) found that an anchor does not necessarily have to be obvious: even when the anchor value is outside the awareness of an actor, the anchor still affects the decision subliminally. So, even when an Expedia visitor quickly looks over the first page and would be unable to recall the first offer specifically, the anchoring effect should still affect the decision.

To better understand and make use of the anchoring heuristic, we now discuss some theories on the underlying mechanisms of the anchoring effect. The original theory, as presented by Tversky and Kahneman (1974), suggests that the anchoring effect is the result of an insufficient adjustment after an initially presented value. Put simply, the anchoring effect results from a failure to adjust one's estimation either upward or downward after an initial value. This interpretation relies on the effortful process of adjustment (Furnham & Boo, 2011). Therefore, in the context of Expedia data, we would expect customers who put more effort into comparing alternatives to be influenced less by the anchoring effect.


This second explanation differs in practice from the first one, in that the amount of effort put into comparing alternatives does not lessen the anchoring effect to the same extent. Based on results found by many studies (Furnham & Boo, 2011), we expect the anchoring effect to influence Expedia customers when making a choice between hotel alternatives. Whether it results from insufficient adjustment or from confirmation bias, we expect chosen options to be similar to the anchor. Therefore, the hypothesis is:

H1: “The more dissimilar an option is to the anchor, the less likely it is selected”

Search and error cost

For a perfectly rational choice, perfect information is needed. However, gathering information is time-consuming, which, in terms of the utility of time, comes at a cost. In economic literature, the time an individual spends gathering information is called search cost. Comparing different alternatives is an intrinsic part of any buying process; however, due to search cost, only a finite number of alternatives is compared. Reducing search cost increases the consumer's ability to gather additional information and could be an important factor in creating additional consumer welfare (Ursu, 2018). With the start of the digital age and the creation of electronic marketplaces and search engines, consumers experienced a decrease in the cost of acquiring information, thus reducing search cost (Bakos, 1997). Companies such as Expedia even compete on minimizing search cost for the consumer. Platforms that minimize search cost see increased customer satisfaction and retention (Santos & Forcum, 2018). This suggests that decreasing search cost is beneficial for consumers, as well as being a potential competitive advantage for firms. Given the importance of search cost and the effect it has on consumer choice, search cost is a necessary part of choice models (Wang & Sahin, 2018).


of computational resources. Therefore, depending on the error and search cost of a decision, people make rational use of limited cognitive resources, which, in some situations, means relying more on heuristics than on rational choice.

When combining these insights with the theory regarding the anchoring heuristic discussed earlier, we get some interesting paths. If we assume search cost to be uniform for different individuals on the same Expedia platform, the amount of cognitive resources used on comparing alternatives should differ by the error cost of that search. In case the more classical interpretation of the anchoring heuristic holds, we would expect searches with lower error cost to receive less cognitive investment, resulting in a higher likelihood of an insufficient adjustment being made after seeing the anchor, and thus increasing the anchoring effect. However, in case the more contemporary interpretation of the anchoring effect holds, the confirmation bias underlying the anchoring heuristic is to a lesser extent determined by an individual exerting effort into adjusting their original evaluation of the first seen alternative. Therefore, we would expect that additional cognitive resources or effort put into a search would not influence the anchoring effect. To test which of these mechanisms behind the anchoring effect holds, the following hypothesis is proposed:


Method

Data description

The data used in this research contains information regarding searches from Expedia, bundled over 34 website domains from 218 different countries. The data was recorded over the period from November 1, 2012, to June 30, 2013. In this time, 9.917.530 offers were seen across 399.344 searches. In these searches, 136.886 unique properties were shown, spread over 172 countries. The purchases from these searches resulted in a gross revenue of USD 106.911.133,00. When downloaded, the file takes roughly 2.2 GB of space.

The dataset can be divided into two levels of information: the first is individual or search level information; the second is offering or attribute level information. The search level information consists of: search date and time, country ID of origination and destination, length of stay, booking window (number of days until the searched stay), number of adults, number of children, whether or not the search contains a Saturday night, and number of rooms. There is also some historical search data, such as average paid price per night (for previous bookings) and previously given star rating, but these are unavailable for ~94.7% of searches.

(Tables 1 & 2)

Offering level data gives information about the attributes of the offerings seen by the searcher. This includes property country, star rating, average review score, two location scores, price, whether or not the offer has a brand, and whether the offer is part of a promotion. Besides these seen attributes, information regarding the previous trading price and the distance from the searcher is given too, though these are missing in ~96.3% of the data. Our data records the offers in the order the visitor saw them on the first page. On average, visitors scroll past 24.8 offers before they choose one of them. All the sessions in our dataset contain at least 1 click, and 69.8% of this group continued to make a booking. This means our dataset only contains the most “successful” consumers. Though this is good for the purpose of modelling, when inferring results about the website at large, we have to account for the fact that we observe only “successful” site visits. This means we cannot make any assumptions about website KPIs such as click-through rates or conversion rates.


The data provided by Expedia is completely anonymized. This means that any personal information, such as IP address or age, is removed, preventing us from following consumers over multiple searches or describing groups by personal characteristics. Therefore, we treat every search as a unique individual. Given that our data only denotes return visitor information in ~5% of cases, and Ursu (2018) found that 40% of customers only search once, the drawback of not being able to discern individuals in our dataset seems limited. Further, data regarding the origination or destination of a search is transformed into an ID, rather than the name of the country. However, this is where “big data” is useful. According to Alexa (2019), 71% of Expedia's main domain visitors are from the US. The country ID with the highest frequency is 219, which occurs 4.788.318 times, far more often than the second most frequent. Hence, it can be assumed that ID number 219 is the USA. Now, by comparing the average search timestamp of the United States to those of other countries, we obtain the relative longitude, in terms of time zone, compared to the United States. Next, we add the assumption that countries with more traffic between them are geographically closer together when controlled for their size, as in the gravity model of international trade (Tinbergen, 1962). Put next to a map, the resulting information gives some indication of which country each ID could represent, even though only one country could be distinguished from background information. Though this is a good example of the benefits of a dataset of this scale, the fact that Expedia provided it anonymized for a reason, and the inherent uncertainty of labelling these IDs with names, means a complete overview of the estimated country names will not be provided.
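The ID-frequency step can be sketched as follows; the column name `visitor_location_country_id` is an assumption about the dump's schema, and the toy counts stand in for the real frequencies:

```python
import pandas as pd

# Toy stand-in for the searches table; in the real data one country ID
# (219) dominates the frequency table by a wide margin.
searches = pd.DataFrame(
    {"visitor_location_country_id": [219] * 7 + [55] * 2 + [100]}
)

# Rank country IDs by how often they appear in searches
counts = searches["visitor_location_country_id"].value_counts()
top_id = counts.index[0]
print(top_id, counts.iloc[0])  # most frequent country ID and its count
```

On the real data, the same `value_counts` call is what justifies labelling the dominant ID as the USA.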

Data Cleaning

Usually, analysing data from a database requires some data engineering to get the data into a format suitable for analysis. This was already done when I received the data. However, some irregularities remained. The variable “price_usd” has a range from $0 to $9.381.309. Though this seems implausibly high, there are over 800 offers in the dataset costing more than $100.000 per night, suggesting that these are natural outliers rather than faults in the data. Similarly, there are roughly seven thousand observations below $2. To prevent findings from being skewed by these unlikely prices, we remove all offers below $10 per night or above $600, in line with Ursu (2018). Doing so leaves us with the remaining 99.65% of the data.
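A minimal sketch of this filter, using the `price_usd` name from the text and invented toy values:

```python
import pandas as pd

# Toy offers table; in the real data the column is price_usd
offers = pd.DataFrame({"price_usd": [0.5, 9.0, 59.0, 120.0, 599.0, 8000.0]})

# Keep only offers in the plausible $10–$600 per-night range (Ursu, 2018)
clean = offers[(offers["price_usd"] >= 10) & (offers["price_usd"] <= 600)]
print(len(clean))  # 3
```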


This depends on the laws in the country where the search is made. Given that we don't know exactly which IDs represent which country, we cannot account for this discrepancy a priori. The only exceptions are countries 197, 172, and 156. These countries have far higher average prices compared to the sample and to geographically close countries, indicating different laws. Hence, we divide prices for these countries by the number of booking days searched for. Note that the main method of analysis is hierarchical at the individual level and conditional on the other seen alternatives. This means our model provides an estimate per search; given that a search is from country X to country Y, these price differences resulting from laws are inherently controlled for in our model.

This paper uses a special type of Markov chain Monte Carlo (MCMC) algorithm to obtain unit-level estimates, which will be explained further in this method section. However, this method gets computationally heavier the more alternatives a consumer sees. To let the chain run within the boundaries of the available computing space, all alternative offers above thirty were removed. In simple terms, if a consumer sees 32 alternative offers, two are removed at random. Finally, if one of the above-mentioned removals resulted in the subsequent removal of the interaction between the search ID and the offers, all the corresponding search IDs are removed. All these deletions add up to 122.554 offers in total, which is 1.24% of the total sample.
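The truncation step can be sketched as follows, with assumed column names and a toy session of 33 offers:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy session: 33 offers seen in one search (column names are illustrative)
session = pd.DataFrame({"srch_id": 1, "position": range(1, 34)})

# Cap every session at 30 alternatives by dropping the surplus at random
max_alts = 30
if len(session) > max_alts:
    drop = rng.choice(session.index, size=len(session) - max_alts, replace=False)
    session = session.drop(index=drop)

print(len(session))  # 30
```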

Intrinsic anchoring effect

Following the law of large numbers, if the ordering of offers were done at random, we would expect that position wouldn't matter: any offer would be equally likely to be displayed first or seventh. This would make researching a potential anchoring effect easier, since any importance given to position would come solely from users. However, in the data, ordering is not done at random; it is done by a sophisticated algorithm. Our data does include a “random” variable, which indicates that the ordering of the offers was done at random and not by an algorithm. However, when we plot the data by position, in Figure 1, we observe some positioning effects even among the “random” labelled data.


If ordering were done at random, we would expect flat lines. Instead, we see trends across positions for both the random and the normal data. Perhaps the ‘random’ variable only indicates that no search history was used for ordering, or some underlying property evaluation determined the order in which alternatives are shown. In either case, the ‘random’ data appears not to be random. Therefore, we cannot use this data with the intention of excluding prior positioning biases generated by the Expedia algorithm. Instead, we use


Measures

In this part, we operationalize the measures for the anchoring effect and error cost. We start with the anchoring effect. To capture the anchoring effect, we want to know, per individual, how similar any offer is to the first seen alternative, the anchor. If most individuals are more likely to choose alternatives that are similar to the anchor, we can assume some anchoring bias within the data. Therefore, measuring the similarity or dissimilarity between offer i and the corresponding anchoring offer gives a measure for a potential anchoring effect. For each offer, we compare the price, the star rating, and the location score to those of the corresponding anchor for that offer. Though numerous similarity measures exist, we will discuss three: Manhattan distance, squared Euclidean distance, and Minkowski distance. These measures allow for comparing overall similarity over the three dimensions mentioned above.

Manhattan distance. Also called city block distance, this is mathematically the simplest similarity measure. It is the cumulative absolute difference between two points across dimensions and was first described by Hermann Minkowski (1907). The name Manhattan refers to the grid-shaped roads of Manhattan: if one starts at point A in Manhattan and drives towards point B, the fastest route is the absolute horizontal difference between A and B plus the absolute vertical difference between A and B. Usages of this metric are innumerable, from analysing end positions in chess (Gerdelsenberg, 2019) to designing the layout of chips. In recent years the measure gained popularity due to its consistency in machine learning algorithms working with high-dimensional data (Aggarwal, Hinneburg, & Keim, 2001). There is good evidence that Manhattan distance is a robust measure of difference, which is relevant in measuring the difference between an offer and its anchor.

Squared Euclidean distance. Euclidean distance is the shortest distance between two points.


Minkowski distance. Minkowski distance, also introduced by Hermann Minkowski (1907), is a generalization of the Euclidean distance and the Manhattan distance. It has an additional p-value, which can take any non-negative value. When p takes the value 2, the differences are squared, resulting in the Euclidean distance; when p takes 1, the result is the sum of absolute differences, the Manhattan distance. However, in our data, price varies further from the anchor than star rating, giving it greater weight in the similarity measure. Using the Minkowski distance with a p-value between 0 and 1 can mitigate this effect. For example, when set to 0.7, the Minkowski distance forms an inverse circle around the origin of the x and y axes; in this generic example, four on the x-axis is placed on the same ‘importance line’ as two in the area between the x and y axes. This allows the metric to adjust better to multiple smaller effects, compared to single big differences (Shahid, Knudtson, & Bertazzon, 2009). While it is a more flexible metric, there are no concrete rules for setting the p-value. Multiple machine learning algorithms can optimize this p-value to explain an observed difference (Singh, Yadav, & Rana, 2013), but there are no unused variables in our data on which to model this. Collecting additional data to optimize the p-value would be a good option, but this falls outside the scope of this research. For this research, several values were tried before running any analysis; comparing scores, a p-value of 0.4 matched the observed differences best.
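In symbols, the three measures are special cases of one formula. Writing x for an offer's attribute vector (price, star rating, location score) and a for that of its anchor, the Minkowski distance over the K = 3 compared attributes is

$$d_p(x, a) = \left( \sum_{k=1}^{K} \lvert x_k - a_k \rvert^{p} \right)^{1/p},$$

where p = 1 gives the Manhattan distance and p = 2 the Euclidean distance; this thesis uses p = 0.4 (for p < 1 the expression is, strictly speaking, no longer a metric, but it still serves as a dissimilarity score).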

Choosing. All measures are constructed relatively similarly, which is reflected in their descriptive statistics in table 6. Euclidean and Manhattan distance are especially similar; Minkowski distance takes higher overall values, but the trend remains the same. To choose the best measure, a comparison of information criteria is needed. Two often-used information criteria are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). A sound comparison between these measures is made by Burnham and Anderson (2004), but both work fine with our data. We compare the scores per offer, and conditional on the seen alternatives per individual. The anchor level is removed from the data, so the scores do not include prediction upon themselves. As shown in table 7, the Minkowski distance has the lowest values for both criteria, at the individual level and the offer level. This lower score indicates that the variable is a stronger predictor. Hence, for this paper, the Minkowski distance with a p-value of 0.4 is used to measure the difference of offer i from the anchor offer.
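A minimal implementation of this measure, with invented attribute values (price, star rating, location score):

```python
import numpy as np

def minkowski(offer, anchor, p=0.4):
    """Minkowski distance between an offer and its anchor over the compared
    attributes. For p < 1 this is not a true metric, but it down-weights a
    single large difference relative to several small ones."""
    diff = np.abs(np.asarray(offer, float) - np.asarray(anchor, float))
    return np.sum(diff ** p) ** (1 / p)

anchor = [120.0, 4.0, 7.5]   # price, stars, location score (toy values)
offer = [150.0, 3.0, 7.0]

print(round(minkowski(offer, anchor, p=1.0), 2))  # Manhattan: 31.5
print(round(minkowski(offer, anchor, p=2.0), 2))  # Euclidean
print(round(minkowski(offer, anchor, p=0.4), 2))  # thesis setting
```

In practice the three attributes would first be standardized, since otherwise the price dimension dominates the sum, which is exactly the imbalance the low p-value is meant to soften.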


To test this directionality, the variable quality was created by dividing star rating by price. We then calculated the difference in quality per offer i by subtracting the quality score of the anchor offer per search; in simple terms, the difference between the offer and the anchor quality. This assumption was tested by running a conditional logit regression. Given the size of the dataset, it is unsurprising that the result was significant at p < 0.001. The direction of this effect is positive, with a marginal effect of around 0.05, meaning that, if the anchor is not chosen, every marginal increase of quality above the anchor for offer i increases the purchase probability by 5%, ceteris paribus. Comparable effects were also found when using review score and location score to calculate quality. Since this research wants to distil the anchoring effect without it being affected by quality or price, the Minkowski distance seems a good choice. Since this measure can only be positive, a difference of one better or one worse than the anchor offer gives the same result. Though in reality the better offer is more likely to be chosen, this increased probability comes from higher underlying quality, not the anchoring effect. Hence, measuring anchoring as a variable with only positive values means any variation in choice quality will be attributed not to anchoring, but to one of the other attributes.
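The quality construction can be sketched as follows; `price_usd` appears in the text, while `srch_id` and `star_rating` are assumed column names, and the values are toy data:

```python
import pandas as pd

# Toy search session; the anchor is the first-positioned offer
offers = pd.DataFrame({
    "srch_id": [1, 1, 1],
    "position": [1, 2, 3],
    "price_usd": [100.0, 80.0, 160.0],
    "star_rating": [4.0, 4.0, 5.0],
})

# quality = star rating / price, then difference from the search's anchor
offers["quality"] = offers["star_rating"] / offers["price_usd"]
anchor_q = offers.groupby("srch_id")["quality"].transform("first")
offers["quality_diff"] = offers["quality"] - anchor_q
print(offers["quality_diff"].tolist())
```

The signed `quality_diff` feeds the conditional logit check, while the unsigned Minkowski distance keeps the anchoring variable itself free of this quality direction.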

Error cost

Measures for error cost will now be discussed. Error cost can be seen as the importance of a decision: when the stakes are higher, making the wrong choice is worse than for an inconsequential decision. Hence, the error cost of an important decision is higher, and a measure of search importance is needed to capture error cost. Though estimating the importance of any given search is highly dependent on the individual placing the search, some generalizations can be made. To start, the length of the stay searched for seems a logical estimator of error cost. For example, if the error cost of a given search for one day is X, the error cost for the same individual making the same search for two days would approximate 2X. Though it is unlikely that this effect is linear, we can assume that the longer an individual plans to stay somewhere, the more ‘risk’ is involved and the higher the error cost.

Another factor influencing search importance in a predictable manner is the number of individuals included in the search. When a search for two people is made, the


Additionally, T. Takahashi (2007) found evidence that people tend to place higher importance on decisions made for others than on decisions made for themselves. Hence, the error cost of a search including multiple people is likely higher than that of a search for a single individual. We calculate the number of people included in a search as the number of children plus the number of adults.

Combining these two insights yields a generalizable importance score that should reflect error cost. It is computed by summing the number of days and the number of persons included in a search, and then transforming this sum with the natural logarithm. The log transformation accounts for the diminishing increase in importance of each marginal unit. Simply put, for a two-week search, one additional day does not make the search considerably more important, whereas an additional day added to a one-day search does.

Control variables

Besides offer-level measures such as price, star rating, location score, and similarity to the first option, our data contains individual-level information as well. While information about individuals is anonymized or missing for most of the data, a lot of search data is available: length of the search, days from booking, adult count, children count, room count, and whether the search includes a Saturday. All these variables indicate what type of hotel an individual is looking for, though preferences for hotel attributes differ per person and by traveler type (Yang, Mao, & Tang, 2018; Garcia, Rossel, & Coenders, 2011). We partially control for this in two ways. First, we assume that individual preferences are partially reflected in search characteristics (Kashyap & Bojanic, 2000); for example, individual x is unlikely to be a business traveler if the search includes two kids. Second, the main method of analysis in this paper is hierarchical at the individual level, meaning we allow hotel attribute preferences to differ per individual. One piece of individual data we do know is the domain from which the customer is searching. Research indicates that culture affects hotel attribute preferences and subsequent evaluation (Bodet, Anaba &


Model selection

“All choices mean that one alternative is selected over another” (Rittenberg & Tregarthen, 1996). Our data represents the choices of nearly 400.000 individuals choosing between alternative offers. Maintaining this discrete choice conditional on the seen alternatives is a valuable part of the data: we learn as much from what a consumer does not choose as from the actual choice. This conditionality also represents the context in which a choice is made, which is an important part of modelling choice (Prelec, Wernerfelt, & Zettelmeyer, 1997). Hence, our model is conditional, both to increase the amount of usable data and to improve its performance.

We model this by assuming that the choices in our data are made by a utility-maximizing consumer (McFadden, 1973). We then derive the unobserved utility U_ij from choice y_ij, where i indexes the choice situation and j the alternative out of 1, …, J alternatives. Utility is modelled as the sum of a deterministic and a stochastic part:

U_ij = V_ij + ε_ij
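If ε_ij is assumed i.i.d. type-I extreme value, as in McFadden's conditional logit, the probability of choosing alternative j becomes the softmax of the deterministic utilities. A minimal sketch (the thesis itself uses R; the utilities below are made up):

```python
import math

def choice_probabilities(V):
    """Conditional logit: P(j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(V)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in V]
    total = sum(exps)
    return [e / total for e in exps]

# Three alternatives with hypothetical deterministic utilities V_ij:
probs = choice_probabilities([1.0, 0.5, 0.0])
```

The probabilities sum to one and are ordered by utility, which is the sense in which the model "learns as much from what a consumer does not choose" as from the choice itself.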

In marketing, there have been many applications of conditional choice analysis; the field of conjoint analysis is based on conditional choice (Louviere & Hout, 1988). Numerous other uses can be distinguished based on the work of McFadden (1973) or Arcidiacono and Miller (2011). Estimating a model with a conditional binomial dependent variable is classically done using a conditional probit or logit regression. However, our data poses some problems for regular conditional logit models. First of all, the amount of


Bayesian inference

Fundamental to the marketing perspective is the notion that every customer is different: decisions are made at the individual, or disaggregate, level. Understanding and reacting to the behaviour of individuals is therefore paramount in determining optimal marketing actions (Rossi & Allenby, 2003). To gain this insight into our data, the required unit of analysis is the individual. Note that it is possible to obtain aggregate results from these individual estimates by integrating over the distribution of heterogeneity. This is one of the reasons why we use a Bayesian method as the main means of analysis in this paper: it allows for an individual level of analysis that can easily be aggregated to make predictions for the sample as a whole, or for smaller, previously unknown subgroups. An easily understandable description of the benefits of a hierarchical Bayesian model is given by Rossi and Allenby (2003, p. 304):

“Bayesian hierarchical models offer tremendous flexibility and modularity and are particularly useful for marketing problems…”

Another reason for using a Bayesian method has to do with the relative sparsity of the dependent variable, a booking. Though we have a large pool of data, every individual makes at most one booking. This means only 2.78% of our data corresponds to a choice, with a maximum of one choice per individual. Here, hierarchical Bayes has the ability to produce reasonable estimates even with scarce data.

Bayesian statistics has become more popular over the last few years, though the theory has been around since 1763 (Bayes, 1763). The computational difficulty of obtaining the posterior distribution long prevented widespread use of Bayesian statistics in practice. This changed with Markov Chain Monte Carlo (MCMC), a class of algorithms that simulates direct draws from complex distributions, thereby estimating the likelihood by simulation instead of calculating it directly. The combination of MCMC methods and increased computing power for simulations is making Bayesian statistics increasingly accessible.


For more technical books, I would recommend Bayesian Data Analysis (Gelman et al., 1995) and Doing Bayesian Data Analysis (Kruschke, 2014).

To estimate the hierarchical model betas needed for the individual unobserved utilities, we implement a hybrid Gibbs sampler with a random-walk Metropolis step for the coefficients of each individual, as created by Ryan Sermas and John V. Colias (2012). The inclusion of the extra Metropolis step is especially useful for our data, since we only have one choice per individual: this step simulates individual-level draws based on the overall draw, which decreases individual-level overfitting.

To explain more clearly what the MCMC algorithm does, imagine 10 individuals, each facing J alternatives out of which they make a choice: either make a booking or do not. Our chain draws a beta from an uninformative prior distribution and evaluates how well this draw explains the choices made by our 10 individuals. The chain then takes this same draw on the so-called “random walk” and tests how well a draw similar to the first one explains the likelihood of individual 1 choosing alternative i out of the other j alternatives; this is also done for the other 9 individuals. This yields a posterior distribution at both the individual and the group level. The prior is then updated using this posterior distribution for both levels, and the process repeats itself 49.999 times. After some time, the chain reaches convergence and remains around the same position. From that point on, we record the draws and likelihoods to estimate the posterior probability distribution. Gelman and Hill (2007) call this a “random walk in probability space”: by randomly walking through this space, we see which distribution fits our data best.
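The random-walk Metropolis idea described above can be sketched on a toy target. This is not the thesis's hybrid Gibbs sampler (which is written in R and operates on the hierarchical model); it is a minimal one-parameter illustration, with the target density and all tuning values made up:

```python
import math
import random

def rw_metropolis(log_post, start, step, n_draws, seed=1):
    """Minimal random-walk Metropolis sketch: propose a draw near the
    current one and accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    draws = []
    current = start
    current_lp = log_post(current)
    for _ in range(n_draws):
        proposal = current + rng.gauss(0.0, step)
        proposal_lp = log_post(proposal)
        # Accept a better proposal always; a worse one with the ratio
        # of posterior densities as acceptance probability.
        if math.log(rng.random()) < proposal_lp - current_lp:
            current, current_lp = proposal, proposal_lp
        draws.append(current)
    return draws

# Toy target: a standard normal log-density (up to a constant).
draws = rw_metropolis(lambda b: -0.5 * b * b, start=5.0, step=1.0, n_draws=5000)
kept = draws[2000:]  # record only after a burn-in period, as in the text
posterior_mean = sum(kept) / len(kept)
```

Even when started far from the target (at 5.0 here), the chain converges and the retained draws describe the posterior, mirroring the burn-in-then-record procedure described above.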

To further increase the model's ability to explain the variance found among individuals, we added a linear interaction of every search-level characteristic on the means of the overall distribution of heterogeneity, which are recorded separately (Sermas & Colias, 2015; Galán, Veiga, & Wiper, 2014). Given that the Bayesian model combines individual-level estimates on the distribution of heterogeneity, we record whether certain search level


characteristics correspond to a higher average utility. Hence, in the previous example, a marginal increase of search importance leads to a higher utility for star rating; simply put, the more important the search, the more important star rating becomes for the chosen alternative.

Speeding convergence

As mentioned earlier, a downside of Bayesian statistics is its computational difficulty, and the chain used here is no exception: it estimates over 30 parameters (5 for star rating alone, plus 8 coefficients for the search indicators, plus hyperparameters) on nearly 10 million rows of data, simulated 50.000 times for both the main chains and the extra Metropolis-step chains. Most computers cannot compute this, let alone within a reasonable amount of time. Hence, we need to simplify the base model.

Instead of using the complete dataset, a random subsection of the data is used. To this end, we let R pick 20.000 search ids at random and only use the choice data corresponding to these specific search ids. With an average of 24 alternatives seen, this brings our sample to roughly half a million hotel options.

To reduce the number of parameters to be estimated, star rating is modelled as a linear variable, with a strict prior bounding the draws to a range of 1 to 5 stars. A similar method is used by Pang and Lee (2005). To verify whether this linear assumption holds within our data, we plot the odds ratio per star rating on booking choice, estimated using a logit regression (Figure 2).


Logit model

The main method of analysis in this paper is the Bayesian method discussed above. However, to compare its results to more commonly used methods, both a conditional and a regular logit regression are used. These logit methods allow for a binary dependent variable, which is necessary given that choice in our data is binary: either the booking is made or it is not. Neural networks, support vector machines, decision trees, and propensity score matching were considered as well, but were found to be too much of a black box, to fit the data structure poorly, or to be hard to compare to the hierarchical Bayesian model. Another advantage of the logit models as a comparison is their mathematical similarity to the Bayesian model: both are based on the underlying utility of an alternative, from which conclusions can be drawn. The standard logit does this per single alternative, the conditional logit model (McFadden, 1973) does so conditional on the other seen alternatives, and the hierarchical Bayes model does so based on individual preferences. Each of these models extracts more information from the data, yet all belong to the same ‘family’ of analysis. The logit-based methods are used as a comparison to validate the results of the hierarchical Bayesian model.

Results

Model robustness

The MCMC methods used in this paper are not as commonly used or straightforward as alternatives such as logit regression. Hence, we first test whether the combination of algorithms works. As described by Gelman and Hill (2007), the algorithm is first tested on simulated data. Simulating the data ourselves means that the “true” values of all parameters can be set in advance; the outcome of the model can then be compared to these “true” parameter values, giving a reliable indication of whether the model works. To create this fake data, we used a function written by J. Colias (2011). The length and values of the fake data are set to be relatively similar to the actual data. The results of predicting the simulated


Since the walks are recorded only after 20.000 iterations, our draws should be consistent with the underlying parameters.

To further validate the results from the Markov chains, two other models are estimated on the same data sample: the logit model and the conditional logit model. Before any interpretation, we test for multicollinearity using the VIF score; usually, a score of 4 or higher indicates multicollinearity. The highest VIF score in our data belongs to price, at 1.51. Therefore, we can assume there is no multicollinearity in our data.
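The VIF logic can be illustrated with a toy computation (the data below is made up; the thesis presumably used a standard VIF routine in R). With only two predictors, the R² from regressing one on the other equals their squared correlation, so the VIF reduces to a simple ratio:

```python
def vif_two_predictors(x1, x2):
    """Variance inflation factor for one of two predictors:
    VIF = 1 / (1 - R^2). With only two predictors, the R^2 of
    regressing one on the other equals their squared correlation."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    v1 = sum((a - m1) ** 2 for a in x1)
    v2 = sum((b - m2) ** 2 for b in x2)
    r2 = cov * cov / (v1 * v2)
    return 1.0 / (1.0 - r2)

# Weakly related predictors (made-up values) give a VIF near 1;
# a score above roughly 4 would signal problematic multicollinearity.
vif = vif_two_predictors([1, 2, 3, 4, 5], [3, 1, 4, 1, 5])
```

A VIF of 1 means a predictor is uncorrelated with the others; the reported maximum of 1.51 is therefore comfortably below the usual threshold of 4.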

Before we start interpreting outcomes, a brief explanation of the difference between Bayesian and frequentist statistics. The fundamental difference lies in their notions of probability: in frequentist statistics, only repeatable events have probabilities, while Bayesian probability describes uncertainty. Any statistic with a p-value falls in the category of frequentist statistics. A p-value of 0.05 is sometimes falsely interpreted as “there is a 5% chance that the null hypothesis is true”, while the correct interpretation is: “If repeated many times, using new data, and if the null hypothesis is really true, then in only 5% of those occasions would we falsely reject it.” Bayesian statistics gives us the distribution of probabilities where a beta could be; however, concepts such as significance or p-values do not directly exist within Bayesian statistics.

Logit models. Hypothesis one states that the first alternative is positively related to the chosen alternative. The coefficients of the linear logit model suggest that Minkowski distance has a small positive effect on booking chance: looking at Table 7, every marginal increase in Minkowski distance increases the odds ratio of a booking by 1.0004, ceteris paribus, at a significance level of p < 0.001. This is the opposite of the hypothesized direction and thus grants no support: if the hypothesis were true, we would expect chosen alternatives to be more similar to the first alternative, not less similar, as a higher Minkowski distance implies. However, in the conditional logit model, Minkowski distance is not significant. This difference will be examined further in the discussion. Since these results are both hard to interpret and point in different directions, we now turn to our hierarchical Bayesian model.

Hierarchical Bayesian model. As explained earlier, the Bayesian model gives us the


analysis, the results are simple to understand. A utility of zero means people are indifferent towards an attribute with regard to booking; a positive utility means people view the attribute as positive when making a booking, and a negative utility the opposite. To gain insight into the utility of hotel attributes, the density plot of the expected values of every individual in our sample, combined via the distribution of heterogeneity, has been plotted in Figure 3. The outcomes represent the posterior probability distribution of the expected utility increase per marginal increase of the dependent variable. Note that the utilities for brand and promotion have been matched to the used data, which mitigates the need for separate utility distributions for having and not having a brand.

(Figure 3)

Multiplying the utility by the observed attribute levels gives the total effect of the attribute on the utility per offer, or simply the importance and direction of the attribute; this is plotted in Figure 4. Using these estimates of probability space, predictions or descriptions can be made anywhere in this space: about individuals, groups, or the sample as a whole. For example, without any further analysis, we can estimate the willingness to pay for star rating by dividing the utility of star rating by the utility of price (Yu, Jamasb, & Pollitt, 2009). This gives: an average individual is most probably willing to pay $23.35 more for a hotel with a star rating one star higher, ceteris paribus.
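The willingness-to-pay calculation is just the ratio of two utilities. A sketch, where the posterior modes are hypothetical values chosen only to reproduce the reported figure:

```python
# Hypothetical posterior modes, chosen purely to illustrate the ratio;
# they are NOT the thesis's estimates, only the reported WTP of $23.35
# per extra star is from the text.
utility_star = 0.467     # assumed utility per additional star
utility_price = -0.020   # assumed utility per additional dollar

# Price utility is negative, so WTP is the negated (positive) ratio.
wtp_per_star = -utility_star / utility_price
```

Because the price coefficient enters negatively, the ratio is negated so that willingness to pay comes out as a positive dollar amount.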

(Figure 4)

We will now interpret the outcomes of the hierarchical Bayesian model to test whether the stated hypotheses hold within the data. The first hypothesis relates to the anchoring effect: it states that the first alternative should be positively related to the chosen alternative. We would therefore expect a negative utility of Minkowski distance, meaning individuals see an offer more similar to the first alternative as a positive thing and are therefore more likely to pick it.

Looking at Figure 3, the posterior probability distribution of the utility of Minkowski distance skews towards negative. With 95% probability, the utility lies between -0.168 and 0.0803; the most probable value is -0.0489. The probability of hypothesis one being true is 73.3%, which we do not consider sufficient evidence. Hence, even though it is probable that the anchoring effect is at play here, there is too much unexplained variance to support hypothesis one.
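In this Bayesian setting, the probability that the hypothesis holds is simply the share of posterior draws on the hypothesized (negative) side of zero. A sketch with made-up draws, not the thesis's recorded chain:

```python
def prob_negative(draws):
    """Posterior probability of a negative utility: the fraction of
    recorded MCMC draws that fall below zero."""
    return sum(1 for d in draws if d < 0) / len(draws)

# Made-up posterior draws for illustration; applied to the thesis's
# recorded chain, this calculation yields the reported 73.3%.
draws = [-0.12, -0.05, 0.03, -0.08, 0.01, -0.02, -0.09, 0.06, -0.11, 0.04]
p_negative = prob_negative(draws)
```

This is the Bayesian analogue of a one-sided test: instead of a p-value, one reports the posterior mass consistent with the hypothesis.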


Figure 5 shows the search level characteristics and their respective effects on the utility of the Minkowski distance. None of the search level characteristics result in a different mean on the distribution of heterogeneity; in other words, none of them explain variation in the utility of the Minkowski distance in a predictable manner. This includes the effect of search importance: its mean difference on the distribution of heterogeneity is only 0.001, with a standard deviation of 0.0055. Therefore, we can be fairly certain this effect is non-existent. This means the second hypothesis, the moderating effect of search importance on the relationship between the first alternative and the chosen alternative, is not supported by the data.

The nature of the Bayesian method results in a lot of information drawn from the data, not all of which is useful to this paper. For example, recording the distribution of heterogeneity for every search level characteristic creates forty-nine credibility intervals of the search characteristics interacting with the alternative characteristics. Going further, estimating willingness to pay, making purchase predictions using a probability function, or estimating how altering alternatives could influence choice can all be inferred from the several nested matrices that form the outcome of the MCMC chains. To this end, some additional results related to error cost will be discussed, but most outcomes will not be. Looking at Figure 6, we see the distributions of heterogeneity for the hotel


Discussion

Interpretation

In the following section, the results will be discussed and clarified, and compared to the theory covered in the introduction and literature review. Given that most of the results did not sufficiently support the hypotheses, some alternative explanations are proposed. We start by discussing the results relating to our main research question: why, and to what extent, consumers are affected by the anchoring effect. Here, we hypothesized that the more dissimilar an option is to the anchor, the less likely it is to be selected.

Anchoring effect. Our results indicate that the utility of Minkowski distance skews towards negative, meaning people lean towards preferring offers with a lower Minkowski distance. Since Minkowski distance is a measure of dissimilarity, people lean towards preferring alternatives more similar to the first alternative. Though this is in line with the theory behind the anchoring effect, the effect is not strong enough to be conclusive. There are multiple explanations for the gap between prior research and the results of this paper, but a notable difference is the context in which the anchoring effect is researched. Most studies on the anchoring effect were done in controlled experiments, with questions and context specifically designed to elicit the effect (Tversky & Kahneman, 1974; Mussweiler & Strack, 2001). Later experiments in more practical contexts still used an experimental design with a high and a low anchor group (Ariely, Loewenstein, & Prelec, 2003; Galinsky & Mussweiler, 2001). As opposed to these controlled studies, this paper uses a large sample of real-world choice data, in which numerous variables affect the search that cannot be controlled for. We do not know the age, gender, or income of the individual placing the search. Additionally, Expedia is likely not the only reviewed source of


There also exist variables that lessen or skew the anchoring effect in certain situations, which could have been at play in our dataset as well. The first is prior knowledge: the more an actor knows, the smaller the range of plausible options and the smaller the effect of anchoring (Strack & Mussweiler, 1997). Imagine a man looking for a hotel in Indonesia on his twentieth visit to the country; he likely knows what prices to expect and where he wants to stay. Compare this to someone looking at hotels there for the first time, not knowing what to expect; this person is far more likely to rely on the anchoring heuristic. Another situation in which anchoring effects can be skewed arises when the anchor is not placed on the first alternative. This can happen when not the first but another alternative is seen first, for a multitude of reasons, such as eye movement when opening a page, extra bright offers, or enticing pictures that immediately attract attention (Lieder, Griffiths, Huys, & Goodman, 2017).

Error cost. As stated in the results, error cost has some credible effects on the distribution of heterogeneity, resulting in different utilities for some hotel attributes. Before we discuss these results further, first a clarification on how to interpret them in Figure 6. The distribution of heterogeneity is how the results from our estimated individual distributions are combined into an aggregate distribution for the sample at large. If a certain search criterion consistently appears low on the distribution of heterogeneity for a specific attribute, then that search criterion is more likely to correspond to a more negative utility than average. Since our outcomes on the distributions of heterogeneity are mean-centered, a score of zero means evenly distributed; a positive score means the search characteristic is more commonly found in the right part of the distribution, and a negative score the opposite. Keep in mind that these effect sizes only describe credibility intervals of attribute utility distributions, and as such should be used to explain variation in the utility of an attribute, not to explain choice directly.


We would then expect a shift on the distribution of heterogeneity for every level of increase of error cost. Given that we do not observe this, the anchoring effect does not decrease with additional error cost and the accompanying cognitive effort. Put simply, the importance of a search does not affect the anchoring effect at all. This is evidence that the original adjustment interpretation of the anchoring effect (Tversky & Kahneman, 1974) does not hold; otherwise, we would expect anchoring utility to increase at least slightly given additional error cost and cognitive effort. Further, this outcome does not support the resource-rational view of the anchoring effect (Lieder & Griffiths, 2020) either, at least to the extent that additional cognitive resources do not make an individual act more rationally in overriding their own inclination to rely on the anchoring heuristic.

Another view on how people rely on heuristics is nicely captured in “the elephant and its rider” (Haidt, 2006), where, for lack of a better word, your ‘gut feeling’ determines most of the decisions you make, and your rational side, ‘the rider’, mostly rationalizes these decisions afterward. Our results suggest that this might be the case for hotel bookings too. This also suggests that confirmation bias as the mechanism behind the anchoring effect (Wegener et al., 2010; Chapman & Johnson, 1999) is a better explanation of how the anchoring effect works, at least compared to insufficient adjustment after seeing the first offer. Note that we cannot research confirmation bias with our available data, but of the alternative explanations the literature provides, this one fits our observed results best.


distribution of heterogeneity too (Figure 6). However, if we look at the distribution of promotion (Figure 3, promotion), we see that the utility for promotion is closely centered around zero; being further to the right of this distribution therefore does not make a big difference in utility and subsequent hotel booking. The effect of error cost on the distribution of heterogeneity of position is positive. Comparing this to the distribution of position (Figure 3, position), which has a negative utility, being further to the right of this distribution means having a utility for position closer to zero. Translating this to real-world choices: if a consumer compares different hotels, he or she would normally be more likely to pick one of the earlier seen alternatives; however, if the search has a high error cost, the individual makes a choice less biased towards the early positioned offers. A possible explanation for this effect is the additional cognitive effort exerted when error cost is higher (Dunn, Inzlicht, & Risko, 2017): when consumers exert more cognitive effort, they can compare more offers on their merits, at which point the value of being one of the first offers is not as great. Especially for businesses who pay to place their offers closer to the top of a page, this is a valuable insight. Error cost has a similar effect on price as on positioning, though there is a 7.5% probability that this finding does not hold, which should be taken into consideration. As with position, a marginal increase above mean error cost leads to a higher placement on the distribution of heterogeneity for price. Since price also has a negative utility (Figure 3, price), this means that searches with a higher error cost are on average less price sensitive. A possible explanation is that consumers tend to perceive higher-priced products as being of higher quality (Wheatley, Walton, & Chiu, 1977); in the case of high error cost, consumers could be more likely to pick an offer with higher perceived quality to reduce uncertainty, and are therefore more likely to pick a more expensive offer.

Conclusion and Implications


we found that the anchoring heuristic was not sufficiently significant as a predictor of choice. However, the direction of the effect does hint at anchoring as a possible effect, which will be discussed further in the future research section.

Another effect researched in this paper is error cost. Beyond theoretical contributions, some actionable insights were found as well, the main one being that higher error cost correlates with a utility for positioning closer to zero. Simply put, more important searches tend to be less sensitive to the positioning of an offer. Hotels that offer bookings with a high error cost, a long family holiday for example, should be aware that customers behind such searches are less sensitive to positioning, decreasing the value of a higher position. This is an important finding, because many companies put significant effort and funds into getting their offers ranked higher on various platforms. On Google, companies pay a lot of money to get their websites in the first position, and the practice of SEO is built around optimizing a webpage to get an offer as high as possible. However, we found that with increased error cost, positioning decreases in importance. This suggests that, depending on the importance of an offer, resources committed to SEO face diminishing returns on investment; companies should therefore adjust their SEO investments to the error cost of their offers. For example, hotels that focus on long family holidays, with correspondingly high error cost, do not benefit as much from being first on an Expedia landing page.

Some website morphing techniques use Bayesian updating to infer cognitive styles and adjust the basic look and feel of a website accordingly (Hauser, Urban, Liberali, & Braun, 2009). If individual-level data were available, the Bayesian methods used in this paper could, with some adaptation, be applied to website morphing as well: instead of updating utility distributions for attributes on a large data frame, the same Bayesian updates could be looped using a multi-armed bandit approach (Gittins, Glazebrook, & Weber, 2011). The predicted utilities could also be used and optimized to apply dynamic pricing (Faruqui, Hledik, & Tsoukalis, 2009): by identifying individual utilities for certain attributes and for price, an optimized price range could be estimated per individual. Another


and consumers alike. Though potentially mutually advantageous, morphing website offerings based on individual data might give rise to privacy concerns. Research suggests that companies who ignore privacy concerns tend to receive lower customer trust (Bart, Shankar, Sultan, & Urban, 2005). Whether the benefits of gathering and analysing data at this individual level are worth it is something OTAs should take into consideration.

Limitations and Future Research

This research and the data underpinning it come with some limitations; both these limitations and opportunities for future research are discussed in this section. The main limitation of the Expedia data used for this paper is the lack of information about the individuals placing the searches. The usage of large amounts of personal data is a hotly debated subject, and not sharing any personal information and anonymizing countries is understandable given regulations and public opinion. However, especially in marketing, where the unit of analysis is preferably the individual, the lack of individual data is keenly felt. The absence of IP-address tags makes it impossible to follow individuals over multiple searches, which means individual preferences have to be estimated from very little data. To this end, we used methods especially suited for this situation: combining conditional estimates with Bayesian sampling methods, an extra Metropolis step, and the other discussed measures to increase individual-level information and reduce overfitting. Applying this ‘squeezing water from a stone’ approach to a dataset of nearly 10.000.000 offers allows us to partially account for the lack of data. However, when using this method to predict at the individual level, either model accuracy or precision has to be sacrificed to prevent overfitting. Access to the IP-address tags that connect searches to individuals would allow a relaxation of some anti-overfitting measures and subsequently improved individual-level predictions.


Though this paper was unable to prove the anchoring heuristic as a predictor of choice, the results and the direction of the effect do hint at some anchoring effect influencing decisions. To get a more definitive answer on whether the anchoring heuristic has predictive validity, a simple A/B test could prove a useful tool. This can be done by showing part of all visitors a first offer that is relatively dissimilar to the rest of the alternatives. If the confirmatory search theory behind the anchoring effect holds true, as our results suggest, then consumers in this group should model their choice after the first alternative.

The moderating role of error cost on the effect of positioning on choice could well be present in other contexts. Current theory focuses mostly on optimal bidding strategies for being shown first (Rusmevichientong & Williamson, 2006) or on improving SEO, while the underlying cognitive biases in the context of decision support systems and search engines remain largely unstudied. If further research finds that error cost affects the importance of positioning beyond hotel choice, that would be an important finding. For example, paying for a higher position on a platform with intrinsically high-error-cost products, such as Funda (a housing platform), might not be justified once error cost is taken into account.


References

Aggarwal, C. C., Hinneburg, A., & Keim, D. A. (2001). On the surprising behavior of distance metrics in high dimensional space. Database Theory, 420-434.

Alexa. (2019). Traffic statistics. Retrieved from https://www.alexa.com/siteinfo/expedia.com

Aljukhadar, M., Senecal, S., & Daoust, C. (2012). Using recommendation agents to cope with information overload. International Journal of Electronic Commerce, 17(2), 41-70.

Arcidiacono, P., & Miller, R. (2011). Conditional choice probability estimation of dynamic discrete choice models with unobserved heterogeneity. The Review of Economic Studies, 497-529.

Ariely, D., Loewenstein, G., & Prelec, D. (2003). "Coherent arbitrariness": Stable demand curves without stable preferences. The Quarterly Journal of Economics, 73-105.

Bakos, J. (1997). Reducing buyer search costs: Implications for electronic marketplaces. Management Science, 43, 1676-1692.

Bart, Y., Shankar, V., Sultan, F., & Urban, G. (2005). Are the drivers and role of online trust the same for all web sites and consumers? A large-scale exploratory empirical study. Journal of Marketing, 133-152.

Bayes, T. (1763). An Essay towards solving a Problem in the Doctrine of Chances. Philosophical Transactions, 370-418.

Berkowitsch, N., Scheibehenne, B., & Rieskamp, J. (2014). Rigorously testing multialternative decision field theory against random utility models. Journal of Experimental Psychology, 143(3), 1331-1348.

Bodet, G., Anaba, V., & Bouchet, P. (2017). Hotel Attributes and Consumer Satisfaction: A Cross-country and Cross-Hotel Study. Journal of Travel & Tourism Marketing, 52-69.

Burnham, K., & Anderson, D. (2004). Multimodel Inference: Understanding AIC and BIC in Model Selection. Sociological Methods & Research, 261-304.

Chapman, G., & Johnson, E. (1994). Journal of Behavioral Decision Making, 223-242.

Chapman, G., & Johnson, E. (1999). Anchoring, activation, and the construction of values. Organizational Behavior and Human Decision Processes, 1-39.

Cho, I., Wesslen, R., Karduni, A., Dou, W., Santhanam, S., & Shaikh, S. (2017). The anchoring effect in decision-making with visual analytics. University of North Carolina University Press, 52-63.

Coleman, J., & Fararo, T. (1992). Rational Choice Theory. London: Sage Publications.

Cutrell, E., & Guan, Z. (2007). What are you looking for? An eye-tracking study of information usage in web search. Computer Human Interaction, 51-62.

Dunn, T., Inzlicht, M., & Risko, E. (2017). Anticipating cognitive effort: Roles of perceived error-likelihood and time demands. Psychological Research, 74-98.

Faruqui, A., Hledik, R., & Tsoukalis, J. (2009). The power of dynamic pricing. The Electricity Journal, 22, 42-56.

Fishburn, P. (1968). Utility theory. Management Science, 14(5), 335-378.

Furnham, A., & Boo, H. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 35-42.

Galán, J., Veiga, H., & Wiper, M. (2014). Bayesian Estimation of Inefficiency Heterogeneity in Stochastic Frontier Models. Journal of Productivity Analysis, 105-126.

Gale, B., & Wood, R. (1994). Managing Customer Value: Creating Quality and Service That Customers Can See. Simon & Schuster.

Galinsky, A., & Mussweiler, T. (2001). First offers as anchors: The role of perspective-taking and negotiator focus. Journal of Personality and Social Psychology, 657-669.

Garcia, E., Rossel, B., & Coenders, G. (2011). Profile of business and leisure travelers on low cost carriers in Europe. Journal of Air Transport Management. doi:10.1016/j.jairtraman.2011.09.002

Gardial, S. F., Clemons, D., Woodruff, R., Schumann, D., & Burns, M. (1994). Comparing consumers' recall of prepurchase and postpurchase evaluations. Journal of Consumer Research, 548-560.

Gelman, A., & Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D., Vehtari, A., & Rubin, D. B. (1995). Bayesian Data Analysis.

Gerdelsenberg, J. (2019, November 29). Information for "Manhattan-Distance". Retrieved from Chess Programming Wiki: https://www.chessprogramming.org/Manhattan-Distance

Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 20-30.

Haidt, J. (2006). The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom. Ingram Publisher Services US.

Haselton, M., & Buss, D. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology 78(1), 81-91.

Haselton, M., & Nettle, D. (2006). The paranoid optimist: An integrative evolutionary model of cognitive biases. Journal of Personality and Social Psychology 10(1), 47-66.

Hauser, J., Urban, G., Liberali, G., & Braun, M. (2009). Website morphing. Marketing Science, 28(2), 202-223.

ABTA. (2018). Holiday Habits Report. London: ABTA.

Gittins, J., Glazebrook, K., & Weber, R. (2011). Multi-Armed Bandit Allocation Indices. Chichester: John Wiley & Sons Ltd.

JCF, R., Garcia, J., & Tena, M. (2006). Customer perceived value in banking services. International Journal of Bank Marketing, 266-283.

Kahn, B. (2017). Using Visual Design to Improve Customer Perceptions of Online Assortments. Journal of Retailing, 93(1), 29-42.

Kashyap, R., & Bojanic, D. (2000). A structural analysis of value, quality, and price perceptions of business and leisure travelers. Journal of Travel Research, 39, 45-51.

Kruschke, J. (2014). Doing Bayesian Data Analysis. Academic Press.

Law, R., & Leung, R. (2000). A study of airlines' online reservation services on the Internet. Journal of Travel Research, 39(2), 202-211.

Lieder, F., & Griffiths, T. (2020). A resource-rational analysis of human planning. Behavioral and Brain Sciences 43, 1-60.

Lieder, F., Griffiths, T., Huys, Q., & Goodman, N. (2017). Empirical evidence for resource-rational anchoring and adjustment. Psychonomic Bulletin & Review volume 25, 775-784.

Lien, C., Wen, M., Huang, L., & Wu, K. (2015). Online hotel booking: The effects of brand image, price, trust and value on purchase intentions. Asia Pacific Management Review, 20, 210-218.

Lin, C.-H., Sher, P., & Shih, H.-Y. (2005). Past progress and future directions in conceptualizing customer perceived value. International Journal of Service Industry Management, 318-336.

Louviere, L., & Hout, M. (1988). Analyzing Decision Making: Metric Conjoint Analysis. Newbury Park: SAGE Publications.

Magalhães, P., White, K., Stewart, T., Beeby, E., & van Vliet, W. (2012). Suboptimal choice in nonhuman animals: Rats commit the sunk cost error. Learning and Behavior, 40, 195-206.

McFadden, D. (1973). Conditional logit analysis of qualitative choice behavior. University of California at Berkeley, 105-143.

Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 335-341.

Minkowski, H. (1907). Diophantische Approximationen: Eine Einführung in die Zahlentheorie. Leipzig: Physica-Verlag Würzburg.

Mussweiler, T., & Englich, B. (2005). Subliminal anchoring: judgmental consequences and underlying mechanisms. Organizational Behavior and Human Decision Processes, 98, 133-143.

Nasomyont, T., & Wisitpongphan, N. (2014). A study on the relationship between search engine optimization factors and rank on Google search result page. Advanced Materials Research, 1462-1466.
