
Graduate School of Communication
Master's Programme Communication Science

Master’s Thesis

The Influence of Recommendation Type on Attitudes towards Recommendation and Entertainment Content: The Case of Online Film Recommendations

Daria Kanter, 11578793
Supervisor: dr. K.M. (Karin) Fikkers


Abstract

Online video streaming platforms (e.g., Netflix) include content recommendations in their interfaces in order to help users navigate the variety of options and attract consumers. Previous literature has not compared various recommendation types for the case of entertainment content; it has also failed to investigate which recommendation type is the most persuasive when it comes to users' willingness to pay and perceived recommendation quality. In an online between-subjects experiment with 230 Russian adults, three types of film recommendations (personalized, social-based, expert-based) were compared in their influence on content-related and recommendation-related attitudes and behavior intentions. The results showed that the expert-based recommendation type led to increased perceived recommendation intelligence compared to the social-based and personalized recommendation types (but no differences in the other outcome variables). Therefore, entertainment platforms may benefit from using experts (e.g., film critics) as a source of recommendations. Also, participants with a strong preference stability belief (the belief that a person has distinct, stable, identifiable film preferences) exhibited decreased levels of content preference judgement; decision to follow the recommendation and watch the suggested films; and perceived recommendation accuracy and intelligence. The research expands the list of observed preference stability belief effects and shows that they apply to all examined recommendation types.


The Influence of Recommendation Type on Attitudes towards Recommendation and Entertainment Content: The Case of Online Film Recommendations

The modern digital environment is characterized by an increasing amount of information available for a human to process (Syuntyurenko, 2015). This includes entertainment media: the number of music tracks, books, and TV shows available online grows constantly, making consumer choices harder (Cosley, Lam, Albert, Konstan, & Riedl, 2003). To assist individuals with navigating numerous content options and to make themselves more attractive, digital entertainment platforms (e.g., Netflix, Hulu) implement online recommendations that provide item suggestions potentially interesting to users. Research shows that companies financially benefit from recommending items to consumers: recommendations increase sales, make people more tolerant of higher prices for goods and services (Pathak, Garfinkel, Gopal, Venkatesan, & Yin, 2010), and contribute to unplanned purchases (Hostler, Yoon, Guo, Guimaraes, & Forgionne, 2011).

Online recommendations come in various forms. Most combine a user's individual input (e.g., star ratings given to previously consumed content items) and various background data (e.g., other users' preferences) to generate relevant suggestions (Ochi, Rao, Takayama, & Nass, 2010). Overall, the basis for recommendations can be narrowed down to three sources: data on the user's individual characteristics, including previous content preferences; data on other people's preferences and characteristics; and expert opinions (Ansari, Essegaier, & Kohli, 2000). These categories are the building blocks behind various kinds of (entertainment) recommendations.

Previous research comparing all three basic recommendation types (RTs) only studied the case of material goods (Senecal & Nantel, 2004), not entertainment content. Also, for any type of recommended item, it is still unclear which RT is more influential in terms of increasing willingness to pay and perceived recommendation quality (i.e., intelligence, accuracy). Therefore, key knowledge about recommendations' persuasive capability is still missing. On the practical side, understanding the outcomes of various RTs can help in developing more attractive and financially successful interfaces for digital content platforms.

In this paper, the impact of personalized, social-based, and expert-based film recommendations on a range of outcomes is compared. Specifically, this study investigates the recommendations' effects on people's content preference judgement (an individual's subjective film perception), their willingness to pay for watching a film, their perception of recommendation intelligence (the degree to which a user feels the recommendation is intelligent, accurate, competent, useful and trustworthy – Ochi et al., 2010) and recommendation accuracy ("the extent to which a customer perceives that system recommendations closely meet his or her own needs, interests, and preferences" – Shen & Ball, 2011, p. 72), as well as on users' decision to follow the recommendation and watch a suggested film (the likelihood of picking it for further use). Therefore, the following principal research question is posed: To what extent do personalized, social-based, and expert-based types of film recommendations influence users' content-related and recommendation-related attitudes and behavior intentions?

Recommendation effects may vary between individuals. Users with well-defined, well-realized preferences and those who are less sure about their own likes or dislikes may respond better to different content recommendations (Shen, 2014). For example, individuals with stable preferences may want recommendations to be perfectly aligned with their interests. The so-called preference stability belief (PSB), or a "consumer's belief that he or she has context-free preferences for marketers to learn and usefully apply to personalized recommendations" (Shen & Ball, 2011, p. 72), can moderate the relationship between recommendation type and the mentioned recommendation outcomes. Previously, the concept of preference stability belief was only studied in the context of the personalized RT, and its already observed outcomes are limited. Online content providers may also benefit from a better understanding of the potential differences between audiences with higher and lower PSB. Examining the role of PSB is the second key goal of this study.

The discussed relationships are investigated in a sample of Russian adults. This population may be of particular interest, since the recent growth of paid online video content in Russia (European Audiovisual Observatory, 2017) does not seem to match the longstanding prevalence of piracy and peer-to-peer file sharing in the country (Karaganis et al., 2011). Russian users' responses to film recommendations, which are commonly associated with paid video streaming services, can shed light on this issue and provide consumer insights for locally operating digital entertainment platforms.

Recommendation Typology: Personalized, Social, and Expert-based

One way to categorize the wide variety of modern online recommendations is by the data source they are based on. In total, there are three main information sources for online recommendations: the user him/herself, other users, and experts (Ansari et al., 2000). Each source constitutes a certain recommendation type (RT). The personalized RT is based on information about the particular user (e.g., content likes and dislikes); the social-based RT is based on aggregate, non-personalized information about other users' preferences; and the expert-based RT is based on the opinion and judgement of human experts. In this study, the persuasive power of these categories will be compared in the context of film recommendations.

Real-life recommendations often combine more than one of the proposed RTs (Ochi et al., 2010). For example, so-called collaborative filtering techniques analyze an individual's preferences to mimic the choices of people with similar taste (Park, Kim, Choi, & Kim, 2012). When a user gives a high rating to a film, the recommendation suggests titles that other users who rated this film highly also liked. Therefore, this way of recommending uses both data from the particular person and social data from other users. Despite this, the three basic RTs will be studied separately to compare the different information processing schemes behind them.
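The collaborative-filtering logic described here can be made concrete with a minimal sketch. The rating matrix, film and user names, and the cosine-similarity choice below are hypothetical illustrations of the "users with similar taste" mechanism, not the algorithm of any platform discussed in this thesis.

```python
import numpy as np
import pandas as pd

# Hypothetical user x film rating matrix (rows = users, columns = films).
ratings = pd.DataFrame(
    [[5, 4, 0, 1],
     [4, 5, 0, 2],
     [1, 0, 5, 4],
     [0, 2, 4, 5]],
    index=["user_a", "user_b", "user_c", "user_d"],
    columns=["film_1", "film_2", "film_3", "film_4"],
)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two films' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def films_liked_by_similar_users(liked_film: str, k: int = 2) -> list:
    """Return the k films whose rating pattern most resembles the liked film."""
    target = ratings[liked_film].to_numpy()
    scores = {
        other: cosine_similarity(target, ratings[other].to_numpy())
        for other in ratings.columns
        if other != liked_film
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A user who rated film_1 highly is shown titles that users with a similar
# rating pattern also rated highly.
print(films_liked_by_similar_users("film_1"))  # ['film_2', 'film_4']
```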

Theoretical Mechanisms Behind the Influence of the Recommendation Types

Since each of the RTs relies on distinct information sources, different theoretical schemes may explain their persuasive capacity. In general, when individuals encounter information in the form of any RT, they may feel more positive about the suggested content and the recommendation itself due to the execution of processing patterns called heuristics. Heuristics refer to "a mental generalization of knowledge based on previous experience that provides a shortcut in processing information" (Fiske & Taylor, 1984, as cited in Sundar et al., 2013, p. 2). Sundar (2008) explains that in situations of uncertainty (lack of information, or lack of assurance in information quality) commonly encountered in the digital environment, technology users tend to rely on so-called nominal cues attached to the information (e.g., message source, medium credibility) to assess it. Such cues trigger heuristics that enable easier and faster processing. That can improve the perceived credibility of the message and, in turn, lead to more positive attitude, perception, and behavior intention outcomes.

Different cues can be identified and grouped into categories based on the four affordances of digital media (modality, agency, interactivity, navigability) that constitute the MAIN model of media-related psychological effects (Sundar, 2008). Of these four affordances, agency is the most relevant in the context of the discussed recommendation types, because it refers to the perceived message source (e.g., other people, website/computer, authority/expert). Agency-related heuristic mechanisms are therefore most helpful in explaining possible differences in processing personalized, social-based, and expert-based RTs. Sundar (2008) lists six heuristics linked to the agency affordance: identity, machine, bandwagon, helper, authority, and social presence.


The potential persuasive power of the personalized RT may rely on both the machine and identity heuristics. The machine heuristic is activated when users think that the information they are exposed to comes from impartial, non-biased algorithms or systems (as opposed to humans) and is therefore trustworthy. The identity heuristic occurs when individuals get to express themselves via the medium (e.g., by providing information about their own film preferences) and thereby co-create the message. In this situation, a message (i.e., a film recommendation) appears more credible as it seems to reflect the user's sense of self (Sundar, 2008). The personalized RT combines both processes (it employs information from the user to generate automatic recommendations through a special computer algorithm), so it can successfully influence users' attitudes and behavior intentions.

The possible mechanism behind the social-based RT is the so-called bandwagon heuristic, or the tendency to follow the opinion of the majority. Individuals have an internalized need to conform to social rules ("blend in"), and therefore tend to feel that popular things are more credible (Sundar, 2008). Social-based film recommendations provide information about which films are liked by others, so users may automatically assume that the suggested films are good, as well as the recommendation itself.

Last, the expert-based RT may be persuasive due to the sense of authority endorsement that an individual gets when exposed to such a recommendation. The authority heuristic, or the susceptibility to an expert opinion, is activated when the user identifies an expert or official figure as the message author (Sundar, 2008). In the case of film recommendations, film critics' positive evaluation of a movie may lead users to think about it positively as well.

The MAIN model provides comprehensive insight into the ways each of the three RTs can be processed, and explains why they may lead to increased content preference judgment, willingness to pay, decision to follow the recommendation, and perceived recommendation intelligence and accuracy. However, it does not indicate whether any of the RTs can be more persuasive than the others, so the MAIN model alone is not sufficient to hypothesize about this. In the next section, empirical evidence is reviewed to develop the research hypotheses.

Previous Studies on Recommendation Types’ Comparisons

Some previous studies on digital recommendations examined the effects of one particular RT, while others compared the impact of different RTs. The following review briefly mentions single-RT articles and focuses on comparative studies. The aim was to include all articles that examined online recommendations and in which the RT could be identified. Studies that did not claim to study recommendations explicitly but evaluated the influence of aggregated social or expert data (e.g., Zhu & Huberman, 2014) were also discussed.1

Content-related outcomes (content preference judgment and willingness to pay).

The concept of content preference judgement, or preference rating given by the user, was scrutinized in several recommendation studies. Evidence shows that personalized RT can influence viewers' opinion about the content (Adomavicius, Bockstedt, Curley, & Zhang, 2013; Cosley et al., 2003).

Studies that actually compared the effect of different RTs on participants' content preference judgement showed that a social-based RT has an advantage over expert-based or personalized RTs. First, Adomavicius, Bockstedt, Curley, and Zhang (2016) exposed individuals to a personalized RT and a social-based RT (in the form of ratings for jokes). When reporting their own preference judgement, participants were more likely to incline towards the personalized rating. Second, Sundar et al. (2013) found that positive aggregate user ratings (social-based RT) led to increased liking of an electronic device. At the same time, a special expert approval seal influenced attitude towards the product only when there was no incongruence between the expert and other consumers' reviews. Therefore, the literature shows that a social-based RT has a stronger influence on preference judgement than both expert-based and personalized RTs. That is why the first hypothesis is formulated as follows:

H1: The social-based recommendation type will lead to higher film content preference judgement compared to expert-based and personalized recommendation types.

Willingness to pay for content is among the most practically important outcomes of RT influence, since it can give platforms insight into potential income. Indeed, Adomavicius, Bockstedt, Curley, and Zhang (2017) showed that a personalized recommendation for a song can affect the price users are ready to pay for it. Comparative research on RT effects on willingness to pay is rather scarce. The study by Sundar et al. (2013) mentioned above found that a social-based RT influenced purchase intention (a concept close to willingness to pay) more than an expert-based RT. Based on this limited evidence, the second hypothesis is formulated as follows:

H2: The social-based recommendation type will lead to higher willingness to pay for a film compared to the expert-based recommendation type.

As no previous research has compared social-based and personalized RTs, or expert-based and personalized RTs in relation to willingness to pay, the general research question is also posed:

RQ1: What recommendation type (personalized, expert-based, or social-based) will lead to higher willingness to pay?

Recommendation-related outcomes. While analyzing the recommendation types' effects, it is important to pay attention not only to content attitudes, but also to the ways the recommendations themselves are perceived. The decision to follow a recommendation was found to be influenced by the social-based RT (Zhu & Huberman, 2014) and the expert-based RT (Flanagin & Metzger, 2013). However, only one comparative study, by Senecal and Nantel (2004), tested how likely participants were to follow recommendations about calculators or red wine in all three RT conditions. The results showed that the personalized RT was the most followed of all RTs. The next hypothesis is based on these conclusions:

H3: The personalized recommendation type will lead to a higher decision to follow the recommendation compared to the expert-based and social-based recommendation types.

The perceived characteristics of a recommendation (i.e., perceived recommendation intelligence and accuracy) can also depend on the presented RT. In this field, empirical results are the most contradictory, probably due to the variety of concepts authors used when assessing perceived recommendation quality (e.g., perceived expertise, trustworthiness, credibility). Ochi et al. (2010) assessed perceived recommendation intelligence, comparing content-based recommendations (personalized RT) and collaborative filtering (personalized and social-based RT combined) of rugs and perfume. The authors found no direct effect of recommendation type on perceived intelligence. To assess the influence of user-generated and expert-generated information about film quality, Flanagin and Metzger (2013) measured perceived credibility while manipulating the source of the movie rating (experts/other users) and the number of ratings given. It turned out that individuals perceive an expert film rating as more credible when there are only a few ratings, while in the case of many ratings they tend to favor the aggregated preferences of other users. Nevertheless, only one study reported significant main effects for recommendation quality (operationalized as perceived recommendation source expertise and trustworthiness). Senecal and Nantel (2004) found that the social-based RT was perceived as less expert compared to the expert-based and personalized RTs. At the same time, the social-based recommendation was perceived as the most trustworthy among all three RTs.

All in all, the empirical evidence shows that in the case of perceived recommendation quality, the expert-based RT can win over the social-based RT (and vice versa in the case of many ratings), and the social-based RT can be perceived as the least expert and the most trustworthy in the same sample. Since the literature is highly mixed, the following research questions are posed instead of hypotheses:

RQ2: What recommendation type (personalized, expert-based, or social-based) will lead to higher perceived recommendation intelligence?

RQ3: What recommendation type (personalized, expert-based, or social-based) will lead to higher perceived recommendation accuracy?

The Role of Preference Stability Belief

Taking into account differences between users may provide additional clarity in understanding the discussed recommendation type effects, as RTs can be more or less influential for certain user categories. The concept of preference stability belief (PSB), or the belief that one's preferences are distinct and do not change over time (despite the fact that real individual choices can be inconsistent and context-dependent), may be especially relevant (Shen & Ball, 2011). The way people approach their film preferences can be a strong predictor of their response to movie recommendations. Supporting that point, qualitative research found that participants who have good knowledge of their own preferences expect recommendation preciseness, while those who claim to have undefined preferences look forward to new discoveries (Shen, 2014).

Shen and Ball (2011) provided two participant groups with what was said to be individually customized personalized recommendations. The first group was exposed to film suggestions that were close to random (non-customized), while the second saw recommendations developed by a special tailoring algorithm (customized). The authors found that participants with stronger PSB perceived customized recommendations as more accurate, while participants with a lower PSB level felt that non-customized suggestions were higher in accuracy.

The first takeaway important for the present research is that people with stronger PSB can be more critical of provided recommendations. In the logic of heuristic thinking, a higher level of PSB can prevent users from using mental shortcuts, possibly by reducing the uncertainty that is crucial for heuristic activation (Sundar, 2008). While only the effect on perceived accuracy has been covered in the literature, this study aims to investigate whether the PSB level influences other content-related and recommendation-related outcomes (e.g., content liking, decision to follow the recommendation). The following hypothesis is posed in accordance with previous research (Shen, 2014; Shen & Ball, 2011):

H4(a-e): Individuals with high preference stability belief will demonstrate reduced (a) content preference judgement, (b) willingness to pay, (c) decision to follow the recommendation, (d) perceived recommendation accuracy, and (e) perceived recommendation intelligence compared to those with low preference stability belief.

The second takeaway is that this PSB effect was only found for the personalized RT (no studies have investigated it in the condition of an expert-based or social-based RT). Personalized recommendations may be a special case, since they claim to provide individually tailored suggestions. Individuals with high PSB can be more skeptical towards the personalized RT than towards expert or social recommendations, since they may not expect these suggestions to guess their own highly specific preferences. Therefore, the moderation effect hypothesis for PSB can be formulated as follows:

H5(a-e): In the personalized recommendation type condition, individuals with high preference stability belief will demonstrate lower (a) content preference judgement, (b) willingness to pay, (c) decision to follow the recommendation, (d) perceived recommendation accuracy, and (e) perceived recommendation intelligence compared to those with low preference stability belief, and this difference will be larger than in the social-based and expert-based RT conditions.


Method

Sample and Procedure

The target population for data collection was Russian adults (aged 18 or older). Data collection took place between April 16 and May 6, 2018, via an online Qualtrics questionnaire. Convenience sampling was used; public posts containing the link to the questionnaire "about film preferences" (in Russian) were distributed on Facebook and Vk.com (the most visited Russian social networking service – "Vk.com…", 2018). Of the 334 people who participated in the study, 81 (24.3%) left unfinished questionnaires, and 8 (2.4%) did not report living in Russia for the longest period of their life. These participants were excluded from the sample. A manual categorization of the suspicion check answers resulted in the additional exclusion of 18 suspicious individuals (7.3% of the remaining sample). Therefore, the final sample consisted of 230 respondents (73.5% female, mean age 25.82, SD = 10.03).

In a between-subjects experiment, each participant was randomly assigned to one of three conditions. After providing informed consent and background information, participants evaluated statements measuring their PSB and completed an imitated preference-elicitation process. A set of filler questions about film watching followed, to prevent fatigue. Afterwards, the experimental statement about the recommendation type appeared on the screen for at least 5 seconds, and the dependent variables were measured. Finally, suspicion and manipulation checks were performed.

Preference-Elicitation Process Imitation

A preference-elicitation process (PEP) is a "procedure used to capture users' likes and dislikes" (Gretzel & Fesenmaier, 2006, p. 82), which is usually required for developing personalized recommendations. To create a convincing imitation of personalized recommendations, a PEP was included in the experimental design. Participants were shown 16 films (a film title, a short description, and a poster) and were asked whether they had seen each film before and how they would rate it (similarly to Adomavicius et al., 2017). The films rated by the largest number of users on Kinopoisk.ru were chosen ("Самые оцениваемые фильмы…", 2018). Kinopoisk.ru is a popular Russian analog of IMDb.com ("Kinopoisk.ru…", 2018). The decision to include 16 films in the PEP is related to the need to serve two opposing objectives: to make the preference elicitation convincing and to minimize fatigue (for the full list of films, see the Appendix).

Experimental Manipulation and Stimuli

After engaging in the PEP, participants were told that on the following screens they would see 10 films recommended for them to watch. In each experimental condition, a different statement regarding the recommendation source was shown (personalized RT: "This set of films is generated by an algorithm which analyzed your preferences and individual characteristics"; social-based RT: "This set of films is based on the movies which have previously gotten the highest evaluation from other viewers"; expert-based RT: "This set of films is hand-picked by the film experts, who acknowledged the high quality of these films"). The experimental statement also appeared at the top of each of the following film pages.

Because the dependent variables were measured in response to concrete movies, a set of recommended films, the same for each condition, was selected. The aim was to include films with (1) similar likeability, to maximize the homogeneity of the list; (2) a good variety of genres, to alleviate differences in average ratings driven by genre preferences; and (3) a lower probability of having been seen already, to increase uncertainty so that participants would presumably rely more on heuristics (cf. Adomavicius et al., 2017).

The April 4, 2018 version of the daily updated Kinopoisk.ru top-250 of highest rated movies was used, assuming that the films in it are similarly likeable (user scores varying from 8.03 to 9.19 out of 10). The quartile of least rated (i.e., least popular) films was derived. Then, using the genre keywords attributed to each film, 10 films representing a variety of genres were chosen (see Appendix). The layout was the same as in the PEP (one page per movie with the title, poster, and short description).
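Because the selection follows explicit rules (top-250 snapshot, least-popular quartile, genre spread), it can be summarized as a short data-filtering sketch. The file name and column names below are assumptions made for illustration only; the original selection may well have been done by hand rather than in code.

```python
import pandas as pd

# Hypothetical export of the Kinopoisk.ru top-250 snapshot; assumed columns:
# title, user_score, n_ratings (popularity proxy), genre (primary genre keyword).
top250 = pd.read_csv("kinopoisk_top250_2018-04-04.csv")

# Keep the least-rated (least popular) quartile so participants are less likely
# to have already seen the films.
threshold = top250["n_ratings"].quantile(0.25)
candidates = top250[top250["n_ratings"] <= threshold]

# Take at most one film per genre keyword, then keep 10 films,
# so the stimulus set covers a variety of genres.
stimuli = (
    candidates.sort_values("n_ratings")
    .drop_duplicates(subset="genre")
    .head(10)
)
print(stimuli[["title", "genre", "user_score"]])
```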

Dependent Variable Measures

Content preference judgement. For each film, participants indicated their liking based on a slightly modified scale from Adomavicius et al. (2017): "How would you rate this film?" followed by the options (1) "Hate it", (2) "Don't like it", (3) "Neutral", (4) "Like it", (5) "Love it". A mean score was calculated based on the ratings for the 10 films (Cronbach's α = .64; see Table 1 for mean scores of all DVs), with a higher value indicating higher content liking.

Willingness to pay. Based on a scale from Ye, Zhang, Nguyen, and Chiu (2004), for each movie participants answered the question "Imagine that you have an option to watch this film on a video streaming service. To what extent would you be willing to pay to watch this movie?" with answer options from (1) "Definitely not willing" to (5) "Definitely willing". The mean score was based on the 10 films (Cronbach's α = .70), with a higher score indicating higher willingness to pay.

Decision to follow recommendation. For each film, participants were asked "How likely is it that you will follow the recommendation from the algorithm/other users/the experts and watch this film (again)?", depending on the experimental condition. The answer options ranged from (1) "Very unlikely" to (5) "Very likely". The mean score was based on the 10 films (Cronbach's α = .86), with a higher score indicating a higher probability of following the recommendation.

Perceived recommendation intelligence. The five-item scale of perceived recommender system intelligence from Ochi et al. (2010) was used after participants had been exposed to all ten recommended films (e.g., "These recommendations are intelligent", on a 1-to-7 scale from "Completely disagree" to "Completely agree"). Internal consistency was confirmed by principal components analysis (PCA): one component with an eigenvalue above one (the same on the scree plot) explained 69.5% of the variance. The score was calculated as the mean of the items (Cronbach's α = .89), with higher values coding higher perceived intelligence.

Perceived recommendation accuracy. The recommendation accuracy scale developed by Shen and Ball (2011) was modified to refer not only to system-generated recommendations, but to recommendations in general. Five items (e.g., "Most of the recommendations were for movies I really want to see") were measured on a 1-to-7 scale from "Completely disagree" to "Completely agree". The variable (calculated as the mean of the items, with a higher value referring to a higher level of perceived accuracy) demonstrated high internal consistency (Cronbach's α = .86), confirmed by PCA (one component with an eigenvalue above one, explaining 64.5% of the variance; the same on the scree plot).
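As a rough sketch of how the scale scores, Cronbach's α, and the PCA eigenvalue check reported above could be computed (the data file and item column names are hypothetical, and the original analyses may have been run in dedicated statistical software):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per participant)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def pca_eigenvalues(items: pd.DataFrame) -> np.ndarray:
    """Eigenvalues of the item correlation matrix (Kaiser criterion: keep > 1)."""
    return np.sort(np.linalg.eigvalsh(items.corr().to_numpy()))[::-1]

data = pd.read_csv("questionnaire.csv")                          # hypothetical file
intelligence = data[[f"intelligence_{i}" for i in range(1, 6)]]  # assumed item names

print(cronbach_alpha(intelligence))       # reported value for this scale: .89
print(pca_eigenvalues(intelligence))      # one eigenvalue above 1 was reported
data["perceived_intelligence"] = intelligence.mean(axis=1)       # scale score = item mean
```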

Moderator Variable: Preference Stability Belief

Participants reported their PSB before the DV measurement and the PEP. The statements from the PSB scale (Shen & Ball, 2011) discuss personal film preferences, so they were included in the PEP section to comply with the research cover (a study of film preferences). The items showed adequate internal consistency (Cronbach's α = .66)2; their mean ranged from 2.00 to 6.33 (M = 4.30, SD = 0.83). For moderator testing, the score was transformed into a binary variable (low PSB/high PSB) demarcated by the mean. Therefore, 108 (47.0%) participants were categorized as having low PSB and 122 as having high PSB.
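A minimal sketch of this mean-split step, assuming a data frame in which the PSB items share a common column prefix (all file and variable names below are hypothetical):

```python
import pandas as pd

data = pd.read_csv("questionnaire.csv")          # hypothetical file
psb_items = data.filter(like="psb_")             # assumed item column prefix
data["psb_score"] = psb_items.mean(axis=1)       # reported M = 4.30, SD = 0.83

# Dichotomize at the sample mean: below the mean = low PSB, otherwise high PSB.
cutoff = data["psb_score"].mean()
data["psb_group"] = (data["psb_score"] >= cutoff).map({True: "high", False: "low"})
print(data["psb_group"].value_counts())          # reported split: 108 low, 122 high
```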

Control Variable, Suspicion and Manipulation Check

The number of films watched (out of the 10 films shown during DV measurement) was used to control for participants' uncertainty level. Individuals who have watched fewer films, and are therefore more uncertain, may rely more heavily on heuristic thinking than those who have watched more films (Sundar et al., 2013). The variable was significantly correlated with all DVs. It was measured as the sum of yes/no questions about whether the participant had watched the film (asked for each of the 10 films; M = 2.23, SD = 1.58). The suspicion check was registered with an open question, "In your own words, what was the present study about?" (Blackhart, Brown, Clark, Pierce, & Shell, 2012). A manipulation check (MC) question asked what the previously received film recommendations were based on, with answer options "On my preferences and characteristics (algorithm)", "On other users' preferences", "On the expert choice", "Other", and "I don't know".

Results

Randomization and Manipulation Check

ANOVAs and chi-square tests showed that neither age, F(2, 227) = .36, p = .702, partial η² < .01, nor gender, χ²(2, N = 230) = 1.97, p = .374, nor education level, χ²(6, N = 230) = 4.24, p = .645, nor occupation, χ²(8, N = 230) = 6.73, p = .566, differed between the experimental groups, indicating successful randomization.

The manipulation check (MC) showed that 43.9% of participants did not identify or recall the RT correctly. Also, participants in the expert-based group were more likely to fail the MC (59.3%) than those in the social-based (39.9%) or personalized (33.2%) groups, χ²(8, N = 230) = 119.52, p < .001. To investigate the effect of (not) recalling the recommendation type during the MC, a series of post-hoc analyses tested the differences in results for MC passers and non-passers separately.
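These randomization and manipulation checks correspond to standard one-way ANOVA and chi-square routines; a sketch with scipy, under the assumption of a data frame with condition, age, gender, and mc_answer columns (hypothetical names):

```python
import pandas as pd
from scipy import stats

data = pd.read_csv("experiment.csv")  # hypothetical file and column names

# Randomization check for age: one-way ANOVA across the three conditions.
age_by_condition = [grp["age"].to_numpy() for _, grp in data.groupby("condition")]
f_age, p_age = stats.f_oneway(*age_by_condition)

# Randomization check for gender and the manipulation check question:
# chi-square tests on condition x category contingency tables.
chi2_gender, p_gender, _, _ = stats.chi2_contingency(
    pd.crosstab(data["condition"], data["gender"])
)
chi2_mc, p_mc, _, _ = stats.chi2_contingency(
    pd.crosstab(data["condition"], data["mc_answer"])
)
print(f"age: p = {p_age:.3f}, gender: p = {p_gender:.3f}, MC: p = {p_mc:.3f}")
```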

Recommendation Type Effects

To examine the influence of recommendation type (H1-H3; RQ1-RQ3), a MANOVA was performed with RT as the independent variable; content preference judgement, willingness to pay, decision to follow, perceived intelligence, and perceived accuracy as dependent variables; and number of films watched as a covariate (Pillai's trace statistic was used). MANOVA was chosen due to the significant intercorrelations among the DVs (see Table 1). Bonferroni adjustment was used for all post-hoc multiple comparisons.


A weak significant effect of experimental condition was observed, F(10, 446) = 2.18, p = .018, partial η² = .05. Only perceived intelligence was significantly influenced by RT, F(2, 226) = 6.71, p = .001, partial η² = .06. A post-hoc test indicated that average perceived intelligence in the expert-based condition was .57 higher than in the personalized condition (p = .005) and .58 higher than in the social-based condition (p = .005). The other variables – content preference judgement, F(2, 226) = .45, p = .638, partial η² < .01, decision to follow the recommendation, F(2, 226) = .83, p = .437, partial η² = .01, willingness to pay, F(2, 226) = 1.84, p = .161, partial η² = .02, and perceived accuracy, F(2, 226) = 1.17, p = .314, partial η² = .01 – did not show significant differences between groups (see Table 2 for means per condition).
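The reported MANOVA can be approximated with the statsmodels multivariate interface. The sketch below (data file and column names are assumptions) fits the five dependent variables on recommendation type together with the covariate and prints the multivariate tests, including Pillai's trace:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

data = pd.read_csv("experiment.csv")  # hypothetical file and column names

# Five DVs predicted by recommendation type, with number of films watched
# entered alongside the factor, mirroring the covariate-adjusted design.
model = MANOVA.from_formula(
    "preference + wtp + follow + intelligence + accuracy"
    " ~ C(condition) + films_watched",
    data=data,
)
print(model.mv_test())  # multivariate tests (incl. Pillai's trace) per term
```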

Preference Stability Belief: Direct and Moderation Effects

To test the interaction effects between experimental condition and preference stability belief, as well as the direct PSB effects (H4a-e; H5a-e), separate univariate ANOVAs with recommendation type, PSB, and their interaction as independent variables and the number of films watched as a control variable were performed for each DV.
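A sketch of these univariate models using statsmodels OLS with Type III sums of squares (the data frame and column names are assumptions, and the sum-to-zero contrasts are a choice made here to keep Type III tests interpretable in the presence of the interaction):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.read_csv("experiment.csv")  # hypothetical file and column names

# One two-way ANCOVA per DV: recommendation type, PSB group, their interaction,
# and number of films watched as a covariate.
dvs = ["preference", "wtp", "follow", "intelligence", "accuracy"]
for dv in dvs:
    model = ols(
        f"{dv} ~ C(condition, Sum) * C(psb_group, Sum) + films_watched",
        data=data,
    ).fit()
    print(f"--- {dv} ---")
    print(sm.stats.anova_lm(model, typ=3))
```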

The influence of experimental condition found in the previous MANOVA remained. A weak main effect of RT on perceived intelligence was observed, F(2, 223) = 6.91, p = .001, partial η² = .06. Post-hoc comparisons demonstrated that average perceived intelligence in the expert-based RT condition was .58 higher than in the personalized RT condition (p = .004) and .59 higher than in the social-based RT condition (p = .004). At the same time, there was no effect of RT on willingness to pay, F(2, 223) = 1.95, p = .145, partial η² = .02, content preference judgment, F(2, 223) = .70, p = .496, partial η² = .01, decision to follow the recommendation, F(2, 223) = 1.06, p = .350, partial η² = .01, or perceived accuracy, F(2, 223) = 1.50, p = .225, partial η² = .01.

Preference stability belief directly affected most DVs. A weak significant main effect of PSB on content preference judgment was observed, F(1, 223) = 5.62, p = .019, partial η² = .03, such that participants with low PSB had a higher content preference judgment compared to participants with high PSB (see Table 3 for means). Also, a weak significant main effect of PSB on decision to follow the recommendation was found, F(1, 223) = 8.36, p = .004, partial η² = .04. The average decision to follow the recommendation was higher in the low PSB group than in the high PSB group. Furthermore, a weak direct effect of PSB on perceived intelligence was observed, F(1, 223) = 4.08, p = .044, partial η² = .02. The group with low PSB had higher perceived intelligence than the group with high PSB. Likewise, a weak direct effect of PSB on perceived accuracy was found, F(1, 223) = 11.39, p = .001, partial η² = .05. Perceived recommendation accuracy in the group with low PSB was significantly higher than in the group with high PSB. On the other hand, PSB had no direct effect on willingness to pay, F(1, 223) = .47, p = .492, partial η² < .01.

The interaction effect of experimental condition × PSB was not significant for any DV: willingness to pay, F(2, 226) = 1.17, p = .314, partial η² = .01, content preference judgement, F(2, 223) = .13, p = .878, partial η² < .01, decision to follow, F(2, 223) = .15, p = .857, partial η² < .01, perceived intelligence, F(2, 223) = .43, p = .651, partial η² < .01, or perceived accuracy, F(2, 223) = .34, p = .711, partial η² < .01.

Post-Hoc Analysis for Manipulation Check

The fact that 43.9% of participants did not identify their assigned RT correctly raises the question of whether the obtained findings are accurate and robust. To examine this, all analyses were repeated separately for the subsamples of MC-passers (n = 129) and MC-non-passers (n = 101). In the present section, only the results that changed in their pattern of significance are reported.

Main effect of recommendation type. The previously found significant effect of RT on perceived intelligence (RQ2) was replicated in the analyses with MC-passers. For MC-non-passers, this main effect became non-significant3, F(10, 148) = .96, p = .480, partial η² = .06. The results for all other RT effects (H1-H3, RQ1, RQ3) remained the same (i.e., non-significant) in both post-hoc analysis variations.

Effects of preference stability belief. For MC-non-passers, the previously observed main effect of PSB (H4a-e) became non-significant for three variables: content preference judgement, F(1, 94) = 3.11, p = .081, partial η² = .03, decision to follow the recommendation, F(1, 94) = 2.68, p = .105, partial η² = .03, and perceived intelligence, F(1, 94) = 2.46, p = .121, partial η² = .03. Also, the direct effect of experimental condition on perceived intelligence, which remained significant, F(2, 94) = 3.72, p = .028, partial η² = .07, lost one of the post-hoc pairwise differences: perceived intelligence in the expert-based condition was .68 greater than in the personalized condition (p = .049), but no longer significantly greater than in the social-based condition.

For MC-passers, the main effect of PSB became non-significant for all originally influenced variables: content preference judgement, F(1, 122) = 1.58, p = .211, partial η² = .01, decision to follow, F(1, 122) = 3.31, p = .072, partial η² = .03, perceived intelligence, F(1, 122) = 1.03, p = .313, partial η² = .01, and perceived accuracy, F(1, 122) = 1.55, p = .216, partial η² = .01.

Discussion

The first aim of this study was to investigate the influence of film recommendation type (personalized, social-based, or expert-based) on attitudes towards the recommended content (content preference judgement, willingness to pay for it) and the recommendation itself (decision to follow, perceived recommendation accuracy and intelligence). While no differences in the other dependent variables were found across conditions, average perceived recommendation intelligence (RQ2) was highest when participants were exposed to the expert-based RT, compared to both the social-based and personalized RTs. This result is in line with some of the previous findings (Flanagin & Metzger, 2013; Senecal & Nantel, 2004).


To fulfil the second aim, the direct and moderation effects of preference stability belief (PSB) on the mentioned dependent variables were examined (H4-H5). Although no interaction effects were found, PSB had a direct influence on content preference judgement, decision to follow the recommendation, and perceived recommendation accuracy and intelligence (i.e., all dependent variables except willingness to pay). In all cases, participants with high PSB showed lower average levels of the dependent variables, exhibiting increased skepticism towards the content or the recommendation. This outcome is similar to the observations of Shen and Ball (2011), who compared the perceived accuracy of film recommendations in conditions of high customization (offering participants an individually tailored film list) and low customization (offering random or non-preferable films). These authors found that in the case of low customization, higher PSB led to lower perceived accuracy levels. Given that the research described in the present paper is based on non-customized film suggestions (i.e., the same set of films was recommended to all participants), the results of the current study correspond to those findings.

Conceptual Take-Aways

The theoretical basis for this study was the MAIN model of media-related psychological effects by Sundar (2008). In the logic of this model, each recommendation type can activate certain mental shortcuts (heuristics), improving users' perception of the presented suggestions. The observed advantage of the expert-based RT leads to the conclusion that the authority heuristic behind the expert-based recommendation "outplayed" the heuristics related to the other conditions (the bandwagon heuristic for the social-based RT; the machine and identity heuristics for the personalized RT).

The discussed superiority of expert-based recommendations suggests that this RT should not be overlooked when studying online recommending. Some researchers in the field focus on personalized and social-based recommendations, reasoning that they are the most popular or the most modern (e.g., Adomavicius et al., 2016; Shen & Ball, 2011). For example, Ochi et al. (2010) did not include the expert-based recommendation in their RT comparison and did not observe any direct effects of RT on perceived intelligence, while the present study did. This supports the point that expert recommendations should be incorporated in further categorizations of RTs, so that the full spectrum of recommendation sources is examined in later research.

To further understand how these recommendation types (and the processing mechanisms behind them) compare, future studies can analyze the effects of RT interplay. Some authors have already tried to make heuristics compete by providing participants with both an expert-based and a social-based RT at the same time (e.g., Sundar et al., 2013), but to date no studies have investigated the presence of all three RTs at once. Is there a synergistic effect when all recommendation types are unanimous? Which shortcut(s) win when they contradict each other? Posing these questions can be an efficient next step towards understanding heuristics in regard to online content recommendations.

Unexpectedly, none of the dependent variables except perceived intelligence were directly affected by recommendation type. Providing a personalized, social-based, or expert-based film recommendation did not result in significantly different scores for participants' content preference judgement, willingness to pay, decision to follow the recommendation, or perceived accuracy. A possible explanation may be partly rooted in the types of variables. First, the fact that the content-related variables (content preference judgement, willingness to pay) were non-significant while a recommendation-related outcome (perceived intelligence) was significantly influenced suggests that manipulating RT is more likely to affect recommendation-related variables, as they are more directly related to the manipulation itself. Second, variables referring to behavioral intentions (decision to follow, willingness to pay) may be less sensitive to external influence than perceptions and attitudes like perceived intelligence (Flanagin & Metzger, 2013, obtained similar results). Behavior intentions may be harder to influence than attitudes since, theoretically, attitudes are just one of the determinants of behavior intention (as in the theory of reasoned action; Shimp & Kavas, 1984).

Nevertheless, perceived recommendation accuracy can be categorized the same way as perceived intelligence (as a recommendation-related and attitude-related variable), so its non-significance is rather counter-intuitive. Moreover, perceived accuracy ("the extent to which a customer perceives that system recommendations closely meet his or her own needs, interests, and preferences" – Shen & Ball, 2011, p. 72) is conceptually close to perceived intelligence (the degree to which a user feels the recommendation is intelligent, accurate, competent, useful and trustworthy – Ochi et al., 2010). To discuss why the results were opposite for these similar variables, comparing the question form may be the key. While the items measuring perceived intelligence were uniform and short (e.g., "These recommendations are intelligent"; on average 3.20 words per sentence in the translated questionnaire), those for accuracy were long (9.00 words per sentence) and demanded more complex thinking, asking the participant to compare the recommendations with their own preferences (e.g., "The recommendations were mostly for movies I think would be worth trying"). Such wording could provoke so-called satisficing, a careless answering style characterized by choosing the first seemingly fitting answer option, agreeing with assertions, and not differentiating between items (Krosnick, 1991). The resulting data distortion could cause significant findings for perceived intelligence but null results for perceived accuracy. This explanation is speculative, so further investigation of recommendation quality measures is needed to test it and to develop the necessary methodological advice for future research.

Another unexpected result is the absence of a significant interaction effect of preference stability belief and recommendation type. The increased skepticism towards content and recommendations expressed by individuals with high PSB was not higher in the personalized RT condition, as was hypothesized; instead, this effect was equally observed in all three experimental conditions. In other words, this study shows that the PSB influence is more universal than previously thought. Also, the present research extends the list of studied PSB effects, showing that apart from the non-behavioral, recommendation-related perceived accuracy (Shen & Ball, 2011), PSB can reduce behavior intentions (decision to follow the recommendation) and content-related attitudes (content preference judgement). The next step in studying this concept would be to test whether these newly explored PSB-dependent variables are also sensitive to the level of actual customization. For example, Shen and Ball (2011) found that people with high PSB perceive recommendations as more accurate when the recommended films are actually tailored to their preferences, while individuals with low PSB consider recommendations that are close to random more accurate. Future research can investigate whether such an interaction between PSB and the level of actual customization applies to the other mentioned outcomes.

It is important to note that willingness to pay turned out to be the only variable susceptible to neither PSB nor experimental condition. The absence of any effects may characterize the collected Russian sample as quite rigid when it comes to paying for films online. Paired with the rather low average willingness to pay, this supports the point that the long-standing trend of piracy in Russia (Karaganis et al., 2011) may still be strong despite the emergence of paid services. Clearly, like most online studies that employ convenience sampling, the present one cannot draw conclusions generalizable to the entire studied population. Nevertheless, future comparisons with populations that are more used to paying for content (e.g., "Western Europe…", 2017) can provide insight into national differences in willingness to pay and its relation to RT or PSB.


Limitations and Directions for Future Research

This study has two main limitations. First, almost half of the participants (43.9%) did not pass the manipulation check. Also, the probability of failing the check was not equal among conditions: in the expert-based group, participants were more likely to answer the manipulation check question incorrectly. Examination of the open-ended participant comments at the end of the questionnaire indicated that the introductory phrase placed before each of the experimental statements ("These films are recommended for you to watch") suggested to some individuals that there should be a personalized component in each of the provided recommendations. In Maslowska's (2013) terminology, the sentence raised the expectation of personalization. While it did not cause many manipulation check failures for the personalized RT (for obvious reasons) or for the social-based RT (probably due to familiarity with collaborative recommendations, which are based on both individual and social data), in the expert-based case people could have been more puzzled and failed the MC more often. Future research designs studying recommendation types separately should carefully pre-test their recommendation messages to avoid low MC pass rates.

Generally, a low manipulation check pass rate can be a sign of reduced internal validity (O'Keefe, 2003). Additional post-hoc testing for manipulation check passers and non-passers provides arguments both for and against this point. The effect of recommendation type on perceived intelligence was absent among non-passers, but for individuals who passed the manipulation check it remained identical to the results obtained from the entire sample. Therefore, the reported RT influence can be characterized as rather robust, and the research results should be considered valid at least for the MC-passers group. Interestingly, these observations contradict the suggestions of O'Keefe (2003), who argues that an experimental message is capable of affecting individuals no matter how (un)successful these individuals were in identifying it (i.e., passing the manipulation check). The post-hoc analysis demonstrated that the manipulation check question did matter for the results. These conclusions raise the issue of the manipulation check's role in message-manipulating experiments and support the requirement to add such questions to these designs.

The second limitation refers to the absence of a control condition, which made it impossible to assess the "absolute" effect of the various recommendation types. When the current findings revealed no differences between RTs, did this mean that they were equally persuasive or equally non-persuasive? Future research designs can investigate this question by adding a neutral experimental condition. The challenge here is to develop it in such a way that the measurement of recommendation-related dependent variables is still possible. Using a different stimulus layout or data collection method (for example, like Senecal and Nantel (2004), who registered the decision to follow a recommendation not with a scale question but by observing whether users picked the suggested item from a list with non-recommended alternatives) may allow that.

Practical Implications

The study results also offer some practical steps for online film streaming platforms (e.g., Netflix, Amazon). First, the popularity of social-based and personalized recommending on the current online entertainment video market (Adomavicius et al., 2017) may leave expert recommendations underestimated in their persuasive capacity. Activated in the presence of an expert-based RT, the authority heuristic can make users perceive the suggestions as more intelligent. Video and film platforms can benefit from including some elements of expert recommendation in their interfaces (for example, marking expert-recommended films with a special seal, creating lists of top movies according to critics, etc.).

Second, developers should keep in mind that regardless of the recommendation type used, people's PSB affects the overall level of skepticism towards both the content and the recommendation. Combined with the conclusions of Shen and Ball (2011), the suggestion for video platforms can be formulated as follows: before making content suggestions, it can be helpful to find out how concrete or blurry users' preferences are (e.g., by asking them related questions and/or organizing a preliminary preference elicitation). People with high PSB may respond more negatively to broad, non-customized recommendations, while users with low PSB may favor just that. That is why platforms can create alternative recommendation sets/algorithms for these two user types: broader ones, allowing individuals to explore various films, and ones focused on more precise and accurate suggestions.

Conclusion

The aim of the present study was to compare the influence of three recommendation sources (an individual's personal preferences processed by an algorithm, other users, and experts) on content-related and recommendation-related attitudes and behavioral intentions. The expert-based recommendation was perceived as the most intelligent compared to social-based or personalized suggestions. This finding demonstrates the persuasive capability of the authority heuristic and supports the need for further recommendation source comparisons in the field of entertainment content. In addition, the study showed new effects of people's preference stability belief on the decision to follow a recommendation, perceived recommendation intelligence, and content preference judgement in all three experimental conditions. Overall, this study suggests that the response to an online recommendation depends on both who is speaking (experts, the user community, or a personalizing algorithm) and who is listening (a person with distinct or undefined preferences).


References

250 лучших фильмов [The 250 best films]. (2018, April 4). Retrieved from https://www.kinopoisk.ru/top/day/2018-04-04/#list

Adomavicius, G., Bockstedt, J. C., Curley, S. P., & Zhang, J. (2013). Do recommender systems manipulate consumer preferences? A study of anchoring effects. Information Systems Research, 24(4), 956-975.

Adomavicius, G., Bockstedt, J., Curley, S., & Zhang, J. (2016). Understanding effects of personalized vs. aggregate ratings on user preferences. In CEUR Workshop Proceedings (Vol. 1679, pp. 14-21).

Adomavicius, G., Bockstedt, J., Curley, S., & Zhang, J. (2017). Effects of online recommendations on consumers' willingness to pay. Information Systems Research, 29(1), 84-102.

Ansari, A., Essegaier, S., & Kohli, R. (2000). Internet recommendation systems. Journal of Marketing Research, 37(3), 363-375.

Blackhart, G. C., Brown, K. E., Clark, T., Pierce, D. L., & Shell, K. (2012). Assessing the adequacy of postexperimental inquiries in deception research and the factors that promote participant honesty. Behavior Research Methods, 44(1), 24-40.

Cosley, D., Lam, S. K., Albert, I., Konstan, J. A., & Riedl, J. (2003, April). Is seeing believing? How recommender system interfaces affect users' opinions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 585-592). ACM.

European Audiovisual Observatory. (2017). The Russian Legal On-Demand Video Services Market. Retrieved from https://www.obs.coe.int/en/web/observatoire/-/the-russian-legal-on-demand-video-services-market.


Fiske, S., & Taylor, S. (1984). Social cognition. Reading, MA: Addison-Wesley.

Flanagin, A. J., & Metzger, M. J. (2013). Trusting expert- versus user-generated ratings online: The role of information volume, valence, and consumer characteristics. Computers in Human Behavior, 29(4), 1626-1634.

Gretzel, U., & Fesenmaier, D. R. (2006). Persuasion in recommender systems. International Journal of Electronic Commerce, 11(2), 81-100.

Hostler, R. E., Yoon, V. Y., Guo, Z., Guimaraes, T., & Forgionne, G. (2011). Assessing the impact of recommender agents on on-line consumer unplanned purchase behavior. Information & Management, 48(8), 336-343.

Karaganis, J., Flynn, S., Primo, N., Lloyd, L., Sezneva, O., Mizukami, P. N., ... & Stobart, H. (2011). Media piracy in emerging economies. New York, NY: Social Science Research Council.

Kinopoisk.ru Analytics - Market Share Stats & Traffic Ranking (2018). Retrieved from https://www.similarweb.com/website/kinopoisk.ru

Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213-236.

Maslowska, E. H. (2013). "Just for you!": A study into the effectiveness and the mechanism of customized communication. Amsterdam: Universiteit van Amsterdam.

Ochi, P., Rao, S., Takayama, L., & Nass, C. (2010). Predictors of user perceptions of web recommender systems: How the basis for generating experience and search product recommendations affects user responses. International Journal of Human-Computer Studies, 68(8), 472-482.

O'Keefe, D. J. (2003). Message properties, mediating states, and manipulation checks: Claims, evidence, and data analysis in experimental persuasive message effects research. Communication Theory, 13(3), 251-274.


Park, D. H., Kim, H. K., Choi, I. Y., & Kim, J. K. (2012). A literature review and classification of recommender systems research. Expert Systems with Applications, 39(11), 10059-10072.

Pathak, B., Garfinkel, R., Gopal, R. D., Venkatesan, R., & Yin, F. (2010). Empirical analysis of the impact of recommender systems on sales. Journal of Management Information Systems, 27(2), 159-188.

Senecal, S., & Nantel, J. (2004). The influence of online product recommendations on consumers' online choices. Journal of Retailing, 80(2), 159-169.

Shen, A. (2014). Recommendations as personalized marketing: insights from customer experiences. Journal of Services Marketing, 28(5), 414-427.

Shen, A., & Ball, A. D. (2011). Preference stability belief as a determinant of response to personalized recommendations. Journal of Consumer Behaviour, 10(2), 71-79.

Shimp, T. A., & Kavas, A. (1984). The theory of reasoned action applied to coupon usage. Journal of Consumer Research, 11(3), 795-809.

Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In Digital media, youth, and credibility (pp. 73-100). Cambridge, MA: The MIT Press.

Sundar, S. S., Xu, Q., & Oeldorf-Hirsch, A. (2013, June). How deeply do we process online recommendations? Heuristic vs. systematic processing of authority and bandwagon cues. In 63rd Annual Conference of the International Communication Association, London, UK (pp. 1-17).

Syuntyurenko, O. V. (2015). The digital environment: The trends and risks of development. Scientific and Technical Information Processing, 42(1), 24-29.

Vk.com Analytics - Market Share Stats & Traffic Ranking. (2018, April). Retrieved from


Western Europe to reach 65 million SVOD subs (2017, September 20). Retrieved from https://www.broadbandtvnews.com/2017/09/20/western-europe-to-reach-65-million-svod-subs/

Ye, L. R., Zhang, Y., Nguyen, D. D., & Chiu, J. (2004). Fee-based online services: exploring consumers’ willingness to pay. Journal of International Technology and Information Management, 13(1), 12.

Zhu, H., & Huberman, B. A. (2014). To switch or not to switch: Understanding social influence in online choices. American Behavioral Scientist, 58(10), 1329-1344.

Самые оцениваемые фильмы – КиноПоиск [The most rated films – Kinopoisk]. (2018, April 5). Retrieved from


Footnotes

1 For the sake of uniformity, recommendations/aggregated data will be referred to as personalized/social-based/expert-based RT depending on their source. Unless otherwise stated, the studies discussed here did not use real recommendations (i.e., produced by a real algorithm, users, or experts), but made participants think so in order to observe the effects of the message manipulation.

2 In the PCA, Kaiser's criterion (eigenvalue above one) identified two components, but the point of inflexion on the scree plot was observed after the first component (37.55% of variance explained). Field (2009) suggests that for samples containing 200-250 cases the scree plot is the primary source of information regarding the number of factors. Moreover, two integrative subscale variables based on the two factors suggested by Kaiser's criterion could not be interpreted and had lower Cronbach's α (.61 and .56).

3 First, Levene's test for willingness to pay (F = 4.50, p = .014) and Box's test (M = 22,825.73, p = .032) were significant, F(10, 188) = 1.17, p = .316, partial η² = .06. After random exclusion of cases in order to make the group sizes equal (so that Box's test can be ignored – Field, 2009), the analysis results remained the same (see the main text for statistics), and Levene's test for willingness to pay turned non-significant (F = .53, p = .590).


Table 1

Means and Intercorrelations of Dependent Variables

Variable                                   M      SD     1     2     3     4     5
1 Content preference judgement             3.08   .52    --    .51   .85   .59   .65
2 Willingness to pay                       2.08   .76    --    --    .60   .36   .42
3 Decision to follow recommendation        2.84   .64    --    --    --    .56   .65
4 Perceived recommendation intelligence    3.58   1.18   --    --    --    --    .72
5 Perceived recommendation accuracy        3.42   1.18   --    --    --    --    --

Note. Columns 1-5 show the intercorrelations, rs (p < .001 for all analyses).


Table 2

Dependent Variable Means per Condition

                                Personalized RT     Social-based RT     Expert-based RT
                                (n = 81)            (n = 75)            (n = 74)
Variable                        M       SD          M       SD          M       SD
Content preference judgement    3.10    .45         3.05    .60         3.10    .51
Willingness to pay              2.00    .71         2.05    .77         2.19    .79
Decision to follow              2.80    .64         2.83    .65         2.90    .63
Perceived intelligence          3.43a   1.08        3.39b   1.27        3.93ab  1.11
Perceived accuracy              3.42    1.06        3.29    1.23        3.54    1.24


Table 3

Dependent Variable Means for High and Low Preference Stability Belief Groups

                                Low PSB             High PSB
                                (n = 108)           (n = 122)
Variable                        M       SD          M       SD
Content preference judgement    3.16a   .52         3.08a   .52
Willingness to pay              2.11    .80         2.05    .71
Decision to follow              2.97a   .66         2.73a   .60
Perceived intelligence          3.71a   1.17        3.45a   1.18
Perceived accuracy              3.68a   1.18        3.19a   1.13

Note. Significantly different means are marked pairwise.


Appendix

Films Used in Experimental Design

Films Used for the Preference-Elicitation Section

Inception (2010), The Shawshank Redemption (1994), Intouchables (2011), Forrest Gump (1994), Fight Club (1999), Titanic (1997), Shutter Island (2009), Léon (1994), Avatar (2009), Knockin' on Heaven's Door (1997), Pirates of the Caribbean: The Curse of the Black Pearl (2003), The Matrix (1999), Ivan Vasilievich: Back to the Future (1973), The Lord of the Rings: The Return of the King (2003), A Beautiful Mind (2001), Back to the Future (1985).

Films Used for Dependent Variable Measurement

For a Few Dollars More (1965), Temple Grandin (2010), Father of a Soldier (1964), Love and Lies (1980), Like Stars on Earth (2007), Three Idiots (2009), Song of the Sea (2014), Singin' in the Rain (1952), L.A. Confidential (1997), Star Wars: Episode V - The Empire Strikes Back (1980).
