
UNIVERSITY OF AMSTERDAM

MASTER THESIS

Improving the Customer Onboarding Process at Mollie B.V.

Using the Fogg Behaviour Model

Thomas Hoornstra

10203974

MASTER INFORMATION STUDIES
HUMAN CENTERED MULTIMEDIA

Supervisor

Dr. M. J. Marx

ILPS, IvI

Faculty of Science

University of Amsterdam

Science Park 904

1098 XH Amsterdam

Second Examiner

Dr. F. M. Nack

Informatics Institute

Faculty of Science

University of Amsterdam

Science Park 904

1098 XH Amsterdam

Host organisation

Mollie B.V.

Keizersgracht 313

1016 EE Amsterdam

Host supervisor

Hoang Pham

June 29, 2017


Improving the Customer Onboarding Process at Mollie B.V.

Using the Fogg Behaviour Model

Thomas Hoornstra

University of Amsterdam

Graduate School of Informatics

thomas.hoornstra@student.uva.nl

ABSTRACT

A challenge many companies face is helping (potential) customers to use their product optimally. Being able to do so could lead to increased sales and new customers. This research investigated which methods are effective for the online payment-processing company Mollie B.V. to increase the number of merchants who receive payments. Recommendations of the Fogg Behaviour Model were used to form hypotheses for potential improvements to Mollie's customer onboarding process, which should lead to more of their merchants receiving payments. The hypotheses were tested using experiments on Mollie's website. Two out of three implementations resulted in a statistically significant increase, albeit with a small effect size, in the number of merchants who received payments.

1. INTRODUCTION

The majority of companies would gain considerable advantages if they could increase their influence on their (potential) customers' behaviour. Adjusting a message so that it is well received and efficient potentially influences attitudes and behaviour, which could lead to customers using the product [20]. This type of persuasion is especially useful for companies that apply a software-as-a-service (SaaS) model. SaaS companies offer an online software system (generally a website) as their product or service. Customers pay for the software on a subscription basis, or pay a small fee for every use (e.g. transaction costs). SaaS companies depend on customers who sign up and start using their product. The process between sign-up and the first successful use of the product, called the customer onboarding process, is a stage where a significant proportion of SaaS companies' customers get stuck [5]. This means that although these customers signed up for the product, they never successfully used it.

1.1 Mollie

Mollie B.V. (Mollie) is a technology-based online payment-processing company that provides merchants a way to accept a variety of payment methods (e.g. credit card or iDeal) on their website. Mollie is a SaaS company that experiences a significant number of merchants who fail to complete the customer onboarding process. Although many merchants are signed up at Mollie, some never receive any payments (and thus do not gain value from Mollie). Mollie's customers are called merchants, as Mollie only accepts companies (so-called

business customers) as customers. These companies – Mollie's merchants – are intermediaries who receive payments from end-users. For example, when a webshop owner sells products to a consumer, the owner can use the Mollie payment system to receive payments, being a merchant of Mollie. The consumer in turn uses the Mollie payment system to pay the merchant, being the end-user. In this context the customer onboarding process is the process between the merchant's sign-up and its first received payment, because the first received payment can be seen as the first successful use of the product. For Mollie the onboarding process is the most suitable process to improve, as currently less than one third of the merchants who sign up actually receive end-users' payments. Mollie makes a profit on each merchant's received payments in the form of transaction costs. The more merchants receive payments, the higher the profit for Mollie. Therefore, it is relevant for Mollie to know how the number of merchants completing the onboarding process (and thus receiving payments) can be increased.

1.2 Research goal

This research investigates which methods are effective in influencing customers to perform a prescribed behaviour (the target behaviour), which is "merchants will receive payments". The hypotheses in this research focus only on Mollie's onboarding process. As soon as merchants receive at least one payment, they have completed the onboarding process. Although the percentage of merchants who receive payments partly depends on the merchants' ability to attract end-users, this research assumes that another part of the reason is a non-optimal customer onboarding process offered by Mollie. This would mean that although merchants are able to attract end-users, they are not always able to implement the Mollie payment system into their website because the process is not well defined or too complicated. These merchants thus never complete the onboarding process. The recommendations of the Fogg Behaviour Model (FBM) are used to form hypotheses on how the merchants' behaviour can be influenced so that they successfully receive payments. This could be achieved by providing the right trigger at the right time, for instance by supplying merchants with instructions to integrate the Mollie payment system into their website, but only showing these instructions in the right context.


1.3 Thesis structure

The outline of the thesis is as follows. Section 2 describes the relevant concepts for this thesis. Based on that literature the main research question, the sub-questions, and related hypotheses are formulated in section 3. Each hypothesis is tested using an experiment, which runs on Mollie's company website. The experiments' methodologies are described in section 4, after which their results are evaluated in section 5. Finally, the thesis is discussed in section 6 and concluded in section 7.

2. LITERATURE OVERVIEW

2.1 Customer journey

Modern companies are increasingly using a customer-focused approach as a strategy for success. For these companies it is important to fully understand how customers experience the company's product or service. Most customers of SaaS companies experience the entire journey solely on the company's website. This means that having an optimal company website is becoming more and more important. Multiple studies have shown that customers form a positive or negative opinion about a company's website in a short amount of time. In just half a second [29], or sometimes even 50 milliseconds [18], people form a consistent opinion about a website's visual appeal. This could influence whether a customer prefers one company's website over another. Thus, delivering the best possible customer experience is crucial for competition [8].

In marketing contexts the customer journey is a valuable concept. It is described as the beginning-to-end process that customers experience in receiving the product or service they need [9]. Systematically mapping the customer journey of a company's product explains how customers interact with the product, how they perceive the company at each moment of interaction, and how customers want their experience to be. Although a good customer journey has a considerable influence on end sales [22, 7], a survey among 700 digital marketing companies [7] shows that the majority of them do not perform customer journey analyses.

The customer experience can be improved by streamlining and simplifying the customer journey. An example of a company streamlining the customer journey is a telecommunications company that implemented faster mobile-phone sign-ups, which resulted in a 20% increase in customer satisfaction and a 30% cost reduction for the company [9]. If a market contains many similar services, offering a superior customer experience plays an important role in achieving company growth [25]. In such markets the delivered customer experience can be one of the most noticeable aspects, which makes it critical in standing out. The customer experience refers to the value the customer receives from the product or service [17]. By analysing data on each customer's behaviour, issues, and desires, insight concerning customers can be created. This insight can be used to improve their experience [25], for example by making it more personalised. Gaining insight is important not only for the company but also for the customer. Insight can be defined as 'a sudden comprehension that can result in a new interpretation of a situation

and that can point to the solution of a problem' [26]. This is sometimes described as a "sudden breakthrough", "Wow! moment", or "Aha! moment" [14]. Solving problems with insight, instead of analytically, requires different processes in the brain. There is noticeable brain activity immediately before a person experiences the "Aha! moment". With regard to problem solving, insight enables the user to suddenly become aware of the solution, while with analytic problem solving this awareness is created incrementally [16]. Creating an "Aha! moment" within the customer experience is crucial for customers to feel they gained value from using the company's product. Once a user experiences an "Aha! moment", the likelihood that the user will become a long-term user increases significantly [6].

2.2 Customer onboarding

Customer onboarding is the sum of methods and elements that helps a new customer become familiar with the product and achieve value from it (e.g. by successfully using the product) [23]. The customer onboarding process is one of the components of the customer journey. An onboarded customer may be seen as one who experienced the "Aha! moment", gained value by using the product, and is able to use the core functionality of the product. For example, Dropbox has defined its "Aha! moment" as the moment the user uploads a file to a folder. Facebook has determined that its "Aha! moment" is when the user gains 10 friends over a 7-day time span. When individuals achieve that goal, it is very likely that they remain active Facebook users [6].

After the initial sign-up, the onboarding process is the second process in the customer journey. Once customers sign up for a company's product or service, it is not guaranteed that they will indeed use the product, and a significant number of customers get stuck during this process [5]. Sometimes customers discover that the product does not fit their preliminary expectations, or they have problems understanding how to use the product.

Typically used in a business environment, the term "onboarding" is also known as "organisational socialisation", describing the process whereby new employees are introduced to the norms, procedures and culture of the organisation [19]. More recently, onboarding has been described more generally as the formalized process within an organisation that is designed to help facilitate newcomer success [4]. Because the term "newcomer" also applies to new customers [30], onboarding can also be described as the formalized process within an organisation that facilitates new customers' success with the company's product. In this thesis the latter definition is used.

2.3 Fogg Behaviour Model

Experimental psychologist and researcher Fogg [12], director of the Persuasive Technology Lab at Stanford University, introduced the FBM to explain the drivers of human behaviour change. The model offers an explanation of which actions people can take to achieve a desired behaviour, and why some actions in the end do not result in the anticipated behaviour change. According to the model, a person must have (1)


sufficient motivation, (2) sufficient ability, and (3) must be effectively triggered. All three must happen at the same moment for a target behaviour to occur [12].

Figure 2.1. A visualisation of the FBM. A target behaviour only occurs when sufficient motivation, sufficient ability and an effective trigger happen at the same moment in time.

2.3.1 Motivation and ability

The FBM suggests that motivation and ability can trade off against each other. A person with high motivation but low ability is likely to work hard with their limited ability to still perform the task. The reverse is also true: a person with low motivation but a high level of ability may still perform the task when it is very simple. Generally, people have at least a moderate level of motivation and ability, and both can be influenced.
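To make this trade-off concrete, the following minimal Python sketch (our own illustration, not part of the FBM itself) treats motivation and ability as values on a 0-1 scale and assumes a multiplicative combination and an arbitrary activation threshold:

# Hypothetical illustration of the FBM condition: a behaviour occurs only when
# motivation and ability jointly exceed an (assumed) activation threshold AND an
# effective trigger is present at that moment. The threshold value and the
# multiplicative combination are illustrative assumptions, not part of Fogg's model.
ACTIVATION_THRESHOLD = 0.5  # assumed value on a 0-1 scale

def behaviour_occurs(motivation: float, ability: float, triggered: bool) -> bool:
    """Return True if the target behaviour is expected to occur."""
    return triggered and (motivation * ability) >= ACTIVATION_THRESHOLD

# A highly motivated merchant with limited ability can still cross the threshold,
# and vice versa, illustrating the trade-off described above.
print(behaviour_occurs(motivation=0.9, ability=0.6, triggered=True))   # True
print(behaviour_occurs(motivation=0.9, ability=0.6, triggered=False))  # False: no trigger
print(behaviour_occurs(motivation=0.3, ability=0.9, triggered=True))   # False: below threshold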

2.3.2 Motivation types

For a target behaviour to occur, the user must cross the behaviour activation threshold. If a user has high ability but low motivation, the motivation must be increased to make it more likely that the target behaviour occurs. There are three types of motivators for the purpose of persuasive design, and each type has two sides. The first type of motivation is pleasure / pain. Because these are basic drives, they act almost immediately, without thinking or anticipating. Although this type of motivation is powerful, it may not be a practical approach to apply within a design. The second type of motivation is social acceptance / rejection. People can be motivated to win social acceptance, or to avoid being socially rejected. Social networking websites mainly focus on this type of motivator. The third type of motivation is hope / fear, which is characterised by the prospect of an outcome. This could be the hope for a positive outcome or the fear of a negative outcome. Hope is likely the most ethical and empowering motivator within the FBM [12].

The Fogg Behaviour Model assumes that the motivation to perform a certain behaviour is also determined by the probability and desirability of the behavioural outcomes. If a person expects that performing a certain behaviour will not result in a desired outcome, it is unlikely that the person will be motivated to perform the behaviour. Social psychologist Strack [27] describes this concept as behavioural decision. If researchers learn what type of outcomes people anticipate, they can predict their intentions and how they will behave. Intentions are based on a person's expectations about the probability and desirability of behavioural outcomes. This means that if a person's expectations about a behavioural outcome are positive, intention and thus motivation are likely increased as well. This theory is in line with Fishbein & Ajzen's theory of reasoned action [10, 3] – one of the classic models of persuasion – and Ajzen's theory of planned behaviour [1, 2, 3], an elaboration on the classic model. The theory of reasoned action states that the best predictors of performing a certain behaviour are the attitude toward the behaviour and a person's subjective norms. A positive attitude towards a certain behaviour likely results in the accomplishment of that behaviour. Subjective norms refer to beliefs about how people in one's social environment expect one to perform. In the theory of planned behaviour, a person's intention and the subjective norms are again the key predictors of behaviour, but the additional factor of perceived behavioural control is added to predict both behavioural intentions and behaviour itself. Behavioural control refers to the extent to which people believe that they are capable of performing a certain behaviour. If the perceived behavioural control is high, this results in an increase in motivation, which makes it more likely that the behaviour will be performed.

2.3.3 Ability elements

Increasing ability is one way to bring a person across the behaviour activation threshold, thus accomplishing behaviour change. Because many people are reluctant to learn new things and to be trained to perform certain tasks, increasing their ability builds extensively on the strength of simplicity. It is therefore more effective to make a certain task simpler – so performing the task is more intuitive – than to teach someone how to perform the same non-simplified task. Simplicity can be seen as a composition of a person's scarcest resources at the moment a behaviour is triggered. The FBM identifies six elements that determine simplicity: time, money, physical effort, mental effort, social deviance (going against the norm), and non-routine. These elements are interconnected. If a person lacks the adequate level of only one element, the behaviour will not occur. For example, if a person has all the required elements except time, and the target behaviour requires time, the person will not perform the target behaviour. In general, achieving a target behaviour is more successful when increasing ability (making a behaviour simpler to do) than when increasing motivation [12].

2.3.4 Triggers

In addition to motivation and ability, relevant triggers are an important part of the FBM. This part is often overlooked in persuasion attempts. A trigger is something that indicates to a person to immediately perform a specific behaviour. Without a relevant trigger, a target behaviour will not occur, despite high levels of motivation and ability. A trigger is successful when one notices it, the trigger is associated with the target


behaviour, and the trigger is only present when one's motivation and ability accompany the trigger. When one's motivation for a target behaviour is low, a trigger can be experienced as distracting or annoying. Similarly, when a trigger accompanies high motivation but low ability, frustration is likely. In many persuasion attempts the target behaviour does not occur because the trigger's timing is not right.

The form in which triggers are presented, or the medium that is used to present them, is not important as long as a person associates the trigger with the target behaviour and the trigger is presented when the user has the ability and motivation to perform the task. The FBM describes three types of triggers that achieve their goal in different ways. Sparks are triggers specifically for situations with high ability and low motivation, and focus on triggering the subcomponents of motivation (e.g. motivational messages). Facilitators are triggers for situations with high motivation and low ability. They focus on the subcomponents of ability and make the target behaviour easier (e.g. by explaining how one can achieve a certain outcome). Finally, signals are triggers specifically for situations with high ability and high motivation. They do not focus on increasing motivation or simplifying an action, but instead remind one to perform a target behaviour. When a person already has high ability and high motivation this type of trigger works best, because the other two types may be experienced as annoying or out of proportion [12].
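As a simple illustration of how the three trigger types map onto motivation and ability, the sketch below selects a trigger type from estimated levels; the 0.5 cut-offs are illustrative assumptions, not values prescribed by Fogg:

# Hypothetical mapping from estimated motivation/ability (0-1 scale) to the FBM
# trigger types described above. The 0.5 cut-off is an illustrative assumption.
def choose_trigger(motivation: float, ability: float) -> str:
    high_motivation = motivation >= 0.5
    high_ability = ability >= 0.5
    if high_motivation and high_ability:
        return "signal"       # just remind the user to act
    if high_ability and not high_motivation:
        return "spark"        # boost motivation
    if high_motivation and not high_ability:
        return "facilitator"  # make the behaviour easier
    return "no trigger"       # neither is sufficient; triggering would likely annoy

print(choose_trigger(motivation=0.8, ability=0.2))  # facilitator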

3. RESEARCH PROBLEM

Currently less than one third of Mollie's merchants actually receive payments made by end-users. As described earlier, the more merchants receive payments, the higher the profit for Mollie, as Mollie makes a profit on each received payment in the form of transaction costs. Therefore, it is relevant for Mollie to know how the percentage of merchants who receive payments can be increased. Implementing improvements in Mollie's customer onboarding process based on the recommendations of the Fogg Behaviour Model (FBM) should increase the number of merchants receiving payments. This assumption leads to the following research question:

To what extent can an implementation of the FBM's recommendations within the customer onboarding process increase the number of merchants receiving payments at Mollie B.V.?

The Fogg Behaviour Model [12] suggests that motivation, ability and triggers must occur simultaneously in a sufficient or effective amount for a target behaviour to occur. By improving the triggers and abilities, the target behaviour of merchants receiving payments may be triggered more often, resulting in more merchants receiving payments. To test these assumptions, we mapped multiple bottlenecks in Mollie's onboarding process, and analysed whether these bottlenecks had the potential to be improved when the recommendations of the Fogg Behaviour Model were implemented. This analysis resulted in the following sub-questions:

• RQ1: To what extent does eliminating the approval delay after sign up increase the number of merchants receiving payments?

• RQ2: To what extent does supplying merchants with payment integration instructions immediately after sign up increase the number of merchants receiving payments?

• RQ3: To what extent does sending merchants an email with the encouragement to accept another payment method increase the number of merchants receiving payments from that encouraged payment method?

3.1 Hypotheses

• H1: Eliminating the approval delay after sign up increases the number of merchants receiving payments.

– Outcome: The time it took merchants to receive their first payment was reduced by 0.518 days, and 2.2% more merchants received payments within three weeks after signing up. Both the decrease in days and the increase in merchants receiving payments within three weeks after signing up, compared to the control group, are statistically significant; however, the effect size was negligible.

• H2: Supplying merchants with payment integration instructions immediately after sign up increases the number of merchants receiving payments.

– Outcome: The experiment did not result in a significant increase in the number of merchants receiving payments, nor did it represent a meaningful effect size.

• H3: Sending merchants an email with the encouragement to accept another payment method increases the number of merchants receiving payments from that encouraged payment method.

– Outcome: The experiment did result in a significant increase in the number of merchants receiving payments from the encouraged payment method; it represented a small effect.

A detailed explanation of the methodology is presented in section 4. The results of the hypothesis tests are presented in section 5.

4. METHODOLOGY

As described in the literature, controlled experiments on the web are usually performed using an A/B test [28, 15, 13]. These tests are used to compare two variants of an experiment to see which variant performs statistically better against a predetermined goal [13]. The two variants, a control group (A) and a treatment group (B), are randomly displayed to users. In a simple A/B test every variant has a single factor that differentiates it from the other variants [15].

A disadvantage of A/B testing is that a large amount of data is needed to test for a significant difference. These tests generally also have a small impact because only related variants of one concept are tested. One might test different variants of some currently non-optimal behaviour, while a better approach would be to solve the problem on a higher level, by completely reinventing the behaviour.

Due to technical limitations it was not possible to integrate an A/B testing tool into the Mollie website. As an alternative


we performed time split-tests (also known as 'before-and-after tests'). Although time split-tests are subject to temporal influences (e.g. monthly changes in customers and transactions) and are therefore considered unreliable [15], Mollie's temporal influences are limited. Although Mollie does experience monthly changes in the absolute numbers of customers and transactions, the percentage of merchants receiving payments generally stays the same from month to month. As this measurement was specifically used in all results throughout this research, performing time split-tests in this context can be seen as reliable. Additional advantages of time split-tests are that they make it possible to conduct experiments on a higher level than with A/B tests, and that they require less data to test for a significant difference.

Several authors [21, 15] recommend testing only one factor at a time. At Mollie we achieved this by conducting simultaneous experiments that had no influence on each other, leaving all other factors unaltered. We also took the day-of-week effect into account. This effect is encountered when running an experiment on different days (e.g. during weekends), and it has an influence on the outcome. We ran each experiment for multiple weeks, so the day-of-week effect is negligible.

Based on the FBM and the data gathered from the Mollie database, multiple hypotheses were formulated. Each hypothesis was tested by setting up a small experiment, a so-called Minimal Viable Product (MVP). An MVP 'is that version of the product that enables a full turn of the build-measure-learn loop with a minimum amount of effort' [24]. In this thesis an MVP generally consisted of a small but fundamental change to Mollie's website. If this change had impact, Mollie could implement a more elaborate solution at a later stage for optimal impact. Later in this section each hypothesis is explained in more detail and tested using an MVP.

The results were collected by running queries on Mollie's extensive MySQL database, which includes 191 database tables. For security reasons a detailed overview of the database cannot be included in this thesis. Queries that combined multiple database tables were performed to retrieve the desired data. An example output of a database query can be seen in Figure 4.1. The outputs of the different queries were used to perform analyses on the different datasets.
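Because the schema is confidential, the exact queries cannot be shown, but the sketch below illustrates the general approach of joining merchant and payment data and loading the result into Python for analysis; the connection string, table names and column names are hypothetical placeholders, not Mollie's actual schema:

# Sketch of how query results could be loaded for analysis. The table and column
# names (merchants, payments, signup_date, paid_at) and the connection string are
# hypothetical placeholders; the real schema is commercially sensitive.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/example_db")  # placeholder DSN

query = """
    SELECT m.id           AS merchant_id,
           m.signup_date  AS signup_date,
           MIN(p.paid_at) AS first_payment_date
    FROM merchants m
    LEFT JOIN payments p ON p.merchant_id = m.id
    GROUP BY m.id, m.signup_date
"""

merchants = pd.read_sql(query, engine, parse_dates=["signup_date", "first_payment_date"])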

4.1 Description of the data

The data gathered in this thesis consists of information about Mollie's merchants. The merchants' sign-up dates and the dates they received their first payment were collected. Because time split-tests were used for most of the experiments, two groups were involved in these experiments: a before group, which was not influenced by the experiment, and an after group, which was influenced by the experiment. In each experiment the groups were created based on the merchants' sign-up dates. For example, one group could consist of merchants who signed up 3-7 weeks before the experiment was conducted, and the second group of merchants who signed up 0-4 weeks after the experiment was conducted. Because the data on both groups was gathered at the end of the experiment, the before group had extra time to receive payments. To be able to make a fair comparison, we only looked at a specific period after sign-up to determine whether a merchant received payments. We first calculated that the median period for all merchants to receive payments after sign-up was 13 days (M = 16.939, SD = 10.193). Based on this calculation, and taking into account a short but accurate experiment duration, we determined that for each experiment a merchant counted as having received payments if this occurred within three weeks (21 days) after sign-up. During the experiments, 67.5% of the merchants received their first payment within three weeks after sign-up. By only analysing merchants who received payments within three weeks (21 days) after sign-up, we still incorporated most of the merchants in each experiment, while being able to use a relatively short experiment duration. By assigning merchants who signed up 3-7 weeks before the experiment to the before group, these merchants had at least three weeks to receive payments before they could be influenced by the experiment. As only merchants who received payments within three weeks after sign-up were analysed, the before group was fully isolated from each experiment's effects, while the after group was fully affected by the experiment.
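A minimal sketch of how the group assignment and the 21-day conversion window could be computed is shown below; it assumes a DataFrame like the (hypothetical) one loaded in the previous sketch, and the go-live date and window boundaries are illustrative:

# Assign merchants to the before/after groups of a time split-test and flag whether
# they converted (received a first payment) within 21 days of sign-up.
# Column names, the go-live date, and the window boundaries are illustrative.
import pandas as pd

GO_LIVE = pd.Timestamp("2017-03-13 12:00")
WINDOW = pd.Timedelta(days=21)

def assign_group(signup):
    if GO_LIVE - pd.Timedelta(weeks=7) <= signup < GO_LIVE - pd.Timedelta(weeks=3):
        return "before"
    if GO_LIVE <= signup < GO_LIVE + pd.Timedelta(weeks=4):
        return "after"
    return None  # outside the experiment window

merchants["group"] = merchants["signup_date"].apply(assign_group)
merchants["converted_21d"] = (
    merchants["first_payment_date"].notna()
    & (merchants["first_payment_date"] - merchants["signup_date"] <= WINDOW)
)
conversion_rates = merchants.dropna(subset=["group"]).groupby("group")["converted_21d"].mean()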

4.2 Experiment 1: Eliminating approval delay after sign up

Until recently, when merchants signed up, Mollie employees manually approved their websites before these merchants were able to start receiving payments. This resulted in an approval delay after sign-up. In light of the FBM, a merchant signing up for a company's product is a sign that the merchant is motivated to use the product. When Mollie manually approved merchants' websites, the process could take up to two business days. This way a merchant had no ability to start accepting payments directly. According to the FBM, having the ability is crucial for a target behaviour to occur; therefore, this may have resulted in some merchants not receiving payments.

After a merchant's account was manually activated and the merchant notified, another factor affected the merchant's behaviour; in this experiment we assumed that the activation delay was disadvantageous because the waiting time may have removed a merchant's motivation to perform the target behaviour once the website was approved. This experiment observed the impact on the number of merchants receiving payments of eliminating the activation delay by automatically approving merchants' websites. The hypothesis was that the merchant's motivation to perform the target behaviour would remain high because they could immediately start receiving payments after signing up for the product. This led to the following hypothesis:

H1 Eliminating the approval delay after sign up increases the number of merchants receiving payments.

For security reasons, Mollie could not remove the website approval step, because merchants may not use Mollie's payment methods to offer illegitimate or obscure products and services. To eliminate the approval delay, we chose to build a website crawler. A website crawler automatically retrieves the merchant's website and scans it based on an extensive list of words that indicate possible risks or prohibited businesses.


Figure 4.1. An example of a data export at Mollie. A data export typically involved a query that combined multiple database tables.

Mollie's developers made it possible for the website profiles of merchants located in the Netherlands and Belgium to be approved directly when they do not contain any of the predefined prohibited words. When a website does contain prohibited words, it is manually checked by a Mollie employee before it is approved or rejected. This applies to only a small fraction of websites.
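Mollie's actual crawler is internal and more elaborate, but the core idea – fetch the merchant's website, check its text against a list of prohibited terms, and approve automatically only when nothing matches – can be sketched as follows (the word list and the internal approval call are illustrative assumptions):

# Minimal sketch of a prohibited-word check on a merchant's website. The actual
# crawler, word list, and approval workflow at Mollie are more elaborate; this only
# illustrates the automatic-approval decision.
import re
import urllib.request

PROHIBITED_WORDS = {"casino", "weapons", "counterfeit"}  # illustrative examples only

def website_needs_manual_review(url: str, timeout: int = 10) -> bool:
    """Return True if the page contains a prohibited word (manual check needed)."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        html = response.read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
    return any(word in text for word in PROHIBITED_WORDS)

# Example usage: auto-approve the website profile if no prohibited words are found.
# if not website_needs_manual_review("https://example-merchant.nl"):
#     approve_website_profile(merchant_id)  # hypothetical internal call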

The website crawler went live on March 13, 2017 around 12:00 PM. A comparison was made later between merchants who signed up 3-7 weeks before the website crawler went live and merchants who signed up 0-4 weeks afterwards. The first group consisted of merchants who signed up between February 12 and March 13, 11:59 AM. The second group consisted of merchants who signed up between March 13, 12:00 PM and April 12. There was a two-month waiting time after the website crawler went live before collecting the data, so merchants in both groups had at least three weeks to receive payments.

We retrieved the sign-up date and the date the first payment was received for each group of merchants by executing an SQL query. As described in the main methodology section, the before group had more time to receive payments than the after group. To be able to make a fair comparison, we only looked at merchants who received payments within three weeks (21 days) after sign-up to determine whether a merchant received payments.

4.3 Experiment 2: Supplying payment integration instructions after sign up

The Mollie Dashboard is the first page that is displayed to merchants when they finish signing up at Mollie. On this page merchants are able to view their received transactions and a short technical explanation of how to start accepting payments. It is likely difficult for a non-technical merchant to understand how to start accepting payments when viewing this page. The page displays an "API key", but there is no explanation of how this can be used to implement the payment solution into the merchant's website. In this situation the mental effort to start accepting payments is high for non-technical merchants, as they have to devote a large amount of cognitive resources to understanding how to start accepting payments. Mental effort is one of the ability elements of the FBM.

In this experiment we assumed that if merchants were able to view an explanation of how the payment solution could be integrated into their websites, this could increase their perceived behavioural control, because the explanation increases their belief that they can succeed at the integration. As described earlier, according to Ajzen's theory of planned behaviour [1, 2, 3] this results in increased motivation, which makes it more likely that they will receive payments. Supplying merchants with payment integration instructions immediately after sign-up provided a relevant trigger to perform the behaviour of receiving payments and also increased merchants' ability to perform this behaviour. This assumption led to the following hypothesis:

H2 Supplying merchants with payment integration instructions immediately after sign up increases the number of merchants receiving payments.

To test the hypothesis, it first had to be determined which information should be supplied, and where it should be displayed. Before the experiment was conducted, there was no direct explanation available to the merchant of how the payment solution could be implemented into the merchant's website. The page that was visible immediately after sign-up included a link to the Developer API reference. However, this link was only visible when hovering over a small menu. Moreover, the Developer API reference is irrelevant for merchants who use a webshop or webshop module, because they can simply install a plugin to start accepting payments; they have no need for technical documentation on how to create their own application to accept payments. However, the problem remained of how merchants with a webshop or webshop module could know which plugin to download. Mollie has a Help Centre on its website, which contains online articles with explanations about a variety of subjects related to Mollie. One of the pages is the "Integration" category article, which contains all questions related to integration instructions for webshops and webshop modules. We determined that this is the most relevant information page for merchants with a webshop or webshop module because it contains detailed steps on how to integrate the Mollie payment system into a webshop. Although most merchants now use this type of integration, before this experiment only 5% of the merchants who signed up visited this "Integration" category article. This was likely because merchants did not know such an article existed. If a merchant is a developer, the


Figure 4.2. Mollie’s Dashboard before the experiment (left) and during the experiment (right).

most relevant information page is the Developer API reference because it contains detailed information on how to build the integration oneself. The most relevant placement of the link was immediately after sign-up because that is the moment a merchant would want to receive instructions on how to implement the payment system into their website. The page that is displayed immediately after sign-up is the Dashboard. We included links to both pages on the Dashboard because both the link to the "Integration" category of the Help Centre and the link to the Developer API reference were relevant, each to a different group of merchants (Figure 4.2). In both links we included a tracker (UTM tags¹) to be able to measure the click-through rate and impact of the two links.
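A small sketch of how such UTM-tagged links can be constructed so that clicks from the Dashboard can be attributed is shown below; the destination URL and tag values are illustrative, not Mollie's actual tags:

# Append UTM parameters to a destination URL so that clicks originating from the
# Dashboard links can be measured. The URL and campaign/source/medium values are
# illustrative placeholders.
from urllib.parse import urlencode

def with_utm_tags(url: str, source: str, medium: str, campaign: str) -> str:
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}{params}"

print(with_utm_tags("https://help.example.com/integration",
                    source="dashboard", medium="link", campaign="onboarding"))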

The integration instructions link went live on the Mollie website on May 15, 2017. After a month we looked at the percentage of signed-up merchants who received payments within the first three weeks. We compared a group of merchants who signed up 3-7 weeks before this experiment went live (this way the signed-up merchants had at least 3 weeks to receive payments before the experiment went live) with another group of merchants who signed up 0-4 weeks after the experiment went live.

¹ https://www.launchdigitalmarketing.com/what-are-utm-codes/

4.4 Experiment 3: Credit card activation email

Mollie offers a variety of payment methods on its website, including payment methods that can be used internationally, like PayPal and credit card. Many merchants who have received PayPal transactions did not have the credit card payment method activated. Having PayPal activated is likely a sign of selling internationally, so it would make sense to also activate credit card. However, only 48% of the merchants who received PayPal payments also received credit card payments (Table 4.3). It remained unclear whether merchants did not know they could activate credit card, or whether they were not aware of the benefits of having credit card next to PayPal. Receiving payments through credit card rather than PayPal is cheaper for the merchant, and is also more profitable for Mollie. The following hypothesis was formed:

H3 Sending merchants an email with the encouragement to accept another payment method increases the number of merchants receiving payments from that encouraged payment method.

To test this hypothesis, an email was sent to merchants who had the PayPal payment method activated, but did not have credit card activated. Merchants who had been denied credit card activation by Mollie in the past were excluded from the experiment. The email explained to the merchants that activating the credit card payment method would be more profitable for them than the PayPal payment method. In light of the FBM, the email uses hope for an increased profit as a


motivation type to activate the credit card payment method. After two months the group of merchants was analysed for an increase in received credit card payments.

Table 4.3. The number of merchants who received PayPal payments, and the percentage of these merchants who also received credit card payments before the experiment was conducted.

Group                                    | N    | N also receiving credit card
Merchants receiving PayPal transactions | 6026 | 2912 (48.3%)

5. RESULTS

This section is divided into subsections, as different datasets were used for each experiment. Each subsection contains a description of the dataset and the results of the experiment. Because our a priori predictions were directional, one-tailed tests were used, with a significance level of α = .05, to determine whether the null hypothesis (no difference or a decrease in merchants receiving payments in the exposed group) must be rejected. The statistical analyses were performed using the programming language Python, in combination with SciPy's statistical functions module (scipy.stats). Example code used within the analyses and the corresponding output is shown in Code fragments A.1 and A.2, and Figure A.3.
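As a minimal illustration of this pipeline, the sketch below reproduces the Chi-square test, phi coefficient and odds ratio for the experiment 1 counts reported in Table 5.1; the variable names are our own, not from the thesis code:

# Chi-square test, phi coefficient and odds ratio for a 2x2 conversion table, using
# the counts reported in Table 5.1 (control vs. exposed group, no payment vs.
# payment within 21 days of sign-up).
import math
from scipy import stats

#            no payment  payment
observed = [[5479,        927],    # control group
            [5586,        1121]]   # exposed group

chi2, p_value, dof, expected = stats.chi2_contingency(observed)  # Yates correction by default
n = sum(sum(row) for row in observed)
phi = math.sqrt(chi2 / n)                   # effect size for a 2x2 table
odds_ratio = (1121 / 5586) / (927 / 5479)   # odds of converting: exposed vs. control

print(f"chi2({dof}) = {chi2:.3f}, p = {p_value:.4f}, phi = {phi:.3f}, OR = {odds_ratio:.2f}")
# chi2(1) = 12.34, p < .001, phi = .031, OR = 1.19 (cf. section 5.1)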

5.1 Experiment 1: Eliminating approval delay after sign up

This experiment answers the first sub-question: To what extent does eliminating the approval delay after sign up increase the number of merchants receiving payments? As seen in Table 5.1, the percentage of merchants receiving payments within three weeks after sign-up increased by 2.2% in the exposed group relative to the control group. A Chi-square test shows that the difference between the two groups (the control group and the exposed group) is statistically significant (χ²(1) = 12.339, p < .001, φ = .031). The odds ratio is used to determine the direction and the degree of the effect. It shows that merchants are 1.19x more likely to receive payments when the approval delay after sign-up is eliminated; however, the effect size is negligible (φ < .1). As the probability of the observed distributions occurring if the null hypothesis is true (p < .001) is smaller than .05, and the direction of the effect is positive, the null hypothesis is rejected. The alternative hypothesis (H1: Eliminating the approval delay after sign up increases the number of merchants receiving payments) is accepted.

As seen in Table 5.1, not only did more merchants receive payments in the exposed group, the merchants who received payments also received their first payment on average 0.518 days earlier (M = 8.234, SD = 6.417) than the merchants who received payments in the control group (M = 8.752, SD = 6.279). To test whether there is a significant decrease in the number of days between sign-up and the first payment between the control group and the exposed group, a two-sample one-tailed t-test was used. The decrease in days between the two groups is statistically significant, t(1754) = 1.792, p = .037; however, it represented only a very small effect, d = 0.08.
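The corresponding t-test and effect size can be computed as in the sketch below; the per-merchant delays shown are placeholder data, not the experiment's actual observations, and the alternative argument requires SciPy 1.6 or later:

# One-tailed two-sample t-test on the days between sign-up and first payment, plus
# Cohen's d as an effect size. days_control and days_exposed stand for the
# per-merchant delays of the two groups (illustrative placeholders here).
import numpy as np
from scipy import stats

days_control = np.array([9, 12, 7, 15, 8, 10, 6, 11])   # placeholder data
days_exposed = np.array([8, 10, 6, 13, 7, 9, 5, 10])    # placeholder data

# H1 is directional (exposed merchants convert faster), so test one-tailed.
t_stat, p_one_sided = stats.ttest_ind(days_control, days_exposed, alternative="greater")

# Cohen's d with a pooled standard deviation.
n1, n2 = len(days_control), len(days_exposed)
pooled_sd = np.sqrt(((n1 - 1) * days_control.var(ddof=1) + (n2 - 1) * days_exposed.var(ddof=1))
                    / (n1 + n2 - 2))
cohens_d = (days_control.mean() - days_exposed.mean()) / pooled_sd

print(f"t = {t_stat:.3f}, one-tailed p = {p_one_sided:.3f}, d = {cohens_d:.2f}")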

Table 5.1. The number of merchants (N) that received their first payment within three weeks after sign up, before and after the experiment.

Group                                                                          | No payments: N (%) | Payments: N (%)
Control group (signed up 3-7 weeks before the approval delay was eliminated)  | 5479 (85.5%)       | 927 (14.5%)
Exposed group (signed up 0-4 weeks after the approval delay was eliminated)   | 5586 (83.3%)       | 1121 (16.7%)

Table 5.2. The mean and median delay after sign up before a merchant received its first payment, only taking into account merchants that received payments within 21 days after sign-up.

Group                                                                          | N    | Mean delay | Median delay
Control group (signed up 3-7 weeks before the approval delay was eliminated)  | 927  | 8.752 days | 8 days
Exposed group (signed up 0-4 weeks after the approval delay was eliminated)   | 1121 | 8.234 days | 7 days

Before the approval delay after sign-up was eliminated, it could take up to two business days before a Mollie employee approved a merchant's website. Eliminating this waiting period by automatically activating their website profile had a positive effect on the activation period and on the number of merchants receiving payments. Under the previous conditions, when a merchant displayed the motivation to implement the Mollie payment system into their website by filling in their website profile, they were removed from the activation flow because they had to wait for their website profile to be approved. This required the merchant's motivation to be reinitiated once the website profile was approved.

In the new situation, the immediate activation of a merchant's website profile enabled the merchant to continue directly with implementing the Mollie payment system into their website. In this way, the first incentive was enough to result in the implementation of Mollie into their website, because merchants were kept in the activation flow after they filled in their website profile. Activating the website crawler resulted in a faster activation time and an increase in merchants receiving payments. It also saved much time for Mollie's team, as they no longer needed to check every website manually.

5.2 Experiment 2: Supplying payment integration instructions after sign up

This experiment answers the second sub-question: To what extent does supplying merchants with payment integration instructions immediately after sign up increase the number of merchants receiving payments? As seen in Table 5.3, there was a 0.5% increase in merchants receiving payments within three weeks after sign-up between the control group and the exposed group. However, the difference between the two groups is not statistically significant (χ²(1) = 0.306, p = .580, φ = .007), and thus H2 (Supplying merchants with payment integration instructions immediately after sign up increases the number of merchants receiving payments) cannot be accepted.

Table 5.3. The number of merchants (N) that received their first payment within three weeks after sign up, before and after the experiment.

Group                                                              | No payments: N (%) | Payments: N (%)
Control group (signed up 3-4 weeks before the link was displayed) | 2822 (86.3%)       | 447 (13.7%)
Exposed group (signed up 0-2 weeks after the link was displayed)  | 2466 (85.8%)       | 408 (14.2%)

As seen in Table 5.4, on average it took merchants in the exposed group 0.058 days longer to receive payments (M = 8.277) than merchants in the control group (M = 8.219). However, the increase in days between the two groups is not statistically significant, t(755) = -0.129, p = .897; it also did not represent a meaningful effect, d = -0.009.

Table 5.4. The mean and median delay after sign up before a merchant received its first payment, only taking into account merchants that received payments within three weeks after sign-up.

Group                                                              | Mean payment delay | Median delay
Control group (signed up 3-4 weeks before the link was displayed) | 8.219 days         | 7 days
Exposed group (signed up 0-2 weeks after the link was displayed)  | 8.277 days         | 7 days

5.3 Experiment 3: Credit card activation email

This experiment answers the last sub-question: To what extent does sending merchants an email with the encouragement to accept another payment method increase the number of merchants receiving payments from that encouraged payment method? As seen in Table 5.5, there was a 7.1% increase in merchants receiving credit card payments after they received the email. A one-tailed McNemar's test with Yates's correction (a McNemar's test is a Chi-square test for paired nominal data) shows that the increase between the two groups (the control group and the exposed group) is statistically significant (χ²(1) = 8.094, p = .002, φ = .051), and thus H3 is accepted. The odds ratio shows that merchants who received the email are 1.08x more likely to receive payments than if they had not received the email; however, the effect size is negligible (φ < .1).
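A sketch of a continuity-corrected (Yates) McNemar's test is shown below; the discordant counts b and c are placeholder values for illustration, not the counts from this experiment:

# McNemar's test with a continuity (Yates) correction for a paired before/after
# comparison. The discordant counts b and c are illustrative placeholders.
from scipy import stats

b = 40   # cases without the behaviour before, with it after (placeholder)
c = 18   # cases with the behaviour before, without it after (placeholder)

chi2 = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected statistic, df = 1
p_two_sided = stats.chi2.sf(chi2, df=1)
p_one_sided = p_two_sided / 2            # directional hypothesis, as in this thesis

print(f"chi2(1) = {chi2:.3f}, one-tailed p = {p_one_sided:.3f}")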

A significant number of merchants who were emailed had created their account multiple years ago. In light of the FBM it is likely that sending them an email would not provide a successful trigger because their motivation to receive payments has disappeared. We hypothesised that sending the email only to merchants who signed up recently would have a larger effect size. To test this hypothesis, merchants were selected who had signed up recently (in October or November 2016) before receiving the email. As seen in Table 5.6, within this group there was a 14.3% increase in merchants receiving credit card payments after they received the email. However, the increase between the control group and the exposed group is not statistically significant (χ²(1) = 0.925, p = .178, φ = .101) due to the small sample size. The odds ratio shows that recently signed-up merchants who received the email are 1.17x more likely to receive payments than if they had not received the email; this represents a small effect (.1 ≤ φ < .3).

Table 5.5. The number of merchants (N) who had already received PayPal payments but no credit card (CC) payments before the email was sent (N = 3114), compared to the number of merchants who did and did not receive CC payments after the email was sent.

Group                                                  | No CC payments: N (%) | CC payments: N (%)
Before email (all merchants receiving PayPal payments) | 3114 (100%)           | 0 (0%)
After email (all merchants receiving PayPal payments)  | 2893 (92.9%)          | 221 (7.1%)

Table 5.6. The number of merchants (N) who signed up in the last two months, who had already received PayPal payments but no credit card (CC) payments before the email was sent (N = 91), compared to the number of merchants who did and did not receive CC payments after the email was sent.

Group                                                                 | No CC payments: N (%) | CC payments: N (%)
Before email (recently signed-up merchants receiving PayPal payments) | 91 (100%)             | 0 (0%)
After email (recently signed-up merchants receiving PayPal payments)  | 78 (85.7%)            | 13 (14.3%)

Although outside the experiment's scope, we analysed Mollie's additional profit resulting from the increase in merchants who received credit card payments after receiving the email. Currently Mollie's profit margin on credit card payments is much higher (1.8% of the payment value + €0.25 fixed fee) than on PayPal payments (€0.10 fixed fee). We found that within four months after the experiment, the 221 merchants who activated credit card payments received in total approximately² 36,000 credit card payments (€2,800,000) and 56,000 PayPal payments (€3,200,000). After subtracting Mollie's costs, this resulted in a profit of €35,000 within four months, compared to the situation in which all these payments would have been processed via PayPal. Mollie is still profiting from this experiment, as these merchants still receive (the more profitable) credit card payments.

² Approximate values are shown because the exact values are commercially sensitive information, as Mollie's exact profit margin could otherwise be calculated.
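The fee comparison behind this estimate can be illustrated with the approximate volumes reported above; the sketch computes only the gross fee difference and deliberately leaves out Mollie's own (confidential) processing costs, which is why it does not equal the €35,000 net figure:

# Gross fee comparison for the ~36,000 credit card payments (~EUR 2,800,000) that
# were processed after the email, versus the hypothetical case in which the same
# payments had gone through PayPal. Mollie's own processing costs are confidential
# and therefore not subtracted here.
cc_payments = 36_000
cc_volume_eur = 2_800_000

credit_card_fees = 0.018 * cc_volume_eur + 0.25 * cc_payments   # 1.8% + EUR 0.25 per payment
paypal_fees = 0.10 * cc_payments                                # EUR 0.10 fixed fee per payment

print(f"Credit card fees: EUR {credit_card_fees:,.0f}")          # ~EUR 59,400
print(f"PayPal fees:      EUR {paypal_fees:,.0f}")               # ~EUR 3,600
print(f"Gross difference: EUR {credit_card_fees - paypal_fees:,.0f}")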

6. DISCUSSION

The methods used in this research have some shortcomings. We used time split-tests (also known as "before-and-after tests") to test our hypotheses, but these tests are considered unreliable [15] because they are prone to temporal influences. Although temporal influences in Mollie's case are limited, because the percentage of merchants receiving payments generally stays the same from month to month, temporal influences can never be completely excluded with this type of method. For example, Mollie launched a new marketing website on April 18, 2017. Although the marketing website is different from the website that merchants see when they are signed in (and thus did not directly influence the experiments), the new website gained attention throughout different media channels. This could have led to new merchants who signed up out of curiosity alone, without planning to use the product.

A/B tests are more accurate than time split-tests. A/B tests exclude temporal influences, as the testing group is split into two groups that are tested at the same moment in time. One group would be exposed to the experiment, and the other group would serve as the control group that is not exposed to the experiment. Due to technical limitations it was not possible to integrate an A/B testing tool into Mollie's website, but with more resources from Mollie's development team these tools could certainly be implemented. This would have resulted in more accurate results than with time split-tests, and it would have been easier to determine the exact impact of each experiment.

Throughout the research different datasets were used, containing a different set of merchants in each experiment. This made it impossible to compare the impact of one experiment with that of the other experiments. A small group of merchants was included in multiple experiments, which could have resulted in merchants being influenced by multiple factors at the same time. In an optimal situation one specific set of merchants would be used throughout all experiments to ensure the validity and relative impact of each experiment. In practice, however, this would not be efficient.

In the second experiment, described in section 4.3, no significant increase in merchants receiving payments was found. A possible explanation could be that the implementation did not provide an optimal trigger to perform the behaviour of receiving payments. Merchants were supplied with a link to the implementation instructions instead of being shown the instructions directly. The integration instructions were intended to influence the merchant's perceived behavioural control of implementing the payment system into their website. However, the merchants had to be motivated to click on the link before they could be influenced by the integration instructions. A significant increase might have been found if the implementation instructions had been supplied directly on the page, instead of merchants having to click on a link first.

The last experiment, described in section 4.4, showed a significant increase in merchants receiving credit card payments after they received the activation email, but it represented a small effect. All merchants who received the activation email were analysed. We were not able to determine which merchants actually opened the email or clicked on its links. As described in the experiment's results, many merchants who received the email created their account years ago. It cannot be determined whether the merchants' email addresses still exist. For future work, using an email system that can identify the specific merchants who actually opened the email, an analysis could be performed that only takes into account merchants who opened the email, as only these merchants were actually exposed to the experiment. This could lead to an increased effect size in the number of merchants who received credit card payments.

In each experiment only merchants who received payments within 21 days after sign-up were analysed. This period was chosen because it incorporated the majority of the merchants, while allowing a relatively short experiment duration. Although the median period for all merchants to receive payments was 13 days, a high standard deviation was found (M = 16.939, SD = 10.193). For future work, a higher reliability of the data could be achieved by extending the 21-day period to the sum of the mean and the standard deviation, i.e. analysing merchants who received payments within 27 days after sign-up.

7. CONCLUSION

The purpose of this research was to answer the question: To what extent can an implementation of the Fogg Behaviour Model's recommendations within the customer onboarding process increase the number of merchants receiving payments at Mollie B.V.? To answer this question multiple experiments were conducted and analysed, with the following outcomes. Eliminating the approval delay after sign-up resulted in a statistically significant increase in the number of merchants receiving payments; however, the effect size was negligible. Supplying merchants with payment integration instructions immediately after sign-up did not result in a significant increase in the number of merchants receiving payments, nor did it represent a meaningful effect size. Sending merchants an email with the encouragement to accept another payment method did result in a significant increase in the number of merchants receiving payments from the encouraged payment method; it represented a small effect. The outcomes of the different experiments showed that it is indeed possible to cause a statistically significant influence on the number of merchants who received payments; the conducted experiments were reported as having a small effect. However, because we were not able to determine the exact merchants who were exposed to each experiment, the exposed group was too broad. If analytical tools were used to determine the exact merchants that must be assigned to the exposed group, the reported effect size could increase, being more in line with the actual effect size.


8. REFERENCES

1. Icek Ajzen. 1991. The theory of planned behavior. Organizational Behavior and Human Decision Processes 50, 2 (1991), 179–211.

2. Icek Ajzen and Martin Fishbein. 1980. Understanding Attitudes and Predicting Social Behaviour. (1980).

3. Icek Ajzen and Martin Fishbein. 2005. The influence of attitudes on behavior. The Handbook of Attitudes 173 (2005), 221.

4. Talya N Bauer. 2010. Onboarding new employees: Maximizing success. SHRM Foundation's Effective Practice Guideline Series (2010).

5. Alexander Benlian, Marios Koufaris, Thomas Hess, and others. 2010. The role of SaaS service quality for continued SaaS use: Empirical insights from SaaS using firms. In ICIS. 26.

6. Benjamin Brandall. 2016. The Complete Guide to Customer Success for SaaS Companies. Amazon Digital Services LLC.

7. Dave Chaffey and Mark Patron. 2012. From web analytics to digital marketing optimization: Increasing the commercial value of digital analytics. Journal of Direct, Data and Digital Marketing Practice 14, 1 (2012), 30–45.

8. Efthymios Constantinides. 2008. The empowered customer and the digital myopia. Business Strategy Series 9, 5 (2008), 215–223.

9. Driek Desmet, Shahar Markovitch, and Christopher Paquette. 2015. Speed and scale: Unlocking digital value in customer journeys. McKinsey & Company (2015).

10. Martin Fishbein and Icek Ajzen. 1977. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. (1977).

11. Brian J Fogg. 2002. Persuasive technology: using computers to change what we think and do. Ubiquity 2002, December (2002), 5.

12. Brian J Fogg. 2009. A behavior model for persuasive design. In Proceedings of the 4th International Conference on Persuasive Technology. ACM, 40.

13. Bruce Hanington and Bella Martin. 2012. Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions. Rockport Publishers.

14. Ben Jonson. 2005. Design ideation: the conceptual sketch in the digital age. Design Studies 26, 6 (2005), 613–624.

15. Ron Kohavi, Randal M Henne, and Dan Sommerfield. 2007. Practical guide to controlled experiments on the web: listen to your customers not to the hippo. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 959–967.

16. John Kounios and Mark Beeman. 2009. The Aha! moment: The cognitive neuroscience of insight. Current Directions in Psychological Science 18, 4 (2009), 210–216.

17. Diana LaSalle and Terry Britton. 2003. Priceless: Turning Ordinary Products into Extraordinary Experiences. Harvard Business Press.

18. Gitte Lindgaard, Gary Fernandes, Cathy Dudek, and Judith Brown. 2006. Attention web designers: You have 50 milliseconds to make a good first impression! Behaviour & Information Technology 25, 2 (2006), 115–126.

19. Meryl Reis Louis. 1980. Surprise and sense making: What newcomers experience in entering unfamiliar organizational settings. Administrative Science Quarterly (1980), 226–251.

20. Nathalie Nahai. 2012. Webs of Influence: The Psychology of Online Persuasion. Pearson UK.

21. Eric T Peterson. 2004. Web Analytics Demystified: A Marketer's Guide to Understanding How Your Web Site Affects Your Business. Ingram.

22. Alex Rawson, Ewan Duncan, and Conor Jones. 2013. The truth about customer experience. Harvard Business Review 91, 9 (2013), 90–98.

23. Jan Renz, Thomas Staubitz, Jaqueline Pollack, and Christoph Meinel. 2014. Improving the onboarding user experience in MOOCs. Proceedings EduLearn (2014).

24. Eric Ries. 2011. The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.

25. Jeffrey Spiess, Yves T'Joens, Raluca Dragnea, Peter Spencer, and Laurent Philippart. 2014. Using big data to improve customer experience and business performance. Bell Labs Technical Journal 18, 4 (2014), 3–17.

26. Robert J Sternberg and Janet E Davidson. 1995. The Nature of Insight. The MIT Press.

27. Fritz Strack, Roland Deutsch, and Regina Krieglmeyer. 2009. The two horses of behavior: Reflection and impulse. Oxford Handbook of Human Action (2009), 104–117.

28. Diane Tang, Ashish Agarwal, Deirdre O'Brien, and Mike Meyer. 2010. Overlapping experiment infrastructure: More, better, faster experimentation. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 17–26.

29. Noam Tractinsky, Avivit Cokhavi, Moti Kirschenbaum, and Tal Sharfi. 2006. Evaluating the consistency of immediate aesthetic perceptions of web pages. International Journal of Human-Computer Studies 64, 11 (2006), 1071–1083.

30. Connie Wanberg. 2012. The Oxford Handbook of Organizational Socialization. Oxford University Press.


APPENDIX

Code fragment A.1. The Python code used to import the necessary libraries (pandas, NumPy and SciPy) before the analytical tests could be performed. Below the import statements, the functions to perform a chi-square test and a t-test are included, as well as a function to calculate Cohen's d effect size.

# Import the libraries needed for the statistical analyses
import math

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind, ttest_rel


def chisquare_test(column1, column2):
    # Observed frequencies per group (e.g. 'No payments' / 'Has payments' counts)
    test_group1 = column1['amount'].values
    test_group2 = column2['amount'].values

    # 2x2 contingency table of observed frequencies
    obs = np.array([test_group1, test_group2])
    chi2, pwaarde, dof, expected = chi2_contingency(obs)
    oddsratio, pvalue_fisher = fisher_exact(obs)

    total = test_group1.sum() + test_group2.sum()

    print("Chi-square(", dof, ") =", chi2, ", p =", pwaarde)
    for name, column, group in (("Group 1", column1, test_group1),
                                ("Group 2", column2, test_group2)):
        print(name, "totals:", group.sum(),
              "(", column.index[0], ":", group[0],
              "(", round(group[0] / group.sum() * 100, 4), "%) -",
              column.index[1], ":", group[1],
              "(", round(group[1] / group.sum() * 100, 4), "%))")
    print("Totals:", total)

    # Phi effect size for a 2x2 table: sqrt(chi-square / N)
    print("Phi value:", math.sqrt(chi2 / total))

    if pwaarde < 0.01:
        print("The p value is smaller than 0.01, so there is a significant difference.")
    else:
        print("The p value is greater than 0.01, so there is no significant difference.")

    print("Odds ratio (Fisher's exact test):", oddsratio)


def cohen_d(x, y):
    # Cohen's d: difference of the means divided by the pooled standard deviation
    return (np.mean(x) - np.mean(y)) / math.sqrt(
        (np.std(x, ddof=1) ** 2 + np.std(y, ddof=1) ** 2) / 2.0)


def ttest(dataset1, dataset2, type, tail):
    # Independent or paired t-test, reported one- or two-tailed, with Cohen's d
    if type == 'independent':
        tvalue, pvalue = ttest_ind(dataset1, dataset2)
        degrees_of_freedom = len(dataset1) + len(dataset2) - 2
    elif type == 'paired':
        tvalue, pvalue = ttest_rel(dataset1, dataset2)
        degrees_of_freedom = len(dataset1) - 1

    # SciPy returns a two-tailed p value; halve it for a one-tailed test
    if tail == 'onetailed':
        pvalue = pvalue / 2

    print("Group 1 - mean:", np.mean(dataset1), "- std:", np.std(dataset1, ddof=1))
    print("Group 2 - mean:", np.mean(dataset2), "- std:", np.std(dataset2, ddof=1))
    print("t(", degrees_of_freedom, ") =", tvalue, ", p =", pvalue)
    print("Effect size (Cohen's d):", cohen_d(dataset1, dataset2))
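For illustration, the ttest helper above could be called as follows; group_a and group_b are made-up samples and do not correspond to any of the conducted experiments.

# Illustrative example only: two small made-up samples
group_a = [12, 15, 14, 10, 13, 16]
group_b = [9, 11, 10, 12, 8, 10]

# Independent, two-tailed t-test; prints the group statistics, t, p and Cohen's d
ttest(group_a, group_b, 'independent', 'twotailed')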


Code fragment A.2. An example code fragment used to test one of the hypotheses, after the import statements and analytical functions of Code fragment A.1 have been loaded.

# Website crawler analysis

# Create a list with the observed frequencies for the control group
control_group = [{'index': 'No payments', 'amount': 6406 - 927},
                 {'index': 'Has payments', 'amount': 927}]
# Create a dataframe from the list
df_control_group = pd.DataFrame(control_group).set_index('index').rename_axis(None)

# Create the same structure for the exposed group
exposed_group = [{'index': 'No payments', 'amount': 6707 - 1121},
                 {'index': 'Has payments', 'amount': 1121}]
df_exposed_group = pd.DataFrame(exposed_group).set_index('index').rename_axis(None)

# Perform the chi-square test on both groups
chisquare_test(df_control_group, df_exposed_group)

Figure A.3. The output of Code fragment A.2 which tests the hypothesis "Eliminating the approval delay after sign up increases the number of merchants receiving payments".
