
5.1 Discussion

5.1.1 Theoretical and managerial implications

We observed several studies in the literature that reported behavioral changes when a mobile (or web)-based self-reported measurement was used (e.g., Anderson et al., 2001; Tate et al., 2001; Tate et al., 2006; Wing et al., 2006; Duncan et al., 2014; Elbert et al., 2016). However, we did not find any substantial evidence that the reactive effect played a role in influencing the subsequent behavior. Notably, the studies that incorporated a mobile-based self-reported measurement included other elements in the intervention. For example, in their intervention, Anderson et al. (2001) also provided a guided system to help respondents change their behavior, such as personalized goal setting, feedback, and the provision of educational information. We believe that these elements had a more significant influence in modifying the subsequent behavior than the reactive effect of the measurement.

Our initial argument that a reactive effect may occur in a mobile-based self-reported measurement rested on increased self-monitoring. Self-monitoring is one of the self-regulatory techniques (Michie et al., 2009). According to Michie et al. (2009), prompting self-monitoring was significantly effective in modifying health-related behavior when combined with other self-regulatory techniques, such as prompting intention formation, prompting specific goal setting, prompting a review of behavioral goals, and providing feedback on performance. In our intervention, these other techniques were absent. Thus, the intervention's ineffectiveness was possibly due to the absence of other behavioral regulatory techniques.


While most behavioral modification interventions incorporate more than one of the self-regulatory techniques, our study contributes by examining the measurement reactive effect on its own. Accordingly, our study did not find any significant effect of measurement reactivity in a mobile-based self-reported measurement system.

The findings of our study also have several managerial implications. First, practitioners interested in collecting data through a mobile-based self-reported system need not worry about a reactive effect. However, this implication rests on the assumption that the system does not incorporate any other self-regulatory techniques, as described in Michie et al. (2009), that may influence subsequent behavior.

Secondly, our study also provided no evidence that the reactive effect on its own can be an effective tool for behavioral modification. Practitioners or policymakers interested in behavioral modification interventions may need to consider incorporating other techniques, as shown in other studies (e.g., Anderson et al., 2001; Tate et al., 2001; Tate et al., 2006; Wing et al., 2006; Duncan et al., 2014; Elbert et al., 2016). Moreover, our online panel data indicated that many respondents became inactive during a specific panel period, which is another reason why this intervention may not be a practical tool. If respondents are supposed to participate voluntarily in the data collection process, other elements that keep them engaged may be required.

An example of a practical application is the use of ‘mHealth’ applications to increase vegetable and fruit consumption (e.g., Mandracchia et al., 2019). These applications use a self-monitoring element, in addition to other behavioral modification elements such as personal dietary feedback, financial incentive rewards, and remote coaching support. A systematic review by Mandracchia et al. (2019) showed that mHealth applications are effective in increasing vegetable and fruit intake. Hence, this application can be useful for vegetable and fruit sellers, as well as for policymakers interested in improving public health.

5.1.2 Research limitations & future research

There are several limitations to our study. The first concerns one of our main assumptions: the relationship between mobile-based self-reported measurement and self-monitoring. We assumed a link between the two, in which a mobile-based self-reported measurement may lead to increased self-monitoring. Although we had sound reasoning to believe this is the case, we still do not have empirical evidence of the relationship. Therefore, it would be interesting to test this assumption through an empirical study.

Moreover, our study is limited in that we only analyzed at-home purchases and excluded away-from-home purchases. While our findings indicated no strong evidence that the intervention influenced the at-home purchases, there is still a possibility that it influenced the away-from-home purchases. Unfortunately, it was not possible to analyze the away-from-home purchases through a DID analysis in the absence of a control group. Further research may analyze the away-from-home data using an interrupted time series design (e.g., Spangenberg, 2003). However, such a research design loses the benefit of a DID analysis, which handles unmeasured covariates.
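
To illustrate the single-group alternative mentioned above, the sketch below fits a basic segmented (interrupted time series) regression: a pre-intervention level and trend, plus a level shift and trend change after the intervention. This is a minimal sketch on synthetic data, not our analysis; the function name and the illustrative numbers are our own.

```python
import numpy as np

def interrupted_time_series(y, intervention_start):
    """Fit a simple segmented regression by OLS and return the four
    coefficients: (intercept, pre-trend, level change, trend change)."""
    t = np.arange(len(y))
    post = (t >= intervention_start).astype(float)
    # time elapsed since the intervention (zero before it)
    t_post = post * (t - intervention_start)
    X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Synthetic series with slope 0.5 and a level jump of 5 at t = 10.
y = [2.0 + 0.5 * t for t in range(10)] + [7.0 + 0.5 * t for t in range(10, 20)]
beta = interrupted_time_series(y, intervention_start=10)
# beta[2] recovers the level change of 5; beta[3] the (zero) trend change
```

Unlike DID, the counterfactual here is extrapolated from the group's own pre-intervention trend, which is why unmeasured time-varying confounders remain a threat.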

Another limitation of our study is that our observed variable was measured through a scan-based system, in which the respondents reported their purchasing behavior independently. This applies to both the treated and control groups; the only difference is that the treated group was exposed to a mobile-based self-reported intervention. We argued that the mobile application-based method provides a self-monitoring mechanism, because the households were consciously aware of what they were purchasing as they reported their purchases. Moreover, the mobile application method specifically drew the households’ focus to chocolate purchases, whereas in the scan-based method they reported their whole grocery purchases, which led to less attention on the snack purchases alone. Therefore, we believe it is plausible for the two methods to differ, but not strictly so. More studies are needed to establish that the scan-based method does not itself produce a reactive effect.

Accordingly, our suggestion for increasing the study’s validity is to use a different data collection method that does not involve the respondents’ active reporting. For example, supermarkets nowadays provide mobile applications that record consumer purchasing data without the respondents deliberately reporting it (e.g., Feddema & Yen, 2019).

Finally, one of the main limitations of this study arises from the online panel data. Working with online panel data introduces some uncertainty regarding consumer purchasing behavior: we cannot confidently differentiate between inactive respondents and respondents who simply did not make any purchase. Our best strategy for handling this issue was to assume that a household is no longer active in the study if it did not report any purchase for six months. It would have been ideal if respondents also reported when no purchase was made. Another suggestion for future research is to use a more reliable data collection method, such as a mobile application that tracks consumer purchasing data without consumers’ active reporting.
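
The six-month rule described above can be sketched as follows. This is a hypothetical illustration, not our actual processing code: the function name, column names, and the ~183-day window are our own choices, and the flag is computed relative to the panel's final observed date.

```python
import pandas as pd

def flag_inactive(purchases, window_days=183):
    """Flag households that reported no purchase within ~six months
    of the panel's last observed date.
    `purchases`: DataFrame with columns ['household', 'date']."""
    purchases = purchases.assign(date=pd.to_datetime(purchases["date"]))
    last_seen = purchases.groupby("household")["date"].max()
    panel_end = purchases["date"].max()
    inactive = (panel_end - last_seen).dt.days > window_days
    return pd.DataFrame({"last_seen": last_seen, "inactive": inactive})

# Toy example: household B last reported more than six months
# before the panel end, so it is flagged as inactive.
df = pd.DataFrame({
    "household": ["A", "A", "B"],
    "date": ["2009-01-05", "2009-08-01", "2009-01-10"],
})
flags = flag_inactive(df)
```

Note that this rule, by construction, cannot separate a genuinely inactive household from one that made no purchases for six months, which is exactly the ambiguity discussed above.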

5.2 Conclusion

Our study attempted to shed some light on an area of the measurement reactivity literature that has not been studied enough before. While the literature indicates a reactive effect through the QBE, little is known regarding the reactive effect of a mobile-based self-reported measurement. This research's main objective was to study measurement reactivity in a mobile-based self-reported measurement, explicitly focusing on the impact on subsequent consumer purchasing behavior towards snacks. We were interested in finding out whether a mobile-based self-reported measurement leads to a reactive effect as commonly found in the QBE literature (e.g., Sprott et al., 2006).

We used online panel data on consumer purchasing behavior in the United Kingdom from December 2007 until September 2012, but filtered it down to September 2009 to control for inactive respondents. We analyzed the data using a recent causal inference technique for DID analysis (Callaway & Sant'Anna, 2019). We compared the grocery purchases of chocolate snacks between households that participated in the mobile-based data collection method and those that did not. However, we did not find strong evidence to support the hypothesis that a mobile-based self-reported measurement results in a reactive effect over the long term.
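
The comparison logic can be reduced to the canonical two-group, two-period difference-in-differences contrast sketched below. This is only a minimal illustration of the identifying comparison, not the Callaway & Sant'Anna (2019) staggered-adoption estimator we actually used; the function name and the toy numbers are our own.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Two-group, two-period DID:
    (treated post - treated pre) - (control post - control pre)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Toy numbers: both groups drift upward by the same amount, so the
# DID estimate is zero -- the pattern consistent with a null reactive effect.
effect = did_estimate([2.0, 3.0], [3.0, 4.0], [5.0, 6.0], [6.0, 7.0])
```

The control group's change stands in for the counterfactual change of the treated group, which is what allows unmeasured time-invariant household differences to cancel out.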
