
Privacy and Commercial Data Collection

Izi Hitimana


Abstract: Commercial data collection has, over the last decade, become commonplace in modern societies and is set to become more ubiquitous still. In this paper I note that critiques of this trend commonly point to the loss of privacy it involves. Through discussion of Warren and Brandeis' The Right to Privacy and Reiman's Driving to the Panopticon I argue that predominant defenses of privacy have associated loss of privacy with very particular threats. I claim that the mechanisms of commercial data collection are not geared to produce these particular threats. I conclude, therefore, that privacy theory might be of limited use as a critical framework for understanding contemporary trends.

This paper was written in August 2015.

———————————————————————————————————————————————————

Contents

Introduction
Data Investment Boom
Internet of Things
The Problem of Privacy in Public
Two Historic Defenses of Privacy
The Right to Privacy
Driving to the Panopticon
Cold War
1984 and The Circle
Predictive Analytics
Conclusion
Epilogue


Introduction

I am somewhat of a gadget enthusiast. A smartphone release by an obscure Chinese upstart manufacturer is unlikely to escape my notice. On YouTube I am subscribed to a channel dedicated to 'hands-on' videos of the latest gizmos, I read reviews on technology websites and I sometimes even visit forums to read user experiences. My current phone, which came with the Google Now service pre-installed, quickly 'picked up' on my interest. By analyzing, among other data, my Gmail, browser behavior, YouTube views and location history (where I am and have been), Google Now aims to offer 'just the right information, at just the right time'. So now, when I have an idle moment and swipe to the right on the home screen of my phone, it often offers me a link to an article on the newest consumer-technology updates. Often the articles it suggests I would never have stumbled upon myself. But the service does much more: it shows the local weather forecast for your current location, wherever you are; if you ordered something online, Google Now will find the tracking code in your mail and inform you at what time your package will be delivered; when you are traveling it will suggest attractions in the area you might like; it alerts you to upcoming calendar entries; and if you travel by public transport it tells you which train gets you home quickest. Google Now is a free-of-charge service that can do many of the things a round-the-clock personal assistant would do.

If I were to give the service a more critical reading, the most readily available approach seems to be to note the loss of privacy involved in the massive amounts of my data processed to figure out what I want to know. The image of smartphones as inanimate spies – constantly updating distant servers about our daily lives – is a gripping one. Academic authors have furthermore pointed out that personal privacy is an important condition for 'a free society' and, among other things, enhances 'people's capacity to function as autonomous, creative, free agents.' (Nissenbaum 1998: 29) It is through the prism of privacy that large-scale data collection and processing come to be seen as 'mass surveillance'. This in turn causes popular media outlets to liken Google to today's Big Brother (Crump, Harwood 2014 (salon.com)). If Google's business practices are indeed detrimental to freedom, the current trend of ever more data-collecting 'smart devices' entering our surroundings is a worrisome prospect.

In this paper, however, I want to make the argument that the mechanisms of commercial data collection behind the success of businesses like Google do not produce the threats privacy is argued to protect against. But first I will argue that the huge amounts of money invested in data-driven companies suggest the scale of today's commercial data collection is set to grow far further still.


Data Investment Boom

While news about the economy has generally been gloomy since the financial crash of 2008, it seems as if internet companies reside in a parallel economy where the biggest players like Google and Facebook can continually buy up start-ups for millions, and more than a few times billions, of dollars. What most of these internet companies have in common is that they are native to the internet (or even native to mobile internet), meaning that they were founded at or after the moment the internet became widespread. These companies are well adapted to exploit the possibilities the internet as a technological platform opens up. An important realization driving this economic enthusiasm is that internet usage, by design, creates unimaginable amounts of data. Every click and scroll can be logged and connected to an individual user. This data – when processed – creates a unique profile for each user which can be quite valuable, either by selling it to advertisers or by personalizing and enhancing the service on offer. The video streaming service Netflix, for example, does not show advertising but relies on subscription fees as its main source of income.[1] But by not only analyzing the movies and series you watch, but also mapping your Facebook friends and their viewing behavior, Netflix manages to recommend the offerings in its library of 9,000+ titles you are most likely to enjoy. Netflix utilizes the data it collects on its users to enhance its service, in the expectation that this ensures subscribers will keep paying their subscription each month. The business of processing data into something valuable is called data mining, and companies that are good at it amass million-dollar valuations virtually overnight.

For an illustrative depiction of what is happening in the parallel economy of data-driven internet companies, look at Nest Labs. Founded by two former Apple engineers in 2010, the company developed an internet-connected thermostat: Nest. It detects whether you are home or away and picks up on patterns when you manually adjust the temperature. The thermostat collects data on all interactions you have with it until it eventually 'knows' just how you like it and changes the temperature in your home accordingly, while also minimizing energy consumption. Not even four years after the company was founded, Nest Labs was acquired by Google for a total of $3.2 billion. Nest is just one of many prominent and less prominent start-ups that have gained eye-watering valuations over the last few years. In April 2012 Facebook bought the photo and video sharing service Instagram for approximately $1 billion. Snapchat, founded in 2011, received a $19 billion valuation in March 2015. Snapchat's app lets users send pictures and messages that disappear after a few seconds, and the company employs fewer than 200 people. Its $19 billion valuation, however, is comparable to the market capitalization of Ahold, the holding company of some of the most recognizable retail brands in the Netherlands (Albert Heijn, Etos, Gall & Gall, bol.com), which employs 122,000 people and operates a large distribution network.

[1] Netflix, not internet-native, successfully transformed itself from a postal DVD rental service into an internet streaming service.


The astronomical amount of money pumped into these internet companies is a hot topic of debate in business circles. Take for example what Andrew Nusca, senior editor of the business magazine Fortune, wrote on Fortune.com after news of Snapchat’s valuation:

‘Snapchat, which launched in 2011, reportedly generates little revenue and has yet to turn a profit. But the broad strokes of its business model are well-worn: marshal a captivated audience, sell advertisements against it, profit.

The latest reports put Snapchat’s user base at more than 100 million people, though the exact number has not been publicly disclosed and could be well beyond that. Compare that to LinkedIn’s roughly 200 million (LNKD market cap: about $33 billion), Twitter’s almost 300 million (TWTR: about $30 billion), and Facebook’s 1.4 billion (FB: $212 billion) and the math—preposterous as it seems—starts to make sense.’ (Nusca 2015)

Andrew Nusca suggests something curious is happening with the stock price of the largest internet companies. Their value (market cap, or market capitalization) seems to roughly correlate with their user numbers. The (preposterous) 'math' seems to be simple: users => data => (eventual) profit. Nusca argues that the market overvalues the user numbers of data-driven internet companies, often despite unproven profitability. This explains why even an unprofitable company without much revenue but with 100+ million users, like Snapchat, can be valued at $19 billion. Investors are quite literally paying for start-up companies' ability to collect data, and the more data a company is able to collect, the more the company is perceived to be worth. To Nusca this reasoning is a sign of a bubble about to burst. And indeed the similarities to the dot-com bubble at the beginning of the century are all too obvious. Young innovative internet companies attract huge investments on what some say are overly optimistic projections of their future profitability. But just as, looking back, the dot-com collapse seems the beginning rather than the end of internet commerce, an economy is developing around the collection and utilization of huge amounts of data, or Big Data, which is not about to disappear.
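The 'math' can be made explicit with a rough back-of-the-envelope calculation using only the figures Nusca cites above (the user counts are approximate, and Snapchat's number is a private valuation rather than a market cap); the sketch below simply divides each company's value by its reported user base:

```python
# Back-of-the-envelope 'price per user', using only the approximate figures
# quoted from Nusca (2015); Snapchat's figure is a private valuation, not a market cap.
companies = {
    # name: (approximate users, valuation or market cap, in US dollars)
    "Snapchat": (100e6, 19e9),
    "LinkedIn": (200e6, 33e9),
    "Twitter":  (300e6, 30e9),
    "Facebook": (1.4e9, 212e9),
}

for name, (users, value) in companies.items():
    print(f"{name:8s}: roughly ${value / users:,.0f} per user")

# Prints, approximately: Snapchat $190, LinkedIn $165, Twitter $100 and
# Facebook $151 per user -- each user's (future) data priced at a similar
# order of magnitude across very different companies.
```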

Internet of Things

Optimistic investors argue that the Googles and Facebooks of the world are, for now, just scratching the surface of the potential of big data. Until just a few years ago the internet was mostly confined to desktop computers, laptops and mobile devices. These devices, however smart they may be, form a bottleneck for both the collection and utilization of data. If ever more 'smart' devices can be connected to the internet, more data can be collected and more value mined from it. Google's purchase of Nest and its smart thermostats is seen as a step in this direction. Data optimists and pessimists both predict a plethora of smart home appliances becoming available to consumers in the near future. (Crump, Harwood 2014 (salon.com))


The expected network of devices packed with sensors has been dubbed the internet of things. A smart refrigerator, for example, will be able to detect its contents, so that when you are running low on orange juice an advertisement can be shown on your smart TV, after which a pop-up on your smartphone enables you to put that brand of orange juice on your grocery list. When you order your groceries to be delivered, data sourced from the smart home security system will make sure that the van comes by at a moment somebody in your household is home to receive it. Data provided by all customers combined also makes it possible for the delivery company to calculate the optimal route for the van to take. The internet of things promises not only convenience for users but also productivity gains for businesses.

The valuations of internet companies which seem inflated to Andrew Nusca might be explained by the privileged positions these companies are acquiring in this burgeoning market of the internet of things. The leading international management consulting firm McKinsey & Company estimates that the 'economic impact' of the internet of things will be anywhere between $3.9 and $11.1 trillion in 2025, which at the top end of the estimate would be equal to 11% of the world economy. (McKinsey, 2) In a market where data is the main resource, current data-collecting internet companies have a considerable head start. This explains why $3.2 billion for Nest, in Google's perception, might be a good price. Google hopes that Nest will build an infrastructure of consumer products, eventually connected to other Google databases, ensuring a dominant position in smart home appliances before any competitor moves in.

The 'economic impact' estimate cited above comes from a 2015 report titled The Internet of Things: Mapping the Value Beyond the Hype. It can be read as anything between a near-future utopia of convenience and efficiency and a dystopia of complete visibility through billions of devices in every environment – down to the human body. The report maps out the potential applications and economic impact of smart devices in nine 'settings' (human body, home, retail, office, factories, worksites, vehicles, cities and outside). McKinsey & Company essentially offers a blueprint for a Big Data driven economy. It is a vision of the future which undoubtedly informs the actual investment decisions that are shaping that very future.


The Problem of Privacy in Public

If the internet of things becomes a reality on a scale approaching what the analysts at McKinsey are predicting, private businesses will have acquired an unprecedented new role: collecting data on virtually every aspect of life (our lives) and using that data to enhance efficiency for both business and consumer. Soon ever more of our daily activities – at work, around the house, in shopping centers, on the road – will in one way or another be mediated through a data-driven service. It is therefore important that we have a critical theoretical framework with which to evaluate this trend.

The default option for critique seems to be to discuss the nascent internet of things as a threat to privacy. Although the link between the collection of personal information and the loss of privacy barely seems to need further argument, I happen to think the link between contemporary data collection and the loss of privacy is not wholly unproblematic. In her 1998 article The Problem of Privacy in Public, Helen Nissenbaum argued that privacy theory has not kept up with 'advanced uses of information technology'. In that paper she sets out to identify the reasons why contemporary data collection does not seem to fit the theoretical framework of privacy developed since the late 19th century. Her eventual goal is to argue for the continued value of privacy protection in the contemporary context. Philosophical theories of privacy, Nissenbaum writes, have often argued for limits on 'allowable practices of information gathering, analyzing, and sharing as means of protecting privacy', but, she continues, 'their efforts have primarily applied to intimate and sensitive information.' (Nissenbaum 1998: 1)

She firstly identifies a conceptual problem with prevalent theories of privacy with regard to large-scale data collection. Theorists have commonly imagined a strict division between the private and the public realm. In the realm of private life a person should be able to seek seclusion and be protected against disclosure of embarrassing private facts. The private realm is imagined not only as the space behind the front door of the private home, but also as a psychological realm of intimate and sensitive facts which one does not wish to publicize. When we consider the personal information Nissenbaum worried about back in 1998, it is the pervasiveness of the 'surveillance' that seems to concern her most.[2] The intimacy and sensitivity of the data in the following enumeration is in some instances open for debate:

'People have become targets of surveillance at just about every turn of their lives. In transactions with retailers, mail order companies, medical care givers, daycare providers, and even beauty parlors, information about them is collected, stored, analyzed and sometimes shared. […] Into the great store of information, people are identified through name, address, phone number, credit card numbers, social security number, passport number, and more; they are described by age, hair color, eye color, height, quality of vision, mail orders and on-site purchases, credit card activity, travel, employment history, rental history, real estate transactions, change of address, ages and numbers of children, and magazine subscriptions. The dimensions are endless.' (Nissenbaum 1998: 3)

[2] I prefer to use the more neutral term 'data collection' for what Nissenbaum calls 'surveillance'. Surveillance, to me, suggests above mere observation a judgement on what ought and ought not to be observed. This is corroborated by the Merriam-Webster dictionary, which gives 'supervision' as a synonym. I will argue that it is exactly this moral dimension that is missing from contemporary commercial data collection.

The conceptual problem is that privacy protection is often not extended outside the private realm. Let us grant that most of the information above is indeed sensitive and ought to be permanently protected as private information. Then there remains a second obstacle, which Nissenbaum describes as the normative argument. A point of general agreement between privacy theorists and advocates is that a right to privacy cannot be an absolute right. Most theorists agree that whatever value you attach to privacy, it always has to be balanced against other rights. The protection of the privacy of one is, in almost any thinkable case, a limitation on the freedom of another. Competing rights have to be assigned a normative weight in order to judge which right has precedence in a given situation. Again, privacy in public frequently loses out in such balancing, under the 'apparently overwhelming weight of competing interests.' (Nissenbaum 1998: 10)

The problem here is that, generally, information is (at least implicitly) freely offered in exchange for some form of service. The underlying problem is that if you willingly disclose information like your phone number or your eye color, you are assumed to have 'let the cat out of the bag'. Even if the information could be deemed sensitive and therefore private, as soon as you willingly disclose it, it enters the public domain. At that point a company that obtained the information is free to use it however it sees fit (within the law). It is unreasonable to later expect to retract or control the flow of that information, especially because that would limit the freedom of those who aim to use that (your) data. This argument of overriding interests is hard to counter. Nissenbaum describes it as 'the normative knock-down argument' because of its 'compelling hold over philosophers, policy-makers, and judges, as well as the commercial interests that benefit from its use'. (Nissenbaum 1998: 11, 14)

To Nissenbaum the ever-growing amount of data collected on us was already problematic back in 1998. The conceptual and normative problems for privacy in public are, to her, to be overcome in a contemporary theory of privacy. She would eventually fully develop such a theory, called contextual integrity, which provides a framework to judge the morality of instances of data collection and processing. Here, however, I am interested in the preceding assumption that, despite the conceptual and normative hurdles, the contemporary context of data collection does confront us with problems to which privacy protections are the solution. Nissenbaum has two reasons to accept that this is the case. In the first place she points to widely shared indignation about widespread data collection. Secondly, however, she relies on the arguments developed in past predominant theories of privacy:


‘My purpose has been to argue that public surveillance, which many theorists have denied a central place, ought often to be construed as a violation of genuine privacy interests. Although I have criticized predominant theories of privacy for neglecting this important privacy interest, I rely on the considerable insights developed in these theories to show why even in the public sphere individuals have legitimate privacy interests.’ (Nissenbaum 1998: 28)

Nissenbaum goes on to note that an essential contribution made by various authors has been to draw connections between privacy and other values. In conclusion she writes:

These approaches [to privacy] have in common a version of the idea that privacy protects a “safe haven”, or sanctuary, where people may be free from the scrutiny and possibly the disapprobation of others. Within these private spheres people are able to control the terms under which they live their lives. By exercising control over intimate and sensitive information about themselves, people may exercise control over the way they portray themselves to others […] These two forms of privacy, namely control over information and control over access, establish the conditions for a free society and, among other things, enhance people's capacity to function as autonomous, creative, free agents. (Nissenbaum 1998: 29, my italics)

Nissenbaum comes to defend what she calls 'privacy in public' because she holds that privacy 'establishes the conditions for a free society' and helps people to 'function as autonomous, creative, free agents'. The flip side, apparently, is that with a lack of privacy we risk becoming an unfree society of controlled, unfree agents lacking in creativity. Nissenbaum argues for privacy as a condition for 'freedom' on both an individual and a societal level. To arrive at this argument for privacy in the context of new advances in information technology and the subsequent widespread collection of data, she extrapolates from what theorists have written before her. Back in 1998 that might have been a sensible approach: the central and transformative role data would come to play in our lives through – for example – the proliferation of internet-connected services was still hard to imagine.

But now, over 15 years later, while the amount of data being collected has exploded, we have the advantage of lived experience. With data-collecting smartphones an essential day-to-day tool for almost everyone, it requires ever greater sacrifices to escape Nissenbaum's 'surveillance'. According to Nissenbaum, based on previous predominant theories of privacy, this undeniable loss of privacy should have resulted in a less free society and less autonomous and creative individuals. I, with the full benefit of hindsight, am however not convinced that a dynamic of privacy loss leading to loss of freedom is a particularly convincing characterization of developments since the turn of the millennium.


In what follows I want to take the reader through two historic (predominant) defenses of a right to privacy. The first account is an 1890 article which is regarded as the starting point of the academic privacy oeuvre; the second is a 1995 article discussing privacy in light of a proposed highway information system. I do this with a dual aim. By showing a continuity between the two accounts I aim to show how the concepts of privacy and freedom are associated. By placing both accounts in their historic context I secondly suggest that we are currently faced with a rather different reality, calling into question whether the link between privacy and freedom still holds.


Two Historic Defenses of Privacy

The Right to Privacy

The academic tradition on the concept of privacy is usually traced back to the publication of 'The Right to Privacy', a law review article by the American legal scholars Samuel Warren and Louis Brandeis (1890). A right to privacy, they argued, has always already been implied in common law in the form of the protection of person and property. It was new threats to person and property which urged them to explicate this right. 'Recent inventions and business methods call attention to the next step which must be taken for the protection of the person.' (Warren & Brandeis 76) These recent inventions and business methods were, respectively, the photograph and the newspaper. In combination these two innovations created employment opportunities for proto-paparazzi and pesky journalists. 'It was the era of "yellow journalism," when the press had begun to resort to excesses in the way of prying that have become more or less commonplace today.' wrote law professor William Prosser in 1960 in an authoritative article overviewing the development of privacy as a legal concept since Warren and Brandeis' 1890 law review. In this article he popularized a widely cited backstory about Samuel Warren. Prosser, in his turn referring to an earlier Brandeis biography, describes how Boston's Saturday Evening Gazette covered Mrs. Warren's parties in 'highly personal and embarrassing detail.' He writes:

'The matter came to a head when the newspapers had a field day on the occasion of the wedding of a daughter, and Mr. Warren became annoyed. It was an annoyance for which the press, the advertisers and the entertainment industry of America were to pay dearly over the next seventy years.' (Prosser 1960: 104)

This backstory has been readily accepted by many subsequent authors. But Prosser's rendering, while not far beside the truth in essence, has the material facts quite wrong. As Amy Gajda, assistant professor of journalism and law at the University of Illinois, points out in her engaging 2007 paper 'What If Samuel D. Warren Hadn't Married A Senator's Daughter?', the account presented by Prosser cannot be true. The Warrens had married just seven years prior to the publication of 'The Right to Privacy', and family records show they didn't have a daughter of marrying age until after 1890. Furthermore, Gajda notes: 'Professor Gormley found that Boston's Saturday Evening Gazette "only mentioned [Samuel D. Warren's] name twice between the years 1883 and 1890."' (Gajda 2007: 5)

It must be said that Gajda is not the first to notice the implausibility of the Warren backstory provided by Prosser. She is, however, the first to have tried to sketch a fuller picture of the Warrens' relation with the press. I think Gajda's research is important because (in her own words) '[It] could shed a revealing light on the sort of press coverage Warren meant to penalize and prevent through publication of his article.' (Gajda 2007: 8)[3] The historic context she portrays is one of an overzealous press which showed little sympathy for the Warren family, aiming to profit from (sometimes painful or embarrassing) private moments.

Gajda investigates 60 articles mentioning the Warren family from 1883 up to the publication of The Right to Privacy. The articles come from The New York Times, The Washington Post and The Boston Daily Globe. Prosser was right that not Samuel Warren but his wife Mabel Warren, born Mabel Bayard, was the centre of the press's attention. Mrs. Warren's father, Thomas Bayard, was a US senator for Delaware, a three-time candidate for the Democratic nomination for US president and a one-time secretary of state. The 'high birth' of Mrs. Warren meant that contemporary newspapers considered the Warren family (at least somewhat ironically) to be part of 'Blue Blood' social circles. (Gajda 2007: 2)

The articles discuss Warren family affairs ranging from social gatherings and an expensive art purchase to descriptions of the deaths and funerals of Mrs. Warren's sister and mother in 1886. The mostly rather unsympathetic coverage by the press must have made a year of double personal grief that much harder for her. The friendship of Mrs. Warren with Mrs. Cleveland, the charming, young, newly-wed wife of president Stephen Grover Cleveland, landed the Warrens extra media attention in the final years leading up to the publication of 'The Right to Privacy'. Gajda's research gives an informative depiction of the media landscape the Warren-Bayards felt victimized by. Gajda interprets 'The Right to Privacy' as an argument for a concrete solution to a clearly conceived problem. Warren sought and found a legal argument for 'the right to be let alone' because he had had enough of the 'gossip-mongering' press.

‘The law, [Warren and Brandeis] wrote, “must afford some remedy” so that the “gossip-monger” would not be given license to publish “the facts relating to [another’s] private life, which he has seen fit to keep private.” Any privacy law, they suggested, must be designed “to protect those persons with whose affairs the community has no legitimate concern from being dragged into an undesirable and undesired publicity . . . .” Only then would the “too enterprising press” that willingly discussed “one’s private affairs,” “the acts and sayings of a man in his social and domestic relations,” and a woman’s face, “form, and her actions, by graphic descriptions colored to suit a gross and depraved imagination” be thwarted. Only then would “the acts and sayings of a man in his social and domestic relations [be] guarded from ruthless publicity.”’ (Gajda 2007: 10)

[3] While Mrs. Warren has mostly disappeared into the obscurity of history, the fact that Samuel Warren wrote 'The Right to Privacy' is the reason why gossip articles about the Warren family are still of research interest today. The irony of this fact is not lost on Amy Gajda. (Gajda 2007: 9)


The right to privacy, in its earliest iteration, is a protection of 'private life' against 'ruthless publicity' brought by prying, 'gossip-mongering' journalists. It is an argument for an idea that would stick with subsequent privacy theories. Nissenbaum described it as 'the idea that privacy protects a "safe haven", or sanctuary, where people may be free from the scrutiny and possibly the disapprobation of others.' The experience of Mrs. Warren and her family with the press shows that breaches in their 'sanctuary' indeed led to scrutiny and disapprobation from the press and, by extension, the wider public. In the case of a late 19th-century journalist, with new innovations in information technology – the photograph and the newspaper – at his disposal, disclosure of private facts was unmistakably his motive. But this does not mean that any subsequent form of information gathering (including today's commercial data collection) is necessarily aimed at uncovering such private facts. We need to be alert to how contemporary 'recent inventions and business practices' in the field of information technology change the dynamics we try to understand. This starts to illustrate why I think it unwise to rely too much on privacy theories from other eras to defend the need for privacy. New contexts for data collection might call for different reasons to protect privacy – or for none at all.

Driving to the Panopticon

If we consider 'yellow journalism' the form of data collection Warren and Brandeis were concerned with, it was data collection on the smallest of scales. A 19th-century gossip journalist wouldn't lie in the bushes at every ordinary wedding. Only celebrities were of interest to a wide enough public for newspapers to justify the investment of time and money to gather 'gossip' on them. The flow of 'data' in the context Warren and Brandeis were faced with went from the noteworthy few, through newspapers, to the masses of the public. Imagine it as a pyramid, with information about a few people at the top flowing to a broad base of newspaper readers. If it is up to the companies now jostling for the biggest share of collectable data, the internet of things promises something resembling the reversal of this pyramid. Data-collecting and data-processing companies will gather data not only on nearly everyone, but also on nearly everything. Unprecedented amounts of data will flow to the top of the pyramid, to the data-collecting companies.

Commercial mass data collection is a rather new phenomenon; large-scale data collection by states, however, has been with us for longer and presents another context for a predominant defense of privacy. An influential paper against large-scale data collection by governments was written in 1995 by the philosopher Jeffrey Reiman in response to a public/private joint initiative called Intelligent Vehicle Highway Systems (IVHS). Part of the project was to record the routes drivers traveled, coupled to a date and time, creating a database of every trip taken over the highways. The collected data would be utilized to provide real-time traffic information and to suggest optimal routes. The long-term goal of IVHS was even to have computers take over driving entirely. The technical ability to realistically conceive of both the scale and the comprehensive nature of this proposed data collection was new in the early 1990s. As the title Driving to the Panopticon suggests, Reiman did not regard this innovation as harmless.


The Panopticon Reiman refers to is a prison design dreamed up by the 18th-century English philosopher Jeremy Bentham to be optimally efficient. All individual cells would be built in a circle with outward-facing windows to let plenty of light flow in. In the middle of the circle would be a single tower from which one guard would be able to observe the silhouette of each prisoner in their cells. The tower would be dark, so prisoners could not see whether there actually was a guard in the tower. The innovation of this design was the realization that the constant possibility of being observed would be enough to ensure good behavior among the prisoners.

As Reiman notes at the beginning of his paper, it was the French philosopher Michel Foucault who used Bentham's prison plan 'as an ominous metaphor for the mechanisms of large-scale social control that characterizes the modern world'. (Reiman 1995: 195) Reiman uses the Panopticon as a metaphor as well, but quite differently from how Foucault understood it. To Foucault the Panopticon represents a blueprint for a power structure he calls 'discipline', which would become exemplary for many institutions of the modern world – from schools and hospitals to factories and offices. Foucault's discipline is a form of power, but not of centralized power. Reiman makes clear he intends the Panoptic metaphor in a different way when he writes:

'I want to stretch the Panopticon metaphor yet further [than Foucault], to emphasize, not just the way it makes people visible but also the way that it makes them visible from a single point.' (Reiman 1995: 196)

What he means by 'people becoming visible from a single point' becomes clearer when Reiman explains that the privacy concerns over IVHS cannot be considered in a vacuum:

'IVHS's information will exist alongside that provided by other developments already in existence and likely to grow, such as computerization of census and IRS[4] information; computer records of people's credit-card purchases, their bank transactions, their credit histories generally, their telephone calls, their medical conditions, and their education and employment histories; and, of course, the records of their brushes with the law, even of arrests that end in acquittal.' (Reiman 1995: 200)

As Reiman points out, it is possible to draw pretty precise and accurate conclusions about someone's life by combining pieces of information that separately might not be revealing at all. Reiman's concern, then, is not the travel data collected through IVHS per se, but the revealing pictures that can be constructed if data from different sources are combined to make a person visible from a single point. The entire data-collecting complex Reiman predicted, of which IVHS could become a part, he calls the informational Panopticon.

[4] Internal Revenue Service, the U.S. tax authority.
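To make the mechanics of 'visibility from a single point' concrete, here is a minimal sketch of the kind of record linkage Reiman is describing. All of the databases, identifiers and records below are invented for illustration; the point is only that separately mundane sources, once joined on a shared key, compose the sort of detailed picture he worries about:

```python
# Minimal illustration of Reiman's 'informational Panopticon': each database
# on its own is mundane, but linked through shared identifiers they compose
# a revealing profile. All records below are invented for illustration.
vehicle_registry = {"XYZ-123": "J. Doe"}  # license plate -> registered owner

highway_trips = [                          # what an IVHS-style system might log
    {"plate": "XYZ-123", "to": "clinic",   "time": "Tue 09:10"},
    {"plate": "XYZ-123", "to": "pharmacy", "time": "Tue 10:40"},
]
card_purchases = [{"name": "J. Doe", "merchant": "pharmacy", "amount": 42.50}]
phone_records  = [{"name": "J. Doe", "called": "patient support line", "minutes": 25}]

def profile(plate: str) -> dict:
    """Combine separately collected records into one view of a person."""
    name = vehicle_registry[plate]
    return {
        "name": name,
        "trips":     [t for t in highway_trips  if t["plate"] == plate],
        "purchases": [p for p in card_purchases if p["name"] == name],
        "calls":     [c for c in phone_records  if c["name"] == name],
    }

# No single record-keeper set out to learn about J. Doe's health; the joined
# view nonetheless suggests a medical issue, a related purchase and a long
# call to a support line -- the kind of 'detailed picture' Wasserstrom describes.
print(profile("XYZ-123"))
```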


Reiman argues that 'privacy is the condition when others are deprived of access to [information about] you.' But in the informational Panopticon, Reiman warns, everyone leaves collectable traces of their lives in databases. Even if each individual trace seems trivial, the picture they reveal when combined is not. In this way Reiman seems to avoid Nissenbaum's problem of privacy in public. He argues against public data collection because public data, when viewed from one point, enables observers to reach detailed conclusions which we might consider to be private. Reiman quotes the philosopher Richard Wasserstrom, who in 1978 already observed that all the information collected about him could produce a 'picture of how I had been living and what I had been doing … that is fantastically more detailed, accurate, and complete than the one I could supply from my own memory.' (Reiman 1995: 196) To Warren and Brandeis the right to privacy would protect only 'the facts relating to [someone's] private life, which he has seen fit to keep private.' Reiman, after significant developments in the field of information technology, points out that enough mundane facts unrelated to your private life, put together, could also provide revealing insights. This begins to suggest that Reiman too connects the collection of data with the ambition to gain insight into individual private lives. This time not journalists but the government is doing the prying, and ordinary people are the target. In the case of the 19th-century gossip-mongering journalists, prying into the personal lives of celebrities was a clear and stated aim, in order to sell newspapers. Reiman's argument, which starts from IVHS's innocent-sounding stated aim of providing traffic information and ends up at a dystopian informational panopticon, is rather more speculative. Driving to the Panopticon is not an idle thought experiment, however; to Reiman the threat of inescapable government surveillance was not unrealistic. The risks Reiman associates with living under the informational panopticon give us an idea of the power relation he envisioned between the observer and the observed, the government and the governed.

Let us start with what Reiman calls the risk of extrinsic loss of freedom, by which he means 'all those ways in which lack of privacy makes people vulnerable to having their behavior controlled by others.' (Reiman 1995: 201) Reiman thus links the right to privacy explicitly to freedom. A lack of privacy results in a loss of freedom through having your behavior controlled by others. Earlier we saw that Nissenbaum, based on prior predominant theories, assumes a similar causality when she defends the need for privacy in public. She wrote that privacy 'enhances people's capacity to function as autonomous, creative, free agents.' In Driving to the Panopticon Reiman explains why he thinks a violation of privacy makes an individual vulnerable to control. We can subsequently consider whether such a warning against 1990s 'government surveillance' makes equal sense as a warning against the widespread commercial data collection of the 2010s.

If we read on, Reiman states that the risk of extrinsic loss of freedom…:

Most obviously […] refers to the fact that people who want to do unpopular or unconventional things risk being denied certain benefits – jobs, promotions, or membership in formal or informal groups – or even blackmail, if their actions are known to others. (Reiman 1995: 201, my italics)

He also quotes the philosopher Ruth Gavison, who makes a similar argument:

'Privacy … prevents interference, pressures to conform, ridicule, punishment, unfavorable decisions, and other forms of hostile reaction. To the extent that privacy does this, it functions to promote liberty of action, removing the unpleasant consequences of certain actions and thus increasing the liberty to perform them.' (Reiman 1995: 202, my italics)

We see here again the assumption that data collection is geared towards the uncovering of sensitive 'facts a person has seen fit to keep private'. The threat implied in widespread data collection, according to Reiman, is that your 'unpopular or unconventional' actions, which are sensitive and which you have sought to keep private, might be found out. As examples of what might be considered unpopular or unconventional he mentions 'pornography, gambling, drunkenness, and homosexual or pre- or extramarital heterosexual sex'.

It is not self-evident that the (US) government would be interested in developing an informational panopticon with the ability to identify which citizens are gambling homosexuals or drunken porn addicts without there being a purpose to it. The purpose of the panopticon in Reiman's 1995 dystopian vision becomes clearer if we consider how he imagines governments might utilize collected sensitive personal information. Reiman states that individuals might be subject to social pressure if their actions are known to others. A second assumption Reiman and Warren and Brandeis share, then, is that the collection of data leads to some form of publication of sensitive personal facts. People can only 'deny others certain benefits' for unconventional behavior if the information collected through the panopticon is somehow disseminated to them. By publication I mean nothing more than that the collected sensitive information is, after collection, shared with other people, on whatever scale. Following the argument deeper, we must conclude that Reiman imagines a government releasing sensitive information about its citizens in the full expectation that this will lead to hostility against unconventional behavior. Apparently Reiman interprets unconventional behavior by citizens as a problem for government, to which social punishment (through ridicule and unfavorable decisions) can be employed as a solution. Reiman, I conclude, foresees that widespread data collection will be employed to enforce individual conformity within society. This will ultimately create people who are 'something less noble, less interesting, less worthy of respect.' (Reiman 1995: 206) He writes:

As the inner life that is subjected to social convention grows, the still deeper inner life that is separate from social convention contracts and, given little opportunity to develop, remains primitive. Likewise, as more and more of your inner life is made sense of from without, the need to make your own sense out of your inner life shrinks. You lose both the practice of making your own sense out of your deepest and most puzzling longings and the potential for self-discovery and creativity that lurk within a rich inner life. Your inner emotional life is impoverished, and your capacity for evaluating and shaping it is stunted.

‘The most ominous possibility of all,’ he continues, is that we lose:

'the inner personal core that is the source of criticism of convention, the source of creativity, rebellion, and renewal. To say that people who suffer this loss will be easy to oppress doesn't say enough. They won't have to be oppressed, since there won't be anything in them that is tempted to drift from the beaten path or able to see beyond it. They will be the "one dimensional men" that Herbert Marcuse feared. The art of such people will be insipid decoration, and their politics fascist.' (Reiman 1995: 208)

At this point Reiman has provided an instance of Nissenbaum's observation that privacy has generally been conceived as both 'the conditions for a free society and [… as enhancing] people's capacity to function as autonomous, creative, free agents'. But evaluating the argument Reiman lays out, I think it becomes clear that he is not providing an argument against public data collection per se, but against public data collection in the hands of a totalitarian regime bent on a conformist population. Put differently, I think Reiman's argument cannot be used against public data collection in general. If you do, the contours of circular reasoning appear: in Driving to the Panopticon Reiman demonstrates how large-scale data collection is a potent tool against individual and societal freedom – in the hands of a regime striving against a free society, without autonomous, creative, free agents. Again, I think that we cannot copy arguments from historic predominant theories of privacy to argue for the value of privacy in contemporary contexts, because the historic contexts are deeply incorporated into those defenses of privacy.


Cold War

Driving to the Panopticon, written in the aftermath of the Cold War and, before that, the war against fascism during WWII, must be read in the context of that tense period in history. Born in the United States in 1942, Reiman lived in a country in perpetual conflict with totalitarian regimes for almost his entire life before the publication of Driving to the Panopticon. What he means when he writes about 'the inner life made sense of from the outside' falls into place when you consider that for decades America asserted itself as the defender of freedom against state ideology, whether it was Japanese militarism, German fascism or Soviet communism.

Because of these conflicts Americans endured the chilling experience of state-enforced ideological conformity first-hand, notably during the era of McCarthyism. In the 1950s US senator Joe McCarthy started the anti-communist crusade which would make him the namesake of a period of paranoia. Convinced of a communist plot, he accused scores of people in positions of power and influence of disloyalty and anti-American tendencies, often with scant evidence. Through propaganda messages Americans were imprinted with the message that everyone, even your nice neighbor, was a potential Soviet sympathizer plotting to destroy the American way of life. People were urged to keep vigilant and report any suspicious behavior to the authorities. The Federal Bureau of Investigation (FBI) meanwhile started a practice of surveilling and infiltrating social movements outside the mainstream, a practice by which the civil rights movement and the Vietnam protests were later also victimized. The FBI was also legally tasked with screening every government employee for disloyal tendencies in order to deny those people their jobs. If a society of neighbors watching each other distrustingly and government agencies screening political sympathies to decide job prospects is the scenario to avoid, privacy becomes the way to prevent 'interference, pressures to conform, ridicule, punishment, unfavorable decisions, and other forms of hostile reaction.' Against such a background it is much easier to understand Reiman's skepticism towards the IVHS initiative. Governments have been terrified of their citizens, and in turn it was not at all unjustified for citizens to be terrified of their government.

1984 and The Circle

The most influential anti-totalitarian novel, George Orwell's 1984 (1949), makes a similar connection between privacy and the pressure on individuals to conform. In 1984 the protagonist Winston Smith lives under totalitarian oppression in the dystopian future of Oceania, a place where the ever-watchful eye of Big Brother scrutinizes every move of every person in the name of the Party. Individuality is forbidden in Oceania, a rule Winston defies by entrusting his innermost thoughts to a diary. Winston's fear is what Reiman calls 'the most ominous possibility': to lose his 'inner personal core that is the source of criticism of convention, the source of creativity, rebellion, and renewal.' Constant visibility – as in Reiman's informational panopticon – is an important theme in the book. Through telescreens (essentially televisions equipped with webcams) people are exposed to endless propaganda. As little as an inappropriate facial expression in reaction to this propaganda could be regarded as a thoughtcrime: thinking a thought deviating from party doctrine. Conformity is ensured not only through the thoughtpolice but, just as Reiman argues, also through social pressure or even blackmail by loyal party members who happen to be neighbors, colleagues, complete strangers or even your own child.

1984 is a gripping dramatization of life under an absolute totalitarian regime. The 2013 novel The Circle by Dave Eggers mirrors themes from 1984 to sketch a totalitarian dystopia in the age of mass commercial data collection. The ominous Orwellian propaganda slogans 'War is Peace. Freedom is Slavery. Ignorance is Strength' have, in Eggers' contemporary version, become propaganda for the fictitious internet company The Circle: 'Secrets are Lies. Sharing is Caring. Privacy is Theft.'

The protagonist of this novel is Mae Holland, who is overjoyed to start working at The Circle, something very close to a near-future Google which has bought up both Facebook and Amazon. After a few weeks she becomes an overnight celebrity when she is chosen to be the first employee to go 'transparent'. This means she will be wearing the Circle's lightweight 'SeeChange' camera at all times, broadcasting her life in real time to anyone interested in watching. Her parents and ex-boyfriend become wary about meeting her, but in return she gains millions of followers watching her and commenting throughout her days. In an exaggeration of human exhibitionism, SeeChange cameras, which can also simply be placed somewhere stationary, become so popular in the novel that consumers across the globe buy and install millions upon millions of them, leaving nearly no place on earth uncovered by SeeChange.

The mission statement the Circle seems to operate under is the same as that of nearly all actual internet businesses: to make things more efficient, easy and safe. Different departments within the Circle are working on projects relating to health care, education, crime reduction and even democracy. Through Mae's undoubting enthusiasm for how technological interventions will alter the world, The Circle best captures the unbridled optimism and apparent inevitability behind the economic rush on data we witness today. But tellingly, the book makes less of a compelling argument for the downside of these trends. Jon Baskin, in an interesting review for The Point (a Chicago-based essay magazine), makes this point better than I could. He writes:

Eggers […] makes the Circle so overzealous in its designs (one plan to reduce child kidnappings involves inserting a tracking device into every infant's bones; another crime-prevention measure relies on sensors that alert residents when an unfamiliar person has entered their neighborhood) that the reader is more or less bullied into acknowledging the potentially negative social consequences of the company's success. What makes the situation thought-provoking is just that, in slightly more moderate cases, most Americans have proven time and again that they are willing to sacrifice privacy or anonymity for connectivity, efficiency and security. […] And they have done so for precisely the reasons that people do so in the novel, because it makes life easier, because social networking is addictive, and because it seems a hard and abstract matter to even say exactly what they are sacrificing. (Baskin 2015)

Throughout the novel two characters act as mouthpieces against the Circle. The first is Mae's ex-boyfriend Mercer, an artisan chandelier maker who dislikes the internet so much he barely keeps up the website through which he sells his work. His critique focuses on the hollowness of communication on social media, which might be valid but is not precisely a defense of privacy. Instead, Mercer's point comes down to a claim to his right to 'opt out' of being constantly prompted to share his thoughts and comment on other people's thoughts. Then, in the finale of the novel, there is the mysterious Circle founder who warns Mae of a world where 'everyone will be tracked, from cradle to grave, with no possibility of escape.' But although that sounds dangerous, it does not explain why being tracked should be regarded as a bad thing. Jon Baskin again:

‘Mae may be wrong to dismiss such complaints as simply crazy, but The Circle does little to persuade its reader that she is wrong to dismiss them. In neither case does the objector state even one concrete negative consequence that a person who is tracked by the Circle their whole life will have to contend with.’ (Baskin 2015)

The point I would make is that it proves hard for Eggers to come up with concrete negative consequences of Circle tracking because the underlying understanding that it is indeed negative to be tracked depends on arguments implying the existence of some totalitarian state, as articulated in 1984 and Driving to the Panopticon. He tries to fit a Cold War argument for privacy onto a contemporary context which frankly rejects it. As if to emphasize this point, on one of the very last pages of the novel the mysterious Circle founder turned rebel makes one last attempt to convince Mae of the dangers the Circle poses. He does so by asking Mae to close her eyes and imagine the Circle to be a totalitarian regime:

‘Picture this. The Circle has been devouring all competitors for years, correct? It only makes the company stronger. Already 90 percent of the world’s searches go through the Circle. Without competitors, this will increase. Soon it’ll be nearly 100 percent. Now, you and I both know that if you can control the flow of information, you can control everything. You can control most of what anyone sees and knows. If you want to bury some piece of information, permanently, that’s two seconds’ work. If you want to ruin anyone, that’s five minutes’ work. How can anyone rise up against the Circle if they control all the information and access to it?’ (Eggers 2014: 482)

And when Mae protests that governments wouldn’t let it come to that, the mysterious figure continues:


'Governments who are transparent? Legislators who owe their reputations to the Circle? Who could be ruined the moment they speak out? What do you think happened to Williamson? Remember her? She threatens the Circle monopoly and, surprise, the feds find incriminating stuff on her computer. You think that's a coincidence?' (Eggers 2014: 483)

And even this final speech concerns the dangers of a monopoly on the curation of all information; it is not about the collection of personal data or tracking. As the mysterious man suggests, when you are powerful enough you don't even need surveillance to uncover embarrassing personal facts in order to hurt somebody – planting incriminating facts might be easier. The ultimate argument against the real Google that The Circle provides is that it will become dangerous when Google gains a monopoly over truth and renders government oversight powerless. The book, however, does little to suggest that such a thing is likely to happen.

The point I am trying to make is not that there is nothing to worry about; my point is rather that we should not speculate but investigate the actual relation individuals have to the Googles and Facebooks of the world. In what remains of this paper I will argue that saying that companies 'know' anything about us obfuscates the unprecedented way data is being processed, and I will set out what I think that means for privacy.


Predictive Analytics

In October 2010 PAWcon, a conference for professionals in the field of 'predictive analytics', was held in Washington D.C. This particular event was just one of dozens of conferences organized yearly in the field of data processing. These gatherings give professionals an opportunity to exchange expertise and share case studies that might be of interest to other professionals. While business conferences are proverbially dull affairs, at this particular event in 2010 a statistician working for the American retail chain Target gave a presentation titled How Target Gets the Most out of Its Guest Data to Improve Marketing ROI, which would eventually prove to be very controversial. The presentation would attract the attention of a New York Times reporter – Charles Duhigg – whose piece How Companies Learn Your Secrets in The New York Times Magazine would inspire a blogpost by Kashmir Hill on Forbes.com titled How Target Figured Out A Teen Was Pregnant Before Her Father Did. This blogpost would be picked up by various media outlets, among which were Fox News and the Daily Mail, starting one of the first data-processing controversies.

In her blogpost Hill recycles the most controversial or, if you wish, incriminating passages from Duhigg's New York Times Magazine article. She includes the only anecdote of a 'concrete negative consequence to a person tracked by Target' provided in that piece. It is a story of which she writes that it is 'so good that it sounds made up'. The anecdote tells of an angry man in a Minneapolis Target, demanding to speak to the manager. From How Companies Learn Your Secrets:

‘“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again. On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.” (Duhigg 2012)

I am especially interested in this particular controversy because it revolves around the mathematical subfield of predictive analytics (PA), the theme of PAWcon. It is by applying PA to huge sets of customer data that Target sends targeted mailers. It is also how Google Now offers the information you want to know when you want to know it; it is how Amazon conjures up ‘personal recommendations’, how advertisements are ‘targeted’ and how hiring companies rank applicants before the first interview is even conducted. Through PA, computerized systems are able to create the impression of intimate knowledge about you personally. I suspect that this personalized experience contributes a lot to the premature conflation of data collection and unwanted scrutiny (how companies learn your secrets). The impression of personalized knowledge about each customer or user, which mimics the worst fears of complete and inescapable surveillance, obscures the innovative nature of PA in data processing. Understanding the role PA plays in the contemporary context in which data is collected is therefore a promising starting point for considering the change brought by ‘recent inventions and business practices’. Understanding the mechanics of PA allows us to evaluate the plausibility of the New York Times Magazine’s claim that ‘companies learn our secrets’ and the Forbes.com claim that Target was (is) capable of determining a teen’s pregnancy.

Amazingly, a recording of the original presentation by the Target statistician, whose name is Andrew Pole, was until very recently freely available on the PAW website. I was fortunate enough to watch the 45-minute clip before it was taken offline and probably disappeared from public view forever (only a dead link remains). Eric Siegel, program coordinator for PAWcon and author of the book Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, was at this presentation and describes what happened in his book:

[Andrew Pole] took the stage and dynamically engaged the audience, revealing detailed examples, interesting stories, and meaningful business results that left the audience clearly enthused. Toward the end, Pole describes a project to predict customer pregnancy. Given that there’s a tremendous sales opportunity when a family prepares for a newborn, you can see the marketing potential.

But this was something pointedly new, and I turned my head to scan the audience for reactions. Nothing. Zilch. Nada. Normally, for marketing projects, PA predicts buying behavior. Here, the thing being predicted was not something marketers care about directly, but, rather, something that could itself be a strong predictor of a wide range of shopping needs. (Siegel 2013: 51)

What Andrew Pole demonstrated was a method to predict, from collected customer data, which female customers will have a baby in the coming months. In the presentation Pole explains that consumers usually visit the same store loyally for everyday items. It is very hard for advertisers to convince people to make their regular purchases elsewhere. There are just a few moments in a person’s life when such long-term habits are shaken up. The excited anticipation and stress involved in becoming new parents is one such moment. If you can bind people to your business during this period of flux, when new habits are established, you stand to gain a new long-term customer. It is to this end that a company like Target invests considerable capital in figuring out which women might be pregnant, so that it can offer them enticing savings that bring them into a Target shop. So is what Andrew Pole describes how companies learn our secrets? Is data collection today, as predominant theories of privacy implicitly assume, still geared towards uncovering sensitive facts a person has seen fit to keep private? To see why I think it is not, let’s consider how PA helps Target arrive at a prediction of pregnancy.

Without going into the mathematical specifics, a predictive model looks at the ‘performance’ of a specific ‘unit’ in the available data set. In this case the model takes the purchase history (performance) of known pregnant women (unit) throughout and after their pregnancy. The objective of the model is to predict the likelihood that a similar performance observed in a different sample belongs to a similar unit. In plain(er) English: the model puts a ‘pregnancy prediction score’ on the chance that a woman with similar purchasing patterns (performance) is pregnant (unit).

Target provides a registry where pregnant women can sign up for baby shower gifts. Women who register are obviously pregnant and also provide their due date. From this Andrew Pole and his team were able to identify a ‘unit’ of pregnant women in their customer data. By analyzing their purchase history a pattern emerged: the women tended to buy certain products in their first trimester, different products in the second trimester, and so on. In this way some 25 products with predictive value were identified (performance). These are not necessarily the baby items you would expect; the only thing that matters for the model is the strength of the correlation. With a strong idea of what pregnant women tend to buy in which stages of pregnancy and early motherhood, Pole’s team ran the model on the purchase histories of women whose pregnancy status Target did not know. If the purchase habits of a woman show close similarities to the general performance of known pregnant women in, say, the second trimester, she is expected to be in her second trimester too. Practically all data-driven personalized services work according to some (more elaborate) version of this model. It is how Facebook puts the stories most interesting to you at the top of your feed, next to the advertisement you are most likely to click on: predictions based on data from billions and billions of clicks by other users before you. The bigger the data set, the more nuanced the predictions that can be mined from it.
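To make the mechanics a little more concrete, here is a minimal sketch of the kind of scoring model described above. It is emphatically not Pole’s actual model: the product names, the tiny training set and the choice of logistic regression (via Python’s scikit-learn library) are my own illustrative assumptions.

# A minimal, hypothetical "pregnancy prediction score" model.
# Product names, data and the choice of classifier are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# A few products assumed to have predictive value (Target reportedly used around 25).
PREDICTIVE_PRODUCTS = ["unscented_lotion", "mineral_supplements", "cotton_balls", "large_tote_bag"]

def performance(purchase_history):
    # Turn a purchase history into counts of each predictive product.
    return [purchase_history.count(p) for p in PREDICTIVE_PRODUCTS]

# The known "unit": customers whose pregnancy status comes from the baby registry.
known = [
    (["unscented_lotion", "mineral_supplements", "cotton_balls"], 1),  # registered as pregnant
    (["unscented_lotion", "large_tote_bag"], 1),                       # registered as pregnant
    (["cotton_balls"], 0),                                             # not registered
    ([], 0),                                                           # not registered
]
X = np.array([performance(history) for history, _ in known])
y = np.array([label for _, label in known])
model = LogisticRegression().fit(X, y)

# Score a customer whose status is unknown: the output is only a probability,
# never a verified fact about the person.
unknown_history = ["unscented_lotion", "mineral_supplements"]
score = model.predict_proba(np.array([performance(unknown_history)]))[0, 1]
print(f"pregnancy prediction score: {score:.2f}")

What the sketch makes visible is that the model only ever outputs a score over a pattern of purchases; ‘pregnancy’ enters the system solely as a label attached to such a pattern.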

As I said, I aim to argue that PA, as a business practice, does not seek to uncover sensitive facts a person has seen fit to keep private. Admittedly, that is an awkward argument to make right after explaining how Target tries, or tried, to find out whether customers who meant to disclose nothing of the sort are pregnant. A tricky, but I believe not meritless, point is that predictive analytics does merely that: predict. No medically reliable facts can be deduced from purchase histories, and there is no way for Target to verify whether the women it suspects to be pregnant actually are. The ‘Mom & Baby’ mailer composed for possibly pregnant women did not read ‘Mom & Baby’ at the top. Coupons for several non-pregnancy products would be mixed in so the mailer would not be wasted on non-pregnant recipients. And, Andrew Pole admits, also so as not to creep the pregnant women out. (Duhigg 2012)
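The masking can be pictured with an equally small sketch. The coupon lists and the score threshold below are hypothetical; nothing here is taken from Target’s actual mailer system.

# Hypothetical illustration of mixing baby coupons with unrelated ones.
import random

BABY_COUPONS = ["crib -15%", "diapers -10%", "nursery lamp -20%"]
GENERAL_COUPONS = ["lawn mower -10%", "wine glasses -25%", "video game -5%", "bath towels -15%"]

def compose_mailer(pregnancy_score, threshold=0.8, size=6):
    # Below the (assumed) threshold the customer gets an ordinary mailer.
    if pregnancy_score < threshold:
        return random.sample(GENERAL_COUPONS, k=min(size, len(GENERAL_COUPONS)))
    # Above it, baby coupons are shuffled in among general ones, so a wrong
    # prediction still yields a usable mailer and the targeting is less obvious.
    mailer = BABY_COUPONS + random.sample(GENERAL_COUPONS, k=size - len(BABY_COUPONS))
    random.shuffle(mailer)
    return mailer

print(compose_mailer(pregnancy_score=0.93))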


The stronger argument, however, is that Target ultimately isn’t interested in who is pregnant at all. The best way to assess the ends to which PA is put in a modern context is to strip away the media controversy and listen to how the people who build it, like Andrew Pole, explain it. The, I believe honest, answer is right in the title of his talk. He explains how, at Target, he uses guest (read: customer) data to improve the return on investment (ROI) of money spent on marketing. His team takes customer information from every available source. He explains, for example, how Target e-mails opt-in customers about sales. When such a customer clicks a link in the e-mail, a cookie is placed in the browser to link ‘browser behavior’ to the customer ID his team creates. As many sources as possible are exploited to link as much, almost random, information to that customer ID as possible. Cookies can’t be placed on phones, so Target offers ‘mobile coupons’ that tie specific devices to the ID; all interactions on the website are of course also linked; from address information the distance to the closest store and to local competitors is derived; redeemed coupons are tracked, and so on. And from that data, through trial and error, his team optimizes marketing messages to get as many revenue-generating customers to Target shops for as few marketing dollars as possible. That is Andrew Pole’s job, and he is good at it. Duhigg even suggests that the 52% growth in revenue Target achieved between 2002, when Pole was hired, and 2010 is attributable in large part to his team. Duhigg cites Target’s then CEO Gregg Steinhafel boasting to investors about the company’s “heightened focus on items and categories that appeal to specific guest segments such as mom and baby.”
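The mechanics of this linking are mundane, as the following sketch suggests. The source names and fields are hypothetical stand-ins for the e-mail cookies, mobile coupons and address-derived features Pole mentions.

# Hypothetical sketch of linking scattered signals to a single customer ID.
from collections import defaultdict

email_clicks   = [{"customer_id": 1001, "cookie": "abc123", "clicked_link": "sale/garden"}]
mobile_coupons = [{"customer_id": 1001, "device": "phone-77", "coupon": "SPRING10"}]
addresses      = [{"customer_id": 1001, "km_to_nearest_store": 3.2, "km_to_nearest_competitor": 1.1}]

def build_profiles(*sources):
    # Merge every record from every source under its customer ID.
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            cid = record["customer_id"]
            profiles[cid].update({k: v for k, v in record.items() if k != "customer_id"})
    return profiles

profiles = build_profiles(email_clicks, mobile_coupons, addresses)
print(profiles[1001])

What matters, on Pole’s own account, is not the meaning of any single data point but how many of them can be attached to the same ID.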

When trying to understand what Target does, to focus on the fact that it singled out pregnant women is, I think, to miss the point. What Pole’s team was after is a unit of customers with a high potential to durably improve performance and thereby generate a high return on marketing investment. Eric Siegel writes: ‘PA’s objective is to improve operational efficiency rather than figure people out for its own sake’. Words like ‘pregnancy’, ‘mom’ and ‘baby’ overload with meaning what is essentially an economic formula stripped down to its bare minimum. Target interprets pregnancy as nothing but an indicator of something else: a stellar marketing opportunity. Target’s profiling, I claim, is devoid of cultural context. The purpose of PA is not to ‘figure people out’; it is a method to deconstruct complete persons into performing consumers made up of manipulable data points.
