
Is a picture worth a thousand words?

“On written text and the (moving) image, their relationship within Facebook’s interface and the possible humanistic consequences”

Nena Snoeren

University of Amsterdam

MA Media Studies: New Media and Digital Culture
28 June 2019


Index

Abstract
1. Introduction
2. Text/Image
   2.1 On text
   2.2 On image
   2.3 The relationship and the divide
3. Text/Image within new media
   3.1 Affordances
   3.2 Methodology
   3.3 Text/Image within Facebook’s interface
4. Case studies
   4.1 Case study #1: The status update
   4.2 Case study #2: Emoji, symbols, stickers and GIFs
   4.3 Case study #3: The person
5. Discussion
6. Conclusion


Abstract

Our media world is made up of two visual components: images and written text. Together they form a large part of our understanding and of our online communication. In the recent past, images have become significantly easier to produce and share. This has affected the position of written text in media and communication, which now operates within the ‘logic of the image’. In this work, the relationship between written text and images is explored through a historical and philosophical overview of these two concepts. Furthermore, Facebook’s interface is analysed through a ‘discursive interface analysis’, focussing on three case studies: ‘the status update’, ‘emoji, symbols, stickers and GIFs’ and ‘the person’. These analyses are conducted from a contemporary historical perspective, mapping the changes in Facebook’s interface updates in relation to written text and images over time. Drawing on the concept of productive power, the features are researched through the concept of affordances, showing how certain changes in features produce a specific normativity towards images and text. To conclude, the image-shift that becomes apparent is analysed from a humanistic perspective, discussing the possible humanistic consequences of the change in the text/image relationship.


1. Introduction

Does the title page of this work feel a bit empty to you? Would it look more appealing with an image on it, or a bigger title? You are reading this work now, but if it were lying on a table amongst a few other papers with images on them, would you feel drawn to reading this one? The answer is probably: no.

From my personal and academic experience, it has been my observation that the world around us, and new and social media in particular, have become more orientated towards the (moving) image. Our technologies have become increasingly compatible with (moving) images over the last century, and even more so in this one. Moving images have migrated from living in cinemas to living in our homes on the television, then on the computer and now in our hands via our smartphones. Cinema and television were made to show the moving image; the computer and the telephone initially were not. Computer-to-computer communications were based on written text and phone communications on verbal text. As computer technologies advanced, the image and later the moving image (first in the GIF format) made their way onto the internet, onto our home computers and not much later onto our internet-connected phones. Our timelines, news feeds and digital communications have since shown a shift from predominantly textual information towards images, symbols, videos, GIFs and emoji. Not only has the image found its way onto the text-based medium of the internet, it is now not merely supported but thoroughly integrated into its structures. Modern social media platforms are specifically built to support and stimulate the use and viewing of (moving) images. User posts that contain images or moving images are far more visible than those containing only text. Newer popular social media platforms like Instagram and Snapchat have even kept the use of written text to a minimum, making images the main form of content and communication from the start.

Academic research into written text and images has been conducted in many fields, including psychology, philosophy, linguistics, communication, television studies, film studies and new media studies. Research into internet platforms, social media and web interfaces has been done extensively within new media studies, focussing on the content and structure of certain online platforms and their interfaces. Research that combines written text and images as ontological research subjects within an online platform, however, has not been conducted. I aim to make my work here a contribution to platform studies within new media studies, in this new but necessary way.

Research into digital platforms and their interfaces is highly relevant at this time, given their immense user bases and thus their far-reaching influence and power. Research into the interfaces of the most popular social media platforms that discusses the state of text and image is essential, as these two are cornerstones of modern human culture, understanding and communication. This work will therefore focus on the changing relationship between written text and the (moving) image on the most popular social media platform to this day: Facebook.

In “The Humanism of Media Ecology”, the keynote address at the Inaugural Media Ecology Association Convention in 2000, Neil Postman argues that he sees no point in studying media unless one does so in a moral or ethical context. Other well-known media scholars like Marshall McLuhan did not agree with this idea and thought one ought to stay ‘neutral’ (Postman 11). I agree with Postman in the sense that one can and should reflect on media within a moral context. We can reflect on, analyse and dissect what certain media are and how they function, but then what? What deeper meaning and what possible consequences does their influence have? These, I believe, are the important things to consider. Humanism concerns itself with the ethical consequences of human actions and decisions. A humanistic approach thus evaluates the morality behind certain matters. I want to touch upon this humanistic point of view in relation to the use of written text and images within new media in this work. If we are looking at a platform that currently has 2.3 billion active users, facilitating a practice at the core of our human experience, communicating with one another through text and images, then the humanistic consequences of this practice must be reflected upon. This is not to say that my research here will hold all the answers to what is ‘good’ or ‘bad’ for us humans about the changing text/image relationship on Facebook; rather, it is to be seen as a (limited) exploration into the humanistic issues this changing relationship could be bringing about. Therefore, the research question of this work is as follows: “What has the relationship been between written text and the (moving) image within Facebook’s interface structure and what could this mean in terms of humanistic consequences?”

Firstly, to lay a good foundation for a discussion of changing text/image relationships within a new media platform, I will explore what written text, images and their relationship are in the first section of this work. I will use a multi-disciplinary approach for this, so as to explore the text/image question from multiple perspectives in relation to new media. I believe such a multi-disciplinary approach is necessary in order to rightfully address the diverse nature of these research subjects. In order to research what the ontology of written text and (moving) images is, academic works from the disciplines of psychology, communication, linguistics, anthropology, film studies and new media studies are used. Following this, I will use certain main features of Facebook’s interface as case studies to analyse the specific text/image relationships over time. The case studies will be: ‘The status update’, ‘Emoji, symbols, stickers and GIFs’ and ‘The person’. These contemporary historical analyses will be conducted through the ‘discursive interface analysis’ to thoroughly research what changes occurred to the interface structure at what time and what this signifies in terms of normativity. The concepts of affordances and productive power will prove helpful here. Conducting these analyses on Facebook’s interface will serve as an example of the greater changes that are happening within the (new) media landscape. A discussion of the above sections will follow, exploring the deeper implications of the changes analysed in the case studies in connection with the first section on written text and images. This is where the humanistic context will come forward and possible moral consequences will be explored.


2. Text/Image

When travelling without the internet, some people used, and maybe still use, a little picture dictionary called point it for basic communication with people who do not speak their language. Obviously, more complex communication was not possible with this little book, but before the internet was around and/or accessible, it made travelling a lot easier.

From a text/image perspective, this is the idea of a ‘picture dictionary’ (as point it calls itself on its cover). It uses a set of images to function as a type of language, organised by their ‘universal’ uses and associations, independent of any written language. A dictionary, both in definition and etymology, has a strong connection to words and written language. In the case of point it one could thus argue that a picture dictionary is a contradictio in terminis: a picture or an image is not a word or a language, but something different, yet connected.

We all know that written text and images are different from one another; at least, they are to us humans. We read text, and we see or read images in a different way. Written text is language, and to ‘make sense’ to us it follows reasonably strict rules. We must learn to understand the system of language, as it is not innate to us. Almost all of us learn to speak, and most of us learn to read and write too. We are highly verbal beings, but we are visual beings as well. We have a lively imagination and produce and share images like never before. Technological development has made it easier to communicate with images. It has never been as easy for so many people to participate in visual communication. The creation of images (especially photography and video) is done with hardly any effort, in contrast to a hundred years ago. One could imagine that the number of images (let alone moving images) an average person was exposed to daily in 1919 has increased significantly over the hundred years since. So, we like communicating with images, a lot. It seems that the more it is technologically possible to communicate with images, the more we like to do it, especially within new media. What does this mean for written text within new media? Is it being replaced in some form or another? Or are images only a welcome addition to textual communication? To make any kind of analysis in this direction, we first need to lay out what written text and images are, how they are different and how they might be the same. In the next section, I will lay out a discussion of the ontology of written text and the image.


2.1 On text

The first stories we are exposed to in our lifetime are either visual (picture books) or verbal, as we cannot properly read until we reach the age of about six or seven. You probably no longer remember what it felt like to slowly become able to make more sense of this highly textual world. One could also take a trip to China to experience this feeling of being lost (with the obvious exception of those able to read Chinese).

But what exactly is text? A text can be written or spoken. You can hear a text, or read one like you are doing now, unless someone is reading this to you. When written text is spoken about in this work, it refers to language that is written down, specifically to the written forms of spoken languages. The written-down versions of other types of ‘languages’ that are not spoken by humans (today), such as Morse code, hieroglyphs or hypertext, will not be part of this discussion. Although language has many forms of expression other than written text (verbal, sign language, Braille), the focus will be on written text, as this form of language is often used in combination with images and this relationship is the subject of my study. I will not be researching the spoken word or spoken text in this work, as it would fall outside the possible scope of this particular (limited) research into image-text relations. I would also like to point out that the ‘image-as-text strategy’ will not be used. This idea was developed during the ‘linguistic turn’ by (language) philosopher Ludwig Wittgenstein, and later on by others. In this trend, language was seen as constructing reality and everything could be considered a text of some sort (Bateman 13). Following this idea, one could read a painting, a film or even a building as being a form of text. In the field of semiotics, for example, this idea of text went quite far, making a text anything that can be studied with the techniques of the semiotic system (Bateman 13). Making images a part of what is considered a text would make my research into image-text relations rather problematic. Written text knows many forms or systems of writing; this work is written in the Latin script. Unfortunately, I cannot read or write forms of writing other than the Latin script; therefore, this research will be into the ontology of written text in the Latin script only. Undoubtedly, reading from right to left provides a (small) difference in the reader’s/user’s experience in relation to nearby images, but this concept will thus not be considered in this work.

As written text is language, it now makes sense to look into the field of linguistics to further our understanding of the written word. Language works following a system that ensures that what is spoken or written down makes sense to us. Gibberish is still language, but to even be classified as gibberish means that the words used fall outside of this system’s logic. There are many concepts that work within this system or grammar, some of which I will briefly lay out with the help of the work “Language Descriptions” by Anthony Liddicoat and Timothy Curnow. Syntax, for example, deals with the order of words in a sentence so that it means what we want to say: “the dog sees the cat” versus “the cat sees the dog” (Liddicoat and Curnow 39). Language also works with units: within “the tall plumber died”, ‘the tall plumber’ goes together as a unit and ‘plumber died’ does not (Liddicoat and Curnow 41). There are many more examples of how grammar works, but this will not be discussed further, as the point of this section is to demonstrate that the rules of language and written text are quite strict. If you put a question mark after a sentence it creates a whole different context of meaning: “You saw your wife in the garden” versus “You saw your wife in the garden?”. The point here is: language seems to work following a relatively unambiguous system to create meaning. If we ask what written text is, the answer thus far would be that it is written-down language and that it works following a specific set of rules that determine its interpretation. A large part of what written text is, however, concerns why and how we use it.

Written text has one purpose that is most prominent: communication. We use written text in certain situations rather than other means of communication because of its relatively unambiguous nature. Written text can communicate quite specific and complex information efficiently. In contrast to the spoken word or spoken text, written text is also less ambiguous. You can ask someone to put something ‘into writing’; this is a more binding way of communicating. Written text is associated with seriousness and permanence: it is there in black and white. Written text, and language in general, is also very useful for describing things that happen over time. We could express these kinds of narratives through the image, or possibly through smell, touch or even taste, but one can imagine this being highly inefficient. We use written text in a way that places itself within discourses that go beyond the language in use, to “language use [that is] relative to social, political and cultural formations […], language reflecting social order but also language shaping social order, and shaping individuals’ interaction with society” (Jaworski and Coupland 3). We use language to reflect upon society, and it is a reflection of society in itself. Written texts, in a broader sense than just texts on paper or a screen, can be part of specific discourses.


Discourse is defined by the Cambridge Dictionary as “communication in speech or writing” (Discourse, Cambridge Dictionary), which comes down to simply being spoken or written text. What Jaworski and Coupland are referring to, however, is a type of discourse in the Foucauldian sense. This refers to the specific way a group of people speak about a subject and, with this, also construct a specific reality of it, containing certain morals and power relations. In more recent scholarship, it is not only the written word that is seen as constructing a discourse; images, art works and interfaces do too.

2.2 On image

When we discuss what an image is, we need to make clear what we mean by ‘an image’. The Cambridge Dictionary explains image as: “any picture, especially one formed by a mirror or a lens” (Image, Cambridge Dictionary). ‘Picture’ is explained by the dictionary as: “a drawing, painting, photograph, an image seen on a television or cinema screen, a cinematic film” (Picture, Cambridge Dictionary). When I refer to an image in this work, this can include any type of picture as explained above.

Images are all around us, at least for most of us. They are so integrated into our daily lives, our surroundings and our technologies that we experience them as being ‘normal’. Symbols and logos are to be found almost everywhere humans are. Taking a photograph of ourselves, a selfie, and sending it to another person in a matter of seconds is no longer prodigious for most of us in the Western world and beyond. Centuries ago, the reproduction of images was done by woodblock printing or other kinds of stamping. Easily producing, using and reproducing ‘realistic’ images became possible through the invention of photography. The first forms of photography were invented around 1830-1840 by Joseph Nicéphore Niépce and Louis Daguerre. The invention developed from being very time-consuming to the type of everyday analogue photography that became increasingly possible and popular in the 20th century. Taking photographs soon became easy, with cameras becoming portable and available to consumers. Still, most people did not take photographs every day, but reserved them for special occasions such as holidays, birthdays and other significant events. As a photo taken by an analogue camera cannot be deleted within the device, one can imagine that the process of taking one was more special than the casual, everyday photography that has now become the norm for most in the Western world. What lies at the core of our need to create these types of images?

Of course, there are many types of images besides the photographic one: still and moving, symbols, drawings and graphs. Discussing the photographic image here, however, is helpful, seeing that a large percentage of content on social media sites, including Facebook, consists of photographic images in particular. In his work “The Ontology of the Photographic Image”, the influential film critic and theorist André Bazin claims that it is our obsession with realism (7). With the invention of photography: “For the first time, between the originating object and its reproduction there intervenes only the instrumentality of a nonliving agent” (Bazin 7). Bazin also explains that our obsession with recreating the world realistically (through the photograph) lies in our deeper human need to overcome time and death (6). By creating a ‘realistic’ replication of the world at a certain point in time, the subject stands still in this time. Now that the social media age is here, it becomes clear that this ‘preservative nature’ of the photograph is no longer our main concern. With platforms like Snapchat being popular, where photos that are taken and sent are deleted a few seconds after consumption, the photograph has clearly become something else. This is also the case for the popular ‘stories’ features of Instagram and Facebook, where photos stay online for 24 hours and are then automatically deleted. What, then, has the photograph become?

In his work “Photography in the Age of Snapchat”, Daniel Miller concludes that photography has expanded to become a ubiquitous presence and with this has become almost analogous to language itself (14). The photograph has, in the new media landscape, become mundane by nature (Miller 15). The idea that the image now acts as a kind of language itself raises the question of how the understanding of this language works exactly. Structural research in this area has been done by Gunther Kress and Theo van Leeuwen over several years; together as well as independently, they extensively discuss the different semiotic modes of images, visual design, multimodality and modality in the digital age. It becomes clear that those different modes of the visual do not work as language works, as the ‘signs’ do not refer to set meanings like the signs of language (words) do.

In discussing what images are, it is also important to discuss how they carry meaning. For images, this process is more complex than it is for text (Santini, Gupta and Jain 37). “In text, every word has a finite number of meanings, and the correct semantic value of a word, if not immediately clear, needs to be disambiguated […] through sentence level or paragraph-level analysis” (Santini, Gupta and Jain 37). With images, this ambiguity is clearly more of an issue. Although people who do not speak each other’s language can see the same image and understand what it is, what it means is more open to interpretation and personal context than the written word. Overall, what is to be concluded from this section is that how images ‘work’, how they make meaning, is different and not as straightforward as how language works. In the next section the relationship between written text and images will be discussed.

2.3 The relationship and the divide

Going back as far as Plato, a specific distinction has been made between words and images. Plato is credited with first systematising this distinction in the Cratylus. Words or language were seen as something created and artificial, whereas images were seen as natural and effortlessly understood (Mitchell 75). Thomas Mitchell also makes this observation in relation to forms of art in his work Iconology: Image, Text, Ideology from 1986. Mitchell says: “[…] poetry, or verbal expression in general, sees its signs as arbitrary and conventional—that is, ‘unnatural’ in contrast to the natural signs of imagery” (47). Mitchell believes this distinction between text and image is so normal for humans to make that there is no reason to believe Plato first systematised it (75). In this logic, there is a “commonplace distinction between words and images as conventional and natural signs” (Mitchell 77). In other words: language and written text are something we have to learn to understand, whereas understanding images is more innate to us. Looking at images and words this way, the visual/verbal divide seems strong.

There are cases where written text can manifest itself in the form of an image (see Image 1). We can read the words individually and together they make up an image. So written text can form an image and, strictly speaking, written language is visual. Does this mean that maybe they are more similar than we initially think? John Bateman states in his work Text and Image: A Critical Introduction to the Visual/Verbal Divide (2014) that in researching text and image one needs to avoid assuming that the two are more similar than they actually are (7). Bateman states that if text and image were more or less the same, combining them would not result in anything substantially new (7). Going in the opposite direction, arguing that images and text are completely different, is also not productive, because this would mean combining them does not produce anything sensible (Bateman 7). And clearly it does. In the case of Image 1, for example, the fact that the words are shaped into the image of a face changes the ‘reading’ (the interpretation) of the words in question. One could imagine the different context of the words if they were shaped like, say, an elephant. We do not very often encounter text and image combined like Image 1 in daily life. Most of the time we see images and text next to each other or on top of each other (text on image especially), as for example in the online consumption of text-based news.

Image 1. Words form an abstract image of a face

To see how images and text are typically used in our daily news consumption, one can take a look at some popular news websites. Popular news websites like The Times, the NOS, The New York Times and many others all contain a lot of images, especially on their homepages. Surfing around the internet, one will notice that online written news media rely heavily on the use of images. The structure of the NOS.nl homepage even makes it necessary for a new article to be accompanied by an image. We can conclude from this that people seem to really like photographic images to go with text-based news consumption, especially, I have noticed, photographic images with humans in them. Another conclusion to come to here is that the images are an important part of drawing attention to the article, since they most often appear next to the headline and preview. Most of the time the images are not an essential part of the article, or not even necessary at all (see Image 2).


Image 2. The preview of a Dutch article on the NOS.nl home page about “Kijkwijzer” accompanied by the logo of “Kijkwijzer”

In this text-based news media example, images are used to make the news more appealing even when the image is not necessary to tell the news story.

Why is it that the image, and especially the moving image, can have such a powerful and seemingly immediate attention-demanding effect on people that written text just does not seem to possess? Obviously, when the moving image contains audio there is an extra component to draw a person’s attention, but even without audio the movement in images draws our attention more than something that is still. A 2011 study by Aman Yadav et al. into cognitive and affective processing, and the differences between text, video, and video plus text, is called “If a picture is worth a thousand words is video worth a million?” This study concludes that “video is more powerful than text in the affective/engagement realm” (34). Participants in the study were more engaged when experiencing video or video plus text content compared to the text-only content. Earlier research by James Clark and Allan Paivio from 1991 shows that visual imagery, such as pictures and moving images, triggers a more affective response than written words do (160). A more recent study done by Marko Horvat, Davor Kukolja, and Dragutin Ivanec in 2015 states that “emotional reactions can be induced by virtually any multimedia format: films, pictures, sounds, voice and even text.” (1394, emphasis added). “And even” suggests here that text is the least obvious or apparent option. They concluded that video material was more powerful in stimulating emotions than pictures (1396). A hierarchy is noticeable: text evokes the least affect/emotion, images more, and moving images even more. The same goes for the attention hierarchy. Text and images thus differ in how, or whether, we see them and what our reactions towards them are. A conclusion to draw here is that in cases where texts and images are placed in the same space, the image, and especially a moving image, would ‘win’ in terms of grabbing our attention, holding it, and transferring a stronger feeling. One could imagine this effect being even stronger if the moving images have accompanying sound. What becomes clear by simply looking at the homepages of text-based news websites is that images are also used with the purpose of drawing attention to text. Images here make a written-text news website, and especially its homepage, more appealing, as if a written news article does not possess the power of being appealing enough by itself. Written text is not as sexy, you could say. Is the rule of the written word broken? Has the image won out over the word? These questions are asked by Bateman in his book Text and Image (11). His answer, in short: no. “[…] despite the increase in ‘visuality’, it is rarely the case that the written word disappears. What we instead find all around us is a far richer range of combinations of different ways of making meanings” (Bateman 11, emphasis in original). Over time, their relationship has seemed to become closer and more intertwined. Bateman continues by explaining that when combinations of written text and the image are done well, it results in something more than either could achieve alone (11).

The French literary theorist, philosopher, linguist and semiotician Roland Barthes has written quite a bit on written text and images and their relationship. In his work “Rhetoric of the Image” (Rhétorique de l’image) from 1964, Barthes explains image-text relationships as being equal or unequal. Barthes lays out a three-way classification of text-image relations: anchorage, relay and illustration (Barthes 38; Bateman 35). Anchorage and illustration are unequal relationships between text and the image. With ‘anchorage’, the text describes how to ‘read’ an image, for example with an image caption. The freedom of the image is dictated by the text (Bateman 35). The other way around is ‘illustration’, in which case the text relies on the image. Barthes calls more equal text-image relations ‘relay’. According to him, these types of relationships are less common. “Here text […] and image stand in a complementary relationship; the words, in the same way as the images, are fragments of a more general syntagma […]” (Barthes 41). He goes on to explain that the modes can co-exist in one iconic whole (41).

On a social media site, as opposed to more traditional media that appear on the page, this co-existence of the different modes of text-image relationships seems to be the default. In the Discussion section, these different modes will be discussed in relation to the case studies. Both written text and images have unique characteristics that go together well in human communication. Together, they create meanings that they would not be able to create on their own.

3. Text/Image within new media

New media, as their name suggests, have indeed not been around for long, especially in contrast to written text and images and the forms of media they were native to for centuries. Whoever thinks of new media probably thinks of the internet first. Seeing how drastically it has changed the way we live our lives, it might be the most important development in media thus far. In “What Is New Media” (2001), Lev Manovich lays out that we are living in the middle of a new media revolution: the revolution of ‘new media’. He describes this as: “the shift of all culture to computer-mediated forms of production, distribution, and communication” (2001, 19). This shift of all culture might seem radical, and maybe it is. Never before were different forms of media, text, audio and/or images brought together in one space in the way the internet now offers. Manovich describes how new media became new by detailing the history of the old media and computing. Their intertwinement changed the identity of both media and the computer (Manovich 2001, 25). Manovich then goes on to explain that computerisation will affect deeper and deeper layers of culture (2001, 27). I would argue that nowadays, the identity of media and ‘the computer’ are not only changed and intertwined, but inseparable. The old media have found their ways onto the internet, where they now live as if they always have. Though, as Manovich claims, we have to be wary of defining new media as old/analogue media that has solely been digitised (2001, 49-50). Indeed, if we look at the state of new media today, one does not simply identify the ‘old media’ within it as being digitised. In the case of cinema, television or radio viewing/listening online, there is a clear feeling of experiencing the old media in a new media environment, but this is not at the core of what new media is. So, what are new media? What makes new media ‘new’ and so different from the older media forms? I will touch upon these questions briefly with the works of Lev Manovich, Mark Deuze and Terry Flew.

In his 2003 work, Manovich predicts that the window of a web browser will come to supplement a cinema screen, a book and a CD player, with which all culture will be filtered through a computer (Manovich 2003, 8). The human-computer interface plays a big part in this, as it “[…] comes to act as a new form through which all older forms of cultural production are being mediated” (Manovich 2003, 8). Manovich has since written a lot about these human-computer interfaces and their software in particular. In 2019, new media and the human-computer interface are still ‘filtering’ culture, but most of all they are creating it on an immense scale. When talking of new media today, we are talking about massive quantities of digital cultural production, distribution and consumption. We can identify a shift from ‘filtering’ or ‘translating’ the old media into new media forms, towards mass consumption and now towards mass new media production. This mass new media production mostly consists of user-generated media content. This shift is touched upon by Manovich in his work “The Practice of Everyday (Media) Life” (2009), where he discusses user-generated content in particular. Manovich shares some numbers on media creation and consumption from 2006-2008, a few years before the first photo was shared on Instagram: “Facebook: 14,000,000 photo uploads daily. The number of new videos uploaded to YouTube every 24 hours (as of July 2006): 65,000” (Manovich 2009, 4). In 2013, an average of 350 million new photos were uploaded to Facebook every day (Smith n. pag.). These numbers have been rising, and if we are talking about what new media are, user-generated content in the form of text and images seems to be a big part of it. This type and scale of bottom-up production structure was not present in the older media. The omnipresence of media is another aspect of what makes these media ‘new’. Digitalisation, embeddedness, connection and fluidity are all concepts that work within this omnipresence of new media. What is happening now and what will continue happening in the (near) future is that new media will become more and more invisible (Deuze 40). This next shift will be from device-orientated media consumption to a more ‘natural’ media experience, such as the further implementation of the Internet of Things (Deuze 40). A part of experiencing new media today is experiencing it with less consciousness of it.

The new media we use and speak about today are in themselves different from the new media of twenty or even ten years ago. Answering what new media are exactly proves quite difficult, as technology and new media use change so rapidly. But the concepts discussed above combine to be helpful to our understanding. A combination of the internet, digitalisation, bottom-up content creation and their omnipresence is at the heart of it. Image 3 shows the three “C’s” in a definition of new media given by Terry Flew in his book New Media: An Introduction (2008). Flew explains that the three C’s (Computing and information technology, Communications networks, and digitised media and information Content) converge in the case of new media. At its centre is the internet, the heart of new media. Now that new media have been discussed briefly in general, let us investigate how written text and images occur and have occurred within them.

Image 3. From New Media: An Introduction by Terry Flew (page 3)

When we look at new media, we can see that they are a space for the visual to thrive. In contrast with the older media, image production, image sharing and image viewing are common practice here. Over time, a shift has become visible from text-orientated new media towards a more visual experience. If we look at the popular social media platforms from the early 2000s, such as MySpace and LinkedIn, written text played a prominent role on the platforms. Images were present, but not in abundance. Twitter became popular a few years later, and this social media site was and is mainly text orientated. Around the same time YouTube became very popular, and here an early shift towards a focus on images can be identified. The video-based platform has since in part taken over the role of the television, with many young people consuming the majority of their moving images this way (Businesswire.com). Tumblr was launched in 2007, gaining popularity quickly. This site focusses on micro-blogging and has also seen a shift towards more image sharing.


Facebook, the most popular social media site in the world, became popular in 2006. In 2009, Facebook was ranked the most used social networking service by monthly active users. In 2006, Facebook’s interface was mainly text orientated, with a single profile picture in the top left corner of someone’s profile and the odd tagged photo here and there. Now a person’s Facebook profile page is cluttered with images. After Facebook’s rise, other social media platforms have become very popular. It is noticeable that these were very image orientated from the start, like Instagram and Snapchat, where text plays a significantly smaller role. Snapchat, mainly popular amongst young people, shows the most dramatic shift towards an image-based structure. The app has no central user profile with textual information, no textual descriptions or bios, and mostly focusses on sending photographs and other images.

The image-shift that is clearly noticeable within new media raises the question of what the state of text is in this landscape. Do we envision a future of (new) media where the consumption of text is kept to a bare minimum? As the technological possibilities for creating and sharing images have become more advanced, more and more images are used. It becomes clear that the more images we can produce and consume, the less we need and want written text. As the book and the page were replaced by the screen, the ‘logic of writing’ has been replaced by the ‘logic of the image’ (Kress 9). The power of text has been giving way to the power of the image after centuries of dominance, with new media as a perfect example. The new media of today, and especially the newer social media platforms, centre around the image. Images play a key role in presenting online identities, through profile pictures, everyday snapshots and created media (Highfield and Leaver 49). Next to this, as Gunther Kress puts it in his work Literacy in the New Media Age: “When writing now appears on the screen, it does so subject to the logic of the image” (10). Kress explains ‘the logic of the image’ as being spatial/simultaneous, in contrast to ‘the logic of writing’ as being temporal/sequential (20). According to Kress, the logic of the image now dominates the sites and the conditions of appearance of all ‘displayed’ communication (9). Following this logic, written text is now a visitor within this world of the image, and the world of new media. Existing within this other logic, it has to follow new rules. “After a long period of the dominance of the book as the central medium of communication, the screen has now taken that place. This is leading to more than a mere displacement of writing. It is leading to an inversion in semiotic power” (Kress 9). Kress’ work was published in 2003, the same year MySpace was founded, before smartphones and iPads existed. One can imagine the magnitude of the shift in semiotic power now that screens are everywhere: in our pockets, on our faces, multiple ones in our homes and in classrooms. Now that the image has taken over our new media spaces and written text is a visitor within this logic of the image, it seems fair to assume that our affective responses are changing too. As discussed above, images cause and hold a different affective response and quality than written text does, by their nature alone. Living within media that follow the logic of the image must have a great impact on the affective quality and potential that written text now possesses.

The new media networks that images and text travel through to reach our bodies for their consumption are often considered merely passive sets of objects. James Ash argues in his work “Sensation, Networks, and the GIF: Toward an Allotropic Account of Affect” that, rather than doing this, the networks can be considered as “transmitting and translating sense itself, which in turn generates affects as these sensations encounter bodies” (123). If we consider these networks to be operating within the logic of the image, this means that the generating of affect within these networks is structured to benefit the image. Computational objects and networks structure the affects that a particular object can potentially generate (Ash 123). The potentials for generating affect within these computational objects and networks are thus noticeably not equal for images and texts.

3.1 Affordances

In the introduction of their work “The Affordances of Social Media Platforms”, Taina Bucher and Anne Helmond describe the switch from the ‘favourite button’ on Twitter to the ‘heart button’. This switch was met with noticeable dislike from users. Bucher and Helmond write that “a feature is clearly not just a feature”; features are objects of intense feelings (2). Social media platforms are structured so that users can interact with them in specific ways through these features. This interaction needs to be easy to understand, intuitive and fun. The way a social media interface is structured affords a user to use it in a certain way. I will discuss the concept of affordances in relation to social media interfaces, images and text in what follows.

The concept of affordances was originally introduced by James Gibson in his work The Ecological Approach to Visual Perception in 1979. Gibson discusses the concept of affordances as being specific relationships between the animal and its environment. The surroundings of the animal afford the animal certain things to do; they provide things. These affordances are not the same for all animals, as one can imagine; they are relative to the animal (Gibson 120). Affordances thus say something about the relationship between the animal (or human) and the environment. The concept of affordances has since been used in many academic fields, including design, Human-Computer Interaction, sociology and new media studies. When using the concept of affordances within a new media interface analysis, the interface is the environment the human is using. The features and specificities of this environment afford the user specific things to do (and not to do). This understanding of affordances is described by Bucher and Helmond as ‘low-level affordances’ and is feature orientated. Next to low-level affordances, Bucher and Helmond describe ‘high-level affordances’: a more abstract conceptualisation that is much broader than specific features such as buttons and screens. This conceptualisation of affordances as high-level affordances is closer to the original concept Gibson described: “In a Gibsonian sense, technical features—understood as the furniture of the digital landscape—afford certain actions such as clicking, sharing, or liking” (Bucher and Helmond 13, emphasis added). The old Twitter ‘favourite button’ simply affords a user to click on it, and with it the user communicates that a post is one of their favourites. But many users also used this button to say “I agree”, to save the post to look at later or to end the conversation (Bucher and Helmond 2). Bucher and Helmond call for being more sensitive to platform specificities in empirical analyses of affordances in social media. This entails considering how the affordances of platforms are relational and multi-layered.

If we are speaking about interface affordances, a concept to consider is productive power. In affording certain actions, a platform constructs a certain power structure, creating regulatory power and normalisation. The concept of productive power was introduced by philosopher Michel Foucault in his work on social control and its relation to power and knowledge. A productive power framework, as Mel Stanfill describes it, “operates from the premise that making something more possible, normative, or ‘common sense’ is a form of constraint encouraging that outcome” (1060). Stanfill introduces the ‘discursive interface analysis’ in their work “The interface as discourse: The production of norms through web design”. With this form of analysis, a website’s affordances are analysed to reveal what users should do, thus producing the normative. Stanfill wants us to ask which beliefs drive design and which are built in, and also what the consequences of these design choices are. This type of analysis uses the concept of low- and high-level affordances, seeing that it looks at specific features and also beyond these functions, at how they shape the norm. Stanfill uses three of the four types of affordances from the work of Rex Hartson to conduct the discursive interface analysis: cognitive, sensory and functional. Functional affordances show what a site can actually do, cognitive affordances how users know what a site can do, and sensory affordances enable the user to sense something (design choices) (Stanfill 1063). With these three components, the discursive interface analysis produces knowledge about what a platform affords the user and how these affordances, through productive power, create certain norms.

Now that there is a better understanding of affordances and how this concept relates to social media, we come back to the two core concepts of this work: written text and images. How do they relate to affordances in the world of social media? Written text and images are essential parts of the affordances of a social media interface. In fact, they are the only two components of the visual affordances a social media platform has to offer. They often occur in combination. A word (text) in a box (image) is often a button, which affords you to click on it, often triggering a specific action. Beyond the fact that visual affordances are made up of written text and images, these affordances also offer the user interaction with written text and images. A button may afford you to upload or take a photo, or to write a text. Where those buttons or options are and what a user can do with them has certain implications. These two ‘levels’ of affordances of images and text are comparable with the high- and low-level affordances Bucher and Helmond describe. We know that written text and images have different affordances themselves; you interact with one differently than with the other, and one mode affords you different things and actions than the other. Their differences have been discussed earlier in this work, though I will sum up some core differences in the light of affordances. Written text affords a person to imagine an accompanying image, and affords ‘understanding’ following a narrative logic, whereas images do not evoke this same imagination and have a more spatial/simultaneous logic. Images afford us, in the case of the photograph, a look at a ‘real’ representation of the world. Written text affords us a type of clear and precise communication that images (without speech) do not afford us in certain situations. In their turn, images also afford us another type of clear and precise communication that written text does not. A good example of this is the use of traffic signs instead of long textual information next to the road.

3.2 Methodology

To obtain a better understanding of how written text and images relate to each other within a new media environment, I will conduct three case studies of Facebook’s user interface. These case studies will be done through a historical overview of updates to certain interface features. The first case study is an analysis of the (changes in the) Facebook status update feature and how these changes afford different options in terms of the use of written text and images. The second case study is an analysis of how emoji, stickers, symbols and GIFs were and are supported by the social media site, in relation to written text. The third case study is an analysis of how ‘the person’ was and is represented on Facebook through images and text, specifically through ‘the profile’. The analyses are conducted in the ‘discursive interface analysis’ style laid out by Mel Stanfill, introduced above. Within my analyses I will make use of the concept of affordances to describe what certain functions afford the end-user to do. The three types of affordances (functional, cognitive and sensory) laid out by Stanfill in their discursive interface analysis will prove useful. Following the advice of Bucher and Helmond to be more sensitive to platform specificities, I take these Facebook specificities into account whilst conducting the analyses.

The contemporary historical period I will be considering runs from September 2006 until the present day (June 2019). I chose September 2006 because that is when the Facebook platform became open to anyone. From this starting point, the most noticeable changes in Facebook’s platform structure in relation to the three subjects of the case studies are discussed. A study worth mentioning here is “Becoming a semiotic technology – a historical study of Instagram's tools for making and sharing photos and videos” by Søren Vigild Poulsen. Poulsen researches Instagram’s interface tools for making images and describes these tools in semiotic terms while considering their meaning-making potentials. Poulsen also conducts a contemporary historical analysis and discusses two kinds of changes in semiotic function that emerge from the analysis. “One is the addition of new tools with semiotic functions that were not previously available, and another is the elaboration of the meaning potential of existing tools” (Poulsen 133). My work here will also consider the changes in the platform tools of an influential social media site, both existing and new, and their ‘meaning potentials’. Our approaches differ, though, as I will be researching the features through the concept of affordances and, further, how this produces the normative in relation to productive power. Nonetheless, Poulsen’s method provides useful context in the way it concerns platform features that support images.

By using the three types of affordances as part of the analysis, a variety of functions can be discussed. I analyse the low-level affordances of the specific features in relation to written text and images, as well as the high-level affordances they provide. This means that besides discussing (the changes in) buttons, placements, options, menus and layouts, I am discussing what actions they afford the end-user and what this produces. In this way, combining a historical approach with the discursive interface analysis allows me to map the changes in certain of Facebook’s interface features and study their normalising effects on a higher level. In doing this, the relationship between written text and the (moving) image will be thoroughly investigated. Knowing that new and social media are environments where images thrive, operating within the ‘logic of the image’, special attention is paid to how text is represented. Given that (moving) images clearly play a bigger role on social media sites today than they did ten years ago, the starting point of the analyses is to research how this is specifically represented in Facebook’s interface. In researching these specific features, a clearer idea emerges as to what ‘the shift’ to the logic of the image actually looks like. Keeping the relationship between text and images central is key to understanding what the limitations of images are in these environments, and vice versa.

The discursive interface analysis is described by Stanfill as examining “norms produced by ‘affordances’ of websites – defined by H. Rex Hartson (2003: 316) as what a site ‘offers the user, what it provides or furnishes’ (emphasis in original)” (1062). Stanfill explains that this type of analysis is different from the social-scientific Human-Computer Interaction framework as it is not orientated towards designers’ goals, but takes into account that there is an unequal power balance between industry and the end-user (1062), something that is especially true for a platform like Facebook. The discursive interface analysis shows how, through productive constraint, technically possible uses become more or less normative. Of course, the concept of discourse takes a central place in the discursive analysis, as it provides a fruitful base for researching productive power. In the analyses, the interface is analysed as discourse: it is telling us something. This Foucauldian concept is used widely within critical discourse analysis in Media and Cultural Studies, and proves useful as discourses “structure how we think about things and accordingly how it makes sense to us to act” (Stanfill 1061). Foucault introduced the idea of discourse in relation to social power structures over fifty years ago, and it still provides an applicable base for analysis. Foucault was mainly talking about governance by the national establishment, such as the government, and other powerful institutions like prisons, schools, hospitals, the military and religion, which in his time were considered the main producers of power. However, I would like to add that in this day and age, the massive new media companies that we spend so much of our time within are to be considered serious producers of productive power too. Their interfaces are part of our everyday lives and communications, structuring the normative through their affordances. Analysing the interface of the social media site with the most users worldwide in relation to the two main pillars of our understanding of the world and human communication, written text and (moving) images, will help provide knowledge of the formation of modern-day productive power. In the Discussion section after the case studies, the possible implications of the normalising effects of the changes within the interface will be discussed.

3.3 Text/Image within Facebook’s interface

This section gives a relatively short overview of how written text and images have existed within Facebook’s interface. No specific features, functions or tools will be discussed in detail at this point. The purpose of this overview is to establish a base on which to further build the case studies, and to demonstrate the more general ‘image-shift’.

Facebook became available to anyone with an internet connection in September 2006. At the start of its existence, Facebook’s interface supported the profile picture in the top left of someone’s profile, photos uploaded in ‘Albums’ and photos a person was tagged in. These Albums, however, were not centrally linked on someone’s profile. Most of the links and options consisted of text. The next year, a few symbols were added to the top left menu. The ‘Mini-Feed’ still mainly contained text updates. It was not until 2010 that photos really began to play a bigger role, with more images appearing on the profile, bigger symbols and more sponsored content containing images. In 2011 the “Timeline update” was introduced, bringing the most significant redesign of the interface thus far. With Timeline the interface became a lot more image centred, with the profile and the home page supporting significantly more images than before. At the Facebook ‘f8 developers conference’ in 2011, Mark Zuckerberg revealed this new Timeline redesign for the first time. Zuckerberg said: “The first thing you’re going to notice is that, it’s just, a lot more visual” (F8 2011 Keynote, 14:19). After the 2011 shift to a more visual platform, changes in favour of images kept being implemented steadily. The Facebook Messenger app was also released in 2011, making instant messaging via Facebook on mobile a lot easier. Facebook Messenger has become more and more supportive of image and photo sharing over the years. In April 2012, Facebook bought the very visual social media platform Instagram for one billion US dollars. Hashtags, video stories and more options to personalise photos soon became available on Facebook too. Still, to this day, Facebook makes use of a considerable amount of written text in its interface structure, especially compared to other popular social media websites: in contrast to, for example, Instagram and Snapchat, written text is found in every feature. To explore in what ways text is used and images are implemented, the case studies follow below.

4. Case studies

In the next few sections I will conduct three analyses as case studies of Facebook's user interface: 'The status update', 'Emoji, symbols, stickers and GIFs' and 'The person'. These particular features are chosen as case studies within this work because they represent well the nature of social media in general and of Facebook in particular. Sharing (updating one's status), the profile (how a person is represented) and the use of emoji, symbols, stickers and GIFs are at the heart of what makes a social media platform social. Within 'The status update', the Facebook feature that allows users to update their status is looked at, from its earlier forms to the shape it takes today. Attention will be paid to how textual and visual elements within this feature evolve. With 'Emoji, symbols, stickers and GIFs', these visual elements of Facebook's interface will be analysed across the platform and throughout its history, paying attention to their relationship with text. The third case study, 'The person', will look at how a person is represented through text and images within Facebook's interface. This includes the personal profile and a person's representation across the platform.

To collect data on older Facebook interface layouts, click-through YouTube tutorials, screenshots, news articles on design updates and tech blog posts are used. YouTube tutorials in particular provide a useful source of data in this case, as the videos show the older versions of Facebook actively being used, in movement. Unfortunately, no such videos exist from before 2009. With this data collection, the functions that are subject to the case studies will be discussed through the three types of affordances: the functional, cognitive and sensory affordances are discussed per change in a feature. In each case study, the low-level affordances are discussed first (what changes and how), followed by the high-level affordances (what the changes mean). These findings will be discussed in the Discussion section, together with the findings of the first section on written text, (moving) images and their relationship. The discursive interface analysis method provides a structural base for analysing the different features across the interface, in relation to the affordances of written text and images, and their productive power.

4.1 Case study #1: The status update

When Facebook became open to the public on the 26th of September 2006, its interface looked a lot different from how it does today. The status update did exist at this point, although not yet in a central position on one's profile. Analysing the status update through the concept of functional affordances, I will discuss here what the feature was able to do. Looking at Image 4, we can see what the status update function looked like in 2006. A person was able to upload a status made up of written text only. Like a lot of other Facebook features at the time, the functions of the status update were limited. Looking at the cognitive affordances of the status update here, the question to answer is: how does a user know what this function can do? The word 'edit' at the top right of the feature provides the user with the cue that by clicking it the status can be edited. Two other clickable items are the words 'See All' and '[2] updates'. This early version of the status update thus provides the user with three options to click on, all indicated with written text. Complementing the cognitive affordances, sensory affordances consist of design features that help the user with sensing. There are two basic symbols used in the status update function at this point: an arrow at the top left and an image of an envelope next to the status.

Image 4. Mark Zuckerberg’s Facebook profile page in 2006, viewed from the user

In 2007, Facebook's interface sees some changes, but the status update feature still only supports written text (see Image 5). The two options to see past status updates, 'See All' and '[2] updates', have been removed; what is possible to do with the status update function has thus been limited. In terms of the functional affordances, not much else has changed with the status update feature. The cognitive affordances did change, however: the word 'edit' has been replaced by 'Update your status…'. The status update in 2007 now only has one clickable item at a time, again written text only. A sensory affordance that changed from 2006 to 2007 is the placing of the status update feature. The function now has a more visible and prominent place on the profile page, since it moved from below the profile picture and the links to personal information to the top centre of the page. As most of the text and the two symbols have been removed, the feature now has a more minimalistic and sleek appearance.


Image 5. A Facebook profile page in 2007, viewed from the user

In 2008 the status update feature sees a lot of changes. The functional affordances increase significantly with a new profile layout (see Image 6). Beyond written text, it is now possible to include written notes, images, videos and links in a status update. I would argue that this is the biggest change Facebook has ever made to the status update. Besides being able to upload content other than written text as one's status update or wall post, people can now also comment on status updates.


Image 6. A Facebook profile page in 2008, viewed from the user

The cognitive affordances of this new status update and all its new functions have obviously also changed significantly, with new symbols, text and buttons to go with all the new options. The written text option is still the default for a status update. Sharing a written note is the next option, followed by (moving) images. A clear 'Post' button now appears within the status update for the first time, on the right side of the function. The sensory affordances now include several symbols and better legibility. The addition of the text entry box makes the function more noticeable and prominent.

The status update became a lot more prominent on the News Feed (Live Feed), where it was moved from a position at the side of the screen to the top centre in 2009 (see Image 7). The 'Add photo' and similar texts are now gone, leaving only their symbols, with the clear indication 'Attach' in front of them. The 'Write note' option has been demoted to the drop-down menu. Furthermore, when uploading a status update a person is now no longer 'posting' but 'sharing', as the publish button to update one's own status has changed.


Image 7. Mark Zuckerberg’s profile page in 2009, viewed from the user

In terms of the sensory affordances, this version of the status update has regained its simpler and cleaner look, with a more minimalistic design without the written text. The symbols are now meant to be sufficient on their own for the user to understand what to do with them, although mousing over them reveals a written explanation such as 'photo', 'video' or 'event'. These words came back in 2010, with the text entry box missing completely (see Image 8); the text entry box or photo upload options would appear upon clicking on the text/symbol buttons. This is the most minimalistic the feature has looked in Facebook's history, taking up the least amount of space.


The next change to the status update feature came with the big 'Timeline redesign' in 2011 and 2012 (see Image 9). With the Timeline redesign, the original Wall became more like an overview of one's life, where one can add 'life events' and easily scroll back and forth through time.

Image 9. A Facebook profile page in 2011, viewed from the user

Considering the status update, the text entry box is back, and the written text next to the 'photo' and 'status' buttons has stayed. With the added ability to share particular life events, the functional affordances have changed significantly. These life events are highly visual and visible posts on a person's timeline, categorised into different areas of life. Within a status update, upon clicking on the text entry box, one can now tag locations and friends, which will appear as written text links in the update. Part of the cognitive affordances supporting this new function are the five new symbols at the top right. The 'video' and 'link' option buttons have been removed. The 'share' or 'post' button has also been removed from the default view of the feature at this point, and is only to be found after clicking in the text entry box (it is back to 'post'). The sensory affordances have thus also changed, although the font and the original symbols have stayed the same. All Facebook users were forced to use the new Timeline redesign in the spring of 2012.

In the following year, the life event update symbols disappeared and made way for a more general 'life event' button (see Image 10). This does not change the functional affordances, but the introduction of 'feelings and actions' to the status update feature does (see Image 11). After clicking on the text entry box in 'status' mode, a user was able to select a particular feeling or action to go with their status update. This part of the status update is made up of words and an accompanying symbol. As can be seen in Image 11, the categories to share within concern either what you are feeling or what you are consuming.

Image 10. A Facebook profile page in 2013, viewed from the user

Image 11. Feelings and Actions added to the status update feature

Upon clicking on the text entry box, two extra buttons now appear next to the options to add a person and a location: an 'add photo' button and an 'add feelings/actions' button (see Image 12). In terms of cognitive affordances, the order of the top row buttons has not changed. The fact that the five symbols of the 'life event' function are now bundled under one general button gives the status update a less cluttered look. The addition of the photo upload button within the written text status update is particularly important to the text/image relationship within the status update. The sensory affordances at this phase of the status update feature depend heavily on the variety of different symbols, which gives the function a more visual but also busier look.

Image 12. Options when uploading a textual status update in 2013

The next changes to the status update feature come around the year 2016, when the cognitive and sensory affordances change (see Image 13). The 'photo' button in the top bar changes into 'photo/video'; the word 'video' had been absent from the top bar since 2010. The 'place' button has been removed. Most clearly, the font and the symbols have changed significantly, giving the status update feature a less dated look.


In 2017, the option was added to stream a live video from the status update. This button is placed before the 'life event' button (see Image 14). Another big addition to the status update is the 'image background' function: allowing a text-only post to have a specific visual background (see Image 15). A user can choose from a variety of designs as a background image for their text-only status update, resulting in a text-image combination. Also part of this update is the addition of a wide range of emoji, GIFs and stickers that can be added to the written text. The cognitive affordances have changed significantly too, with the buttons for the written text options now being in a different place and having different symbols. They no longer take up a relatively small part of the total status update, but are larger, more visible and more visual. Obviously, the image background function brings about its own new set of cognitive affordances, mainly consisting of the text entry box becoming a lot bigger and busier upon clicking within it. The new button to add emoji is placed within the text entry box, giving it a more prominent position. Different image backgrounds can easily be selected with multiple buttons at the bottom of the text entry box. The buttons and symbols are in a different style and all have colours now. An important change is that a miniature of a person's profile picture is now placed within the text entry box. The feature overall has a more visual, cluttered and busy feel.


Image 15. The image background function introduced in 2017

At present (spring 2019), the same options are available, while some options have been added, such as 'play with friends' and 'support charity'. A post can now also be added directly to a person's 'story'. In terms of cognitive affordances, buttons for adding extra media to the written status update have returned under the text entry box (see Image 16). The option to add photos and/or videos is now represented by two buttons. The 'share' button has been removed and only appears upon clicking within the box (see Image 17). After some years, updating a status has thus become 'sharing' again. The sensory affordances have an even more visual, playful and rounded feel, with more colourful symbols being present.


Image 17. Facebook status update feature upon clicking on it in 2019, viewed from the user

In the previous section, the changes in low-level affordances of the status update function have been discussed extensively. In the next section, these findings will be analysed from a high-level affordances perspective, laying out the normalising effects the changes in the status update feature bring about in relation to written text and the image.

In the earliest days of Facebook, the status update feature was text-based only, and it stayed that way until an important update in 2008. Having this origin, Facebook used to consist mostly of written-text content. Moving the status update to a more prominent position within the profile and removing the symbols in 2007 made the function more important and more written-text focussed. Until this time, images such as photos, symbols, GIFs and emoji were not yet prioritised on the platform. With the rise in popularity of the smartphone in the mid-2000s, taking photographs became a more normal practice. This is reflected in the changes the status update sees from 2008 onwards. By making it possible and easy to upload images, the 2008 update instantly made this practice more normal and common for over a hundred million active users (Clement n. pag.). Soon, the content on Facebook sees a shift to the visual, and the choices in status update layout clearly reflect this. The buttons to share visual content get a more prominent position with every update, and the cognitive and sensory affordances themselves become more visual as well. This entails changing the order of the buttons to prioritise the sharing of image-based content; the buttons themselves becoming more visual; the addition of symbols and emoji; and the ability to share GIFs, stickers and video material. The position of written text
