
Performative Listening

A Cultural Anatomy of Studio Sound Enterprise

Name: Steven Vrouwenvelder
Student number: 6049400
Thesis rMA Cultural Analysis
Supervisor: Dr. Timothy F. Yaczo
Second reader: Prof. Dr. Julia J.E. Kursell
Date: 15-06-2016


Contents

Acknowledgments
Introduction
    Listening
    Cultural anatomy
    Performativity
    Translation
The External Ear: Creation
    Tuning
    Modes of listening as ideality
    Forward listening
    Verstehen and forelistening
    Construction of roles
    Directed listening
    Ascoltando
    Guide
The Middle Ear: Recording
    The Microphone
    Assistive media
    Resistive media
    The Room
    Construction of the self through listening
    Construction of SSE-Noord
The Inner Ear: Processing
    Imitative Devices
    Echo, reverb and delay
    Is imitation listening?
    Reflexive listening
    Non-human listening
The Nervous System: Mixing
    Arrangement
    Psychoacoustics
The Brain: Beyond SSE-Noord
    Reduced listening
    Entendre and Reduced Listening
    Reduced Listening at SSE-Noord
    Detached listening
Conclusion


Acknowledgments

For the realization of this thesis, I was dependent upon the cooperation and assistance of many people. First of all, I would like to thank Frans Hagenaars, because without his permission I could not have observed and analyzed my object. I would like to thank all of the band members and musicians who I was allowed to observe and interview: Aart Schroevers, Annita Langereis, Arnold Lasseur, Bart van Strien, Ben Bakker, Berend Dubbe, Brian Pots, Danny Vera, Erik Kriek, Josephine van Schaik, Peter Peskens, Pyke Pasman, Reyer Zwart, Robert-Jan Kanis, Sonny Groeneveld, and Sophie ter Schure. I am very grateful to M.D. Emil den Bakker for introducing me to the basics of the human ear and for lending me his textbooks. Of course, I also wish to thank my supervisor, Dr. Tim Yaczo, and my teachers at the cultural analysis and musicology departments, especially Prof. Dr. Julia Kursell, for inspiring me and stimulating me to keep improving my thesis. Lastly, I would like to thank my girlfriend, Iris Gadellaa, for her inexhaustible support.


Introduction

It is cold—February cold—outside Studio Sound Enterprise, but inside a “desert” song is about to be recorded. Frans Hagenaars, Ben Bakker, and Reyer Zwart listen to the demo which today’s artist, Danny Vera, has recorded. When thinking of this song, Danny imagines a cowboy on a horse in the desert. In the future, he wishes to perform this song with three people live for the radio. Frans, who is directing this session, reminds the musicians that they are currently recording. He means that Danny can always strip down the song for live performances. Frans suggests that the synthesizer heard in the demo can be replaced with high strings. He reckons further that they should add double trumpets to attain the stereotypical Mexican sound.

Danny plays a soundtrack by Ennio Morricone and tells Ben that he is looking for a similar sound. Ben, as a regular session drummer, knows how to get that result. He explains, but the others do not fully understand. Ben gets the chance to record so the others can hear what he means. They decide to record a basic take. The musicians record with a “click track”1 to maintain their tempo. In this way, they can replace every recorded track with another later. After recording, they listen to the result, but no one is enthusiastic about what they hear. Danny tells Ben that the result is too “military,” so Danny does not want to continue in this direction. They decide to record a new basic take. This time, Ben plays with brushes and Danny trades his acoustic guitar for an electric one. Listening to this new basic take makes Danny happy. This is the take he wants to work with—this take evokes the picture in Danny’s imagination.

When I first walked into Studio Sound Enterprise (SSE-Noord) in Amsterdam, I was surprised by the construction of the building. When I looked down the hall, I saw several rooms with microphones in them, but I needed to go upstairs to see the control room: the space with the recording equipment and the mixing console. In the SSE-Noord control room, one can communicate only aurally with the musicians, who are located downstairs. These musicians, for their part, each record in separate rooms. SSE-Noord does not match the popular preconceptions about a music studio’s layout: a producer sitting behind a window telling musicians to record the song. In contrast to this popular image, I started to perceive the studio as a site of listening instead of documenting. With my limited understanding of the human ear, I conceived the recording room as the pinna and the control room as the processing brain. The connection between rooms exists only in electronic and digital media, just as the eardrums connect the outside world with the internal receptor apparatus. In this thesis, I expand and elaborate on this analogy between the human perception of sound via the ear and the perception of musical sound in the studio.

I asked Frans Hagenaars, who owns and operates the studio, what “studio” meant for him. He explained that a studio should be a place where an artist can easily record music, and where the result should instantly sound good (Hagenaars, 3 Mar.). Surprisingly, this resonated with my preconceptions, but SSE-Noord showed me more, as the anecdote above exemplifies. I observed more listening to music than playing and recording of it. Many modes of listening were employed: listening to the demo, to Morricone, to each other’s playing, to the click track, and finally to the recorded take. These modes of listening are not clear-cut or ready-made. The band’s inability to understand the drum part Danny wanted shows that listening can fail. People practice and test their listening at the studio. This goes beyond the easy recording and instantly good sound that Hagenaars mentioned.

The people and equipment involved at the studio listen to each other in order to come to a final and definitive listening. “Listening” is a gerund—a verb that takes the form of a noun. It is the holistic and subjective experience of sound reception, with a strong connection to its verb “to listen”: the action that results in this experience. The result of a recording session is the construction of an ultimate listening—the piece of music in its recorded state. This challenges the concepts of song and […] limited view reduces the operations at a studio to mere documentation. I examine the multitude of listenings that precede the ultimate listening. This approach emphasizes that music changes between listenings and that its identity is absent, or at least unstable. The human auditory sense has a special agency that is called listening, because it translates air pressure into mechanical movement and finally into electric neural information in a way that is unique to every individual. This function is mirrored by the different instances of listening during the process of recording. Listening performatively establishes a specific situation that shapes and directs the recording of music. In this thesis, I investigate these situations through what I call a cultural anatomy, because I divide the listening that happens in a studio into stages just as an anatomist dissects the ear. Only after such an operation can the different functions that the parts perform be investigated. This thesis analyzes how listening functions as a performative agency at Studio Sound Enterprise.

Listening

Sounds can be perceived because they make the eardrums vibrate. In other words, sound only sounds when it is heard. Hearing is often understood as passively receiving sound, while listening is oriented or focused towards a particular sound source (Herbert 1-3). Musicologist Ruth Herbert points out that people conceive of this dichotomy as a hierarchy in which active perception is deemed the proper engagement with music. This hierarchy implies that sound conceals a universal understanding only to be grasped through listening. I rather see listening as a performative engagement with music: the listener attributes meaning to music by listening to it in different ways. One generally does not listen to a click track for inspiration, just as one does not listen to an Ennio Morricone composition simply to keep the right tempo. However unlikely, one could listen in these ways. One changes the Morricone piece by listening to it in different ways. I call this a mode of listening. Sound comes into existence through reception, whether active or passive. The comprehension of sound and the realization of a listening, on the other hand, varies according to modes of listening. This thesis, then, is an investigation into modes of listening at SSE-Noord.

An increasing amount of scholarly work on listening has been published over the past decade (Carlyle and Lane; Herbert; Kane; Nancy; Sterne; Szendy; Tuuri and Eerola; Voegelin; Wolvin).2 The literature comprises listening-based ontologies and phenomenologies and also explores everyday encounters with listening. Communication scholar Jonathan Sterne, for example, explores how the MP3 format and people’s distracted listening fuel each other. Others (among others, Nancy and Voegelin) explore the way listening provides a perception of the world that is markedly different from seeing.3 In both cases, scholars figure the listener as a universal subject.

These elaborations on listening focus mainly on the receiving end of perception. Although they enhance understanding of human listening, they largely fail to explore how listening enhances and alters one’s understanding of the world. An exception is the French philosopher Peter Szendy, who investigates listening agency throughout European history as he examines several specialist listeners.4 He thereby deviates from the assumption that listening is universal and instead sees it as a subjective appropriation of a musical work (Szendy 8). The listeners that Szendy discusses are specialists because they have the agency (i.e., potential) to communicate their listening. In other words, their modes of listening create a new listening for others. Szendy explores when and where these listenings were conveyed as they went beyond the individual experience. For example, the arranger (Szendy 35-68) is the one who interprets (i.e., listens to) a musical work and rewrites it for another (set of) instrument(s). In other words, the arranger can write down a listening, allowing others to listen to her or his listening.

2 This list contains several books and articles that I address in this thesis or that are exemplary of the general tendency of listening research. Tuuri and Eerola as well as Wolvin are included because they reviewed much of the pertinent research. They show that it is mainly psychological(ly informed) and formulated in universal terms.

3 I elaborate on this in chapter 2.


In the introduction, Szendy wonders, “Can one only make one’s listening heard by rewriting, by radically crossing out the work to be heard? Can one adapt, transcribe, orchestrate, in short arrange in the name of the work?” (Szendy 7).5 Surprisingly, Szendy does not investigate the production of music at a recording studio.6 For many artists, a studio is the place to carry out a listening and to preserve it for eternity. In contrast to the individual specialists Szendy discusses, the listening established in the studio is the work of multiple actors. The studio is a unique situation that provides this opportunity. This is evident from listenings done outside the studio; for instance, live concert music differs from studio-recorded music. These divergent perceptions, even or especially of the same song, are the result of different modes of listening that direct the music at these locations. This thesis intervenes to hear what was previously silent in Szendy, by examining how SSE-Noord provides the space to develop a communicable listening. It contributes to an understanding of listening as specialists at a recording studio practice it. It also enhances the general concept of listening, because I emphasize the non-universality of listening.

Cultural anatomy

During a conversation with a friend, who is a medical doctor, it occurred to me how many functions of the human ear resemble operations that take place at SSE-Noord. These functions of the ear are optimized for proper hearing and enhance the audibility of the auditory world. This perspective revealed that the diverse actors at SSE-Noord were connected, because they were all oriented towards the audibility of music. This realization stimulated me to investigate the analogy between the ear and SSE-Noord. The auditory organ of the ear and the recording studio are both widely perceived as unities. Hagenaars states that an artist comes to the studio to crystallize a song that the artist rehearsed before entering the studio (5 Oct.). This view obscures the creative process that music undergoes during a recording session. I look at the studio as a listening space, instead of just a site to record and document songs. In order to understand the creative processes that take place at SSE-Noord, I will perform a cultural anatomy. Like the anatomist who dissects the ear, I will cut the studio into parts. Sound travels from the external ear through the middle and inner ear, via nerves into the brain. Each of these stages performs a crucial part of the process of hearing and listening. Although the functions of each part can be described separately, they only function together. The process may appear unidirectional, but in fact the parts send information back and forth within the ear.

5 Emphasis in original.

6 Maybe it is his predilection for so-called European classical music that causes him to neglect the recording studio.

Ear dissection allows for the exhibition of parts that are distinguished on the basis of a connected function. Similarly, at SSE-Noord, modes of listening with related functions can be separated from one another. My cultural anatomy distinguishes five components that correspond to the parts of the auditory organ: creation, recording, processing, mixing, and beyond the studio, respectively. Although it may appear so, the order of chapters here does not represent the progression of recording, but correlates to clustered modes of listening. The modes of listening I examine are active throughout the recording process, but are emphasized in the separate stages that I distinguish.

Performativity

The cuts of this cultural anatomy are “agential cuts,” as formulated by Karen Barad (815), because actors (musicians, equipment, etc.) perform these cuts. Barad contributes to the discussion between linguistic performativity (an entity is because it is called accordingly) and material performativity (an entity is because it acts accordingly), while seeking to emphasize the importance of the latter. She argues that performativity is “linked not only to the formation of the subject but also to the production of the matter of bodies” (808). She relates this to Judith Butler’s gender performativity: “a set of free-floating attributes, not as an essence—but rather as a ‘doing’” (808n8). In other words, a perceived entity exists as an entity because it acts accordingly, not because it simply is.


Actions divide subjects from objects. This division is what she calls an agential cut, because it separates matter through the agency of the subject. Similarly, modes of listening separate the greater process of recording into stages.

The cutting subject is the thing that acts and is thus called the actor. The object is the thing influenced by this action. I identify constellations of such actions as the role. For example, in the relation between me (my physical body, my notebook, and my background) and SSE-Noord (Frans Hagenaars, its equipment, and the musicians involved), I perform the role of the researcher, and the studio is the object of study. Performativity is acting according to a role and thereby establishing a relation between a subject and an object. Agency is the potentiality of actors to perform or act. I investigate listening’s agency: actors distinguish stages in the recording process through different modes of listening.

Caleb Stuart researched performativity in relation to live performances of so-called “laptop music.” He claims that audiences can hardly relate to performances where only a laptop is present (Stuart 59). However, they could relate if they focused on listening instead of watching the live performance (Stuart 64). He argues that these performances blur the line between documentation and performance. In this thesis, I argue for a similar shift, although I concentrate on the recording process. I argue that the recording process is performative rather than documentary. With this shift, one’s engagement with the recording changes. When recording is conceived as a performance, those involved in the recording are actors and their operations are performative.

The main actor in this research is Frans Hagenaars, the owner and operator of the studio. He acts according to the roles of sound engineer, mixer, mix engineer, and producer (though he prefers the word “director”) simultaneously. In Dutch, one can distinguish between a producer and a producent, but both translate as “producer” in English. I use “director” to indicate the latter, as it refers to the actions related to organizing, enabling, and directing the recording sessions. According to Hagenaars, a producer is associated with a virtuoso multi-instrumentalist who has a strong opinion on sound and production, while a director collaborates with the artist (3 Mar.). One can detect from this last statement that these roles are not identities but rather temporary relations. Hagenaars acts as a director, or rather steps into a directing role, only at specific moments.

I elaborate on these relations among actors throughout this thesis, as these are not fixed at a certain stage. The relationality of events and actors is inherent both to the creation of music and to the process of doing fieldwork. This thesis reflects a research period (November 2015 – February 2016) during which I listened to and at SSE-Noord. Over this time, I listened to Frans Hagenaars, Danny Vera, Ben Bakker, Reyer Zwart, The Mysterons, Blue Grass Boogiemen, and Berend Dubbe. This thesis only pertains to the relation between the events, the actors, and myself in that segment of time.

Translation

Translation is switching from one mode of comprehension to another. If someone is unable to understand Dutch, I switch to English to express myself. Similarly, I translate my listening into written words in this thesis. Media theorist Marshall McLuhan stressed that translation is the primary force of knowledge: “… when we say that we daily know more and more about man. We mean that we can translate more and more of ourselves into other forms of expression that exceed ourselves” (63).7 McLuhan uses the phrase “extensions of man” to denote media (7). This phrase performatively constitutes the human body as an acting subject and the media as objects which are acted upon. Media extend human organs—for instance, the telescope extends the eyes. This extension is a type of translation, as sight is translated from lens to eye. In this process, human sensory organs come to be perceived as unities. As I mentioned above, this is only a perception; by examining the ear more closely, one discovers the process of translation inside the organ. The sensory organs work so well that one often forgets they are made up of tiny parts which have developed to work in harmony. The ideas of media as translating machines and media as extensions of human beings clash with each other, because translation happens inside the human body. The studio is also perceived as a unity, but inside the studio are several machines for translation. I use the ear as an active metaphor alongside my examination of the studio. I translate between them in order to enhance the understanding of the studio, the human ear, and, ultimately, the concept of listening.

Sounds are recorded by translation from air pressure to electric current, to digital code, back to electric current, back to air pressure, and finally to the sounds that knock at human eardrums, where they again set a chain of different media in motion. In this procedure, translation is another word for listening. Rather than the passive reception of sound, listening is a performative agency, because it potentially affects information as it translates from one mode of comprehension to another. Listening agency establishes a relation between the listener and the entity that is listened to. Different modes of listening invoke different relations. The studio is the specialist listening space, because it provides the possibility to perform listening agency and ultimately to establish a listening. In contrast to Szendy’s individual specialists, the studio is an assemblage of specialists that employ modes of listening. Translation takes place in order to communicate between the specialists. During the recording process, translation changes and arranges sounds. The process only exists because of listening. This thesis is an investigation into modes of listening and how they affect and enable the process of creating recorded music. This analysis deviates from common assumptions, showing the inherent multiplicity both of the studio and of the ear.


The External Ear: Creation

I sit downstairs in the main recording room of SSE-Noord, where the musicians rehearse the song that they are about to record. On the opposite side of the room, drummer Ben Bakker and bassist Reyer Zwart discuss the order of recording with an absent third person. This person could be Danny Vera, who is in the adjacent room, or Frans Hagenaars, who is upstairs. I think about the session that I witnessed last week. Without the use of headphones, the Blue Grass Boogiemen recorded their songs with four musicians and two singers in this same room. At that time, I sat behind banjo player Bart van Strien. Despite the penetrating sound of his instrument, I could hear everyone and thus get an idea of the song. This time I hear only drums, for the bass is directly plugged into the system. Luckily, Frans brings me a set of headphones so that I can listen in. Now I can confirm that Ben and Reyer are talking with Danny. I hear the guitar, the bass, and each person’s voice because of my personal headphones. The sounds are interrupted by a telephone-like voice that asks whether the musicians are ready to record. A loud metronome turns on and the musicians start to play.

Danny Vera and the Blue Grass Boogiemen exemplify the variety of sounds that enter the recording system of SSE-Noord. I consider the process by which these sounds are directed into the studio’s recording system. First, the musicians bring their ideas and capacities from outside; ideas can only enter the recording system when played in front of a microphone. The microphone represents the eardrums of the studio. The ear consists of different compartments through which sound is transmitted and transformed. This transmission starts at the eardrum: the membrane that separates the inside from the outside. The external ear directs sound to the eardrum (Widmaier, Raff and Strang 240-41). Before the eardrums can vibrate along with the sounds played by the musicians, the sounds have to be created. In this chapter, I discuss several modes of listening that perform the primary stage of recording: tuning, forward listening, and directed listening. These are all modes that must occur before airwaves hit the microphone membrane, causing sounds to enter the recording system.

Tuning

Before musicians can start playing, they have to tune their instruments. This means that listening begins before playing and thus before recording. Before they enter the studio to record their songs, artists exchange pre-recorded material (i.e., demos) and ideas with Hagenaars. In a subsequent meeting, Hagenaars and the artists discuss what kind of songs are to be recorded, how many musicians will participate, how well they know the music, and how many days they plan to work together. Though it appears to be a formal arrangement, the primary aim of such a meeting is to feel whether there is a good connection, or rather what kind of connection is appropriate. This stage, preceding the recording, is necessary for all actors to attune to one another.

If two strings are well attuned, one string, when struck, can cause the other to vibrate. The unstruck string sympathizes with the other, resulting in a richer sound. A recording can have a similarly rich sound if the actors sympathize with each other. When musicians are in tune with each other, both literally and metaphorically, they perform well together. Berend Dubbe, for example, first recorded his album at home and later reached out to Hagenaars to mix it. They were able to start mixing immediately because Hagenaars was in the right mindset and well prepared for this project. Hagenaars seems able to adjust his mindset to every musician he works with. Artists praise his ability to make everyone feel comfortable and to set the right atmosphere for the specific session of the moment. Tuning is necessary for sympathizing—it is a primary condition for making music together.


Modes of listening as ideality

From the preceding paragraphs, one might conclude that Hagenaars is able to fully sympathize with an artist. While in practice this may work, it is theoretically impossible. The French philosopher Jean-Luc Nancy pointed out that in order to transfer information undisturbed there must be a commonality between receiver and sender (50). Between two strings of equal length and thickness, there is a commonality. Therefore, the struck string communicates undisturbed with the unstruck. The latter sympathizes with the former. This indicates that in every other situation, there is always some level of disturbance or noise in communication. Consequently, undisturbed communication is an ideal form of listening. By “ideal” I do not indicate that which is highest in a hierarchical order, but rather a state which can never be met and only strived for. Hence, listening is not a univocal essence, but a relation between practice and an ideal put into action. This relation differs from mode to mode, because the actual listening is practice-dependent and not the ideal of undisturbed communication. Hagenaars’s listening, at this earliest stage of the recording process, aspires to feel (sympathize) with the recording artist in order to collaborate with them.

Forward listening

When an artist enters the studio, she or he cannot record immediately. The artist, the director, and the session musician have to start sympathizing again, this time specifically for the music of the day. Session musicians Bakker and Zwart frequently mentioned “vooruit luisteren,” or “forward listening” (as I will call it henceforth) as one of their primary qualities (23 Feb.). In forward listening, musicians listen to grasp the idea of the piece of music. Through this technique, musicians claim to hear more than is already there and especially to hear what is possibly still to be recorded. I distinguish two different modes of forward listening—in one, actors can hear from a musical gesture what an artist wants to convey, while in the other, they hear how to get there. Musicians need forward listening before they start recording, because everyone needs to be on the same page.


Verstehen and forelistening

“Een goede verstaander heeft genoeg aan een half woord” is a Dutch proverb that expresses the first mode of forward listening. It translates as “a word is enough to the wise,” but the English translation does not convey the same message. Just “half of a word” is enough for the Dutch listener, because she or he is a “good listener”: good listeners can grasp the intention or message from an incomplete gesture. The stem verstaan, like the German word verstehen, can be translated as both “hearing” and “understanding.” German philosopher Hans-Georg Gadamer, who extensively elaborated on verstehen, states that a text is like a question (352-53). In order to understand a text, the interpreter has to understand the question and thus to understand the one who poses the question. Forward listening is a practical verstehen, because it is related to an incomplete work, while Gadamer’s hermeneutic verstehen implies a completed work and a total understanding of meaning. Therefore, forward listening is twofold: first a practical verstehen, followed by a performative explanatory or technical listening.

The performative listening that Bakker and Zwart practice concerns a work in progress. They do not hear “the future” as it will come, but they hear a future that can come. Their listening does not disclose a truth but constructs a relation between the heard and the potential future. This performative future listening can be explained with the figurative expression “voor ogen zien” (“seeing in front of the eyes”). In Dutch, this means that one can imagine something added to the reality that is already optically perceived. Zwart proposed “voor oren horen” (“hearing in front of the ears”) as analogous to this Dutch expression (Bakker and Zwart, 23 Feb.). I suggest calling the session musicians’ practice forelistening, as a derivative of foreseeing. As musicians forelisten, they strive to understand the technical means that they will need to realize the added potential that they hear in the actual music. Because it is guided by an ideal, the practice of forelistening can continue endlessly, as musicians keep hearing possible improvements to the sound. Therefore, musicians practice forward listening during the whole process of recording and not only in this initial stage. During a day of recording, one’s perspective on the song can change, so the ideal sound also changes and forward listening continues. Forelistening’s agential qualities manifest these situations of potential sound realization. Forward listening performatively separates versions or manifestations of the artists’ musical ideas.

Construction of roles

As a separate stage of the recording process, forward listening occurs mainly at sessions with session musicians positioned next to the artist. Artists eventually have the final word in decisions, as they provide the first musical and textual gestures and own the copyrights of the recorded music. The artist cannot listen forward, because the musical gestures have to be understood (verstehen). In Gadamer’s terms, the artist is the one who poses the question to be understood by others. Nonetheless, artist is a role that can be adopted by anyone at any time and does not always encompass all of the qualities I have described.

The Blue Grass Boogiemen exemplify this flexibility in roles. The songs they recorded are a supplement to In The Pines, a comic book by Erik Kriek. The comic artist translated five murder ballads into comic strips and invited the Blue Grass Boogiemen to record the songs. In this case, the Blue Grass Boogiemen act as session musicians. The band practices forward listening in relation to existing ballads; their forelistening concerns their own idiom. For example, they interpret “Where the Wild Roses Grow,” initially recorded by Nick Cave, by adapting it to their instrumentation: mandolin, banjo, bass, and guitar. The Blue Grass Boogiemen also usually perform their own songs alongside interpretations of traditional ones, so they inhabit the role of artists on other occasions. Kriek also sang and played guitar on some songs and thereby trusted in the opinions and authority of the Blue Grass Boogiemen. These sessions show that artist and session musician are performed roles and not fixed identities; they change according to the mode of listening employed. Forward listening is the agential cut which separates the artist from the session musician.

Forward listening shows that producer is also a performed role. According to Hagenaars, the producer is a virtuoso multi-instrumentalist who knows how to achieve a specific sound and imposes his preferences on the sessions (3 Mar.). The "virtuoso multi-instrumentalist" quality of the producer is mainly carried out by the musicians, while Hagenaars's forelistening concerns musical interpretation as well as the operation of the recording devices.8 This leads Hagenaars to say that the role of producer is shared between him and the musicians (3 Mar.). Together they verstehen what an artist wants; the musicians know how to play what they forelisten and Hagenaars knows how to record it. This necessarily happens before recording can commence.

Directed listening

Forward listening is a creative and communicative listening which, in some cases, is skipped as a separate stage. This is typical for recording projects that do not involve session musicians. The Mysterons did such a project when they recorded their single “Mellow Guru.” Hagenaars still practices forward listening, but only during the day of recording and not as a separate stage. From the moment they set up their gear, the Mysterons started practicing music to get in the right mood for playing the song. Often, musicians do this to master the structure of the song and to set the direction for the actual recording. I call this mode of listening directed listening, because it involves getting everyone in sync and ready to record the song properly.


Ascoltando

I distinguish two modes of directed listening—the first is formulated in Szendy's book with Nancy, who wrote the foreword. Nancy proposes ascoltando ("listening" in Italian) as an instruction for musicians to play while listening (ix). A musician always has to listen, of course, but the instruction "ascoltando" indicates a kind of playing that is characterized by listening. Szendy adds that "ascoltando" evokes "following" (105-10); therefore, it implies one listener's submission to the expression of another. The musicians start playing before they intend to record, not only to follow one another, but also to establish a mutual submission.

When musicians play ascoltando, recording commences. The first element that has to be recorded is the basic take. A take is the totality of the parts that are recorded at once. The basic take is this totality and it functions as a basis for the recorded music. The musicians play their main parts for this take, causing the structure (form, harmonies, rhythms) to take shape. During this recording process, musicians never reach complete submission, because this is an ideal that cannot be attained. When they are finished playing, Hagenaars invites the musicians into the control room to listen to the recording. The musicians, for their part, often refuse to listen to the take. This can occur either when they already feel that the take was lacking in quality or when they judge that they have come close to collective submission and therefore want to continue recording to maintain the harmonious state. Musicians will allow the interruption only when they feel that the basic take was good enough to review.

Guide

The second mode of directed listening has practical implications to record the basic take. For instance, both Danny Vera and Josephine van Schaik, singer of the Mysterons, prefer to record the vocal track later. Nevertheless, songs are often oriented towards singing. In the case where a vocalist prefers to record their part later, they will sing a "guide" to direct the others. This is a practical mode of directed listening. "Guides" are parts (instrumental and non-instrumental) that are not recorded but function as a direction for the others. Guide tracks should not interfere with the actual recording and therefore are made with the assistance of multiple recording rooms and headphones. Each musician has a personal headphone remote mixing station. With this device, one can individually adjust the volume of the different signals. A musician can, for example, raise the volume of the bass to guide their own playing. This does not affect the input signal of the respective track into the control room's mixing console. Similarly, Hagenaars can make a click track to direct the tempo of the music. This is necessary when the artists wish to later add many more tracks to the basic take, and of course, to avoid unwanted sloppiness.
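The monitor architecture described above can be sketched in a few lines of code: each musician's headphone station applies personal gains to copies of the track signals, while the signals entering the console stay untouched. The class and track names below are invented for illustration and do not describe SSE-Noord's actual hardware; this is a minimal sketch of the principle, not an implementation of it.

```python
class MonitorStation:
    """One musician's headphone remote: personal gains per track."""

    def __init__(self, track_names):
        self.gains = {name: 1.0 for name in track_names}

    def set_gain(self, track, gain):
        self.gains[track] = gain

    def headphone_mix(self, input_signals):
        # Sum the tracks sample by sample, weighted by this musician's
        # personal gains; the input signals themselves are never modified.
        length = len(next(iter(input_signals.values())))
        return [
            sum(self.gains[t] * input_signals[t][i] for t in input_signals)
            for i in range(length)
        ]

# Two tracks entering the console; the console records these untouched.
inputs = {"bass": [0.2, 0.4, 0.2], "vocal_guide": [0.1, 0.1, 0.1]}

station = MonitorStation(inputs.keys())
station.set_gain("bass", 2.0)  # this musician wants more bass in the cans
mix = station.headphone_mix(inputs)

print([round(x, 3) for x in mix])  # → [0.5, 0.9, 0.5]
print(inputs["bass"])              # unchanged: the recorded signal is unaffected
```

The design choice mirrors the passage: the gain change lives only in the musician's personal mix, never in the signal path toward the mixing console.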

To sum up, directed listening has two modes: one pertaining to an ideal of complete submission and the other, a practical mode. I indicate the former with ascoltando, while the latter is organized around the guide deployed by the actors at SSE-Noord. The two modes interrelate because the former can implicate the latter. The Mysterons stated that they can only play well (ascoltando) when Josephine sings along, but this is only possible through the use of guides (14 Jan.). Directed listening happens in the instant before recording and is necessary for proper recording. Directed listening involves devices that encompass the complete recording system.

Before recording commences, actors at SSE-Noord perform different modes of listening. This practice challenges theories on listening, including Gadamer's verstehen and Nancy and Szendy's ascoltando, because it shows that listening always pertains to an ideal of undisturbed communication. The modes of listening discussed in this chapter resemble the concepts developed by these theorists, but relate to a practical situation; they focus on partial understanding, because complete understanding is an unattainable ideal. These modes of listening reach partial understanding specified by their particular functions: attunement for sympathizing between actors,


playing. SSE-Noord reveals the performativity that was previously unheard in the theories I discussed in this chapter.

The performativity of these three modes of listening occurs in the translation between the temporal manifestations of the musical idea preceding the studio work (i.e., live performances or demo) and the planning of the recording process. That is, the three modes make clear for each human actor what the recording entails and which steps should be taken to realize it. The agency of these three modes of listening signifies a simultaneous departure from the musical idea and the commencement of the recording process. The musical idea exists only because the modes of listening at the studio recognize it as such, each working towards its realization. These modes comprise the primary stage of recording, because they separate idea from recording, while preparing the actors for the recording process. These modes of listening necessarily happen before recording can commence.


The Middle Ear: Recording

Danny Vera and his band are working on the drum part for "Wrong," after recording the basic take earlier in the day. Because it is one of the more up-tempo and aggressive songs on the album, Danny wishes to expand the sound and feels that the drum part needs body. Ben records the clatter of a cutlery tray on every first beat to give it a sharper, unfocused metallic sound. He also suggests stamping on the floor to thicken the sound. Danny is enthusiastic about this idea and immediately jumps around the main recording room to find a good sound. Frans reminds the others that the room is not suited for this activity. He has built extra-thick walls and windows, put dampers on the walls, and added carpets to the floor to avoid unwanted vibrations. Through this design, he intends to capture a dry sound which he can manipulate later.

While Frans explains this to Ben and Reyer, Danny continues his search for a way to make the right deep-wooden sound. There is no sound insulation in the toilet and the kitchen, but the former is too small and the latter has stone tiles. The stairs are too dangerous and the control room upstairs has also been designed to reduce vibrations. Danny finally finds satisfaction in the office: a small messy room. However, Frans foresees problems with the office, because it is the only room in the studio that is not connected to the mixing console. There is no output for headphones and no input for microphones. Nevertheless, Danny insists on using the room’s floor. Improvising, Frans builds an apparatus with a very sensitive microphone, which is placed on a stand and directed through the doorpost. The stand is placed in the hallway so that the wire can be connected with an input located in another room. Then, the basic take is softly replayed in the control room at the other side of the hall, so that the musicians can follow the tempo and rhythm. The plan succeeds and the sounds of feet stamping on a wooden floor are added to the drum part.

Frans Hagenaars is proud of his studio because there are no "mistakes" in the rooms, so that the subsequent recordings "do not lie." Hagenaars can record easily and in a relaxed manner because of the well-designed sound insulation in the recording rooms. Recording begins as sound enters the recording system through microphones. Sounds are amplified between the microphone and the mixing console. Without amplification, sound will not be audible; without sound, recording is pointless. In this chapter, I investigate the specific modes of listening that establish the studio as a recording space. I examine two actors that enable sounds to be recorded: the microphone and the room. These non-human actors perform modes of listening so that the ideas of an artist can become audible for the recording system.

The Microphone

While the musicians set up their gear, tune their instruments, and begin to rehearse, Hagenaars is setting up the microphones and amplifiers, connecting everything to the control room. He performs the role of sound engineer: the person who enables the music to be recorded and is responsible for the overall sound. With “sound,” I mean both the material and aesthetic properties of the sound, because he makes sounds audible but also creates a particular aesthetic with them. Hagenaars owns a range of different microphones, each with a characteristic sound. He decides how many microphones to use, for which purposes, and at what distance from their sources. He creates the connection between the musicians and the control room. Similarly, the bones in the middle ear are the medium. They connect incoming sounds from the external ear to the processing center in the inner ear. The microphone is another medium that connects the sounds produced by instruments with the control room. This medium can either offer assistance or resistance.

Assistive media

If there were no bones in the middle ear, then incoming sounds would immediately knock against the oval window (the oval-shaped membrane that separates the middle and inner ear). These sounds would either not be audible or at least be very weakly heard. Bones assist hearing as they amplify sound by fifteen to twenty times (Widmaier, Hershel and Strang 241). This is possible because there is a translation: air pressure waves are received by a membrane which sets a bone lever into motion. Compression and rarefaction of air is translated into bone movement. Similarly, microphones need amplification to translate air pressure into electric current.
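The fifteen-to-twentyfold pressure amplification cited above can be restated on the decibel scale that sound engineering itself uses. The conversion formula is standard acoustics, not a claim from the thesis; the sketch below simply carries out the arithmetic.

```python
import math

# Pressure ratios convert to decibels via dB = 20 * log10(ratio).
# The middle ear's roughly 15x-20x mechanical amplification thus
# corresponds to about 23.5-26 dB of gain.
for ratio in (15, 20):
    print(f"{ratio}x pressure gain ≈ {20 * math.log10(ratio):.1f} dB")
```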

According to McLuhan, with translation from one medium to another comes a redistribution of the senses; every medium appeals to some senses more than others in a unique way (19, 49). This "alteration of the sense ratios" seems anthropocentric because it views the human as an inseparable unit. McLuhan's formulation thus deploys the linguistic performativity that Barad (803) is arguing against. To take up her critique, people perceive the ear as a unity because people describe it with a singular noun. Viewed from a functional perspective, though, one can still come to the same conclusion: the ear is perceived as a unity, because the ear acts as such. When media are figured as "extensions of the body," as in McLuhan's conceptualization, the implied body is a human body and thus humans are at the center of the worldview. A close study of the sense of hearing reveals a process of translation inside the hearing organ. By examining a dissected ear, one can see that the organ does not consist of one material but is instead composed of a combination of media. The claim that media are extensions of the senses is anthropocentric, because it is oriented towards human perception.9 This view neglects the heterogeneity of the media in the ear and also obscures the fact that the flow of information within it is bidirectional. In other words, the performativity of the organ goes unnoticed.

Reviewing McLuhan's terminology, however, makes his theory useful for developing the concept of performativity. For instance, the word "sense" indicates more than just the organs of perception, but also refers to comprehension. Reviewing McLuhan's theory with this broader definition allows for a non-anthropocentric perspective. Sense can be understood as a mode of comprehension. To communicate information from one mode to another, one has to translate; with comprehension comes translation. By its nature, translation alters the information, because every mode of comprehension has its characteristic limitations. When one views the ear as a heterogeneous assemblage of modes of comprehension, one can better understand that sound is changed through its perception. So the auditory sense is not unidirectional and anthropocentric, but instead performs a bidirectional translational agency.

9 I do not argue that McLuhan intended for his theory to be anthropocentric, because he also states that "man

Microphones have a function similar to that of the bones in the middle ear. They translate air into a mode of comprehension that is completely focused on sound transmission. This is an alteration of the sense ratios, because air has many qualities and is relatively bad at transporting sound, while bones and microphones are well-suited to the task. In McLuhan's terms, air is a "cool" medium because it serves many senses, while the other two are hot, because they are solely focused on one sense (24-35). Of course, this is still too simplistic. Each person is unique and each ear is like a fingerprint; indeed, every set of middle ear bones differs in shape and size. Similarly, every microphone is unique and there are, of course, different types of microphones with characteristic modes of comprehension. Microphones are designed to listen to specific sounds in specific environments for particular purposes. While a person cannot change his middle ear bones, microphones can be changed according to the demands of human actors. Different microphones thus assist the recording of sound in their own particular ways.

To take one example, The Mysterons reserved the second day of recording for the vocal track. Van Schaik, their singer, started the day by choosing a microphone. Hagenaars suggested that she test the U47 and the M49.10 She sang a test part with both microphones to decide which one to use for the actual recording. The band was amazed by the sounds of both but also perceived clear differences. The U47 appeared to emphasize high-frequency peaks while the M49 was judged to have a "rounder" sound. Opinions differed because the thinner sound of the U47 better matched the song, while the M49 was appropriate for her voice. In the end, Van Schaik chose the latter, suggesting that she performed better when she heard her own voice through this microphone.


Expressing this in terms of sense ratios, one could say that the U47 listens more accurately to high frequencies while the M49 treats the totality of frequencies equally. In other words, each microphone performs a different mode of listening, thereby communicating information differently. This example shows that microphones perform a listening agency, because they alter the registered sound according to their technical specifications. These specifications differ from model to model but also between microphones of the same model, because every single microphone is affected by corrosion and other natural processes. All microphones translate air pressure into electric current, but each produces a unique sound and each listens differently.
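The contrast between the two microphones' modes of listening can be caricatured as two frequency-weighting functions. The curves below are invented purely for illustration; they are not measured U47 or M49 responses, merely a sketch of "high-frequency emphasis" versus "treating all frequencies equally."

```python
# Hypothetical frequency weightings: a "U47-like" curve boosts high
# frequencies, an "M49-like" curve is flat. The numbers are invented
# for illustration, not taken from microphone datasheets.

def mic_gain(freq_hz, model):
    if model == "U47-like":
        # ramp from x1.0 at 5 kHz up to x1.3 at 10 kHz and above
        t = min(max((freq_hz - 5000) / 5000, 0.0), 1.0)
        return 1.0 + 0.3 * t
    if model == "M49-like":
        return 1.0  # treats the totality of frequencies equally
    raise ValueError(f"unknown model: {model}")

for f in (200, 1000, 7500, 10000):
    print(f"{f:>5} Hz  U47-like x{mic_gain(f, 'U47-like'):.2f}  "
          f"M49-like x{mic_gain(f, 'M49-like'):.2f}")
```

Applying either weighting to a recorded signal would "alter the registered sound according to its technical specifications," which is the listening agency the paragraph describes.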

Resistive media

One can view a microphone as an aid for recording the voice, but one can also understand it as an obstacle to ideal audibility; any media placed between the message and the receiver can also be seen as creating resistance.11 In the end, the studio aspires to deliver the ideas of the artist in the form of a song or an album. Recording is the process to make these ideas audible. This is the paradox of recording—for the ideas to be audible, they have to be translated from medium to medium and handled by different actors. As I mentioned in the section entitled "Tuning," it is impossible to transfer ideas from one person to another without disturbance. Hagenaars acknowledges this, stating that ideally the studio is transparent, meaning that it is "not to be heard" (3 Mar.). If the studio were conceived otherwise, it would mean that he imposes his own opinion and his own sound onto the artist. This approach is one he despises—Hagenaars intends to think with (and not for) an artist, hence his preference for the role of director over producer. Nevertheless, he admits that his studio inevitably has its own sound (Hagenaars, 3 Mar.). Media as resistance corresponds to the concept of the parasite as theorized by French philosopher Michel Serres.


The French word parasite refers both to a biological parasite and to electrical static. The parasite interrupts a stream and gives it another purpose or destination; the tapeworm interrupts the stream of food for the host and makes of it a source of food for itself. The operator of a medium is always subordinate to the laws of the medium, claims Serres (38). Microphones interrupt the stream of air pressure, transforming it into electric current. The sound engineer is subject to these laws of sound transmission. This explains why Hagenaars has so many different microphones. Microphones parasitize him, but he can at least choose which microphone will act as parasite. From Serres's perspective, one could say that microphones have agency in their relation with the people who use them. They act as parasites on the idealized sound that the engineer is trying to realize.

I mention the parasite here as a contrasting view to McLuhan's media theory. Both concepts explain the variety of studio recording devices and both stress the translating functions of media. McLuhan defines media as extensions of the body, thereby figuring a center (the body) with extensions (media). In this view, each medium has a different sense ratio, emphasizing some senses, while de-emphasizing others. Media differ from each other precisely because they emphasize different senses or modes of comprehension. Serres, on the other hand, rejects the idea of a center, showing that everything is mediate and that audibility is an ideal. The two theories seem to complement each other. Following McLuhan, we can see that every microphone performs a unique mode of listening. The agency of the microphones, in this perspective, concerns their potential affection of the sound signal. Following Serres, on the other hand, all media create the conditions for communication, because without them, streams would not be interrupted and redirected towards a receiver. The agency, in this perspective, is manifested in the constitution of subject and object, or, phrased differently, of parasite and parasitized.


The Room

When microphones listen, they register not only the direct input of an instrument or voice, but also a delayed input of the same sound. This is caused by reflection of sound against the walls and furniture of the studio. Because not every sound reflector is placed at the same distance nor made of the same material, sound reflects at different times, speeds, and volumes, diffusing into a unique ambience. Just as Hagenaars owns multiple microphones, SSE-Noord has several recording rooms. Adjacent to the main recording room is a smaller room that allows for the drums to be recorded separately from the other instruments. This is intended to prevent sound leakage, but it also creates the possibility to open the door and extend the acoustics into the uninsulated hall.

Reverberation is a word that describes the ambient totality of sound. SSE-Noord is designed to diminish unwanted reverberation; the less there is, the easier it will be to process and manipulate the sounds later. Nevertheless, reverberations assist and amplify sounds, promoting their audibility.

Construction of the self through listening

Vibration of the eardrum and the consequential motion of media inside the auditory organ is commonly called "hearing." One's own voice is heard through the bones of the skull and through the Eustachian tube that connects the middle ear cavity with the pharynx. No one can hear your voice in the way that you do. This hearing results from the connection between your intention (voice) and your physical body (vibrating bones and eardrum). This process therefore constructs a self-image, or rather a self-audition. Jean-Luc Nancy points out that one hears oneself inside the body and constructs a self as a subject, because it is immediately clear that sound needs to be sent and to be received (16-17). It is the immediate referral to the self from the self that makes the self a subject of itself.

A similar process of referral also happens in non-human bodies. For example, electric guitars make almost no sound when they are not electrically amplified, because they have a very limited soundbox for sounds to reverberate within. By contrast, acoustic guitars can be easily heard because of their larger soundboxes. For a guitar to function as a musical instrument, it has to resound itself; in other words, it must listen to itself. The soundbox's reverberations are the immediate referral of a guitar's sound. Although an electric guitar needs other media to be heard clearly, both electric and acoustic guitars feature a relation of something (the strings) that brings airwaves into motion and something (the soundbox) that reverberates these airwaves. This relation is what constitutes it as a guitar and allows it to be handled as such by human actors.

The specific construction of form and material makes each guitar sound unique. Each guitar is a self in its own right, because it is a unique combination of sound source and reverberation. The difference between reverberation and vibration is a matter of scope determined by the body. Both phenomena refer to the reception of sound on a surface and a consequential resounding, affected by the shape and fabric of that surface. While reverberations mainly concern an outside or an inside of a body, vibrations connect the outside with the inside of a body. Walls are perceived as reverberating because a listener usually hears only the reflection of sounds that come from the same side (interior or exterior) as the listener. However, one could perceive sound from the other side of a wall (for instance, imagine standing outside a discotheque) and know that walls also vibrate. In both situations, the wall performs the same material agency, although humans perceive its effect differently. Reverberation shows listening's performative agency as it constitutes a relation between sound source and reverberating surface, thereby constituting a material unity. This example shows that listening is also an activity performed by non-human agents and it thereby extends Nancy's conceptualization of listening.

Construction of SSE-Noord

The acoustic connection between source and body makes every sound unique, as every surface reflects sound differently. This is also what gives a particular room its unique sound.


Hagenaars limited the reflection of sound by putting soft materials on the walls and floor. Soft fabric absorbs rather than reflects airwaves. Hagenaars aspires to control the studio's ambience, so that ideally, the sounds that are created are also the sounds that are recorded. Nevertheless, the walls resound in their own way, creating the particular sound of SSE-Noord. The sound engineer manipulates sound by moving microphones closer to or further from the source, thereby recording fewer or more reverberations. Ambience is welcome as long as it is controllable, because it shapes the uniqueness of the recorded sound.

Two sets of actors—rooms and microphones—perform the reverberation and registration of sound and thereby establish the studio: a space to record sound. This seems to resemble the commonly held view that the studio is simply a place where artists come to document their songs. However, I have added nuance to this view by showing that the human and non-human actors in a studio practice performative listening. Through their listening agency, these actors add ambience to the produced airwaves and translate these into electric current. This changes the relation between human actors and the recording process, because the non-human actors perform a disturbance of the musical idea that enters the recording system. The recorded music features the unique sound of SSE-Noord's rooms and microphones, although this is probably only unconsciously perceived by the passive listener.

Nancy's notion of self-construction through listening can be extended to the assemblage of human and non-human actors that constitute the recording studio, because the studio, like human listening, involves a self-referential chain of translations. In this perspective on the studio, the theories of McLuhan and Serres paradoxically invigorate each other, because the latter stresses media's fundamental necessity for noise while the former emphasizes that every medium deploys a unique sense ratio which performs that noise. The studio is not a place of documentation, but performs a listening that transforms both the sound and its producers.


The Inner Ear: Processing

Walking up and down the stairs between the recording room and the control room is a weird experience, because the music sounds different on each floor. I stand in the doorway of the recording room listening to the Blue Grass Boogiemen. I mainly hear the loud banjo, despite the fact that Bart, the banjo player, sits with his back to the door. Meanwhile, although Aart Schroevers and his bass are facing me, I cannot really hear him play. Upstairs, by contrast, I can hear everyone together. Frans is operating his equipment; as he turns the knobs on his mixing board, the sounds of individual instruments change. The banjo is still loud here, but Frans adjusts this. Compression lessens the volume of the instrument without diminishing its intensity. The mandolin, on the other hand, is enhanced by a low-pass filter, giving the lower frequencies more prominence. Annita's voice comes into the mix via the reverb plate, causing heavy reverberation. Erik's voice, on the other hand, is passed through the tape recorder to give it a delay effect. Gradually the combined music sounds like a band. The sounds are balanced and clear—each part is audible on its own but everything also coheres into a unity. In the control room, I finally hear the song "Caleb Meyer" and all of its parts together, rather than the banjo-heavy sound which I encountered in the recording room.

At the end of the electrical circuit of sound is the A/D converter: a device that translates analogue signal into digital code. Digital code is interpreted by the computer, which in turn directs the mixing console. Similarly, at the end of the trajectory of sound within the ear, a small organ called the organ of Corti translates mechanical power into electrical signals and sends them into the nervous system. Before sound enters the mixing console, it passes through several analogue devices that manipulate it. These devices can be divided into two kinds: imitative and reflexive. The first kind imitates sounds and thereby imitates space, because reverberation consists of multiple sound imitations and the brain perceives reverberation as space. The second kind of device reflexively responds to sound according to preset parameters. These devices, together with the microphones, make up the chain of media that sound passes through before it enters the mixing console. In the previous chapter, I examined non-human listening by microphones and rooms. In this chapter, I investigate to what extent these analogue devices also perform non-human listening.
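The translation the A/D converter performs can itself be sketched in a few lines: sample a continuous signal at discrete instants and round each sample to an integer word. The 44.1 kHz rate and 16-bit depth below are common studio values chosen for illustration, and the sine wave merely stands in for the analogue voltage; the text does not specify SSE-Noord's converter.

```python
import math

SAMPLE_RATE = 44100        # samples per second, a common studio rate
FULL_SCALE = 2 ** 15 - 1   # largest value of a signed 16-bit word

def analogue(t):
    """Stand-in for the analogue voltage: a 440 Hz sine tone."""
    return math.sin(2 * math.pi * 440 * t)

# Sampling + quantization: the continuous signal becomes digital code,
# one integer word per instant.
samples = [round(analogue(n / SAMPLE_RATE) * FULL_SCALE) for n in range(8)]
print(samples)
```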

Imitative Devices

The human auditory system does not feature an imitative function. Regions in the brainstem do connect registered sounds with similar properties and determine location by measuring the time interval (which can be translated into distance) between these sounds that are perceived as similar. The brain associates different kinds of repetitions with different spaces and thereby helps to orient the listener towards a sound source. Because the auditory sense does not create imitation but only registers it, one can wonder whether the imitative devices of the studio are "listeners" or not.
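The interval-to-distance translation mentioned above rests on the simple relation distance = speed of sound × time difference. The sketch below carries out that textbook calculation; it is standard acoustics, not a model proposed by the thesis.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air, approximate

def extra_path_metres(interaural_delay_s):
    """Extra distance implied by a sound arriving later at one ear."""
    return SPEED_OF_SOUND * interaural_delay_s

# A 0.5 ms delay between the two ears corresponds to roughly 17 cm of
# extra path, about the width of a human head, which is why such small
# time intervals suffice to cue the direction of a sound source.
print(f"{extra_path_metres(0.0005):.4f} m")
```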

Echo, reverb and delay

Echo, reverb, and delay are the three terms used to indicate sound imitation in sound engineering. At SSE-Noord, as well as on the internet,12 these terms are perceived as referring to different effects. Although they are distinct, they all stem from the same sound imitation principle. One can conceive of them on a three-dimensional coordinate system. Point zero in this sphere represents a single iteration of sound, with no diffusion (first axis) and no decay (second axis). This represents delay, which is the distinct iteration of a sound within a fixed time interval. The iterated sound has exactly the same pitch, duration, amplitude, and characteristic timbre. Sound imitation is called "delay" when there is little to no diffusion or decay. The third axis, which refers to the number of iterations, does not require a change in terminology. Sound imitation is called "reverb" (a diminutive of reverberation) when diffusion and decay are added to such an extent that one does not perceive any distinct reflections. The amount of diffusion and length of decay can theoretically be extended to infinity, and the effect will still be called "reverb." Like delay, echo is a distinct imitation of a sound with multiple copies, but, like reverb, it also has a certain amount of diffusion and decay.

12 I examined the first hits on Google and YouTube (Academy AV; Ben; crazyfiltertweaker; Hunter). These sources discuss the differences between the terms, yet they all make different claims. Scrolling through other hits shows even more variety in explanations.

Only machines can imitate sound as delay. In an acoustic situation, a sound is imitated as it is reflected by a surface, but this will always be a diffuse sound. In addition, any given sound will most likely reflect from multiple surfaces at the same time. Delay thus only exists via machines, because only machines can precisely copy a given sound. When the effect of an imitative device tends towards delay, the input sound is returned as a rhythmic output. A single tone contains pitch, volume, duration, and timbre, but the repetition of this tone adds a rhythmic dimension, because the original sound is perceived as on the beat while the imitation is perceived as off-beat. Hence, delay is not a function of hearing, but a specific tool from sound engineering with an aesthetic purpose.

Contrary to what one might assume, imitation as reverb can likewise only be achieved by machines. SSE-Noord has an EMT140 plate reverb specifically designed to create reverberation. The steel plate reflects sound, and because these reflections are diffuse, reverberation is perceived. This reverb sounds distinct from spatial reverberation, because the device's reverberation is created by one surface rather than a multiplicity of surfaces. When the effect of an imitative device tends towards reverb, it nevertheless adds spatiality to the sound. Our auditory sense is trained to interpret the spatial dimensions and surface materials behind reverberations. One's mind perceives an imaginary space for these sounds, because they are notably different from those created in an acoustic situation with its multiplicity of surfaces, fabrics, and materials. Reverb, like delay and echo, thus only exists as a modifiable device.

Is imitation listening?

These imitative devices are like mirrors: mirrors do not watch, but they nevertheless change what can be seen. Imitative devices resist being understood from a listening perspective. It is hard to call imitation "listening," because effects like delay, echo, and reverb are added to the input sound; they are not present in it to be perceived. One could argue that the forelistening of human actors, which I conceptualize in chapter 2, is similar. However, the difference is that the human listeners really hear things that are not present in the music, because they have a wide frame of reference. In addition, the outcome is never exactly what they hear, because forelistening is oriented toward an ideal. These imitative devices are the opposite: they do not have a frame of reference, yet they perform accurately. Thus, forelistening is a mode of listening while imitation is not.

In McLuhan's terms, imitative devices are extensions of the tactile sense or the hand, rather than extensions of the auditory sense or the ear. An echo can be produced manually by cutting and splicing the audio track several times. Hagenaars, for example, manually created an echo by copying a part of Berend Dubbe's voice track and pasting several copies at time intervals and with a decreasing amplitude away from the copied part.13 I found the resulting sound alienating, because while I perceived it as an echo with depth, I could not relate it to a space in the physical world. This is exactly what imitative devices accomplish as well. The work, when it concerns a normal echo, is outsourced to imitative devices, which replace (or extend) the manual manipulation of music. Unlike rooms and microphones, these imitative devices are not non-human listeners; they are more like the plectrum, a tool used by guitarists to extend the human fingers and create a particular sound. However, because they change the relation between listener and sound, these imitative devices do perform non-human agency, just as a plectrum augments a guitarist's style of playing. Therefore, imitative devices do not listen, but rather change what can be heard.
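The cut-and-paste echo described above can be sketched as follows (a minimal illustration under my own assumptions, not Hagenaars's actual procedure or software): each pasted copy of the segment lands one interval later and at a lower amplitude.

```python
def manual_echo(segment, gap, copies, decay=0.5):
    """Paste `copies` duplicates of `segment`, each `gap` samples later
    and quieter by a factor of `decay` -- a cut-and-paste echo."""
    out = [0.0] * (len(segment) + copies * gap)
    for n in range(copies + 1):            # n = 0 is the original part
        gain = decay ** n                  # decreasing amplitude
        for i, s in enumerate(segment):
            out[n * gap + i] += s * gain
    return out
```

Unlike the exact copies of a pure delay, the decaying copies suggest depth, which is why the result is heard as an echo even though no physical space produced it.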

Reflexive listening

Music cognition scholars Kai Tuuri and Tuomas Eerola claim that reflexive listening is the first mode of listening, or initial response, to sound. This includes responses like shock or attempts to locate the sound source. Hence, they also state that it is not a "pure" mode of listening, because it is not conscious, or at least is characterized by a lack of focus (141). These scholars describe an unconscious process resulting in a clear bodily response (for example, startling). Similarly, the inside of the ear also contains reflexive mechanisms. These have more ambiguous bodily effects, although they are significant for hearing. The first reflexive mechanism is located in the middle ear and acts to protect the inner ear from damage. The other is located in the inner ear and enhances audibility. In contrast to Tuuri and Eerola's conceptualization, I argue that reflexive listening is made conscious in the studio. Both reflexive mechanisms are present in the recording studio, embodied respectively in the compressor and the equalizer, and each can be intentionally modified by human actors.

Compressors reflexively diminish the volume by a preset ratio whenever the input exceeds a preset threshold, so that sounds are audible (i.e., not overloading the system) and mixable in the later stages of recording. A similar mechanism is also at work in the ear: the eardrum's muscles can be tightened or slackened to control the input level of the ear, because the inner ear needs to be protected from too much energy (Widmaier, Hershel and Strang 241). An expander is the opposite of a compressor; this device diminishes the sound below a certain threshold. This makes a signal cleaner, because ambient noise is filtered out. Hagenaars does not use expanders, because the recording room at SSE-Noord already adds little ambient noise and because he prefers a slight rumble. Although expanders and compressors have similar reflexive mechanisms, the former is not present in the ear while the latter is.
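The threshold-and-ratio principle can be illustrated as a static gain curve (my own sketch, with assumed parameter names, not the circuit of any compressor at SSE-Noord): any level above the threshold is reduced so that only a fraction, 1/ratio, of the overshoot remains.

```python
import math

def compress(samples, threshold_db=-12.0, ratio=4.0):
    """Reduce level above a threshold by a preset ratio (static curve).

    A real compressor also has attack and release times, which is why it
    reacts 'a little too late'; this sketch applies the gain instantly.
    """
    out = []
    for s in samples:
        level_db = 20 * math.log10(max(abs(s), 1e-12))
        if level_db > threshold_db:
            over = level_db - threshold_db
            target_db = threshold_db + over / ratio  # keep 1/ratio of overshoot
            gain = 10 ** ((target_db - level_db) / 20)
        else:
            gain = 1.0                               # below threshold: untouched
        out.append(s * gain)
    return out
```

With a ratio of 4, a peak 12 dB above the threshold comes out only 3 dB above it, while quiet material passes through unchanged.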

Equalizers are devices which boost or weaken specific frequency bands within a signal, resembling a function of the ear, which can also discriminate among frequencies. The inner ear is a spiral tube that is divided by a membrane. This membrane is smallest and most tense at the base of the spiral. It progressively widens and slackens, allowing high frequencies to be received at the base and low ones at the top. This spatial mapping of frequencies is called tonotopic organization, and it can be used in order to focus one's listening (Wolters and Groenewegen 190). Hence, one can discriminate between a conversation and background noise by focusing on the specific frequencies of one's conversational partner. Equalizers perform an improved version of this reflex, because they enable manipulation of the sound through the precise (de)emphasizing of particular frequencies.

Compressors and equalizers are devices which perform functions analogous to those found in the human ear. Expanders, by contrast, do not mimic a mechanism in the ear. Nevertheless, I consider all of these reflexive devices to be non-human listeners. The compressor, for example, diminishes the amplitude of loud sounds a little too late and continues diminishing the amplitude even when it is no longer necessary. In other words, compression strives toward an ideal. This ideal is determined by the specific limitations of the ear. For example, the ear cannot hear all frequencies with equal attention; in this case, listening is thus defined by the differences in attention determined by the organ of Corti. One should judge whether something "listens" by looking for analogous functions within the ear, not by whether the specific mechanism is physically present in the ear.

Non-human listening

All of these analogue devices appear as tools in the studio, both because they are seen as necessary to the recording process and because all of them enable the manipulation of sounds according to preset parameters. However, whatever a machine does, it can only be called "listening" if its functions (rather than its construction) are modeled after the ear. This in turn foregrounds the fact that linguistic performativity hovers over academic writing, because it shows that my observations are shaped by the vocabulary I have at my disposal. I therefore identify reflexive (but not imitative) devices as non-human listeners.

Equalizers and compressors perform specialist modes of listening, because they focus on and improve two reflexes of the ear. These machines enhance the technical audibility of the tracks,
