
Perceptual Concordance of Rhythm Similarity in Electronic Dance Music - and its Interaction with Timbre and General Music Similarity

Thomas Brockmeier

Amsterdam Brain and Cognition, Faculty of Science, University of Amsterdam

September 19, 2014

Abstract

The aim of the current project is to investigate the concept of rhythm similarity in electronic dance music. Similarity of fragments of music can be evaluated on several levels, such as rhythm, harmony and timbre. While two fragments may be similar in one domain, they might be very different in others. A further goal of this project was the construction of a ground-truth dataset, which can be used to evaluate other models of rhythm similarity. It is found that human listeners are able to perceive rhythm similarity in a consistent way individually, as well as in a group. Effects of early-life musical training or musical preference on rhythm similarity judgments are negligible, but they do show a positive effect on the confidence participants had in their ratings. Considering the diversity of rhythmic features contained in the stimuli and the presence of different degrees of similarity, along with the observed concordance of the judgments, the data obtained from the current experiment can be used as a ground-truth to evaluate new models of rhythm similarity.


Contents

1 Introduction
2 Materials & Methods
  2.1 Stimuli
  2.2 Procedure
  2.3 Statistical Analysis
  2.4 Experiment 1
  2.5 Experiment 2
  2.6 Experiment 3
3 Results
  3.1 Experiment 1
  3.2 Experiment 2
  3.3 Experiment 3
  3.4 Additional Analyses
4 Interpretation & Discussion
  4.1 Rhythm Similarity
  4.2 Rhythm, Timbre & General Similarity
5 Conclusion
6 References
7 Appendices
  7.1 Appendix A
  7.2 Appendix B
  7.3 Appendix C
  7.4 Appendix D


1 Introduction

General music similarity ratings have previously been assessed, notably by Novello and colleagues (2006, 2011) and Gouyon and colleagues (2004). General music similarity simply refers to the degree to which two fragments of music resemble each other. It is also possible to assess 'sub-similarities' in music: the comparison of a single aspect of these fragments. This leads to measures of, e.g., timbral similarity when tone color or tone quality is concerned, or rhythmic similarity when aspects of sequences of music, and the relative intervals and accents between the different onsets contained within, are assessed as they move through time (Cao et al., 2014).

Currently, little is known about the perception of rhythmic similarities in music, or even about music similarity in general (Cambouropoulos, 2009). Although rhythmic similarity models have been defined (Smith, 2012), they have rarely been evaluated on perceptual data. Furthermore, until now, there has been no clarity on the question whether there is consensus among people on the concept of rhythmic similarity. Therefore, this was investigated in an experiment in which participants were asked to indicate a degree of similarity between two music segments. Participants were asked to indicate to what extent they considered pairs of fragments of music to be similar, regarding just the rhythmic aspects of the two segments, and how confident they were in each individual rating. Previous research suggests that individuals are indeed able to assess rhythm similarity independently of pitch (Pitt & Monahan, 1987), but this had not yet been investigated across individuals.

The music used in the current study was confined to Electronic Dance Music (EDM). EDM is an umbrella term for different genres of electronic music such as techno, dubstep, ambient and more, which are often produced with the intention to be played to a dancing audience (Butler, 2003). Not only does EDM often have a very prominent rhythmic element because of this, it is also possible to define distinct rhythmic patterns in its subgenres (Andersson & Eigenfeldt, 2011). The fact that rhythmic diversity is generally subordinate to, e.g., the harmonic properties of a piece of music in the Western tradition (Thaut, 2005) makes EDM a potentially interesting source of music with which to collect assessments of rhythmic similarity. Additionally, a parallel study (Lopez-Mejia et al., 2014) was performed to assess timbre similarity: the stimuli and task were the same (see below), apart from asking participants to assess only timbre instead of rhythm similarity. To replicate the result that individuals are able to consistently judge rhythm similarity, a second experiment was performed wherein a single participant performed a shortened version of the task six times. A third part of the study, performed in collaboration with the timbre experiment mentioned above, addressed general music similarity (i.e., not limited to any domain such as rhythm or timbre).

In experiment 3 participants were asked whether they found two music segments similar or not, without additional information as to what they should pay attention to. Participants were therefore free to base their judgment on rhythm, timbre, or other aspects of the music, leading to a measure of general similarity. Examining the results from experiments 1 and 3 together with those from the timbre experiment allowed the investigation of potential interactions between general music similarity and music sub-similarities. Rhythm and timbre similarity ratings were compared to general music similarity, making it possible to see which of these two dimensions exerted a larger influence on general similarity (by resembling general similarity more strongly), or whether the relative strengths of these influences change dynamically with differences in the rhythmic and timbral properties of the music.

2 Materials & Methods

2.1 Stimuli

20 music fragments (see appendix A) were selected by a team of six expert listeners. The members came from various academic backgrounds (cognitive and computational musicology, cognitive (neuro)science, computer science, linguistics and mathematics) and were involved in different aspects of the current and related projects. The related projects were the timbre similarity study mentioned above, as well as another project that aims to design a computational model of music similarity. The experts tried to select the stimuli in such a way that they represented, as well as possible, the diverse range of rhythmic and timbral patterns and features that can be found within the full scope of existing EDM subgenres. The rhythmic features were syncopation (i.e., the placement of accents on weak beats in the measure), event density (the number of onsets present in a measure), the distinction between breakbeat and four-to-the-floor rhythms (related to syncopation), tempo (within a restricted range; see below), pattern length (how many bars are looped) and the relative distribution of onsets over the loop (are there more onsets at the start or end, or are they distributed evenly). The timbre features consisted of the differences between weak and strong, soft and hard, low and high energy, colorless and colorful, cold and warm, dark and bright, and acoustic and synthetic timbres (Lopez-Mejia et al., 2014). It was attempted to select the fragments in such a way that different combinations of rhythmic and timbral features were present in the set.

The segments were treated to remove disparities in volume and cut down to 12 seconds each, starting on the first downbeat of the first measure of a particular movement. Further restrictions on the dataset were: (1) a given segment needed to be considered EDM (synthesizers and/or samplers are essential to its sound); (2) the feature diversity found in the selected segments needed to be representative of EDM as a whole; (3) because tempo is considered a highly salient dimension for music similarity (Novello et al., 2011), restrictions were placed on segment tempi in order to prevent this variable from exerting an influence on similarity judgements. Even though examples of all tempi can be found (i.e., from ambient sounds without a discernible tempo to 999 beats per minute (BPM), or the fastest tempo possible on a given drum computer or sampler), EDM is commonly produced at around 128 beats per minute (Van Noorden & Moelants, 1999). Segments were selected in a range (112-143 BPM) surrounding this common tempo.

Tentative rhythm and timbre similarity ratings were collected for each possible pair of segments (N = 190, appendix B), using data from pilot experiments. These pairs were then assigned to one of four categories based on their similarity ratings: high rhythm similarity and high timbre similarity, high rhythm similarity and low timbre similarity, low rhythm similarity and high timbre similarity, and low rhythm similarity and low timbre similarity. A high rhythm similarity, high timbre similarity pair would thus have a high score on both scales, whereas a low rhythm, high timbre pair would have a low rhythm similarity score and a high timbre similarity score. As a fourth restriction, the dataset was constructed in such a way that all four combinations of high and low rhythm and timbre similarity were present at least five times. Lastly, it was made sure that there was a diverse range of EDM subgenres available within the pool, representative of what is available in the music scene.
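As an illustration, assigning a pair to one of the four high/low cells could be sketched as below. The function name and the 2.5 cutoff (the midpoint of the 4-point scale) are assumptions for illustration only; the study derived its categories from the pilot ratings themselves.

```python
def similarity_category(rhythm_score, timbre_score, cutoff=2.5):
    """Place a segment pair in one of the four rhythm x timbre cells.

    The scores are mean pilot ratings on the 4-point scale; the
    midpoint cutoff of 2.5 is an assumed, illustrative threshold.
    """
    rhythm = 'high' if rhythm_score >= cutoff else 'low'
    timbre = 'high' if timbre_score >= cutoff else 'low'
    return f'{rhythm} rhythm, {timbre} timbre'
```
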

2.2 Procedure

The experiments were set up as publicly available surveys in an online environment (http://www.surveygizmo.com/). Participants were sourced through advertisements in different online communities, mailing lists, and social media. After being made familiar with the task through custom-made sample pairs with high and low rhythmic and timbral similarity (the rhythmic patterns either overlapped completely, or showed major differences in onset placement, syncopation and event density; analogously, segment timbres were identical or very distinct), participants were presented on each trial with a pair of music fragments and required to rate rhythmic similarity on a 4-point scale (1. dissimilar, 2. somewhat dissimilar, 3. somewhat similar, 4. similar). Additionally, participants were asked how confident they were in their rating on a 3-point scale (not confident, somewhat confident, confident). After the experiment, participants were presented with a short questionnaire to obtain age and gender demographics, and information regarding their experience with EDM and music in general (see appendix C).

The experiment was approved by the ethics committee of the humanities department of the University of Amsterdam.

2.3 Statistical Analysis

As a measure of participant agreement, a slightly modified version of Fleiss' kappa (Fleiss, 1971) was used. This statistic expresses the degree to which measured inter-rater agreement exceeds what would be expected if all participants gave random judgments. Kappa ranges from -1 to 1, with positive values signifying inter-rater agreement (see table 1).

Because kappa methods were designed to process nominal or binary data (Cohen, 1960, 1968; Fleiss, 1971), responses were downsampled to a 2-point scale from the original 4-point Likert scale ('dissimilar' and 'somewhat dissimilar' were taken together, as were 'similar' and 'somewhat similar'). For the sake of completeness, kappa measures will however be provided for both the downsampled 2-point scale data and the original 4-point scale responses. In order to maintain a degree of continuity with the literature, kappa values will be interpreted throughout the rest of this article using the conventions specified by Landis and Koch (1977, table 1), although concerns about the validity of this scale have been expressed in the past (Gwet, 2001).
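A minimal sketch of the two steps just described - collapsing the 4-point scale and computing Fleiss' kappa from a complete items-by-categories count matrix - might look as follows. The function names are ours; the formula follows Fleiss (1971).

```python
import numpy as np

def downsample(rating):
    """Collapse the 4-point scale to 2 points:
    1-2 -> 0 ('dissimilar'), 3-4 -> 1 ('similar')."""
    return 0 if rating <= 2 else 1

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating
    counts; every item must have the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_items = counts.shape[0]
    n_raters = counts[0].sum()
    # observed agreement per item, averaged over items
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)
```

For example, three pairs judged by five raters each with counts [[5, 0], [0, 5], [4, 1]] yield a kappa of about 0.72, i.e. substantial agreement on the Landis and Koch scale.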

Table 1. Interpretation of kappa values (cf. Landis & Koch, 1977).

Kappa        Interpretation
< 0          Poor agreement
0.01 - 0.20  Slight agreement
0.21 - 0.40  Fair agreement
0.41 - 0.60  Moderate agreement
0.61 - 0.80  Substantial agreement
0.81 - 1.00  Almost perfect agreement

The original kappa statistic can only be calculated for a complete set of ratings, i.e., every judge must have assessed every pair. While this was not the case in the dataset, it was possible to fulfill this criterion with a slight rearrangement. Temporarily reducing the number of judges per item to the number present for the item with the lowest number of judges made it possible to calculate kappa for the reduced dataset. This removal was done at random and the procedure was reiterated 1000 times, after which the means of all kappa and probability values were obtained. These means will hereafter be referred to as 'kappa' and 'p'. The removal procedure is not detrimental to the value of the kappa statistic since rater identity is discarded in the computation process. The presence or absence of group agreement will therefore be found regardless of the specific combination of judges in a given iteration of the kappa procedure described above.
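The rearrangement might be sketched as follows. This is a hypothetical reading of the procedure, with a compact Fleiss' kappa inlined for self-containment; names and data layout are our own.

```python
import random
import numpy as np

def fleiss_kappa(counts):
    """Standard Fleiss' kappa for an (items x categories) count matrix."""
    counts = np.asarray(counts, dtype=float)
    n_items, n_raters = counts.shape[0], counts[0].sum()
    p_bar = ((np.square(counts).sum(axis=1) - n_raters)
             / (n_raters * (n_raters - 1))).mean()
    p_e = np.square(counts.sum(axis=0) / (n_items * n_raters)).sum()
    return (p_bar - p_e) / (1 - p_e)

def iterated_kappa(ratings_by_pair, n_categories, n_iter=1000, seed=0):
    """Mean kappa over iterations in which every pair's rater count is
    randomly reduced to that of the least-judged pair, so that each
    iteration forms the complete rating set the statistic requires."""
    rng = random.Random(seed)
    min_raters = min(len(r) for r in ratings_by_pair.values())
    kappas = []
    for _ in range(n_iter):
        counts = []
        for ratings in ratings_by_pair.values():
            kept = rng.sample(ratings, min_raters)  # drop raters at random
            counts.append([kept.count(c) for c in range(n_categories)])
        kappas.append(fleiss_kappa(counts))
    return float(np.mean(kappas))
```

With ratings that agree perfectly within each pair, every iteration returns a kappa of 1, so the mean is 1 regardless of which raters are dropped, illustrating why the random removal is harmless.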

Running 1000 iterations of this procedure on the data, confidence = 1 included, gives a distribution with the following features: mean kappa = 0.1513, mode = 0.1378, median = 0.1512, minimum = 0.1378, maximum = 0.1659, sd = 0.0047, mean z score = 5.2308 * 10^-14. As becomes apparent from the low mean z score, there is very little variation in the individual iterations of the kappa calculations. Even though there may be a slight variation in mean kappa output each time the procedure is performed, these differences are negligible. The procedure described above is therefore a valid way to calculate kappa statistics for incomplete datasets and will thus be used throughout the remainder of this article.

To compare participant groups, the Wilcoxon rank-sum test (WRST, Mann & Whitney, 1947) was used. This test assesses whether two populations are the same and does not assume normally distributed data. The test outputs a statistic U, which indicates the difference between two rank totals. U can be understood as 'the number of times observations in one sample precede or follow observations in the other sample when all scores from one group are placed in ascending order' (Nachar, 2008). As U decreases - keeping sample size in mind - it becomes less likely that this difference occurred by chance. The lowest possible value of U is zero; the highest is half the product of the number of values in the first sample times the number of values in the second (Graphpad Statistics Guide, 2014).
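The U statistic can be computed directly from rank totals; the helper below (our own sketch, using average ranks for ties) returns the U of the first sample, which ranges from 0 to n1*n2. The smaller of the two samples' U values, as in the convention quoted above, is then min(U, n1*n2 - U).

```python
def rank_sum_u(x, y):
    """Mann-Whitney / Wilcoxon rank-sum U of sample x versus sample y,
    assigning tied observations their average rank."""
    pooled = sorted((value, idx) for idx, value in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                    # extend over the tie group
        avg_rank = (i + j) / 2 + 1    # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    rank_total_x = sum(ranks[:len(x)])
    return rank_total_x - len(x) * (len(x) + 1) / 2
```

For instance, rank_sum_u([1, 2, 3], [4, 5, 6]) gives 0.0 (every x precedes every y), while rank_sum_u([4, 5, 6], [1, 2, 3]) gives 9.0, the maximum n1*n2. In practice one would obtain both U and a p-value from a library routine such as scipy.stats.mannwhitneyu.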

2.4 Experiment 1

To assess perceptual concordance of rhythm similarity, participants were presented with 62 music segment pairs. Participants (N = 58, of which 13 females; average age = 27.9, standard deviation = 8.0) were asked to rate the rhythmic similarity of each given pair and the confidence they had in their rating, as specified above.

The first and last pairs were the same for every participant and were used as a safeguard (referred to as 'safety pairs' below) to check for inconsistencies in within-participant assessments. The particular pair that was used was selected because it was the most consistently and confidently rated pair in the pilot (very dissimilar). We were confident that very inconsistent ratings of this pair (i.e., the final pair being judged differently than the first; 'differently' meaning a rating difference of more than 1 point) would be red flags for false data and would necessitate close inspection before addition to the data pool. Post-hoc analysis revealed that pairs which received the twenty highest and twenty lowest similarity ratings were both judged highly confidently (mean confidence high = 2.7187, mean confidence low = 2.7858). This validates our choice of safety pair; a pair rated as highly similar would have sufficed as well. The remaining 60 pairs were selected semi-randomly from the total pool of 190 pairs.

Prior to running this experiment, a pilot was performed. After the pilot the safeguard described above was added, but the experiment was otherwise left intact. Because the participants in the pilot were colleagues sourced offline in our lab who provided us with feedback on the task, we were confident that they completed the experiment to the best of their ability. Therefore, the pilot data was included in the total data pool of experiment 1. Their numbers are included in the demographic information above.

The same experiment was conducted in a different study, but assessing judgments of tim-bre similarity.

2.5 Experiment 2

For the potential concordance of participant groups' similarity judgments (i.e., the desired result of experiment 1) to be meaningful, it first has to be shown that an individual can give judgments concordant with his or her own previous ratings. To assess this, a second experiment was performed. In order to make it possible for a single participant to rate all pairs in the experiment in one sitting, a shortened version of experiment 1 was created.

Based on the results of experiment 1, 18 pairs were selected to create a sufficiently brief experiment that contained all combinations of different rhythm and timbre similarity levels. Because similarity was found to be not as polarized (either very similar, or not similar at all) as assumed, it was decided that a third similarity category was required. This led to the addition of medium rhythm and medium timbre similarity levels. The 18 pairs subsequently contained all combinations of high, medium and low rhythm and timbre similarity (see appendix D).

To assess whether an individual listener judges rhythm similarity consistently, one expert participant (25, male, musicologist; received early-life musical training, familiar with EDM) performed the shortened experiment a total of six times. Safety pairs were omitted since the participant was a trusted individual and not sourced through online advertisement.

2.6 Experiment 3

Since music is characterized by more than rhythm alone, a third experiment was run. In this experiment general music similarity ratings were collected, i.e., participants were asked whether they found a given segment pair similar or not, without any information on what they should listen for. The results from this experiment were compared to the data from experiment 1 and the results from the parallel study on perceptual concordance of timbre similarity to examine potential interactions between these dimensions. The task was the same as in experiment 2, i.e., using the 18 pairs specified in appendix D, except that participants (N = 16, of which 4 females; average age = 33.8, standard deviation = 17.1) were asked to assess overall (general) music similarity instead of rhythm similarity.

3 Results

To guard against the inclusion of misleading data in the analysis, participants who took part in the experiment via our advertisement on the internet and gave inconsistent ratings to the safety pairs were excluded from further analysis.
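The exclusion criterion might look like the following sketch; the function name and the data layout (participant id mapped to ratings in trial order) are illustrative assumptions.

```python
def keep_consistent_participants(responses, max_diff=1):
    """Drop participants whose ratings of the identical first and last
    'safety' pair differ by more than max_diff points on the 4-point
    scale. `responses` maps participant id -> ratings in trial order."""
    return {pid: ratings for pid, ratings in responses.items()
            if abs(ratings[0] - ratings[-1]) <= max_diff}
```
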

It was found that the exclusion of judgments with low confidence ratings (i.e., confidence = 1) did not significantly affect inter-rater agreement. These judgments were therefore preserved in the final analysis, in order to keep the number of raters per pair as high as possible and thus facilitate the computation of the kappa statistic as described above.

3.1 Experiment 1

Data obtained from between 11 and 20 raters per pair (9 participants were removed based on the safety pair criteria described above) showed slight to fair agreement (2-point scale: kappa = 0.2962, p < 0.05; 4-point scale: kappa = 0.1513, p < 0.05; see table 1 for this and further interpretations of kappa values). Several participant subgroups were identified based on information obtained from the questionnaire (see appendix C) and tested for agreement. Furthermore, a WRST was performed to check whether the groups differed significantly. Participants who had received formal musical training showed slight to fair agreement (N = 32; 2-point scale: kappa = 0.3186, p < 0.05; 4-point scale: kappa = 0.1655, p < 0.05), as did participants who had not (N = 26; 2-point scale: kappa = 0.2700, p < 0.05; 4-point scale: kappa = 0.1455, p < 0.05). WRST results showed that participants' responses differed significantly between these groups (mean ratings of 1.9569 and 2.1850, respectively; U = 2.6798 * 10^6, p < 0.05). Participants who were familiar with EDM showed slight to fair agreement as well (N = 33; 2-point scale: kappa = 0.3198, p < 0.05; 4-point scale: kappa = 0.1532, p < 0.05), as did participants unfamiliar with EDM (N = 22; 2-point scale: kappa = 0.3040, p < 0.05; 4-point scale: kappa = 0.1477, p < 0.05; N.B., pair 133 was excluded from the analysis due to an insufficient number of raters (N = 1)). These two groups were not found to be significantly different in their ratings (mean ratings of 2.0711 and 2.0980, respectively; U = 3.6062 * 10^6, p = 0.2889).

Three participant subgroups were checked for differences in similarity judgments per individual pair using a WRST, the dependent variable being participants' ratings of a given pair and the independent variable being group membership. Participants with early-life musical training gave significantly (p < 0.05) different answers to 13/190 pairs as compared to those who had not. Participants who claimed to be familiar with EDM gave significantly different answers to 8/190 pairs as compared to those who did not. The group that used HiFi equipment (N = 47; headphones, studio monitors, external speakers) gave significantly different answers to 1/190 pairs as compared to those who did not (N = 11; ear buds, laptop speakers). See table 2 for WRST results.

Group differences in participant confidence were examined as well: participants with formal musical training had significantly higher confidence in their ratings, with a total average of 2.7128, as opposed to those who had not (average confidence = 2.6206; WRST: U = 3.6253 * 10^6, p < 0.05). Participants who were familiar with EDM also had significantly higher confidence than those who were not (averages of 2.7324 and 2.5902, respectively; WRST: U = 3.8078 * 10^6, p < 0.05). Males (rating concordance slight to fair; 2-point scale: kappa = 0.3003, p < 0.05; 4-point scale: kappa = 0.1604, p < 0.05) were also more confident than females (averages of 2.7169 and 2.5144, respectively; WRST: U = 5.1390 * 10^6, p < 0.05). Rating concordance of female participants was not computed due to a small sample size.


Table 2. WRST results for pairs wherein group differences were found.

Group difference: Musical training (yes/no)
Pair  24      47      49      50      106     129     131     136     149     155     161     175     183
U     114.00  110.00  133.50  85.00   128.50  113.00  164.50  132.50  107.00  128.50  101.50  135.50  126.00
p     0.045   0.005   0.009   0.013   0.030   0.020   0.011   0.011   0.017   0.036   0.043   0.050   0.023

Group difference: EDM familiarity (yes/no)
Pair  15      30      45      64      105     119     145     189
U     87.50   144.50  110.50  76.00   157.00  61.00   67.50   77.50
p     0.018   0.038   0.026   0.023   0.009   0.038   0.024   0.018

Group difference: Audio fidelity (HiFi/LoFi)
Pair  118
U     124.50
p     0.049


Table 3. WRST results for rhythm and timbre versus general similarity.

Pair#  Mean G  N_R  Mean R  U_R    p_R     N_T  Mean T  U_T    p_T     Predicted  Result
7      1.80    20   2.3500  498.5  0.0674  17   2.3529  468.5  0.0640  MRMT       TIE
17     2.08    18   2.1667  547.5  0.9588  16   3.0625  406.0  0.0008  MRHT       RHYTHM
21     1.04    20   1.5000  508.0  0.0159  18   1.3889  483.5  0.0112  LRLT       RHYTHM
39     2.40    15   2.5333  498.0  0.6812  19   3.3158  439.0  0.0023  MRHT       RHYTHM
47     2.52    16   2.9375  473.0  0.1456  17   3.4706  416.0  0.0011  MRHT       RHYTHM
53     1.12    18   2.4444  389.5  0.0000  19   1.1053  557.5  0.8306  MRLT       TIMBRE
59     1.72    17   1.8235  527.0  0.7808  18   3.2222  385.5  0.0000  LRHT       RHYTHM
62     2.36    21   2.7619  533.5  0.2111  18   2.6667  512.0  0.3167  MRMT       TIE
94     1.28    23   1.1739  633.0  0.4724  17   1.7059  448.5  0.0074  LRLT       RHYTHM
111    2.00    21   3.4286  406.5  0.0000  17   1.7059  574.0  0.3129  HRLT       TIMBRE
119    1.84    20   3.5500  372.0  0.0000  14   2.7857  400.5  0.0024  HRMT       TIMBRE
149    1.68    16   3.2500  364.0  0.0000  14   3.1429  375.0  0.0001  HRHT       TIMBRE
151    1.04    20   1.3000  546.5  0.1951  13   1.1538  469.0  0.2359  LRLT       TIE
176    2.20    18   3.0556  436.0  0.0033  15   2.0667  528.5  0.6487  HRMT       TIMBRE
178    1.44    20   1.5000  575.5  1.0000  13   2.3846  386.0  0.0008  LRMT       RHYTHM
184    3.32    18   3.7222  479.0  0.0486  13   3.4615  456.0  0.2903  HRHT       TIMBRE
188    2.00    17   2.5294  466.5  0.0585  14   2.3571  462.5  0.2553  MRMT       TIE
190    1.80    19   3.0000  402.0  0.0001  15   1.7333  525.0  0.7159  HRLT       TIMBRE

Mean G, Mean R and Mean T refer to the mean rating the pair received in the general, rhythm and timbre experiments, respectively; N_R and N_T are the corresponding numbers of raters, and U and p are the WRST statistics of the rhythm-general and timbre-general comparisons. In the original table, shading marked three cases: dark grey cells signified general similarity following the more dissimilar of rhythm or timbre, light grey cells unpredicted ties, and white cells correctly predicted ties. Timbre data from Lopez-Mejia et al. (2014).


3.2 Experiment 2

Data obtained from a single expert participant showed moderate to substantial agreement (2-point scale: kappa = 0.6132, p < 0.05; 4-point scale: kappa = 0.4511, p < 0.05), meaning that an individual listener is able to consistently assess rhythm similarity in concordance with his or her own previous judgments. Similar results were obtained by Lopez-Mejia and colleagues (2014). Participants in experiments 1 and 3 can therefore be expected to give meaningful and consistent responses as well. This solidifies the validity of conclusions drawn with respect to perceptual concordance of music similarity, be it along rhythm, timbre or general dimensions.

3.3 Experiment 3

Data obtained for the general similarity assessments showed slight agreement (2-point scale: kappa = 0.1927, p < 0.05; 4-point scale: kappa = 0.1453, p < 0.05), meaning that some concordance was observed for general similarity assessments. Even though concordance of general similarity judgments was lower than that of rhythm similarity at this point, these values cannot be compared yet since different pairs were judged in experiments 1 and 3. See below for a further discussion of these results.

Potential group differences (cf. experiment 1) were not analyzed for experiment 3, as the number of participants in the smaller subgroups was too low to lead to reliable results.

3.4 Additional Analyses

To be able to compare the results on rhythm, timbre, and general similarity, agreement for the set of 18 pairs used in experiments 2 and 3 was also calculated for the rhythm (2-point scale: fair agreement, kappa = 0.3280, p < 0.05; 4-point scale: slight agreement, kappa = 0.1980, p < 0.05) and timbre (2-point scale: fair agreement, kappa = 0.2835, p < 0.05; 4-point scale: slight agreement, kappa = 0.1572, p < 0.05) datasets. There was slight concordance in the full timbre experiment (2-point scale: kappa = 0.1223, p < 0.05; 4-point scale: kappa = 0.0625, p < 0.05). These results suggest that the 18 pairs used in the shortened experiment led to less controversy amongst raters. This could be an artifact of the selection procedure: a relatively large number of pairs in the smaller cohort scored very high or very low - and therefore had to have been rated relatively consistently - on either the rhythm or timbre similarity dimension, as compared to the full set of 190 pairs, which contained a large number of potentially controversial medium ratings.

Since rhythm and timbre are two of the most influential dimensions in EDM (Butler, 2006) as well as in music at large (Novello, 2011), we expect that both timbre similarity and rhythm similarity play a role in general music similarity in EDM. To assess the interplay between general similarity and rhythm and timbre similarity, the following analysis was performed: the rhythm and timbre similarity ratings of a given pair were compared to its general similarity judgments using a WRST (see table 3 for references throughout this paragraph). Significance levels and U-statistics of the rhythm-general and timbre-general group comparisons were contrasted (if both satisfied the p < 0.05 threshold for significance, the lower U-value was selected). For 10 out of 18 pairs no significant differences were found between general similarity and the lower of rhythm or timbre similarity. See for example pair 17: higher timbre than rhythm similarity ratings were obtained; it was therefore predicted that general similarity ratings would resemble the higher timbre ratings for this pair. However, a significant difference is found between general and timbre similarity ratings instead of the other way around. The results were inconclusive for 4 pairs with similar rhythm and timbre similarity assessments (e.g., high rhythm similarity, high timbre similarity; pairs 21, 94, 149 and 184). Interestingly, the combination of high rhythm and timbre similarity did not necessarily lead to a high general similarity rating (pair 149). Ties were predicted successfully for the four remaining pairs (7, 62, 151 and 188), although this was not due to the similarity ratings in one case: pair 7 has a low general similarity rating, but comparisons to both rhythm and timbre similarity did not cross the threshold for significance. This led to a tie in the comparison chart: the results are equally dissimilar.
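One way to read the selection rule described above is sketched below. The helper name, the alpha level, and the tie-breaking by the larger U (i.e. the smaller rank difference from general similarity) follow our reading of the text and of Table 3, not a procedure stated verbatim in the source.

```python
def general_resembles(u_rhythm, p_rhythm, u_timbre, p_timbre, alpha=0.05):
    """Which sub-similarity do the general ratings resemble more?

    A significant WRST (p < alpha) means that dimension's ratings
    differ from the general ones. If both differ, the dimension with
    the larger U (the smaller difference) is taken; if neither
    differs, the comparison is a tie.
    """
    rhythm_differs = p_rhythm < alpha
    timbre_differs = p_timbre < alpha
    if rhythm_differs and timbre_differs:
        return 'RHYTHM' if u_rhythm > u_timbre else 'TIMBRE'
    if rhythm_differs:
        return 'TIMBRE'
    if timbre_differs:
        return 'RHYTHM'
    return 'TIE'
```

Applied to pair 17 from Table 3 (U_R = 547.5, p_R = 0.9588, U_T = 406, p_T = 0.0008) this returns 'RHYTHM', and to pair 149 (both comparisons significant, U_R = 364 < U_T = 375) it returns 'TIMBRE', matching the reported results.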

4 Interpretation & Discussion

4.1 Rhythm Similarity

Firstly, it is interesting to note that experiment 2 shows that an individual human listener is indeed able to rate rhythm similarity in a consistent way. Inter-rater agreement is also found, albeit to a somewhat lesser extent. While this was to be expected, it suggests that even though participants generally agree whether two rhythms are similar or not, there may be differences in individual listening styles. For example, some participants may pay more attention to a given rhythmic feature - e.g., syncopation or event density - than others, and individuals' similarity judgments could therefore be weighted differently across different features.

To facilitate the search for groups with different listening styles, subjects were asked to describe their listening strategies after completing the experiment, but no patterns emerged that could explain the decreased concordance. When mentioning rhythmic features - something they were not explicitly asked to do - most participants named syncopation or the distinction between four-to-the-floor and breakbeat (which is also related to syncopation).

Significant differences in ratings were found for a number of pairs between people with early-life musical training and people without (table 2), as well as for the groups as a whole (see above). Together, these results could indicate that there is a difference in listening styles between the two groups. While the kappa values of these groups did not increase much compared to the kappa value of the group at large - all groups are equally concordant in their judgments - the two subgroups (receiving early-life musical training or not) did prove to give significantly different ratings overall. Members of the different groups therefore agreed equally strongly, but on a different set of answers. Whether these variations stem from differential group conservativeness - i.e., participants who had received musical training giving lower answers than those who had not, but along a similar distribution - or from a completely different interpretation remains to be seen.

Inter-rater agreement within different participant subgroups was examined as well. All considered groups showed fair agreement with similar kappa ratings. The subgroups gave comparable responses in their similarity judgments per pair. Early-life musical training exerted the largest influence, leading to different answers for 13/190 pairs - still a relatively small number. Attempts to find commonalities by ear to connect these pairs have thus far proven unsuccessful. Perhaps further research will be able to identify a potential underlying pattern.

Other participant subgroups were comparable in their similarity judgments as well; the sets of pairs that were assessed differently varied from subgroup to subgroup. While it is very interesting that the HiFi and LoFi subgroups only differed on one pair, it has to be stressed that the LoFi group was very small. This led to several pairs being judged by only a small number of raters, or by none at all. This result could be promising for further online experiments if it turns out that differences in audio hardware have a negligible effect on the data, but it will have to be replicated with a sufficiently large cohort before this can be concluded.

Considering the similar concordance statistics of all subgroups, as well as the small differences in individual pair judgments, it is noteworthy that a seemingly heterogeneous cohort of participants is fairly consistent in its assessment of rhythmic similarities in music. While the agreement within the groups is not as high as for an individual participant, a trend emerges that human listeners do indeed perceive rhythm and rhythmic similarity in a similar way.

One aspect of the task in which significant group differences were found was the confidence raters had in their own judgment. Participants who had received early-life musical training or were familiar with EDM were significantly more confident in their judgments of rhythm similarity than their counterparts, as were males compared to females.

The assessments of overall rhythmic similarity also yield a ground-truth dataset that contains information regarding the degree of rhythmic similarity between pairs of segments, as perceived by participants. This dataset can subsequently be used to test models of rhythmic similarity and judge their performance accordingly.
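As an illustration of how such a ground truth could be used, one might rank-correlate a model's predicted similarities with the mean perceptual ratings per pair. The model scores below are hypothetical placeholders, not output of any real model.

```python
def rank_simple(values):
    """1-based ranks; assumes no exact ties (mean ratings rarely tie)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

def spearman_rho(a, b):
    """Spearman rank correlation via the classic d^2 formula."""
    ra, rb = rank_simple(a), rank_simple(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical example: mean perceptual ratings vs. model predictions
ground_truth = [3.1875, 1.0625, 2.5, 1.3571]
model_scores = [0.9, 0.1, 0.7, 0.2]
rho = spearman_rho(ground_truth, model_scores)  # 1.0 for this toy data
```

A rank correlation is arguably preferable to a linear one here, since a model's similarity scale need not be linearly related to the 1-4 rating scale used by participants.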

4.2 Rhythm, Timbre & General Similarity

Interestingly, perceptual concordance for rhythm similarity was substantially higher than for timbre and general similarity. It is possible that this stems from the fact that the question used to assess general similarity is more open to interpretation than the specific assignment to listen to rhythm alone. When performing general similarity judgments, some participants may have an individual preference for a given sub-similarity such as rhythm or timbre. Because the task is less specific, participants are left freer in their approach, which can lead to the emergence of a wider range of individual strategies and subsequently a lower concordance for the cohort at large.

Another question to be answered was whether there is a connection between general similarity and rhythmic or timbral sub-similarity in music. To investigate this, all rhythm and timbre similarity ratings of a given pair were compared to the general similarity judgments of the same pair using a WRST. A tendency was found for general similarity to be statistically similar to the dimension that showed lower similarity. For example, a LRHT pair such as pair 17 (table 3) resulted in a rejection of the null hypothesis (i.e., group medians are different) at the 5% level for timbre similarity, but not for rhythm. Similar results were found for nearly all assessed pairs. Furthermore, this effect was observed for combinations with relatively higher timbre as well as relatively higher rhythm similarity ratings, indicating that it is not a consequence of one dimension overruling the other.
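The per-pair comparison can be sketched with a minimal rank-sum test. This is a simplified normal-approximation version of the Mann-Whitney U test (Mann & Whitney, 1947), without continuity or tie-variance corrections, intended only to illustrate the procedure rather than reproduce the exact analysis.

```python
import math
from statistics import NormalDist

def avg_ranks(values):
    """Rank the pooled sample, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def mann_whitney(x, y):
    """Two-sided Mann-Whitney U with a plain normal approximation."""
    n1, n2 = len(x), len(y)
    r = avg_ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)              # smaller of the two U statistics
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    if u == mu:
        return u, 1.0                      # samples are perfectly balanced
    p = 2 * NormalDist().cdf((u - mu) / sigma)
    return u, p
```

Rejecting the null hypothesis per pair (p < 0.05) then mirrors the per-pair comparisons of general versus rhythm or timbre ratings described above.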

The notion that general music similarity equates to lower rhythmic or timbral similarity is quite counterintuitive and was therefore investigated more thoroughly. The pattern was consistent throughout all assessed pairs and the differences were generally well beyond marginal significance. Closer examination of mean general similarity ratings per pair (table 3) reveals that only one of the eighteen pairs shows high general similarity (pair 184, mean rating = 3.1875). The other pairs all scored between 1.0625 and 2.5, showing that participants identified virtually no general music similarity in the stimuli. Since general similarity data was unavailable - e.g., from a pilot experiment - when the third experiment was designed, it was impossible to control for its absence beforehand.

In order to properly assess any potential interplay between general music similarity and sub-similarities such as rhythm and timbre similarity, all dimensions will have to be controlled similarly. Not only will all possible combinations of rhythm and timbre similarity have to be present in the stimuli, as they were, but all possible combinations of high, medium and low general similarity will have to be available as well. This may be possible, but would require the construction of a new set of segment pairs based on the current rhythm and timbre similarity ratings, as well as additional general similarity ratings. These general similarity ratings are, however, currently unavailable. A rerun of an experiment wherein a large number of pairs are assessed (such as experiment 1), this time examining general similarity, could provide a dataset that contains a full spread of general similarity ratings. It might subsequently be possible to pick a set of pairs from a combination of all three datasets (experiment 1, its timbre analog, and the hypothetical experiment described here) that yields all the similarity combinations described above. This information could then be used to examine potential connections between general, rhythm and timbre similarity.

5 Conclusion

Listeners, whether individually or in a group, show agreement in their responses when queried whether or not two music segments resemble each other rhythmically. Effects of early-life musical training or musical preference on agreement of rhythm similarity judgments are small, but present. Whether these effects stem from an actual difference in listening strategy, or from differential group conservativeness, remains an open question for now and could prove to be an interesting topic for further research.

A larger effect was found for rater confidence: those who had received early-life musical training or were familiar with EDM were somewhat more confident in their judgments of rhythm similarity, as were males.


6 References

Anderson, C., & Eigenfeldt, A. (2011). A New Analytical Method for the Musical Study of Elec-tronica. Proceedings of the Electroacoustic Music Studies Conference, Sforzando!, 1-12.

Butler, M. (2006). Unlocking the Groove: Rhythm, Meter and Musical Design in Electronic Dance Music. Indiana University Press, IN, USA.

Cambouropoulos, E. (2009). How Similar is Similar? Musicae Scientiae, 13(1 suppl), 7-24.

Cao, E., Lotstein, M., & Johnson-Laird, P. N. (2014). Similarity and Families of Musical Rhythms. Music Perception: An Interdisciplinary Journal, 31(5), 444-469.

Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1), 37-46.

Cohen, J. (1968). Weighted kappa: Nominal Scale Agreement Provision for Scaled Disagreement or Partial Credit. Psychological Bulletin, 70(4), 213-220.

Fleiss, J. L. (1971). Measuring Nominal Scale Agreement among many Raters. Psychological Bulletin, 76(5), 378.

Graphpad Statistics Guide (2014, Accessed Wed. 03-09-2014): http://cdn.graphpad.com/docs/prism/6/Prism-6-Statistics-Guide.pdf.

Gouyon, F., Dixon, S., Pampalk, E., & Widmer, G. (2004). Evaluating Rhythmic Descriptors for Musical Genre Classification. Proceedings of the AES 25th International Conference, 196-204.

Gwet, K. (2001). Handbook of Inter-Rater Reliability. STATAXIS Publishing Company, Gaithersburg, MD, USA.

Jones, M. C., Downie, J. S., & Ehmann, A. F. (2007). Human Similarity Judgments: Implications for the Design of Formal Evaluations. Proceedings of the International Society for Music Information Retrieval, 539-542.

Landis, J. R., & Koch, G. G. (1977). The Measurement of Observer agreement for Categorical Data. Biometrics, 33, 159-174.

Lopez-Mejia, D. I., Sadakata, M., & Honingh, A. (2014). Perceptual Concordance of Timbre Similarity. Unpublished data.

Mann, H. B., & Whitney, D. R. (1947). On a Test of Whether One of two Random Variables is Stochastically Larger than the Other. The Annals of Mathematical Statistics, 18(1), 50-60.


Nachar, N. (2008). The Mann-Whitney U: a Test for Assessing Whether Two Independent Samples Come from the Same Distribution. Tutorials in Quantitative Methods for Psychology, 4(1), 13-20.

Novello, A., McKinney, M. F., & Kohlrausch, A. (2006). Perceptual Evaluation of Music Similarity. Proceedings of the International Society for Music Information Retrieval, 246-249.

Novello, A., McKinney, M. M., & Kohlrausch, A. (2011). Perceptual Evaluation of Inter-Song Similarity in Western Popular Music. Journal of New Music Research, 40(1), 1-26.

Pitt, M. A., & Monahan, C. B. (1987). The Perceived Similarity of Auditory Polyrhythms. Perception & Psychophysics, 41(6), 534-546.

Smith, L. (2012). Rhythmic Similarity Using Metrical Profile Matching. Proceedings of the 2010 International Computer Music Conference, 177-182.

Thaut, M. (2005). Rhythm, Music, and the Brain: Scientific Foundations and Clinical Applications. Routledge, London, UK.

Van Noorden, L., & Moelants, D. (1999). Resonance in the Perception of Musical Pulse. Journal of New Music Research, 28(1), 43-66.


7 Appendices

7.1 Appendix A

List of segments:

Artist Title

Afrojack Die Hard

Amon Tobin Get Your Snack On

Aphex Twin Cornish Acid

Autechre Clipper

Burial Loner

Bonobo Kong

Clark Com Touch

Cornelius Breezin’

Daft Punk Around The World

Deadmau5 Soma

Flying Lotus Parisian Goldfish

Leftfield Original

Massive Attack Teardrop

Merzbow Transformed Into Food

Orbital Euphoria

Prodigy, The Firestarter

Ricardo Villalobos Amazordum

Richie Hawtin Orange 2

Squarepusher Fat Controller

UMEK Efortil

Underworld Crocodile


7.2 Appendix B

List of pairs:

Pair# Segment 1 Segment 2

1 Aphex Twin - Cornish Acid Cornelius - Breezin’

2 Aphex Twin - Cornish Acid Prodigy, The - Firestarter

3 Aphex Twin - Cornish Acid Burial - Loner

4 Aphex Twin - Cornish Acid Amon Tobin - Get Your Snack On

5 Aphex Twin - Cornish Acid Clark - Com Touch

6 Aphex Twin - Cornish Acid Underworld - Crocodile

7 Aphex Twin - Cornish Acid Afrojack - Die Hard

8 Aphex Twin - Cornish Acid Massive Attack - Teardrop

9 Aphex Twin - Cornish Acid Leftfield - Original

10 Aphex Twin - Cornish Acid Daft Punk - Around The World

11 Aphex Twin - Cornish Acid Deadmau5 - Soma

12 Aphex Twin - Cornish Acid Squarepusher - Fat Controller

13 Aphex Twin - Cornish Acid Flying Lotus - Parisian Goldfish

14 Aphex Twin - Cornish Acid Orbital - Euphoria

15 Aphex Twin - Cornish Acid UMEK - Efortil

16 Aphex Twin - Cornish Acid Merzbow - Transformed Into Food

17 Aphex Twin - Cornish Acid Autechre - Clipper

18 Aphex Twin - Cornish Acid Ricardo Villalobos - Amazordum

19 Aphex Twin - Cornish Acid Richie Hawtin - Orange 2

20 Cornelius - Breezin’ Prodigy, The - Firestarter

21 Cornelius - Breezin’ Burial - Loner

22 Cornelius - Breezin’ Amon Tobin - Get Your Snack On

23 Cornelius - Breezin’ Clark - Com Touch

24 Cornelius - Breezin’ Underworld - Crocodile

25 Cornelius - Breezin’ Afrojack - Die Hard

26 Cornelius - Breezin’ Massive Attack - Teardrop

27 Cornelius - Breezin’ Leftfield - Original

28 Cornelius - Breezin’ Daft Punk - Around The World

29 Cornelius - Breezin’ Deadmau5 - Soma

30 Cornelius - Breezin’ Squarepusher - Fat Controller

31 Cornelius - Breezin’ Flying Lotus - Parisian Goldfish

32 Cornelius - Breezin’ Orbital - Euphoria

33 Cornelius - Breezin’ UMEK - Efortil

34 Cornelius - Breezin’ Merzbow - Transformed Into Food

35 Cornelius - Breezin’ Autechre - Clipper

36 Cornelius - Breezin’ Ricardo Villalobos - Amazordum

37 Cornelius - Breezin’ Richie Hawtin - Orange 2

38 Prodigy, The - Firestarter Burial - Loner

39 Prodigy, The - Firestarter Amon Tobin - Get Your Snack On

40 Prodigy, The - Firestarter Clark - Com Touch

41 Prodigy, The - Firestarter Underworld - Crocodile

42 Prodigy, The - Firestarter Afrojack - Die Hard

43 Prodigy, The - Firestarter Massive Attack - Teardrop

44 Prodigy, The - Firestarter Leftfield - Original

45 Prodigy, The - Firestarter Daft Punk - Around The World

46 Prodigy, The - Firestarter Deadmau5 - Soma

47 Prodigy, The - Firestarter Squarepusher - Fat Controller

48 Prodigy, The - Firestarter Flying Lotus - Parisian Goldfish

49 Prodigy, The - Firestarter Orbital - Euphoria

50 Prodigy, The - Firestarter UMEK - Efortil

51 Prodigy, The - Firestarter Merzbow - Transformed Into Food

52 Prodigy, The - Firestarter Autechre - Clipper

53 Prodigy, The - Firestarter Ricardo Villalobos - Amazordum

54 Prodigy, The - Firestarter Richie Hawtin - Orange 2

55 Burial - Loner Amon Tobin - Get Your Snack On

56 Burial - Loner Clark - Com Touch

57 Burial - Loner Underworld - Crocodile

58 Burial - Loner Afrojack - Die Hard

59 Burial - Loner Massive Attack - Teardrop

60 Burial - Loner Leftfield - Original

61 Burial - Loner Daft Punk - Around The World

62 Burial - Loner Deadmau5 - Soma

63 Burial - Loner Squarepusher - Fat Controller

64 Burial - Loner Flying Lotus - Parisian Goldfish

65 Burial - Loner Orbital - Euphoria

66 Burial - Loner UMEK - Efortil

67 Burial - Loner Merzbow - Transformed Into Food

68 Burial - Loner Autechre - Clipper

69 Burial - Loner Ricardo Villalobos - Amazordum

70 Burial - Loner Richie Hawtin - Orange 2

71 Amon Tobin - Get Your Snack On Clark - Com Touch

72 Amon Tobin - Get Your Snack On Underworld - Crocodile

73 Amon Tobin - Get Your Snack On Afrojack - Die Hard

74 Amon Tobin - Get Your Snack On Massive Attack - Teardrop

75 Amon Tobin - Get Your Snack On Leftfield - Original

76 Amon Tobin - Get Your Snack On Daft Punk - Around The World

77 Amon Tobin - Get Your Snack On Deadmau5 - Soma

78 Amon Tobin - Get Your Snack On Squarepusher - Fat Controller

79 Amon Tobin - Get Your Snack On Flying Lotus - Parisian Goldfish

80 Amon Tobin - Get Your Snack On Orbital - Euphoria

81 Amon Tobin - Get Your Snack On UMEK - Efortil

82 Amon Tobin - Get Your Snack On Merzbow - Transformed Into Food

83 Amon Tobin - Get Your Snack On Autechre - Clipper

84 Amon Tobin - Get Your Snack On Ricardo Villalobos - Amazordum

85 Amon Tobin - Get Your Snack On Richie Hawtin - Orange 2

86 Clark - Com Touch Underworld - Crocodile

87 Clark - Com Touch Afrojack - Die Hard

88 Clark - Com Touch Massive Attack - Teardrop

89 Clark - Com Touch Leftfield - Original

90 Clark - Com Touch Daft Punk - Around The World

91 Clark - Com Touch Deadmau5 - Soma

92 Clark - Com Touch Squarepusher - Fat Controller

93 Clark - Com Touch Flying Lotus - Parisian Goldfish

94 Clark - Com Touch Orbital - Euphoria

95 Clark - Com Touch UMEK - Efortil

96 Clark - Com Touch Merzbow - Transformed Into Food

97 Clark - Com Touch Autechre - Clipper

98 Clark - Com Touch Ricardo Villalobos - Amazordum

99 Clark - Com Touch Richie Hawtin - Orange 2

100 Underworld - Crocodile Afrojack - Die Hard

101 Underworld - Crocodile Massive Attack - Teardrop

102 Underworld - Crocodile Leftfield - Original

103 Underworld - Crocodile Daft Punk - Around The World

104 Underworld - Crocodile Deadmau5 - Soma

105 Underworld - Crocodile Squarepusher - Fat Controller

106 Underworld - Crocodile Flying Lotus - Parisian Goldfish

107 Underworld - Crocodile Orbital - Euphoria

108 Underworld - Crocodile UMEK - Efortil

109 Underworld - Crocodile Merzbow - Transformed Into Food

110 Underworld - Crocodile Autechre - Clipper

111 Underworld - Crocodile Ricardo Villalobos - Amazordum

112 Underworld - Crocodile Richie Hawtin - Orange 2

113 Afrojack - Die Hard Massive Attack - Teardrop

114 Afrojack - Die Hard Leftfield - Original

115 Afrojack - Die Hard Daft Punk - Around The World

116 Afrojack - Die Hard Deadmau5 - Soma

117 Afrojack - Die Hard Squarepusher - Fat Controller

118 Afrojack - Die Hard Flying Lotus - Parisian Goldfish

119 Afrojack - Die Hard Orbital - Euphoria

120 Afrojack - Die Hard UMEK - Efortil

121 Afrojack - Die Hard Merzbow - Transformed Into Food

122 Afrojack - Die Hard Autechre - Clipper

123 Afrojack - Die Hard Ricardo Villalobos - Amazordum

124 Afrojack - Die Hard Richie Hawtin - Orange 2

125 Massive Attack - Teardrop Leftfield - Original

126 Massive Attack - Teardrop Daft Punk - Around The World

127 Massive Attack - Teardrop Deadmau5 - Soma

128 Massive Attack - Teardrop Squarepusher - Fat Controller

129 Massive Attack - Teardrop Flying Lotus - Parisian Goldfish

130 Massive Attack - Teardrop Orbital - Euphoria

131 Massive Attack - Teardrop UMEK - Efortil

132 Massive Attack - Teardrop Merzbow - Transformed Into Food

133 Massive Attack - Teardrop Autechre - Clipper

134 Massive Attack - Teardrop Ricardo Villalobos - Amazordum

135 Massive Attack - Teardrop Richie Hawtin - Orange 2

136 Leftfield - Original Daft Punk - Around The World

137 Leftfield - Original Deadmau5 - Soma

138 Leftfield - Original Squarepusher - Fat Controller

139 Leftfield - Original Flying Lotus - Parisian Goldfish

140 Leftfield - Original Orbital - Euphoria

141 Leftfield - Original UMEK - Efortil

142 Leftfield - Original Merzbow - Transformed Into Food

143 Leftfield - Original Autechre - Clipper

144 Leftfield - Original Ricardo Villalobos - Amazordum

145 Leftfield - Original Richie Hawtin - Orange 2

146 Daft Punk - Around The World Deadmau5 - Soma

147 Daft Punk - Around The World Squarepusher - Fat Controller

148 Daft Punk - Around The World Flying Lotus - Parisian Goldfish

149 Daft Punk - Around The World Orbital - Euphoria

150 Daft Punk - Around The World UMEK - Efortil

151 Daft Punk - Around The World Merzbow - Transformed Into Food

152 Daft Punk - Around The World Autechre - Clipper

153 Daft Punk - Around The World Ricardo Villalobos - Amazordum

154 Daft Punk - Around The World Richie Hawtin - Orange 2

155 Deadmau5 - Soma Squarepusher - Fat Controller

156 Deadmau5 - Soma Flying Lotus - Parisian Goldfish

157 Deadmau5 - Soma Orbital - Euphoria

158 Deadmau5 - Soma UMEK - Efortil

159 Deadmau5 - Soma Merzbow - Transformed Into Food

160 Deadmau5 - Soma Autechre - Clipper

161 Deadmau5 - Soma Ricardo Villalobos - Amazordum

162 Deadmau5 - Soma Richie Hawtin - Orange 2

163 Squarepusher - Fat Controller Flying Lotus - Parisian Goldfish

164 Squarepusher - Fat Controller Orbital - Euphoria

165 Squarepusher - Fat Controller UMEK - Efortil

166 Squarepusher - Fat Controller Merzbow - Transformed Into Food

167 Squarepusher - Fat Controller Autechre - Clipper

168 Squarepusher - Fat Controller Ricardo Villalobos - Amazordum

169 Squarepusher - Fat Controller Richie Hawtin - Orange 2

170 Flying Lotus - Parisian Goldfish Orbital - Euphoria

171 Flying Lotus - Parisian Goldfish UMEK - Efortil

172 Flying Lotus - Parisian Goldfish Merzbow - Transformed Into Food

173 Flying Lotus - Parisian Goldfish Autechre - Clipper

174 Flying Lotus - Parisian Goldfish Ricardo Villalobos - Amazordum

175 Flying Lotus - Parisian Goldfish Richie Hawtin - Orange 2

176 Orbital - Euphoria UMEK - Efortil

177 Orbital - Euphoria Merzbow - Transformed Into Food

178 Orbital - Euphoria Autechre - Clipper

179 Orbital - Euphoria Ricardo Villalobos - Amazordum

180 Orbital - Euphoria Richie Hawtin - Orange 2

181 UMEK - Efortil Merzbow - Transformed Into Food

182 UMEK - Efortil Autechre - Clipper

183 UMEK - Efortil Ricardo Villalobos - Amazordum

184 UMEK - Efortil Richie Hawtin - Orange 2

185 Merzbow - Transformed Into Food Autechre - Clipper

186 Merzbow - Transformed Into Food Ricardo Villalobos - Amazordum

187 Merzbow - Transformed Into Food Richie Hawtin - Orange 2

188 Autechre - Clipper Ricardo Villalobos - Amazordum

189 Autechre - Clipper Richie Hawtin - Orange 2

190 Ricardo Villalobos - Amazordum Richie Hawtin - Orange 2

7.3 Appendix C

The questionnaire that was provided upon completion of the task:

What is your age?

What is your gender?

Have you had formal musical training? At what age did you start?

In what musical style did you receive your main musical training (e.g., classical, jazz, pop, etc.)?

What instruments do you play?

Do you work with music professionally? What do you do?

How familiar are you with Electronic Dance Music?

Not familiar at all/Somewhat familiar/Familiar/Very familiar

How would you define yourself (select all that apply)?

Listener/Musician or Producer/DJ/Other

What is your favourite EDM subgenre (select all that apply)?

Breakbeat/Drum and Bass/Dubstep/Electro/House/Techno/Trance/UK Garage/Other

Other than EDM, what is your favourite music genre (select all that apply)?

Pop/Rock/Jazz/Blues/Classical/World Music/Other


While rating the pairs of musical segments, what did you use to play the audio (select all that apply)?

Headphones/Earbuds/Laptop Speakers/External Speakers/Professional Monitors/Other

What strategy did you use to rate the similarity of the music segments? This doesn’t have to be very thorough, but please mention what helped you during the task.

What would you say is the percentage of songs presented that you knew before taking this experiment?

Do you have any further comments about the experiment? Please provide us with your feedback!

Would you like to be contacted in the future to take part in music-related experiments? If so, please provide your e-mail address.


7.4 Appendix D

Pairs used in experiments 2 and 3. Values are based on pilot and intermediate results from experiment 1.

Segment 1                         Segment 2                          Category  Rhy. Mean  Tim. Mean  Rhy. SD  Tim. SD
The Prodigy - Firestarter         Squarepusher - Fat Controller      HRHT      3.1        3.25       0.8756   0.9653
Daft Punk - Around The World      Orbital - Euphoria                 HRHT      3.3333     3.1429     0.7071   0.9493
UMEK - Efortil                    Richie Hawtin - Orange 2           HRHT      3.867      3.417      0.3519   0.9962
Underworld - Crocodile            Ricardo Villalobos - Amazordum     HRLT      3.4375     2          0.8921   1.1282
Orbital - Euphoria                UMEK - Efortil                     HRLT      3.0909     2          0.8312   0.9129
Ricardo Villalobos - Amazordum    Richie Hawtin - Orange 2           HRLT      3.0833     1.6429     0.7930   0.9288
Aphex Twin - Cornish Acid         Autechre - Clipper                 LRHT      2.0833     3.1        1.1645   0.7379
Burial - Loner                    Massive Attack - Teardrop          LRHT      1.8333     3.0833     1.0299   0.9962
Cornelius - Breezin’              Burial - Loner                     LRLT      1.3571     1.25       0.7449   0.4523
Clark - Com Touch                 Orbital - Euphoria                 LRLT      1          1.6667     0        0.6513
Daft Punk - Around The World      Merzbow - Transformed Into Food    LRLT      1.1538     1.1667     0.5547   0.3892
Afrojack - Die Hard               Orbital - Euphoria                 HRMT      3.5333     2.7857     0.8338   0.6993
The Prodigy - Firestarter         Amon Tobin - Get Your Snack On     MRHT      2.2222     3.25       0.9718   0.7538
Aphex Twin - Cornish Acid         Afrojack - Die Hard                MRMT      2.3571     2.333      1.0082   0.9847
Burial - Loner                    Deadmau5 - Soma                    MRMT      2.7333     2.5        0.7037   0.7977
Autechre - Clipper                Ricardo Villalobos - Amazordum     MRMT      2.5833     2.3077     0.7930   1.0316
Orbital - Euphoria                Autechre - Clipper                 LRMT      1.3333     2.3846     0.6172   0.7679
The Prodigy - Firestarter         Ricardo Villalobos - Amazordum     MRLT      2.5        1.1538     1        0.3755
