University of Huddersfield
Symposium organised by the Analysis of Electroacoustic Music working group of the French Society for Music Analysis (SFAM) in collaboration with the Centre for Research in New Music (CeReNeM) of the University of Huddersfield.
Location: Room RHG/03, The Researcher Hub, University of Huddersfield
Time: 9.30 – 12.30 and 14.00 – 18.00
Admission free – All welcome
Invited guest contributors: Michael Clarke (CeReNeM, University of Huddersfield), John Dack (Middlesex University)
Speakers: Alain Bonardi (Ircam – Paris 8 University), Bruno Bossis (Rennes 2 University – Paris-Sorbonne University), Pierre Couprie (Paris-Sorbonne University), Frédéric Dufeu (CeReNeM, University of Huddersfield), Mikhail Malt (Ircam), Laurent Pottier (Jean Monnet University, Saint-Étienne)
9.45 – Introduction
9.50 – From Graphic/Verbal Description to Interpretation
10.40 – The Representation(s) of Electroacoustic Music: From acoustics to musical analysis
Pierre Couprie, Mikhail Malt
11.30 – Coffee break
11.45 – Characterization of Individual Electric Sounds
12.30 – Lunch break
14.00 – 20 years of Interactive Music Software at Huddersfield
14.40 – Modelling of Digital Tools and Instruments for Composition and Performance
Alain Bonardi, Frédéric Dufeu
15.30 – Tea break
15.45 – Behaviour and Notation of Electroacoustic Music
16.30 – Session: Software Developments for the Analysis of Electroacoustic Music
Alain Bonardi, Pierre Couprie, Frédéric Dufeu, Mikhail Malt
John Dack: From Graphic/Verbal Description to Interpretation
According to Boulez an ‘active’ analytical method must begin with ‘the most minute and exact observation possible of the musical facts confronting us’. This observation must, however, lead to the method’s ‘highest point’: ‘the interpretation of the structure’. In my talk I will discuss the importance of observation and description in arriving at an interpretation of musical events. I shall use examples of analyses of acousmatic works where I have produced a graphic description (albeit a crude one) of the musical events supplemented by verbal commentaries. I shall explain how these processes were necessary for an interpretation of the music and how listening methods remain central to an analytical procedure which attempts to situate the music within a broad cultural context. My examples will be taken mainly from French acousmatic music.
Pierre Couprie, Mikhail Malt: The Representation(s) of Electroacoustic Music: From acoustics to musical analysis
How do we move from sound representations to analytical musical representations? Musicologists use various types of sound representation to analyse electroacoustic music: waveforms, sonograms, differential sonograms, similarity matrices, or audio descriptor extraction. These acoustic representations are the basic tools for exploring and extracting information to complement aural analysis. On the other hand, researchers create musical representations during the analytical process. From structural representations to paradigmatic charts or typological maps, the goal of musical representations is to explore hidden relations between sounds (paradigmatic level), micro-structures (syntagmatic level), or external significations (referential level). The relation between the two types of representation, acoustic and musical, often consists in associating them through panes or software layers. Transferring information between them, or extracting information from an acoustic representation to create analytical graphics, is a complex operation: it requires reading acoustic representations, filtering out insignificant parts, creating pre-representations, and associating them with other information to create analytical representations. Two main categories of software support these operations. The Acousmographe was developed to draw graphic representations guided by simple acoustic analysis. The second generation, represented by EAnalysis (De Montfort University) and TIAALS (University of Huddersfield), extends the features of the Acousmographe with analytical tools to explore the sound, work with other types of data, or focus on musical analysis. This presentation will explore methods to improve these techniques and propose new research directions for the next generation of software.
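As an illustration of one of the representation types mentioned above, a self-similarity matrix can be obtained by comparing short-time magnitude spectra frame by frame. The following Python sketch (a minimal illustration, not the implementation used in the Acousmographe, EAnalysis, or TIAALS; the function name and parameter values are assumptions of ours) uses cosine similarity between windowed spectral frames:

```python
import numpy as np

def similarity_matrix(signal, frame_size=1024, hop=512):
    """Cosine self-similarity between magnitude-spectrum frames."""
    n_frames = 1 + (len(signal) - frame_size) // hop
    window = np.hanning(frame_size)
    # magnitude spectrum of each windowed frame
    spectra = np.array([
        np.abs(np.fft.rfft(window * signal[i * hop:i * hop + frame_size]))
        for i in range(n_frames)
    ])
    # normalise each frame to a unit vector, then take pairwise dot products
    norms = np.linalg.norm(spectra, axis=1, keepdims=True)
    unit = spectra / np.maximum(norms, 1e-12)
    return unit @ unit.T  # (n_frames, n_frames), values in [0, 1]

# toy example: a signal whose frequency changes halfway through
sr = 8000
t = np.arange(sr) / sr
sig = np.concatenate([np.sin(2 * np.pi * 440 * t),
                      np.sin(2 * np.pi * 880 * t)])
S = similarity_matrix(sig)  # block structure reflects the two sections
```

In such a matrix, segment boundaries appear as block edges: frames within a homogeneous section are mutually similar, while frames across the change are not.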
Laurent Pottier: Characterization of Individual Electric Sounds
This presentation focuses on the description of musical electric sounds that are very simple from the point of view of pitch (mainly single notes, with octaves, fifths, or sevenths) but differ greatly in harmonic and spectral content, owing to DSP devices (distortion, fuzz, feedback, reverberation). The examples used are electric guitar sounds extracted from reference pieces (e.g. recordings by the Beatles) and sounds of analogue synthesizers (e.g. an ARP Odyssey). These types of sounds have played an important role in the production of pop and rock music since the 1950s (for the guitar) and the 1970s (for synthesizers). We aim to highlight tools that can describe these types of sounds and that are relevant to perception. We carried out correspondence analyses placing sound timbres in multidimensional spaces, in order to evaluate the efficiency of the criteria. We also present the results of perceptual similarity tests comparing the original sounds with sounds synthesized from the data provided by the various analyses. The descriptors used include:
– Fundamental estimation;
– Additive analysis for separating the harmonic part from the noisy part;
– Detection of virtual pitches;
– Ratio between the RMS of additive synthesis and the RMS of the residue;
– Spectral envelope;
– Spectral envelope of the noisy component;
– Spectral centroid;
– Dispersion of the FFT spectrum.
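Two of the listed descriptors, the spectral centroid and the ratio between harmonic and residual energy, can be sketched in Python as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation; the function names and the fixed bandwidth around each partial are ours:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def harmonic_noise_rms_ratio(frame, sr, f0, n_partials=20, width_hz=20.0):
    """Crude split of the spectrum into harmonic and residual energy,
    taking as 'harmonic' the bins near integer multiples of a given f0."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1 / sr)
    harmonic = np.zeros_like(mag, dtype=bool)
    for k in range(1, n_partials + 1):
        harmonic |= np.abs(freqs - k * f0) < width_hz
    rms = lambda x: np.sqrt(np.mean(x ** 2)) if x.size else 0.0
    return rms(mag[harmonic]) / max(rms(mag[~harmonic]), 1e-12)

# example: a pure 440 Hz tone at 8 kHz
sr = 8000
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 440 * t)
centroid = spectral_centroid(tone, sr)                 # close to 440 Hz
ratio = harmonic_noise_rms_ratio(tone, sr, f0=440.0)   # well above 1
```

A distorted or noisy sound would raise the centroid and lower the harmonic-to-residual ratio, which is what makes such descriptors candidates for characterising electric guitar and synthesizer timbres.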
Michael Clarke: 20 years of Interactive Music Software at Huddersfield
This presentation will discuss the development of a number of music software packages I have been involved in at Huddersfield over the last 20 years. Different approaches have been taken, different aspects of music have been addressed, and the capabilities of technology have been transformed greatly over that time, but one common feature has remained: a concern to use software to enable students and researchers to engage interactively with music as sound.
Alain Bonardi, Frédéric Dufeu: Modelling of Digital Tools and Instruments for Composition and Performance
The analysis of the creative processes involved in electroacoustic music may to a large extent rely on the thorough study of the technological tools used in the realisation of the musical work, on both the composition and the performance sides. Studying the creative context, technological tools, and compositional methods of an electroacoustic work can thus contribute to a better understanding of its creative processes. However, implementing such an approach, based mainly on the hardware or software elements used during the creation of a given work, is not straightforward. First, it implies that the technologies in question are still in use and have not become irreversibly obsolete. New performances of works are good opportunities for such investigation, since they often involve technical updating and therefore require a deep understanding of the composer's intentions. Musicologists also need access to the resources, which may not be available without direct contact with the composer. Even when these conditions are met, musicological and organological study can encounter another issue, particularly in the digital domain: the sources are not always presented in forms that are directly readable by the analyst, for instance when written in a specific programming language. Despite all these possible difficulties, many technological tools lend themselves to in-depth investigation, leading to relevant conclusions on some of the creative processes at work in the field of electroacoustic music.
Bruno Bossis: Behaviour and Notation of Electroacoustic Music
After a brief reminder of the general context of musical notation in the twentieth century, the talk will focus on specific questions about the notation of the electronics in mixed music scores. Different ideas and points of view will be discussed, based mainly on the relations between electronics, acoustics, and notation. Establishing typologies, or at least attempting to do so, raises some very interesting problems, among them the complex time-related response of the electronics. The relevance of the concept of behaviour will then be presented. The main goal remains to better understand musical functions and to analyse these musical pieces. This concept of behaviour is more than a general or abstract idea: we think the notion is particularly useful and should be developed and deepened, in relation to modularity and relational organisation. Different specialities and scientific domains therefore have to be involved, e.g. musical analysis, acoustics, psychology, and semiotics. The intent of such research is to approach the electronics with new conceptual tools in order to avoid blind spots in the process of analysis, especially in the process of segmentation.
Alain Bonardi, Pierre Couprie, Frédéric Dufeu, Mikhail Malt: Software Developments for the Analysis of Electroacoustic Music
Developing software for musicological purposes combines two types of research. On the one hand, musicologists need software with specific features, e.g. acoustic representations or navigation inside audiovisual files (to compare different parts of the same work or several works, to extract musical parameters, etc.). On the other hand, researchers also need musicological tools to assist them in musical segmentation, to create appropriate analytical representations, to work on various genres of music for a large audience, or to streamline their workflow (export/import to/from other software, sharing with other researchers, etc.). Moreover, musicologists want to experiment with new analytical processes adapted from other research fields (audio descriptors, new types of musical representation, etc.). Existing analytical tools are too limited for these new musical approaches: musicologists need to work with developers (or develop software themselves) to create new tools and imagine perspectives for future applications.