Internet time delay: a new musical language for a new time basis
Introduction
There are several challenges to take into consideration in Network Music Performance (NMP) with geographically displaced contributors. As discussed by Jonas Braasch [1], these challenges are mostly technical and involve physical-distance and latency issues. However, accepting these constraints has contributed to major innovations, for instance the development of a metronomic pulse system offering a new kind of time basis for this medium, as mentioned by Chafe Cáceres et al. and reported by Roger Mills [2]. Nevertheless, logistical considerations for NMP occurring simultaneously across different time zones remain strongly present. If networked performers are still subject to the vagaries of the speed and bandwidth of multiple networks and to differences in audio-visual streaming applications [2], and if latency remains a significant issue for audio-visual streaming of live NMP, one can instead reflect on finding a new musical language that works on this new time basis [3].
Discussion
PHYSICAL-DISTANCE
First of all, Braasch argues that human communication can only be correctly established within the environment where it is practised [1]. Consequently, optimizing the signals of our human interactions depends on a given acoustic environment, which, according to him, is decisive in spreading sounds efficiently. For instance, he mentions that the record for real acoustic long-distance communication is held by sperm whales (Clark & Clapham, 2004), whose songs can carry hundreds of miles across the ocean using the acoustic properties of the Sound Fixing and Ranging channel (SOFAR channel; Ewing & Worzel, 1948). This is mainly due to their water environment, in which sound propagates at approximately 1,480 m/s (roughly 5,330 km/h) at a temperature of 20°C, whereas sound in air is limited to approximately 340 m/s (1,224 km/h). This implies that while much has been achieved to reduce the latency of telematic music systems, the physical distance between two collaborators still determines the achievable minimal propagation delay. As a result, the transmission delay induced by telematic systems prevents us from reaching a virtual environment in “real time”.
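The relationship above reduces to simple arithmetic: the minimum one-way delay is the distance divided by the propagation speed of the medium. The following sketch illustrates this with approximate textbook speeds (the constants are rounded assumptions, not measured figures from the cited studies):

```python
# Approximate propagation speeds (rounded assumptions).
SPEED_OF_SOUND_AIR_MS = 340.0        # m/s, in air
SPEED_OF_SOUND_WATER_MS = 1480.0     # m/s, in seawater at ~20 degrees C
SPEED_OF_LIGHT_MS = 299_792_458.0    # m/s, in vacuum

def one_way_delay_ms(distance_km: float, speed_m_per_s: float) -> float:
    """Minimum one-way propagation delay in milliseconds for a given
    distance and medium: delay = distance / speed."""
    return distance_km * 1000.0 / speed_m_per_s * 1000.0

# A whale song crossing 100 km of the SOFAR channel takes over a minute,
# while light crosses the same distance in a fraction of a millisecond.
print(one_way_delay_ms(100, SPEED_OF_SOUND_WATER_MS))  # roughly 67.6 seconds
print(one_way_delay_ms(100, SPEED_OF_LIGHT_MS))        # well under 1 ms
```

However fast the medium, the delay never reaches zero, which is the physical floor referred to in the paragraph above.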
LATENCY
To perceive musical interaction as natural in NMP, sounds reaching the human ear should not be displaced in time by more than 20 milliseconds [4]. This means that for mutual awareness to be supported in NMP, the maximum threshold is around 40 milliseconds (the time it would take a performer to perceive a second performer’s reaction to his or her action) [5]. Moreover, even if electric signals could transfer audio data at the speed of light (approximately 300,000 km per second) with unlimited bandwidth, the round-trip latency between performers on opposite sides of the Earth would still reach approximately 133.4 milliseconds, much higher than the tolerable threshold [5]. Thereby, one of the biggest problems NMP is confronted with is that latency becomes an integral part of the sound, because the sound is generated by the performer’s local system and sent through the network. If there is too much latency in the network, it becomes difficult to create collective playability, especially if the musicians wish to adjust their playing or coordinate according to the sounds they hear or receive [2]. Acceptance of these technical challenges therefore seems unavoidable.
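The 133.4 ms figure can be reconstructed from first principles, assuming two performers at antipodal points (a surface distance of roughly half the Earth's circumference, taken here as ~20,015 km) and a rounded light speed of 300,000 km/s. The constants are assumptions for illustration, not figures quoted from [5]:

```python
# Sketch of the argument: even at light speed with unlimited bandwidth,
# antipodal performers cannot beat the perceptual thresholds.
HALF_EARTH_CIRCUMFERENCE_KM = 20_015   # assumed antipodal surface distance
SPEED_OF_LIGHT_KM_S = 300_000          # rounded

ONE_WAY_MS = HALF_EARTH_CIRCUMFERENCE_KM / SPEED_OF_LIGHT_KM_S * 1000
ROUND_TRIP_MS = 2 * ONE_WAY_MS  # time to perceive the partner's reaction

NATURAL_THRESHOLD_MS = 20   # displacement still perceived as natural [4]
MUTUAL_AWARENESS_MS = 40    # action plus perceived reaction [5]

print(f"one-way:    {ONE_WAY_MS:.1f} ms")     # ~66.7 ms
print(f"round trip: {ROUND_TRIP_MS:.1f} ms")  # ~133.4 ms
print(ROUND_TRIP_MS > MUTUAL_AWARENESS_MS)    # True: above the threshold
```

The round trip exceeds the 40 ms mutual-awareness threshold by more than a factor of three, which is why physical distance, not engineering, sets the hard limit.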
A NEW TIME BASIS FOR A NEW MUSICAL LANGUAGE
The character, nature and structure of the music played, and the types of instruments and systems used, determine the acceptance of the technical challenges linked to latency. Consequently, synchronization elements can be used in NMP to find a new musical language that works on a new time basis [3]. Because latency induces artificial and fluctuating artefacts linked to variations in data transfer across the network, one could instead accept transmission delays, time-based errors, de-sequencing or even partial loss of content [5] by considering their aesthetic properties: that is, by revealing a materiality (or granularity) linked to the technique of the audiovisual streams, which could be accepted as a new sound material. Furthermore, this new kind of medium could foster a creative online community that is not oriented towards time-limited event scenarios but that manipulates and transforms this new sound material while listening to music created collectively [5].
References
[1] J. Braasch, (2009) The Telematic Music System: Affordances for a New Instrument to Shape the Music of Tomorrow, Contemporary Music Review, 28(4), 421-432. See https://doi.org/10.1080/07494460903422404
[2] R. Mills, (2019) Tele-Improvisation: Intercultural Interaction in the Online Global Music Jam Session, Springer Series on Cultural Computing.
[3] G. Föllmer, (2001) Crossfade - Sound Travels on the Web - Soft Music, a joint project of San Francisco Museum of Modern Art, San Francisco, California; ZKM-Center for Art and Media, Karlsruhe, Germany; Walker Art Center, Minneapolis, Minnesota; Goethe Forum, Munich, Germany. See http://crossfade.walkerart.org
[4] I. Hirsh, (1959) Auditory Perception of Temporal Order, Journal of the Acoustical Society of America 31, No. 6, 759–767.
[5] A. Barbosa, (2003) Displaced Soundscapes: A Survey of Network Systems for Music and Sonic Art Creation, Leonardo Music Journal, 13, 53–59. See https://doi.org/10.1162/096112104322750791