A review of Natasha Barrett's presentation at IMV, November 24th

At our department this fall, we have been fortunate to have a series of great talks presented by the Department of Musicology under the title Women in Music Technology. On November 24th I attended Natasha Barrett's talk presenting her Interactive Music System (IMS), the "Transformer #1".

In this blog post I will present this work as I understood it, and dwell on some interesting aspects of it that Natasha Barrett herself brought up during the talk.

The "Transformer #1" is hard to wrap your head around if you try to categorize it as an IMS. Here's my attempt to explain how it works in very broad terms:

Ok, so on stage (we were mostly shown video excerpts of the actual performance) you would see a performer singing (or producing sounds) and dancing/moving, and behind her a screen showing visuals. But the sounds you hear are not just the sounds of the performer; they are very interesting musical results that, to me as an observer, seem to be affected both by the movement and by the original sounds produced by the human performer. In addition, the visuals also seem to be affected by the actions and sounds of the human performer. And notice the word affected: as a spectator it does not sound or appear as if it is directed or controlled by the human performer; my intuitive feeling was one of actual musical interplay between the human performer and the system.

Hopefully you have an idea of how I experienced it (if not, cheat and watch the YouTube video), but here are my attempts to categorize it (based on my experience as an observer and the creator's own very thoughtful observations about the IMS):

  • It is an IMS, using advanced many-to-many mappings (Machine Learning, Music Information Retrieval and specifically designed algorithms) of audio and motion capture data from a performer to create sounds, both through sound synthesis and by choosing audio from a huge database of samples (see the rough sketch after this list).

  • It is a visual generative system, using the input from the human performer (audio or motion capture) to decide on parameters for computer-generated visuals.

  • It is a composition (!), as it is constructed as one performance with four different acts/parts, where different algorithms in both the visual and audio systems are used for each section. The system also draws on a huge dataset of samples recorded by the composer for sound production, and the composer has designed the whole system and supervised the training of the Machine Learning algorithms.

  • It is a performer in its own right, playing an integral part in the performance: taking inputs from the human performer and creating an output which, given the complexity of the system, I believe would be hard to replicate.
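
To make the idea of a many-to-many mapping a bit more concrete, here is a tiny, hypothetical Python sketch of how a single frame of performer data could drive both sample selection from a database and a set of synthesis parameters. Everything here (the feature sizes, the nearest-neighbour lookup, the mapping matrix) is my own assumption for illustration, not how Barrett's actual system is implemented.

```python
# A minimal, hypothetical sketch of one way a many-to-many mapping could work.
# All names, feature layouts and dimensions are assumptions for illustration,
# not Natasha Barrett's actual system.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Pretend sample database: 10,000 samples, each described offline by a
# 20-dimensional audio feature vector (e.g. spectral descriptors).
db_features = rng.normal(size=(10_000, 20))
index = NearestNeighbors(n_neighbors=3).fit(db_features)

def performer_frame_to_output(audio_descriptors, mocap_velocities, mapping_matrix):
    """Map one frame of performer data to (a) sample choices and (b) synthesis parameters.

    audio_descriptors : (12,) array, e.g. MFCC-like values from the live voice
    mocap_velocities  : (8,)  array, e.g. limb/joint speeds from motion capture
    mapping_matrix    : (20, 20) array, a trained or hand-designed mapping
    """
    # Concatenate both input streams into one feature vector (the "many" inputs).
    frame = np.concatenate([audio_descriptors, mocap_velocities])

    # (a) Choose candidate samples by nearest-neighbour lookup in the database.
    _, sample_ids = index.kneighbors(frame.reshape(1, -1))

    # (b) Derive synthesis parameters (the "many" outputs) via the mapping matrix.
    #     The same kind of mapping could equally drive visual parameters.
    synth_params = mapping_matrix @ frame

    return sample_ids.ravel(), synth_params

# Example call with random stand-in data for one frame.
ids, params = performer_frame_to_output(
    rng.normal(size=12), rng.normal(size=8), rng.normal(size=(20, 20))
)
print(ids, params[:4])
```

The real system is of course far more sophisticated, with trained Machine Learning models, MIR descriptors and specifically designed algorithms, but the basic idea of turning many inputs into many outputs is the same.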

As a composer myself, I found it eye-opening that the creator thinks this system should be considered a composition. From what I saw and heard in the short talk, I fully support this claim. And that amazes me.

I would really encourage you to experience it yourself (oh, I forgot to mention that for the actual live performance all the audio is spatially encoded, giving the audience a 360-degree sonic experience). Run to this link and get a ticket for the show at Henie Onstad Kunstsenter on 3.12.2022 or 4.12.2022, or, if you're reading this too late, check out this video.