JUSTWAVES – !ME(NOTme)
DESCRIPTION
Given the current importance of visual media, present at almost every music festival, and given that in many cases they have been relegated to the background, !ME(NOTme) seeks to reclaim this medium through performances where the music or sound is generated from the images, and not the other way around.
Within this framework, over several artistic residencies together with the visualist Antón Babinski, he has investigated different ways of sonifying visual input, using analog methods as well as current techniques and technologies (computer vision, live coding, …), producing the projects “We are not musicians” and “Chromatic Dissonance // RGBdB”.
In this project, “JustWaves”, !ME(NOTme) sets out alone to create an experimental sound and music performance in which all the sounds are generated by visuals he creates live. These are made by hand with liquid inks and chemical reactions, echoing the techniques of the “liquid light show” performances of the 1960s and 70s, while adding a layer of new technologies to the process.
As a visualist, he has always been interested in the way our cognitive system interprets external signals (waves) in a deeply personal manner. Although we can quantify which waves we receive and in what measure, we cannot know how each person actually perceives them. This touches on the notion of qualia: the properties of sensory experiences are, by definition, epistemologically unknowable in the absence of direct experience.
This project explores the same concept at a computational level: two programs exchange data (digital waves) and interpret it both sonically and visually. The visual compositions generated live are recorded by a camera and analyzed by a computer vision algorithm programmed by !ME, which extracts data such as the colors, shapes, movement, and position of the visual elements and sends it to a second program, also written by him, which receives all that data and reinterprets it as sound.
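To give a sense of the kind of pipeline this describes, here is a minimal sketch of the vision side in Python, using OpenCV for the image analysis and python-osc for the data exchange. Everything specific in it is an assumption for illustration only: the OSC address “/justwaves/blob”, the port, the thresholds, and the exact features extracted are not drawn from !ME's actual implementation.

```python
# Hypothetical sketch: camera -> feature extraction -> OSC messages.
# Address, port, and thresholds are illustrative assumptions.
import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # assumed address of the sound program
cap = cv2.VideoCapture(0)                     # camera filming the ink compositions

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Isolate strongly saturated ink regions from the neutral background.
    mask = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([179, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:  # ignore specks
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2, y + h / 2
        # Mean hue of the blob stands in for the "color" of the element.
        hue = float(cv2.mean(hsv[y:y+h, x:x+w], mask=mask[y:y+h, x:x+w])[0])
        # One message per visual element: position, size, and color.
        client.send_message("/justwaves/blob", [cx, cy, float(area), hue])
```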
The final result shown on screen combines the images being created, the sound code that interprets them, and the data traveling between the two programs in the virtual space where both live. The sound code, written in SuperCollider, is designed to be modified live, following “live coding” practice. In this way, the audience takes part not only in the audiovisual result but in the entire process of its creation, in both its “virtual” and physical parts.
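The piece's actual receiving program is the SuperCollider code mentioned above; purely as an illustration of how incoming visual data could be reinterpreted as sound parameters, here is a hypothetical receiver in Python (again using python-osc). The mappings shown (horizontal position to pan, blob size to amplitude, hue to pitch) are assumptions, not the mappings used in the performance.

```python
# Hypothetical sketch of the receiving side: OSC data -> sound parameters.
# The real receiver is written in SuperCollider; this only shows the mapping idea.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def blob_to_sound(address, cx, cy, area, hue):
    # Assumed mappings: x-position -> stereo pan, size -> amplitude, hue -> pitch.
    pan = (cx / 1280.0) * 2 - 1        # assumes a 1280px-wide camera frame
    amp = min(area / 50000.0, 1.0)     # larger ink blobs play louder, capped at 1
    freq = 110 * 2 ** (hue / 45.0)     # hue range 0-179 spread over ~4 octaves
    print(f"pan={pan:+.2f} amp={amp:.2f} freq={freq:.1f} Hz")

dispatcher = Dispatcher()
dispatcher.map("/justwaves/blob", blob_to_sound)
server = BlockingOSCUDPServer(("127.0.0.1", 57120), dispatcher)
server.serve_forever()
```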
PERFORMANCE DURATION
30–35 minutes.
PREVIOUSLY PRESENTED AT:
- Lux Festival (2023). https://latermicamalaga.com/lux-festival-de-artes-luminicas-de-malaga/
- Festival VIU (2022). Video summary: https://www.youtube.com/watch?v=9GCRBPNCNz0 Full performance: https://youtu.be/UM2ygl0n1Xw?t=7942 (from 2h 12m 42s to 2h 44m 06s)
- Festival Algopolis (2021). https://www.youtube.com/live/kYW9SSRZAvU?si=VL4CP4xE33qfFcEm&t=15226
- Bits & Bots: https://youtu.be/oihIA6Z_tgk?t=4