IRCAM Forum 2025 – Turning Pixels into Sound: Sonifying The Powder Toy

During our visit to IRCAM Forum 2025, one of the most unexpected and inspiring presentations came from Kieran McAuliffe, who introduced us to a unique way of experiencing a video game — not just visually, but sonically. His project, Sonifying The Powder Toy, brought an old genre of games to life in a way that made both sound designers and game designers lean forward.

If you’ve never heard of it, The Powder Toy is part of a quirky, cult genre called “falling sand games.”

https://powdertoy.co.uk/

These are open-ended, sandbox-style simulations where players interact with hundreds of different particles — fire, water, electricity, explosives, gases, and even fictional materials — all rendered with surprising physical detail. It’s chaotic, visual, and highly addictive. But one thing it never had was sound.

Kieran, with his background as a composer, guitarist, and researcher, decided to change that. His project wasn’t just about adding booms and fizzles. He approached the challenge like a musical instrument designer: how can you play this game with your ears?

The problem was obvious. The game’s physics engine tracks up to 100,000 particles updating 60 times per second — trying to create sounds for every interaction would melt your CPU. So Kieran developed a method of analytic sonification: rather than responding to every individual particle, his system tracks the overall distribution of particles and generates sound textures accordingly.

That’s where it gets beautifully nerdy. He uses something called stochastic frequency-modulated granular synthesis. In simpler terms, think of it like matching grains of sand with grains of sound — short, tiny bursts of tones that collectively create textures. Each type of material in The Powder Toy — be it lava, fire, or metal — gets its own “grain stream,” with parameters like pitch, modulation, duration, and spatial position derived from the game’s internal data.
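
To make that idea concrete, here is a minimal sketch of what such a mapping could look like, written in plain C++ rather than the Lua that Kieran actually embeds in Max. Every structure name and mapping curve below is a hypothetical illustration of the principle (per-material statistics in, grain-stream parameters out), not his actual code.

```cpp
// Conceptual sketch (not Kieran's LuaGran~ code): derive granular-synthesis
// parameters for one material's "grain stream" from aggregate particle
// statistics. All names and mapping curves here are hypothetical.
#include <cmath>
#include <cstdint>

struct GrainStreamParams {
    float density;    // grains per second
    float basePitch;  // Hz
    float duration;   // grain length in seconds
    float pan;        // -1 (left) .. +1 (right)
};

struct MaterialStats {
    uint32_t count;      // number of particles of this material
    float meanX;         // mean horizontal position, 0..1
    float meanVelocity;  // average particle speed (arbitrary units)
    float temperature;   // average temperature (arbitrary units)
};

// Map the distribution of one material to its grain stream.
GrainStreamParams mapMaterialToGrains(const MaterialStats& s) {
    GrainStreamParams p;
    // More particles -> denser grain cloud (logarithmic so it doesn't explode).
    p.density   = 20.0f + 200.0f * std::log1p(static_cast<float>(s.count)) / std::log(100000.0f);
    // Hotter material -> higher base pitch (simple linear mapping).
    p.basePitch = 80.0f + 8.0f * s.temperature;
    // Faster motion -> shorter, busier grains.
    p.duration  = 0.15f / (1.0f + s.meanVelocity);
    // Horizontal centre of mass -> stereo position.
    p.pan       = 2.0f * s.meanX - 1.0f;
    return p;
}
```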

To make all of this work, Kieran built a custom Max/MSP external called LuaGran~. This clever little tool lets him embed Lua scripts directly inside Max, giving him the power to generate and manipulate thousands of grains per second. It allows for both tight control and high performance — a critical balance when your “instrument” is a particle system going haywire in real time.

Some mappings were linear — like more fire equals higher pitch — while others used neural networks or probabilistic logic to shape more complex sonic behaviors. It was a blend of art and science, intuition and math.

During the presentation, I had the chance to join Kieran live by downloading his forked version of The Powder Toy, which sends Open Sound Control (OSC) data to his Max patch. Within minutes, a room full of laptops was sonically simulating plasma storms and chemical reactions. It was fun, chaotic, and surprisingly musical.

One thing that stood out was how Kieran resisted the temptation to make the sound effects too “realistic.” Instead, he embraced abstraction. A massive explosion might not sound like a movie boom — it might produce a textured whoosh or a burst of granular noise. His goal was not to recreate reality, but to enhance the game’s emergent unpredictability with equally surprising sounds.

He described the system more like a musical instrument than a tool, and that’s how he uses it — for laptop ensemble pieces, sound installations, and live improvisation. Still, he hinted at the potential for this to evolve into a standalone app or even a browser-based instrument. The code is open source, and the LuaGran~ tool is already on his GitHub (though it still needs some polish before wider distribution).

https://github.com/trian-gles

As sound designers and creatives, this project reminds us that sound can emerge from the most unexpected places — and that play, chaos, and curiosity are powerful creative engines. The Powder Toy might look like a simple retro game, but under Kieran’s hands, it becomes a dense sonic playground, a platform for experimentation, and a surprisingly poetic meeting of code and composition.

If you’re curious, I encourage you to try it out, explore the sounds it makes, and maybe even mod it yourself. Because as Kieran showed us, sometimes the most interesting instruments are the ones hiding inside games.

Here you can find a manual on how to install the game and the sonification setup:

https://tinyurl.com/powder-ircam

It’s more fun to do it with friends!

IRCAM Forum 2025 – RIOT v3: A Real-Time Embedded System for Interactive Sound and Music

When you think of motion tracking, you might imagine a dancer in a suit covered with reflective dots, or a game controller measuring hand gestures. But at this year’s IRCAM Forum in Paris, Emmanuel Fléty and Marc Sirguy introduced R-IoT v3, the latest evolution of a platform developed at IRCAM for real-time interactive audio applications. For students and professionals working in sound design, physical computing, or musical interaction, RIOT represents a refreshing alternative to more mainstream tools like Arduino, Raspberry Pi, or Bela—especially when tight timing, stability, and integration with software environments like Max/MSP or Pure Data are key.

What is it, exactly?

RIOT v3 is a tiny device—about the size of a USB stick—that can be attached to your hand, your foot, a drumstick, a dancer’s back, or even a shoe. Once it’s in place, it starts capturing your movements: tilts, spins, jumps, shakes. All of that motion is sent wirelessly to your computer in real time.

What you do with that data is up to you. You could trigger a sound sample every time you raise your arm, filter a sound based on how fast you’re turning, or control lights based on the intensity of your movements. It’s like turning your body into a musical instrument or a controller for your sound environment.
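
As a rough illustration, the sketch below shows how incoming motion values could be turned into control data on the computer side. The value ranges, threshold, and function names are assumptions made for the example; they are not part of the R-IoT firmware or its API.

```cpp
// Minimal sketch of the receiving side: turn incoming motion values into
// control data. The value ranges and the trigger logic are assumptions,
// not part of the R-IoT firmware or API.
#include <cmath>
#include <cstdio>

// Map gyroscope magnitude (how fast you are turning) to a filter cutoff.
float rotationToCutoff(float gx, float gy, float gz) {
    float magnitude = std::sqrt(gx * gx + gy * gy + gz * gz); // deg/s, assumed
    float cutoff = 200.0f + magnitude * 20.0f;                // Hz
    return cutoff > 12000.0f ? 12000.0f : cutoff;
}

// Fire a trigger when vertical acceleration crosses a threshold ("raise your arm").
bool armRaised(float accZ, float threshold = 1.5f /* g, assumed */) {
    return accZ > threshold;
}

int main() {
    // Example values, as if unpacked from an incoming motion message.
    float cutoff = rotationToCutoff(30.0f, 120.0f, 5.0f);
    std::printf("cutoff = %.1f Hz, trigger = %d\n", cutoff, armRaised(1.8f));
    return 0;
}
```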

What’s special about version 3?

Unlike Raspberry Pi, which runs a full operating system, or Arduino, which can have unpredictable latency depending on how it’s programmed, RIOT runs bare metal. This means there’s no operating system, no background tasks, no scheduler—nothing between your code and the hardware. The result: extremely low latency, deterministic timing, and stable performance—ideal for live scenarios where glitches aren’t an option.

In other words, RIOT acts like a musical instrument: when you trigger something, it responds immediately and predictably.

The third generation of RIOT introduces some important updates:

  • Single-board design: The previous versions required two boards—the main board and an extension board—but v3 integrates everything into a single PCB, making it more compact and easier to work with.
  • RP2040 support: This version is based on the RP2040 chip, the same microcontroller used in the Raspberry Pi Pico. It’s powerful, fast, and has a growing ecosystem.
  • Modular expansion: For more complex setups, add-ons are coming soon—including boards for audio I/O and Bluetooth/WiFi connectivity.
  • USB programming via riot-builder: The new software tool lets you write C++ code, compile it, and upload it to the RIOT board via USB—no need for external programmers. You can even keep your Max or Pure Data patch running while uploading new code.

Why this matters for sound designers

We often talk about interactivity in sound design—whether for installations, theatre, or music—but many tools still assume that the computer is the main performer. RIOT flips that. It gives you a way to move, breathe, and act—and have the sound respond naturally. It’s especially exciting if you’re working in spatial sound, live performance, or experimental formats.

And even if you’ve never touched an Arduino or built your own electronics, RIOT v3 is approachable. Everything happens over WiFi or USB, and it speaks OSC, a protocol used in many creative platforms like Max/MSP, Pure Data, Unity, and SuperCollider. It also works with tools some of you might already know, like CataRT or Comote.

Under the hood, it’s fast. Like really fast. It can sense, process, and send your movement data in under 2 milliseconds, which means you won’t notice any lag between your action and the response. It can also timestamp data precisely, which is great if you’re recording or syncing with other systems.

The device is rechargeable via USB-C, works with or without a battery, and includes onboard storage. You can edit configuration files just like text. There’s even a little LED you can customize to give visual feedback. All of this fits into a board the size of a chewing gum pack.

And yes—it’s open source. That means if you want to tinker later on, or work with developers, you can.

https://github.com/Ircam-R-IoT

A tool made for experimentation

Whether you’re interested in gesture-controlled sound, building interactive costumes, or mapping motion to filters and samples in real time, RIOT v3 is designed to help you get there faster and more reliably. It’s flexible enough for advanced setups but friendly enough for students or artists trying this for the first time.

At FH Joanneum, where design and sound design meet across disciplines, a tool like this opens up new ways of thinking about interaction, performance, and embodiment. You don’t need to master sensors to start exploring your own body as a controller. RIOT v3 gives you just enough access to be dangerous—in the best possible way.

Experiment I: Embodied Resonance – Heart rate variability (HRV) as mental health indicator

Heart rate is a fundamental indicator of mental health, with heart rate variability (HRV) playing a particularly significant role. HRV refers to the variation in time intervals between heartbeats, reflecting autonomic nervous system function and overall physiological resilience. It is measured using time-domain, frequency-domain, or non-linear methods. Higher HRV is associated with greater adaptability and lower stress levels, while lower HRV is linked to conditions such as PTSD, depression, and anxiety disorders.

Studies have shown that HRV differs between healthy individuals and those with PTSD. In a resting state, people with PTSD typically exhibit lower HRV compared to healthy controls. When exposed to emotional triggers, their HRV may decrease even further, indicating heightened sympathetic nervous system activation and reduced parasympathetic regulation. Bessel van der Kolk’s work in “The Body Keeps the Score” highlights how trauma affects autonomic regulation, leading to dysregulated physiological responses under stress.

There are two primary methods for measuring heart rate: electrocardiography (ECG) and photoplethysmography (PPG). 

  • Measurement principle: ECG uses the electrical signals produced by heart activity; PPG uses light reflection to detect changes in blood flow.
  • Accuracy: ECG is the gold standard for medical heart-rate monitoring; PPG is typically evaluated against ECG as a reference.
  • Heart rate (HR) measurement: ECG is highly accurate; PPG is suitable for average or moving-average HR.
  • Heart rate variability (HRV): ECG can extract R-peak intervals with millisecond accuracy; PPG is limited by its sampling rate and is better suited to long-duration measurements (>5 min).
  • Time to obtain a reading: ECG is quick, with no long settling time required; PPG needs settling time for ambient-light compensation and motion-artifact correction.

Candidate sensors (price, what they measure, key specifications, and usage examples):

  • Gravity: Analog Heart Rate Monitor Sensor (ECG) for Arduino ($19.90). Measures the electrical activity of the heart. Input voltage: 3.3–6 V (5 V recommended); output voltage: 0–3.3 V; analog interface; operating current: <10 mA. Includes the sensor board, one 3-connector electrode cable, and six biomedical sensor pads. Usage example: https://emersonkeenan.net/arduino-hrv/
  • Gravity: Analog/Digital PPG Heart Rate Sensor ($16.00). Measures blood volume changes. Input voltage (Vin): 3.3–6 V (5 V recommended); output: 0–Vin analog (pulse wave) or 0/Vin digital (heart rate), configurable; operating current: <10 mA. Usage example: https://www.dfrobot.com/blog-767.html
  • MAX30102 PPG Heart Rate and Oximeter Sensor ($21.90). Measures blood volume changes and blood oxygen saturation. Power supply: 3.3 V/5 V; working current: <15 mA; communication: I2C/UART; I2C address: 0x57. Usage example: https://community.dfrobot.com/makelog-313158.html
  • Fermion: MAX30102 PPG Heart Rate and Oximeter Sensor ($15.90). Measures blood volume changes and blood oxygen saturation. Power supply: 3.3 V; working current: <15 mA; communication: I2C/UART; I2C address: 0x57. Usage example: https://community.dfrobot.com/makelog-311968.html
  • SparkFun Single Lead Heart Rate Monitor ($21.50). Measures the electrical activity of the heart. Operating voltage: 3.3 V; analog output; leads-off detection; shutdown pin; LED indicator. No electrodes included (extra cables cost $5.50, extra electrodes $8.95). Usage example: https://anilmaharjan.com.np/blog/diy-ecg-ekg-electrocardiogram
  • SparkFun Pulse Sensor ($26.95). Measures blood volume changes. Input voltage (VCC): 3–5.5 V; output voltage: 0.3 V to VCC; supply current: 3–4 mA. Usage example: https://microcontrollerslab.com/pulse-sensor-esp32-tutorial/
  • SparkFun Pulse Oximeter and Heart Rate Sensor ($42.95). Measures blood volume changes and blood oxygen saturation. I2C interface; I2C address: 0x55. Library: https://github.com/sparkfun/SparkFun_Bio_Sensor_Hub_Library
  • Keyestudio AD8232 ECG Measurement Heart Monitor Sensor Module (€9.25). Measures the electrical activity of the heart. Power voltage: DC 3.3 V; analog output; interface (RA, LA, RL): 3-pin 2.54 mm header or earphone jack. Usage example: https://wiki.keyestudio.com/Ks0261_keyestudio_AD8232_ECG_Measurement_Heart_Monitor_Sensor_Module

ECG records the electrical activity of the heart using electrodes placed on the skin, providing high accuracy in detecting R-R intervals, which are critical for HRV analysis. PPG, in contrast, uses optical sensors to detect blood volume changes in peripheral tissues, such as fingertips or earlobes. While PPG is convenient and widely used in consumer devices, it is more susceptible to motion artifacts and may not provide the same precision in HRV measurement as ECG.

Additionally, some PPG sensors include pulse oximetry functionality, measuring both heart rate and blood oxygen saturation (SpO2). One such sensor is the MAX30102, which uses red and infrared LEDs to measure oxygen levels in the blood. The sensor determines SpO2 by comparing light absorption in oxygenated and deoxygenated blood. Since oxygen levels can influence cognitive function and stress responses, these sensors have potential applications in mental health monitoring. However, SpO2 does not provide direct information about autonomic nervous system function or HRV, making ECG a more suitable method for this project.

For this project, ECG is the preferred method due to its superior accuracy in HRV analysis. Among available ECG sensors, the AD8232 module is a suitable choice for integration with microcontrollers such as Arduino. The AD8232 is a single-lead ECG sensor designed for portable applications. It amplifies and filters ECG signals, making it easier to process the data with minimal noise interference. The module includes an output that can be directly read by an analog input pin on an Arduino, allowing real-time heart rate and HRV analysis.
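
A minimal Arduino-style sketch for this kind of setup could look like the following. The pin assignments, the fixed detection threshold, and the serial output are assumptions for illustration; a real setup needs per-person calibration and some filtering of the signal.

```cpp
// Minimal Arduino-style sketch for reading an AD8232 ECG module and
// timing R-R intervals. Pin assignments and the fixed threshold are
// assumptions; a real setup needs per-person calibration and filtering.
const int ECG_PIN = A0;       // AD8232 analog output
const int LO_PLUS = 10;       // leads-off detection +
const int LO_MINUS = 11;      // leads-off detection -
const int R_THRESHOLD = 550;  // ADC counts; tune for your signal

unsigned long lastPeakMs = 0;
bool aboveThreshold = false;

void setup() {
  Serial.begin(115200);
  pinMode(LO_PLUS, INPUT);
  pinMode(LO_MINUS, INPUT);
}

void loop() {
  // Skip samples while an electrode is disconnected.
  if (digitalRead(LO_PLUS) == HIGH || digitalRead(LO_MINUS) == HIGH) {
    return;
  }
  int sample = analogRead(ECG_PIN);

  // Detect the rising edge of an R-peak with a simple threshold crossing.
  if (!aboveThreshold && sample > R_THRESHOLD) {
    aboveThreshold = true;
    unsigned long now = millis();
    if (lastPeakMs > 0) {
      unsigned long rrIntervalMs = now - lastPeakMs;  // R-R interval for HRV
      Serial.println(rrIntervalMs);
    }
    lastPeakMs = now;
  } else if (aboveThreshold && sample < R_THRESHOLD) {
    aboveThreshold = false;
  }
}
```

Streaming the raw R-R intervals over serial keeps the Arduino side simple; the HRV math can then happen on the computer.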

HRV is calculated based on the time intervals between successive R-peaks in the ECG signal. One of the most commonly used HRV metrics is the root mean square of successive differences (RMSSD), which is computed using the formula:

\mathrm{RMSSD} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(RR_{i+1}-RR_{i}\right)^{2}}

where RR_i represents the i-th R-R interval and N is the total number of intervals. Higher RMSSD values indicate greater parasympathetic activity and better autonomic balance. Among ECG sensors available on the market, the Gravity: Analog Heart Rate Monitor Sensor (ECG) is the most suitable for this project. It is relatively inexpensive, includes electrode patches in the package, and has well-documented Arduino integration, making it an optimal choice for HRV measurement in experimental and practical applications.
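
As a small illustration, RMSSD can be computed from a recorded series of R-R intervals like this (plain C++, with made-up example values):

```cpp
// Compute RMSSD from a series of R-R intervals (in milliseconds),
// following the formula above: the square root of the mean of the
// squared successive differences.
#include <cmath>
#include <cstdio>
#include <vector>

double rmssd(const std::vector<double>& rrIntervalsMs) {
    if (rrIntervalsMs.size() < 2) return 0.0;  // need at least one difference
    double sumSquaredDiffs = 0.0;
    for (size_t i = 1; i < rrIntervalsMs.size(); ++i) {
        double diff = rrIntervalsMs[i] - rrIntervalsMs[i - 1];
        sumSquaredDiffs += diff * diff;
    }
    return std::sqrt(sumSquaredDiffs / (rrIntervalsMs.size() - 1));
}

int main() {
    // Example R-R intervals in ms (illustrative values only).
    std::vector<double> rr = {812, 790, 845, 803, 828, 795};
    std::printf("RMSSD = %.2f ms\n", rmssd(rr));
    return 0;
}
```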

Explore IV: Embodied Resonance – Final concept

What is Embodied Resonance?

Embodied Resonance is an experimental audio performance that investigates the interplay between trauma, physiological responses, and immersive sound. By integrating biofeedback sensors with spatial sound, this project translates the body’s real-time emotional states into an evolving sonic landscape. Through this process, Embodied Resonance aims to create an intimate and immersive experience that bridges personal narrative with universal themes of emotional resilience and healing.

Reference Works

Inspiration for this project draws heavily from groundbreaking works in biofeedback art. For instance, Tobias Grewenig’s Emotion’s Defibrillator (2005) inspired me to explore how visual imagery can serve as emotional triggers, sparking physiological responses that drive sound. Grewenig’s project combines sensory input with dynamic visual feedback, using breathing, pulse, and skin sensors to create a powerful interactive experience. His exploration of binaural beats and synchronized visuals provided a foundation for my use of AR imagery and biofeedback systems.

Another profound influence is the project BODY ECHOES, which integrates EMG sensors, breathing monitors, and sound design to capture inner bodily movements and translate them into a spatialized audio experience. This project highlights how subtle physiological states, such as changes in muscle tension or breathing rhythms, can form the basis of a compelling sonic narrative. It has inspired my approach to using EMG and respiratory sensors as key components for translating physical states into sound.

How Does It Work?

The performance involves the use of biofeedback sensors to capture physiological data such as:

  • Electromyography (EMG) to measure muscle tension
  • Electrodermal Activity (EDA/GSR) to track stress levels via skin conductivity
  • Heart Rate (ECG/PPG) to monitor pulse fluctuations and emotional arousal
  • Respiratory Sensors to analyze breath patterns

This real-time data is processed using software like Max/MSP and Ableton Live, which maps physiological changes to dynamic sound elements. Emotional triggers, such as augmented reality (AR) images chosen by the audience, influence the performer’s physiological responses, which in turn shape the sonic environment.
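
As a sketch of the kind of mapping stage that could sit between the sensors and the sound software, the example below normalises a physiological reading (for instance RMSSD) into a 0–1 control value and smooths it before it drives a sound parameter. The input range and smoothing factor are assumptions, not measured values.

```cpp
// Hypothetical mapping stage between biofeedback data and the sound engine:
// normalise a physiological reading into a 0..1 control value and smooth it
// so the driven sound parameter doesn't jump. All ranges are assumptions.
#include <algorithm>
#include <cstdio>

struct SmoothedControl {
    float minIn;      // expected minimum input (e.g., RMSSD of 10 ms)
    float maxIn;      // expected maximum input (e.g., RMSSD of 120 ms)
    float smoothing;  // 0 = no smoothing, close to 1 = very slow response
    float state = 0.0f;

    float update(float input) {
        float normalized = (input - minIn) / (maxIn - minIn);
        normalized = std::clamp(normalized, 0.0f, 1.0f);
        state = smoothing * state + (1.0f - smoothing) * normalized;
        return state;  // send this to a sound parameter (reverb size, filter, ...)
    }
};

int main() {
    SmoothedControl hrvToReverb{10.0f, 120.0f, 0.9f};
    float readings[] = {35.0f, 38.0f, 90.0f, 85.0f};  // example RMSSD values in ms
    for (float rmssd : readings) {
        std::printf("control = %.3f\n", hrvToReverb.update(rmssd));
    }
    return 0;
}
```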

Core Components of the Project

  1. Emotional Triggers and Biofeedback: The audience plays an active role by selecting AR-displayed imagery, which elicits emotional and physiological responses from the performer.
  2. Sound Mapping and Generation: Physiological changes dynamically alter elements of the soundscape.
  3. Spatial Audio and Immersion: An Ambisonic sound system enhances the experience, surrounding the audience in a three-dimensional sonic space.
  4. Interactive Performance Structure: The performer’s emotional and physical state directly influences the performance, creating a unique, real-time interaction between artist and audience.

Why is This Project Important?

Embodied Resonance is an innovative approach to understanding how trauma manifests in the body and how it can be externalized through sound. This project:

  • Explores the intersection of biofeedback technology, music, and performance art
  • Provides a new medium for emotional processing and healing through immersive sound
  • Pushes the boundaries of interactive performance, inviting the audience into a participatory experience
  • Challenges conventional notions of musical composition by integrating the human body as an instrument

Why Do I Want to Work on It?

As a sound producer, performer, and music editor, I have always been fascinated by the connections between sound, emotion, and the body. My personal journey with trauma and healing has shaped my artistic explorations, driving me to create a performance that not only expresses these experiences but also fosters a shared space for reflection and empathy. By combining my technical skills with deep personal storytelling, I aim to push the boundaries of sonic expression.

How Will I Realize This Project?

Methods & Techniques

  • Research: Studying trauma, somatic therapy, and the physiological markers of emotional states.
  • Technology: Utilizing biofeedback sensors and signal processing tools to create real-time sound mapping.
  • Performance Development: Experimenting with gesture analysis and embodied interaction.
  • Audience Engagement: Exploring ways to integrate audience input via AR-triggered imagery.

Necessary Skills & Resources

  • Sound Design & Synthesis: Proficiency in Ableton Live, Max/MSP, and Envelop for Live.
  • Sensor Technology: Understanding EMG, ECG, and GSR sensor integration.
  • Spatial Audio Engineering: Knowledge of Ambisonic techniques for immersive soundscapes.
  • Programming: Implementing interactive elements using coding languages and software.
  • Theoretical Research: Studying literature on biofeedback art, music therapy, and embodied cognition.

Challenges and Anticipated Difficulties

  • Spatial Audio Optimization: Achieving an immersive sound experience that maintains clarity and emotional depth.
  • Technical Complexity: Ensuring seamless integration of biofeedback data into real-time sound processing requires rigorous calibration and testing.
  • Emotional Vulnerability: The deeply personal nature of the performance may present emotional challenges, requiring careful preparation.
  • Audience Interaction: Designing a system that effectively incorporates audience input without disrupting the emotional flow.

Bibliography

Explore III: Embodied Resonance – Refining the Project Vision

Primary Intention:

The project’s core goal is to create an embodied, immersive experience where the performer’s movements and physiological signals interact with dynamic soundscapes, reflecting states of stress, panic, and resolution. This endeavor seeks to explore the intersection of the body, trauma, and sound as a medium of expression and understanding.

Tasks Fulfilled by the Project:

  1. Expressive Performance: Convey the visceral experience of stress and trauma through movement and sound.
  2. Interactive Soundscapes: Use real-time biofeedback to dynamically alter sound parameters, enhancing the audience’s sensory engagement.
  3. Therapeutic Exploration: Demonstrate the potential of somatic expression and sound for trauma exploration and healing.

Main Goals:

  1. Develop a cohesive interaction between biofeedback, sound design, and movement.
  2. Design an immersive auditory space using ambisonics.
  3. Create an emotionally impactful narrative through choreography and sound dynamics.

Steps for Project Implementation

Identifying Subtasks:

  1. Movement and Choreography Exploration:
    • Research and refine body movements that mirror states of stress and release.
    • Develop movement scores aligned with sound triggers.
  2. Biofeedback and Technology Integration:
    • Select and test wearable sensors for movement and physiological signals (e.g., heart rate monitors, EMG sensors).
    • Map sensor data to sound parameters using tools like Max/MSP or Pure Data.
  3. Sound Design and Ambisonics:
    • Create a palette of sound textures representing emotional states.
    • Test and refine 3D spatial audio setups.
  4. Rehearsal and Iteration:
    • Practice interaction between movement and sound.
    • Adjust mappings and refine performance flow.

Determining the Sequence:

  1. Begin with movement research and initial choreography.
  2. Set up and test biofeedback systems.
  3. Integrate sound design with real-time data mappings.
  4. Conduct iterative rehearsals and refine dynamics.

Description of Subtasks

Required Information and Conditions:

  • Knowledge of movement techniques representing trauma.
  • Understanding biofeedback sensors and data processing.
  • Familiarity with ambisonic sound design principles.

Methods:

  • Employ somatic techniques and physical theater practices for movement.
  • Use biofeedback-driven sound generation software for real-time interaction.
  • Apply iterative testing and rehearsal methods for refinement.

Existing Knowledge and Skills:

  • Dance and performance experience.
  • Basic knowledge of sensor technologies and sound design tools.
  • Understanding of trauma’s physical manifestations through literature.

Additional Resources:

  • Sensors and biofeedback devices.
  • Ambisonic Toolkit and spatial audio software.
  • Research materials on trauma and biofeedback in art.

Timeline Overview

Current Semester – “Explore” Phase:

  • Research movement responses to stress and trauma.
  • Test sensors and sound mapping tools.
  • Document all findings to create the exposé and prepare for the oral presentation.

Second Semester – “Experiment” Phase:

  • Prototype interactions between movement, biofeedback, and sound.
  • Evaluate the feasibility and emotional resonance of the prototypes.
  • Incorporate feedback and iterate designs.

Third Semester – “Product” Phase:

  • Combine prototypes into a cohesive performance.
  • Optimize the interplay between sound and movement.
  • Conclude with final documentation and a presentation of the complete performance.

Questions for Exploration

  • What additional biofeedback sensors and sound techniques can enhance the performance?
  • How can movement scores effectively translate the emotional states into physical expressions?
  • What feedback mechanisms will refine the audience’s immersive experience?

Explore II: Embodied Resonance – First draft

A live performance where the body’s movement and physiological responses interact with real-time, 3D soundscapes, creating an auditory and sensory experience that embodies the physical and emotional states associated with trauma, stress, or panic.


Core Elements

  1. Live Movement and Performance:
    • Physical Expression: Expressive body movements are used to convey states of stress, panic, and tension. Movements could be choreographed or improvised, incorporating controlled gestures, sudden shifts, and spasmodic motions that mirror the body’s natural reactions to trauma.
    • Sensor Integration: The performer will be equipped with wearable sensors (e.g., accelerometers, heart rate monitors, muscle tension sensors) to capture real-time data that triggers sound changes.
  2. Sound Design and Biofeedback:
    • Real-time Data to Sound Mapping: The data from the sensors can be mapped to sound parameters such as volume, pitch, and spatial positioning. 
    • Spatial Audio (Ambisonics): A 3D sound environment where the sound moves with the performer, simulating the feeling of being surrounded by or caught in an experience of panic.
    • Sound Layers and Textures: Layer sounds that range from chaotic, dissonant clusters to more open, calming tones, symbolizing shifts between heightened panic and brief moments of relief.
  3. Interactive Performance Dynamics:
    • Feedback Loops: The performer’s movements could influence sound parameters, and changes in sound could, in turn, affect how the performer responds (e.g., sudden loud or abrupt sounds causing physical shifts).
    • Immersive Auditory Space: Spatial audio setup will immerse the audience, making them feel as though they are within the performance’s sonic realm or inside the performer’s body.
  4. Choreography and Movement Techniques:
    • Imitating Panic and Stress:
      • Breath Control: Rapid, shallow breathing or uneven breathing patterns to simulate panic.
      • Body Tension and Release: Show how different areas of the body can tense up and release in response to imagined threats.
      • Sudden, Erratic Movements: Imitate fight-or-flight reactions through jerky, uncoordinated gestures.
    • Movement Scores: Create a set of movement phrases that can be triggered by specific sound cues, with each phase representing a different level of intensity or emotional state.

Implementation Steps:

  1. Initial Research and Movement Exploration:
    • Spend time exploring how the body naturally responds to stress through dance or physical theatre techniques.
    • Record and analyze your body’s response to various stimuli to understand how to replicate these in a performance context.
  2. Tech Setup and Testing:
    • Choose sensors capable of tracking movement and vital signs, such as wearable accelerometers and heart rate monitors.
    • Connect the sensors to real-time audio processing software (e.g., Max/MSP, Pure Data) to create dynamic sound generation based on data input.
    • Experiment with one biofeedback sensor (e.g., heartbeat or EMG) and connect it to sound manipulation software.
    • Test simple ambisonic setups to understand spatial audio placement.
  3. Sound Design:
    • Use ambisonics to experiment with how sounds can be positioned and moved in 3D space.
    • Create a palette of sound elements that represent different stress levels, such as soft background noise, mechanical sounds, distorted human voices, and deep bass thuds.
  4. Rehearsals and Iteration:
    • Conduct rehearsals where you practice the movement and sound interaction, making adjustments to the data-to-sound mappings to achieve the desired response.
    • Test with different inputs to refine the sonic representation of the body’s signals.
    • Refine the performance flow by timing the intensity of movements and sound shifts to ensure coherence and emotional impact.

Resources

Body and Trauma

  1. The Body Keeps the Score by Bessel van der Kolk
  2. Waking the Tiger: Healing Trauma by Peter Levine

Sound Design and Technology

  1. Sound Design: The Expressive Power of Music, Voice and Sound Effects in Cinema by David Sonnenschein
  2. Immersive Sound: The Art and Science of Binaural and Multi-Channel Audio edited by Agnieszka Roginska and Paul Geluso

Tools and Tutorials

  1. Ambisonic Toolkit (ATK)
  2. Cycling ’74 Max/MSP Tutorials

Artistic and Conceptual References

  1. Janet Cardiff – Known for immersive sound installations, especially her 40-Part Motet.
  2. Meredith Monk – Combines movement and sound to explore human experience.
  3. Christine Sun Kim – Explores sound and silence through the lens of the body and perception.

Academic Research in Sound and Perception

  1. Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics by Perry Cook 

Explore I: Body and Sound – Looking for the Idea

My Background and Interests

My journey into sound and technology started with my experiments in movement-based sound design. One of my first projects used ultrasonic sensors and Arduino technology to transform body movement into music. I was fascinated by the idea of turning motion into sound, mapping gestures into an interactive sonic experience. This led me to explore other ways of integrating physical action with sound manipulation, such as using MIDI controllers and custom-built sensors.

I see sound as more than just music—it’s a form of expression, communication, and interaction. My interest in sound design is rooted in its ability to create immersive experiences, whether through spatial sound, interactivity, or emotional storytelling. I love experimenting with unconventional ways of generating and manipulating sound, pushing beyond traditional composition to explore new territories.

Right now, I’m particularly interested in how sound connects to the body. How can movement or internal processes be used as an instrument? How do physical states influence the way we experience sound? These are the questions that drive my current explorations.

Idea Draft for a Future Project

At first, I was focused on transforming movement into sound. My early idea was to explore sensors that could read touch, direction, and motion, allowing me to control different sound layers by moving my body. I imagined a 3D sound composition where gestures could manipulate textures, rhythms, and effects in real-time. Maybe even integrating voice elements, allowing me to shape effects with both movement and singing.

Over time, my focus shifted. Instead of external movement, I started thinking about internal body processes—breath, heartbeat, muscle tension. What if sound could react to what happens inside the body rather than just external gestures? This led to the idea of biofeedback-driven sound, where physiological data becomes a source of real-time sonic transformation.

The concept is still in development, but the main idea remains the same: exploring the relationship between the body and sound in a way that is immersive, interactive, and emotionally driven. Whether through movement or internal signals, I want to create a performance where sound is a direct extension of the body’s state, turning invisible experiences into something that can be heard and felt.

Moving Forward

This project is still evolving. It might become a performance, an installation, or something entirely different. Right now, I’m in the phase of exploring what’s possible. Sound and the body are deeply connected, and I want to keep pushing that connection in new and unexpected ways.