Explore IV: Embodied Resonance – Final concept

What is Embodied Resonance?

Embodied Resonance is an experimental audio performance that investigates the interplay between trauma, physiological responses, and immersive sound. By integrating biofeedback sensors with spatial sound, this project translates the body’s real-time emotional states into an evolving sonic landscape. Through this process, Embodied Resonance aims to create an intimate and immersive experience that bridges personal narrative with universal themes of emotional resilience and healing.

Reference Works

Inspiration for this project draws heavily from groundbreaking works in biofeedback art. For instance, Tobias Grewenig’s Emotion’s Defibrillator (2005) inspired me to explore how visual imagery can serve as an emotional trigger, sparking physiological responses that drive sound. Grewenig’s project combines sensory input with dynamic visual feedback, using breathing, pulse, and skin sensors to create a powerful interactive experience. His exploration of binaural beats and synchronized visuals provided a foundation for my use of AR imagery and biofeedback systems.

Another profound influence is the project BODY ECHOES, which integrates EMG sensors, breathing monitors, and sound design to capture inner bodily movements and translate them into a spatialized audio experience. This project highlights how subtle physiological states, such as changes in muscle tension or breathing rhythms, can form the basis of a compelling sonic narrative. It has inspired my approach to using EMG and respiratory sensors as key components for translating physical states into sound.

How Does It Work?

The performance involves the use of biofeedback sensors to capture physiological data such as:

  • Electromyography (EMG) to measure muscle tension
  • Electrodermal Activity (EDA/GSR) to track stress levels via skin conductivity
  • Heart Rate (ECG/PPG) to monitor pulse fluctuations and emotional arousal
  • Respiratory Sensors to analyze breath patterns

This real-time data is processed using software like Max/MSP and Ableton Live, which maps physiological changes to dynamic sound elements. Emotional triggers, such as augmented reality (AR) images chosen by the audience, influence the performer’s physiological responses, which in turn shape the sonic environment.
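To make this concrete, here is a minimal Python sketch of how such a mapping layer might begin, assuming the python-osc library and a patch on the receiving side that listens for incoming OSC messages; the port number, address names, and the placeholder sensor reader are assumptions for illustration, not part of any finished setup.

```python
# Minimal sketch: forwarding biofeedback readings to Max/MSP or Ableton Live
# (e.g., via Max for Live) as OSC messages. Port and address names are assumed
# and must match whatever OSC routing exists on the receiving end.
import time
import random  # stands in for real sensor drivers

from pythonosc.udp_client import SimpleUDPClient

OSC_HOST = "127.0.0.1"   # machine running Max/MSP
OSC_PORT = 7400          # assumed receive port on the Max side

client = SimpleUDPClient(OSC_HOST, OSC_PORT)

def read_sensors():
    """Placeholder for real EMG/EDA/PPG/respiration drivers."""
    return {
        "/body/emg": random.uniform(0.0, 1.0),           # muscle tension, normalized
        "/body/eda": random.uniform(0.0, 1.0),           # skin conductance, normalized
        "/body/heart_rate": random.uniform(55.0, 120.0), # beats per minute
        "/body/breath": random.uniform(0.0, 1.0),        # chest expansion, normalized
    }

while True:
    for address, value in read_sensors().items():
        client.send_message(address, value)  # one float per OSC address
    time.sleep(0.05)  # ~20 updates per second
```

On the Max side, a [udpreceive 7400] object feeding a [route] for these addresses could then hand each stream to its own sound parameter, though the exact patching will depend on how the set is built.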

Core Components of the Project

  1. Emotional Triggers and Biofeedback: The audience plays an active role by selecting AR-displayed imagery, which elicits emotional and physiological responses from the performer.
  2. Sound Mapping and Generation: Physiological changes dynamically alter elements of the soundscape.
  3. Spatial Audio and Immersion: An Ambisonic sound system enhances the experience, surrounding the audience in a three-dimensional sonic space.
  4. Interactive Performance Structure: The performer’s emotional and physical state directly influences the performance, creating a unique, real-time interaction between artist and audience.

Why is This Project Important?

Embodied Resonance is an innovative approach to understanding how trauma manifests in the body and how it can be externalized through sound. This project:

  • Explores the intersection of biofeedback technology, music, and performance art
  • Provides a new medium for emotional processing and healing through immersive sound
  • Pushes the boundaries of interactive performance, inviting the audience into a participatory experience
  • Challenges conventional notions of musical composition by integrating the human body as an instrument

Why Do I Want to Work on It?

As a sound producer, performer, and music editor, I have always been fascinated by the connections between sound, emotion, and the body. My personal journey with trauma and healing has shaped my artistic explorations, driving me to create a performance that not only expresses these experiences but also fosters a shared space for reflection and empathy. By combining my technical skills with deep personal storytelling, I aim to push the boundaries of sonic expression.

How Will I Realize This Project?

Methods & Techniques

  • Research: Studying trauma, somatic therapy, and the physiological markers of emotional states.
  • Technology: Utilizing biofeedback sensors and signal processing tools to create real-time sound mapping.
  • Performance Development: Experimenting with gesture analysis and embodied interaction.
  • Audience Engagement: Exploring ways to integrate audience input via AR-triggered imagery.

Necessary Skills & Resources

  • Sound Design & Synthesis: Proficiency in Ableton Live, Max/MSP, and Envelop for Live.
  • Sensor Technology: Understanding EMG, ECG, and GSR sensor integration.
  • Spatial Audio Engineering: Knowledge of Ambisonic techniques for immersive soundscapes.
  • Programming: Implementing interactive elements using coding languages and software.
  • Theoretical Research: Studying literature on biofeedback art, music therapy, and embodied cognition.

Challenges and Anticipated Difficulties

Spatial Audio Optimization: Achieving an immersive sound experience that maintains clarity and emotional depth.

Technical Complexity: Ensuring seamless integration of biofeedback data into real-time sound processing requires rigorous calibration and testing.

Emotional Vulnerability: The deeply personal nature of the performance may present emotional challenges, requiring careful preparation.

Audience Interaction: Designing a system that effectively incorporates audience input without disrupting the emotional flow.

Explore III: Embodied Resonance – Refining the Project Vision

Primary Intention:

The project’s core goal is to create an embodied, immersive experience where the performer’s movements and physiological signals interact with dynamic soundscapes, reflecting states of stress, panic, and resolution. This endeavor seeks to explore the intersection of the body, trauma, and sound as a medium of expression and understanding.

Tasks Fulfilled by the Project:

  1. Expressive Performance: Convey the visceral experience of stress and trauma through movement and sound.
  2. Interactive Soundscapes: Use real-time biofeedback to dynamically alter sound parameters, enhancing the audience’s sensory engagement.
  3. Therapeutic Exploration: Demonstrate the potential of somatic expression and sound for trauma exploration and healing.

Main Goals:

  1. Develop a cohesive interaction between biofeedback, sound design, and movement.
  2. Design an immersive auditory space using ambisonics.
  3. Create an emotionally impactful narrative through choreography and sound dynamics.

Steps for Project Implementation

Identifying Subtasks:

  1. Movement and Choreography Exploration:
    • Research and refine body movements that mirror states of stress and release.
    • Develop movement scores aligned with sound triggers.
  2. Biofeedback and Technology Integration:
    • Select and test wearable sensors for movement and physiological signals (e.g., heart rate monitors, EMG sensors).
    • Map sensor data to sound parameters using tools like Max/MSP or Pure Data (a minimal mapping sketch follows this list).
  3. Sound Design and Ambisonics:
    • Create a palette of sound textures representing emotional states.
    • Test and refine 3D spatial audio setups.
  4. Rehearsal and Iteration:
    • Practice interaction between movement and sound.
    • Adjust mappings and refine performance flow.
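
As referenced in subtask 2, a minimal Python sketch of what such a sensor-to-parameter mapping might look like before it is rebuilt as a Max/MSP or Pure Data patch; the smoothing factor, input ranges, and target parameter ranges are illustrative assumptions.

```python
# Sketch of a data-to-parameter mapping: raw sensor values are smoothed and
# rescaled into musically useful ranges. The specific ranges are assumptions.

def smooth(previous, new, alpha=0.1):
    """Exponential moving average to tame sensor jitter."""
    return previous + alpha * (new - previous)

def scale(value, in_low, in_high, out_low, out_high):
    """Linearly map a value from a sensor range to a sound-parameter range."""
    value = min(max(value, in_low), in_high)              # clamp to the input range
    norm = (value - in_low) / (in_high - in_low)          # normalize to 0..1
    return out_low + norm * (out_high - out_low)

# Illustrative readings and mappings:
heart_rate = smooth(previous=88.0, new=95.0)              # smoothed bpm reading
emg = 0.4                                                 # normalized muscle tension

cutoff_hz = scale(heart_rate, 55, 140, 200, 4000)         # faster pulse opens a filter
grain_density = scale(emg, 0.0, 1.0, 2, 60)               # more tension, denser grains
print(cutoff_hz, grain_density)
```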

Determining the Sequence:

  1. Begin with movement research and initial choreography.
  2. Set up and test biofeedback systems.
  3. Integrate sound design with real-time data mappings.
  4. Conduct iterative rehearsals and refine dynamics.

Description of Subtasks

Required Information and Conditions:

  • Knowledge of movement techniques representing trauma.
  • An understanding of biofeedback sensors and data processing.
  • Familiarity with ambisonic sound design principles.

Methods:

  • Employ somatic techniques and physical theater practices for movement.
  • Use biofeedback-driven sound generation software for real-time interaction.
  • Apply iterative testing and rehearsal methods for refinement.

Existing Knowledge and Skills:

  • Dance and performance experience.
  • Basic knowledge of sensor technologies and sound design tools.
  • Understanding of trauma’s physical manifestations through literature.

Additional Resources:

  • Sensors and biofeedback devices.
  • Ambisonic Toolkit and spatial audio software.
  • Research materials on trauma and biofeedback in art.

Timeline Overview

Current Semester – “Explore” Phase:

  • Research movement responses to stress and trauma.
  • Test sensors and sound mapping tools.
  • Document all findings to create the exposé and prepare for the oral presentation.

Second Semester – “Experiment” Phase:

  • Prototype interactions between movement, biofeedback, and sound.
  • Evaluate the feasibility and emotional resonance of the prototypes.
  • Incorporate feedback and iterate designs.

Third Semester – “Product” Phase:

  • Combine prototypes into a cohesive performance.
  • Optimize the interplay between sound and movement.
  • Conclude with final documentation and a presentation of the complete performance.

Questions for Exploration

  • What additional biofeedback sensors and sound techniques can enhance the performance?
  • How can movement scores effectively translate emotional states into physical expression?
  • What feedback mechanisms will refine the audience’s immersive experience?

Explore II: Embodied Resonance – First draft

A live performance where the body’s movement and physiological responses interact with real-time, 3D soundscapes, creating an auditory and sensory experience that embodies the physical and emotional states associated with trauma, stress, or panic.


Core Elements

  1. Live Movement and Performance:
    • Physical Expression: Expressive body movements are used to convey states of stress, panic, and tension. Movements could be choreographed or improvised, incorporating controlled gestures, sudden shifts, and spasmodic motions that mirror the body’s natural reactions to trauma.
    • Sensor Integration: The performer will be equipped with wearable sensors (e.g., accelerometers, heart rate monitors, muscle tension sensors) to capture real-time data that triggers sound changes.
  2. Sound Design and Biofeedback:
    • Real-time Data to Sound Mapping: The data from the sensors can be mapped to sound parameters such as volume, pitch, and spatial positioning. 
    • Spatial Audio (Ambisonics): A 3D sound environment in which the sound moves with the performer, simulating the feeling of being surrounded by or caught inside an experience of panic (a first-order encoding sketch follows this list).
    • Sound Layers and Textures: Layer sounds that range from chaotic, dissonant clusters to more open, calming tones, symbolizing shifts between heightened panic and brief moments of relief.
  3. Interactive Performance Dynamics:
    • Feedback Loops: The performer’s movements could influence sound parameters, and changes in sound could, in turn, affect how the performer responds (e.g., sudden loud or abrupt sounds causing physical shifts).
    • Immersive Auditory Space: Spatial audio setup will immerse the audience, making them feel as though they are within the performance’s sonic realm or inside the performer’s body.
  4. Choreography and Movement Techniques:
    • Imitating Panic and Stress:
      • Breath Control: Rapid, shallow breathing or uneven breathing patterns to simulate panic.
      • Body Tension and Release: Show how different areas of the body can tense up and release in response to imagined threats.
      • Sudden, Erratic Movements: Imitate fight-or-flight reactions through jerky, uncoordinated gestures.
    • Movement Scores: Create a set of movement phrases that can be triggered by specific sound cues, with each phrase representing a different level of intensity or emotional state.
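
For the spatial-audio element above, a toy first-order Ambisonic (B-format) encoder shows the underlying math that tools such as the Ambisonic Toolkit implement; treating the azimuth as something derived from a movement sensor is an assumption made purely for illustration.

```python
# Toy first-order Ambisonic (B-format) encoder: positions a mono sample at a
# given azimuth/elevation, so a gesture-derived angle can steer the sound.
import math

def encode_bformat(sample, azimuth_rad, elevation_rad=0.0):
    w = sample * (1.0 / math.sqrt(2.0))                            # omnidirectional
    x = sample * math.cos(azimuth_rad) * math.cos(elevation_rad)   # front-back
    y = sample * math.sin(azimuth_rad) * math.cos(elevation_rad)   # left-right
    z = sample * math.sin(elevation_rad)                           # up-down
    return w, x, y, z

# e.g., a gesture reading mapped to an angle: arm swept 120 degrees to the left
print(encode_bformat(0.8, math.radians(120)))
```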

Implementation Steps:

  1. Initial Research and Movement Exploration:
    • Spend time exploring how the body naturally responds to stress through dance or physical theatre techniques.
    • Record and analyze your body’s response to various stimuli to understand how to replicate these in a performance context.
  2. Tech Setup and Testing:
    • Choose sensors capable of tracking movement and vital signs, such as wearable accelerometers and heart rate monitors.
    • Connect the sensors to real-time audio processing software (e.g., Max/MSP, Pure Data) to create dynamic sound generation based on data input.
    • Experiment with one biofeedback sensor (e.g., heartbeat or EMG) and connect it to sound manipulation software (see the serial-to-OSC sketch after these steps).
    • Test simple ambisonic setups to understand spatial audio placement.
  3. Sound Design:
    • Use ambisonics to experiment with how sounds can be positioned and moved in 3D space.
    • Create a palette of sound elements that represent different stress levels, such as soft background noise, mechanical sounds, distorted human voices, and deep bass thuds.
  4. Rehearsals and Iteration:
    • Conduct rehearsals where you practice the movement and sound interaction, making adjustments to the data-to-sound mappings to achieve the desired response.
    • Test with different inputs to refine the sonic representation of the body’s signals.
    • Refine the performance flow by timing the intensity of movements and sound shifts to ensure coherence and emotional impact.
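
As a sketch for the single-sensor experiment in step 2, the following Python bridge reads one value per line from an Arduino-style pulse sensor over serial and forwards it as OSC; the serial port, baud rate, line format, and OSC address are all assumptions that would need to match the actual hardware. On the receiving end, a [udpreceive] object in Max or [netreceive -u -b] into [oscparse] in Pure Data could pick the messages up.

```python
# Sketch: a pulse sensor on an Arduino prints one BPM value per line over
# serial; this script forwards each reading to Pd or Max/MSP as OSC.
# Serial port, baud rate, line format, and OSC address are assumptions.
import serial                                   # pyserial
from pythonosc.udp_client import SimpleUDPClient

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # adjust to the setup
client = SimpleUDPClient("127.0.0.1", 9000)                 # assumed OSC receive port

while True:
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        bpm = float(line)
    except ValueError:
        continue                                 # skip malformed lines
    client.send_message("/pulse/bpm", bpm)       # route to a sound parameter
```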

Resources

Body and Trauma

  1. The Body Keeps the Score by Bessel van der Kolk
  2. Waking the Tiger: Healing Trauma by Peter Levine

Sound Design and Technology

  1. Sound Design: The Expressive Power of Music, Voice and Sound Effects in Cinema by David Sonnenschein
  2. Immersive Sound: The Art and Science of Binaural and Multi-Channel Audio edited by Agnieszka Roginska and Paul Geluso

Tools and Tutorials

  1. Ambisonic Toolkit (ATK)
  2. Cycling ’74 Max/MSP Tutorials

Artistic and Conceptual References

  1. Janet Cardiff – Known for immersive sound installations, especially The Forty Part Motet.
  2. Meredith Monk – Combines movement and sound to explore human experience.
  3. Christine Sun Kim – Explores sound and silence through the lens of the body and perception.

Academic Research in Sound and Perception

  1. Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics, edited by Perry R. Cook

Explore I: Body and Sound – Looking for the Idea

My Background and Interests

My journey into sound and technology started with my experiments in movement-based sound design. One of my first projects used ultrasonic sensors and Arduino technology to transform body movement into music. I was fascinated by the idea of turning motion into sound, mapping gestures into an interactive sonic experience. This led me to explore other ways of integrating physical action with sound manipulation, such as using MIDI controllers and custom-built sensors.

I see sound as more than just music—it’s a form of expression, communication, and interaction. My interest in sound design is rooted in its ability to create immersive experiences, whether through spatial sound, interactivity, or emotional storytelling. I love experimenting with unconventional ways of generating and manipulating sound, pushing beyond traditional composition to explore new territories.

Right now, I’m particularly interested in how sound connects to the body. How can movement or internal processes be used as an instrument? How do physical states influence the way we experience sound? These are the questions that drive my current explorations.

Idea Draft for a Future Project

At first, I was focused on transforming movement into sound. My early idea was to explore sensors that could read touch, direction, and motion, allowing me to control different sound layers by moving my body. I imagined a 3D sound composition where gestures could manipulate textures, rhythms, and effects in real-time. Maybe even integrating voice elements, allowing me to shape effects with both movement and singing.

Over time, my focus shifted. Instead of external movement, I started thinking about internal body processes—breath, heartbeat, muscle tension. What if sound could react to what happens inside the body rather than just external gestures? This led to the idea of biofeedback-driven sound, where physiological data becomes a source of real-time sonic transformation.

The concept is still in development, but the main idea remains the same: exploring the relationship between the body and sound in a way that is immersive, interactive, and emotionally driven. Whether through movement or internal signals, I want to create a performance where sound is a direct extension of the body’s state, turning invisible experiences into something that can be heard and felt.

Moving Forward

This project is still evolving. It might become a performance, an installation, or something entirely different. Right now, I’m in the phase of exploring what’s possible. Sound and the body are deeply connected, and I want to keep pushing that connection in new and unexpected ways.

Explore I: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

The project would be a program that uses either AI-based content recognition or a dedicated sonification algorithm built on auditory equivalents of visual perception (cross-modal metaphors).

Examples of cross-modal metaphors (Görne 2017, p. 53)

This approach could serve two main audiences:

1. Visually Impaired Individuals:
The tool would provide an alternative to traditional audio descriptions, aiming instead to deliver a sonic experience that evokes the ambiance, spatial depth, or mood of an image. Instead of giving direct descriptive feedback, it would use non-verbal soundscapes to create an “impression” of the scene, engaging the listener’s perception intuitively. A strict sonification language might therefore be a good approach, perhaps even more effective than simply playing back the literal sounds of the depicted scene, or a mixture of both strategies could be used.

2. Artists and Designers:
The tool could generate unique audio samples for creative applications, such as sound design for interactive installations, brand audio identities, or cinematic soundscapes. By enabling the synthesis of sound based on visual data, the tool could become a versatile instrument for experimental media artists.

Purpose

The core purpose would be to combine the two purposes described above: a single suite that supports perception and enables creation at the same time.

The dual purpose of accessibility and creativity is central to the project’s design philosophy, but balancing these objectives poses a challenge. While the tool should serve as a robust aid for visually impaired users, it also needs to function as a practical and flexible sound design instrument.

The final product could then be used both by people who benefit from the added layer of perception it gives them of images and screens, and by artists or designers as a creative tool.

Primary Goal

A primary goal is to establish a sonification language that is intuitive, consistent, and adaptable to a variety of images and scenes. This “language” would ideally be flexible enough for creative expression yet structured enough to provide clarity for visually impaired users. Using a dynamic, adaptable set of rules tied to image data, the tool would be able to translate colors, textures, shapes, and contrasts into specific sounds.
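
As a rough illustration of what one such rule set could look like, the sketch below (assuming Pillow, NumPy, and the soundfile library) reduces an image to average hue, saturation, and brightness and renders a single tone from them; the specific mappings, hue to pitch, saturation to vibrato, brightness to loudness, are placeholders for the sonification language still to be designed.

```python
# Minimal sonification sketch: average hue chooses pitch, brightness sets
# loudness, saturation sets vibrato depth. The mapping is the real design
# question; this only shows the plumbing.
import numpy as np
import soundfile as sf
from PIL import Image

def image_features(path):
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    return hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()

def sonify(path, seconds=3.0, sr=44100):
    hue, sat, val = image_features(path)
    t = np.linspace(0, seconds, int(sr * seconds), endpoint=False)
    freq = 110.0 * 2 ** (hue * 3)                          # hue spans three octaves above 110 Hz
    vibrato = sat * 10 * np.sin(2 * np.pi * 5 * t)         # saturation -> vibrato depth
    tone = val * np.sin(2 * np.pi * freq * t + vibrato)    # brightness -> amplitude
    sf.write("sonified.wav", tone.astype(np.float32), sr)

sonify("example.jpg")   # hypothetical input image
```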

To make the tool accessible and enjoyable, careful attention needs to be paid to the balance of sound complexity. Testing with visually impaired individuals will be essential for calibrating the audio to avoid overwhelming or confusing sensory experiences. Adjustable parameters could allow users to tailor sound intensity, frequency, and spatialization, giving them control while preserving the underlying sonification framework. It’s important to focus on a realistic and achievable goal first.

  • Planning the methods (structure)
  • Research and data collection
  • Simple prototyping of the key concept
  • Testing phases
  • Implementation as a standalone application
  • UI design and mobile optimization

The prototype will evolve in stages, with usability testing playing a key role in refining functionality. Early feedback from visually impaired testers will be invaluable in shaping how soundscapes are structured and controlled. Incorporating adjustable settings will likely be necessary to allow users to customize their experience and avoid potential overstimulation. However, this customization could complicate the design if the aim is to develop a consistent sonification language. Testing will help to balance these needs.

Initial development will target desktop environments, with plans to expand to smartphones. A mobile-friendly interface would allow users to access sonification on the go, making it easier to engage with images and scenes from any device.

In general, it could lead to a different perception of sound in connection with images or visuals.

Needed components

Technological Basis:

Programming Language & IDE:
The primary development of the image recognition could be done in Python, which offers strong libraries for image processing, machine learning, and integration with sound engines. Wekinator could also be a good starting point, communicating via OSC, for example.
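
A possible shape of that Python-to-Wekinator link is sketched below: a few coarse image features are packed into one OSC message. Port 6448 and the /wek/inputs address are Wekinator’s documented defaults but should be verified against the actual project settings, and the chosen features are only an illustration.

```python
# Sketch: send a handful of simple image features to Wekinator as one OSC
# message. Port 6448 and /wek/inputs are Wekinator's default input settings.
import numpy as np
from PIL import Image
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)

def features(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    r, g, b = rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()
    brightness = rgb.mean()        # overall lightness of the image
    contrast = rgb.std()           # crude texture/contrast measure
    return [r, g, b, brightness, contrast]

client.send_message("/wek/inputs", features("example.jpg"))  # hypothetical image
```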

Sonification Tools:
Pure Data or Max/MSP are ideal choices for creating the audio processing and synthesis framework, as they enable fine-tuned audio manipulation. These platforms can map visual data inputs (like color or shape) to sound parameters (such as pitch, timbre, or rhythm).

Testing Resources:
A set of test images and videos will be required to refine the tool’s translations across various visual scenarios.

Existing Inspirations and References:

– Melobytes: Software that converts images to music, highlighting the potential for creative auditory representations of visuals.

– VOSIS: A synthesizer that filters visual data based on grayscale values, demonstrating how sound synthesis can be based on visual texture.

– image-sonification.vercel.app: A platform that creates audio loops from RGB values, showing how color data can be translated into sound.

– Be My Eyes: An app that provides auditory descriptions for visually impaired users, emphasizing the importance of accessibility in technology design.

Academic Foundations:

Literature on sonification, psychoacoustics, and synthesis will support the development of the program. These fields will help inform how sound can effectively communicate complex information without overwhelming the listener.

References / Source

Görne, Tobias. Sound Design. Munich: Hanser, 2017.

#05 The Art and Science of Sound

Sound is more than a medium for communication—it’s a profound tool for conveying meaning, evoking emotions, and guiding interaction. Two critical concepts in this domain, “Perception, Cognition and Action in Auditory Displays” and Sonic Interaction Design (SID), illustrate the potential of sound to transform user experiences. Let’s dive into these fascinating dimensions and explore how they enrich interaction design.

Understanding Auditory Displays: Perception Meets Cognition

The world of sound is intricate, with perception playing a central role in translating acoustic signals into meaning. Chapter 4 of The Sonification Handbook emphasizes the interplay between low-level auditory dimensions (pitch, loudness, timbre) and higher-order cognitive processes.

1. Multidimensional Sound Mapping: Designers often map data variables to sound dimensions (a toy sketch follows this list). For instance:
• Pitch represents stock price fluctuations.
• Loudness indicates proximity to thresholds.

2. Dimensional Interaction: These mappings aren’t always independent. For example, a rising pitch combined with falling loudness can distort perceptions, leading users to overestimate changes.

3. Temporal and Spatial Cues: Sound’s inherent temporal qualities make it ideal for monitoring processes and detecting anomalies. Spatialized sound, like binaural audio, enhances virtual environments by creating immersive experiences.
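
As a toy version of the mapping in point 1 above, the following Python sketch turns a fictional price series into a sequence of tones, with pitch tracking the value and loudness tracking distance from a threshold; the data, the mapping ranges, and the soundfile dependency are assumptions made purely for illustration.

```python
# Toy parameter-mapping sonification: price -> pitch, distance from a
# threshold -> loudness, rendered as a short sequence of sine tones.
import numpy as np
import soundfile as sf

prices = [101, 103, 99, 97, 104, 110, 108, 95]   # fictional daily closes
threshold = 100.0
sr = 44100
note_len = int(0.25 * sr)
t = np.arange(note_len) / sr

clips = []
for p in prices:
    freq = 220 + (p - min(prices)) / (max(prices) - min(prices)) * 440  # price -> pitch
    gain = min(abs(p - threshold) / 10.0, 1.0)                          # distance -> loudness
    clips.append(gain * np.sin(2 * np.pi * freq * t))

sf.write("price_sonification.wav", np.concatenate(clips).astype(np.float32), sr)
```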

The Human Connection

What sets auditory displays apart is their alignment with human cognition:
  • Auditory Scene Analysis: Our brains can isolate sound streams (a melody amidst noise).
  • Action and Perception Loops: Interactive displays that let users modify sounds in real-time (tapping to control rhythm) leverage embodied cognition, connecting users’ actions to auditory feedback.

Sonic Interaction Design: Designing for Engagement

SID extends the principles of auditory perception into the realm of interaction. It focuses on creating systems where sound is an active, responsive participant in user interaction. This isn’t about adding sound arbitrarily; it’s about making sound integral to the product experience.

Core Concepts:

1. Closed-Loop Interaction: Users generate sound through actions, which then guide their behavior. Think of a rowing simulator where audio feedback helps athletes fine-tune their movements.

2. Multisensory Design: SID integrates sound with visual, tactile, and proprioceptive cues, ensuring a cohesive experience. For example, the iPod’s click wheel creates a pseudo-haptic illusion through auditory feedback.

3. Natural Sounds vs. Arbitrary Feedback: Research shows users prefer natural, intuitive sound interactions—like the “clickety-clack” of a spinning top model—over abstract sounds.

Aesthetic and Emotional Dimensions

Sound isn’t just functional; it’s deeply emotional:
  • Pleasantness and Annoyance: Sounds that align with user expectations can make interactions enjoyable, while poorly designed sounds risk irritation.
  • Emotional Resonance: Artifacts like the Blendie blender, which responds to vocal imitations, evoke playful and emotional responses, enhancing engagement.

Techniques for Sonic Innovation

Both frameworks underline the importance of crafting meaningful sonic interactions. Here’s how designers can apply these insights:

1. Leverage Auditory Feedback Loops:
Use real-time feedback to enhance tasks requiring precision. A surgical tool that changes pitch based on pressure can guide users intuitively.

2. Foster Emotional Connections:
Integrate sounds that mirror real-world actions or emotions. For example, soundscapes that reflect pouring water can make mundane interactions delightful.

3. Design for Multisensory Consistency:
Ensure that sound complements visual and tactile feedback. Synchronizing auditory and visual cues can improve user understanding and create a seamless experience.

The Future of Interaction Design with Sound

As technology evolves, sound’s role in interaction design will expand—from aiding navigation in virtual reality to enhancing everyday products with subtle, meaningful audio cues. By combining cognitive insights with creative sound design, we can craft experiences that are not only functional but also profoundly human.

Reference

T. Hermann, A. Hunt, and J. G. Neuhoff, Eds., The Sonification Handbook, 1st ed. Berlin, Germany: Logos Publishing House, 2011, 586 pp., ISBN: 978-3-8325-2819-5.

https://sonification.de/handbook

1. The Emotional and Cognitive Power of Audio-visuals in Interactive Environments

Audiovisual elements play a crucial yet often underestimated role in shaping user experiences in interactive environments such as art exhibitions, video mapping, and installations. While visual elements tend to dominate as the primary focus, audiovisual integration—combining both sound and visuals—enhances emotional engagement, guides attention, and fosters spatial awareness. In environments where users actively interact with the space, audiovisual components transcend mere accompaniment, becoming vital parts of the experience that strengthen the connection between the user and their surroundings. This study delves into the impact of audiovisual stimuli in these settings, particularly investigating how sound and visuals together influence user cognition, emotional responses, and overall engagement.

Example 1: teamLab Borderless – This immersive exhibit blends sound, visuals, and user interactions to create a cohesive environment where sound guides participants’ movement and adds emotional depth to the visual narrative.

Example 2: In the SoundScape installation at the Museum of Modern Art (MoMA), curated soundscapes synchronize with projections to create a rich sensory experience, showing how sound can manipulate emotions and guide attention.

Example 3: The BLCK SUN performance by AMIANGELIKA is experienced in spatial audio and recorded in real time using analog synthesizers, digital instruments, and visual programming networks that interact to create a time-sensitive, immersive audio-visual environment that enhances how the audience perceives the story.

RESEARCH QUESTIONS

The research focuses on how audiovisual elements, such as sound, visual projections, and other sensory stimuli, affect user interaction and engagement in interactive spaces.
As anticipated in the main title, the question I will try to answer is: “How do audiovisual elements in interactive environments influence cognitive and emotional responses in users?” In the hope that they will lead me closer to answering this main question, I will also research sub-questions such as:

  • How do combinations of visuals and sound enhance emotional and cognitive engagement in interactive art?
  • How can users’ interaction with audiovisual stimuli alter their perception of an installation or exhibition?
  • How does interactivity in audiovisual environments shape user agency and immersion?

IS IT RELEVANT?

Yes, it is. Understanding how audiovisual elements influence user engagement will allow designers to create more effective and emotionally engaging experiences. The findings will contribute to better user experiences in interactive art installations, exhibitions, and entertainment venues.

CHALLENGES EXPECTED

Several challenges can be expected during the research: finding the right balance of audiovisual stimuli without overwhelming or confusing the user; accounting for the fact that different cultures and individuals may respond differently to the same stimuli; and accepting that not all users will interact with the environment in the same way.

PERSONAL MOTIVATION

My interest in interactive environments has been deeply influenced by personal experiences in art exhibitions and installations, where the power of audiovisual elements was so overwhelming that it triggered physical discomfort to the point where I had to leave the space. This intense reaction made me realize the profound impact that sound, visuals, and their combination can have on a person’s emotional and cognitive state. This research is not only important for my academic journey but also for my future career, as it will allow me to learn how to direct user experiences more effectively. I am particularly interested in creating environments where users can not only experience but also manipulate the audiovisual elements, enabling them to have more control over their sensory interactions.