Product II – Embodied Resonance – Initial Signal Testing and Electrode Placement

After completing the hardware setup, the next step was to verify whether the system was capable of producing usable physiological signals. For this purpose, a minimal Arduino sketch was written to read raw analog values from the ECG and GSR sensors and stream them via the serial interface. The goal at this stage was not data recording or analysis, but a basic functional test of the signal paths. The code continuously reads the ECG signal from analog input A1 and the GSR signal from analog input A2, printing both values as comma-separated numbers to the Serial Monitor. A short delay was introduced to limit the sampling rate and ensure stable serial transmission.

const int ecgPin = A1;   // ECG signal from the DFRobot Gravity heart rate monitor
const int gsrPin = A2;   // GSR signal from the Seeed Studio sensor

void setup() {
  Serial.begin(115200);  // fast baud rate for smooth serial plotting
}

void loop() {
  int ecgValue = analogRead(ecgPin);
  int gsrValue = analogRead(gsrPin);

  // print CSV row: ecg,gsr
  Serial.print(ecgValue);
  Serial.print(",");
  Serial.println(gsrValue);

  delay(5);  // limit the sampling rate (~200 Hz) for stable serial transmission
}

Once the code was running, the next critical step was the physical placement of the ECG electrodes. This proved to be one of the most challenging parts of the initial testing phase. Online sources provide a wide range of DIY electrode placement schemes, many of which are inconsistent or oversimplified. In particular, a previously referenced HRV-related Arduino project suggested placing electrodes on the arms. This configuration was tested first, but the resulting signal made it difficult to identify clear R-peaks in the serial plotter, which are essential for ECG interpretation and HRV analysis.

Example of ECG electrode placement as proposed in the “Arduino and HRV Analysis” project and author’s implementation. https://emersonkeenan.net/arduino-hrv/ 

The official documentation of the ECG sensor instead recommended chest-based electrode placement. However, this approach also required careful positioning to achieve a clean signal. 

ECG electrode placement on the chest as recommended in the official sensor documentation.

https://www.dfrobot.com/product-1510.html

The most reliable guidance was found in a tutorial video presented by a medical professional, which explained proper ECG electrode placement in practical terms. The key insight was that electrodes should not be placed directly on bone. Instead, they must be positioned on soft tissue—below the shoulder and above the rib cage.

The ECG cables were clearly labeled by the manufacturer:

L (left) electrode placed on the left side of the chest

R (right) electrode placed symmetrically on the right side

F (foot/reference) electrode placed on the lower left abdomen, below the rib cage

Additionally, skin preparation proved to be essential. Degreasing the skin before attaching the electrodes significantly improved signal quality. After applying these corrections and restarting the Arduino sketch, distinct ECG peaks became clearly visible in the serial output.

Raw ECG signal displayed in the Serial Plotter, showing clearly identifiable R-peaks during initial signal testing.

In contrast, the GSR sensor required far less preparation. It was simply attached to the fingers, and a signal was immediately observable. However, even during these initial tests it became evident that the GSR signal was highly noisy and would require filtering and post-processing in later stages of the project.

GSR sensor placement on the fingers during data acquisition.
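As an illustration of the kind of smoothing that might be applied in those later stages, the following sketch applies a simple exponential moving average to the raw GSR reading. This is not part of the actual recording code: the pin assignment matches the test sketch above, but the smoothing factor is an arbitrary placeholder rather than a tuned value.

// Illustrative only: exponential moving average on the raw GSR signal.
// alpha is an assumed placeholder value, not a tuned parameter.
const int gsrPin = A2;
const float alpha = 0.05;   // smaller = smoother, but slower to react
float gsrFiltered = 0.0;

void setup() {
  Serial.begin(115200);
  gsrFiltered = analogRead(gsrPin);   // initialize the filter with the first reading
}

void loop() {
  int gsrRaw = analogRead(gsrPin);
  gsrFiltered = alpha * gsrRaw + (1.0 - alpha) * gsrFiltered;

  // print raw and filtered values side by side for comparison in the Serial Plotter
  Serial.print(gsrRaw);
  Serial.print(",");
  Serial.println(gsrFiltered);

  delay(5);
}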

Several practical limitations of the Arduino IDE became apparent during this testing phase. One major drawback was the inability to adjust the grid or scaling in the Serial Plotter, which made live signal inspection inconvenient. Furthermore, the current version of the Arduino IDE no longer allows direct export of serial data to CSV format from the monitor. This limitation necessitated additional tooling and custom scripts in later stages to enable proper data logging and analysis.

Product I – Embodied Resonance – Hardware Selection and System Setup

RECAP:
Embodied Resonance investigates how the body of a person with lived experience of war responds to auditory triggers that recall traumatic events, and how these responses can be expressed through sound. The heart is central to this project, as cardiac activity – particularly heart rate variability (HRV) – provides detailed insight into stress regulation and autonomic nervous system dynamics associated with post-traumatic stress disorder (PTSD).

The conceptual direction of the work is shaped by my personal experience of the war in Ukraine and an interest in trauma physiology. I was drawn to the idea that trauma leaves measurable traces in the body—signals that often remain inaccessible through language but can be explored scientifically and translated into sonic form. This approach was influenced by The Body Keeps the Score by Bessel van der Kolk, which emphasizes embodied memory and non-verbal manifestations of trauma.

In the previous semester, the project focused on exploratory work with existing physiological datasets. A large open-access dataset on stress-induced myocardial ischemia was used to study cardiac behavior under rest, stress, and recovery conditions. Although not designed specifically for PTSD research, the dataset includes participants with PTSD, anxiety-related disorders, and cardiovascular conditions, offering a diverse basis for analysis.

During this phase, Python tools based on the NeuroKit2 library were developed to compute time- and frequency-domain HRV metrics from ECG recordings. Additional scripts transformed these parameters into MIDI patterns and continuous controller (CC) data for sound synthesis and composition. Initial experiments with real-time HRV streaming were also conducted, but they revealed significant limitations: many HRV metrics require long analysis windows and are computationally demanding, making them unsuitable for stable real-time sonification.

In the current semester, corresponding to the Product phase, the project transitions from simulation-based exploration to work with my own body. During earlier presentations, concerns were raised regarding the ethical implications of experiments that could potentially lead to re-traumatization, particularly when involving other participants with war-related trauma. In response, I decided not to extrapolate the experiment to other Ukrainians at this stage and to limit the investigation to my own physiological responses.

Furthermore, instead of exposing myself to recorded sirens at arbitrary times, I chose to record my ECG during the weekly civil defense siren tests that take place every Saturday in Graz. This context offers a meaningful contrast: for most residents of Austria, the siren test is a routine element of everyday life, largely stripped of emotional urgency. For someone with lived experience of war, however, the same sound carries associations of immediate danger. By situating the recordings within this real, socially normalized setting, the project examines how a familiar public signal can produce profoundly different embodied responses depending on personal history.

Before starting the experimental recordings, it was necessary to select and acquire appropriate sensors and a microcontroller. Prior to purchase, a short survey of available biosensing hardware was conducted, with particular attention paid to signal quality, availability of documentation, and the existence of example projects demonstrating practical use. An additional criterion was whether the sensors had been previously employed in projects related to heart rate variability (HRV) analysis.

For ECG acquisition, the DFRobot Gravity Heart Rate Monitor Sensor was selected. This sensor offered a favorable balance between cost and functionality, providing all required cables as well as disposable electrodes. Importantly, it had been used in a well-documented HRV-focused project, which served as a valuable technical reference during development and troubleshooting. In addition to ECG, a galvanic skin response (GSR) sensor from Seeed Studio was included to explore changes in skin conductance as an additional physiological marker of stress. While GSR was not part of the previous semester’s research, it was included experimentally to assess whether this modality could provide complementary information. At this stage, the structure and usefulness of GSR data were not yet fully predictable and were treated as exploratory. As a microcontroller, the Arduino MKR WiFi 1010 was chosen. 

The full list of acquired components is as follows:

  • Arduino MKR WiFi 1010
  • DFRobot Gravity Heart Rate Monitor Sensor (ECG)
  • DFRobot Disposable ECG Electrodes
  • Seeed Studio GSR Sensor
  • Seeed Studio 4-pin Male Jumper to Grove Conversion Cable
  • Breadboard (400 holes)
  • Male-to-male jumper wires
  • Male-to-female jumper wires
  • Potentiometers (100 kΩ)
  • LiPo Battery 1S 3.7 V 500 mAh (not used in final setup)

The total cost of the acquired hardware amounted to approximately 80 EUR. The initial motivation for choosing this particular board was the possibility of wireless data transmission via WiFi or Bluetooth. In practice, however, wireless communication was not required. Due to the high motion sensitivity of both ECG and GSR sensors, recordings had to be performed in a largely static position, making a wired USB connection to the computer sufficient. For this reason, the battery intended for mobile operation was ultimately not used.

For software configuration, the Arduino IDE was installed. Although I had prior experience working with Arduino hardware several years ago, the interface had changed significantly. To support the Arduino MKR WiFi 1010, the SAMD Boards package was additionally installed via the Boards Manager. After software setup, all components were connected according to a simple wiring scheme that required no additional electronic elements. 

Figure 1. Wiring diagram of the experimental setup with Arduino MKR WiFi 1010, ECG sensor, and GSR sensor.

The Arduino ground (GND) was connected to the ground rail of the breadboard, and the 5 V output was connected to the power rail.

The ECG sensor was connected as follows:

GND (black wire) → ground rail on the breadboard

VCC (red wire) → 5 V power rail

Signal output (blue wire) → analog input A1 on the Arduino

The GSR sensor was connected as follows:

GND (black wire) → ground rail on the breadboard

VCC (red wire) → 3.3 V output on the Arduino

Signal output (yellow wire) → analog input A2 on the Arduino

Figure 2 illustrates the complete wiring configuration of the system, including the Arduino MKR WiFi 1010, ECG sensor, GSR sensor, and breadboard power distribution.

Figure 2. Physical hardware configuration used for ECG and GSR data recording.

#6 Final Prototype and Video

Have fun with this video to find out what my actual prototype is.

Reflection

This project began with a vague idea to visualize CO₂ emissions — and slowly took shape through cables, sensors, and a healthy amount of trial and error. Using a potentiometer and a proximity sensor, I built a simple system to scroll through time and trigger animated data based on presence. The inspiration came from NFC tags and a wizard VR game (yes, really), both built on the idea of placing something physical to trigger something digital. That concept stuck with me and led to this interactive desk setup. I refined the visuals, made the particles feel more alive. I really want to point out how important it is to ideate and keep testing your ideas, because there will always be changes in your plans or something won’t work etc. Let’s go on summer vacation now 😎

#5 Visualisation Refinement and Hardware Setup

Over the past few weeks, this project slowly evolved into something that brings together a lot of different inspirations—some intentional, some accidental. Looking back, it really started during the VR project we worked on at the beginning of the design week. We were thinking about implementing NFC tags, and there was something fascinating about the idea that just placing an object somewhere could trigger an action. That kind of physical interaction stuck with me.

NFC Tag

Around the same time, we got a VR headset to develop and test our game. While browsing games, I ended up playing this wizard game—and one small detail in it fascinated me. You could lay magical cards onto a rune-like platform, and depending on the card, different things would happen. It reminded me exactly of those NFC interactions in the real world. It was playful, physical, and smart. That moment clicked for me: I really like the idea that placing something down could unlock or reveal something.

Wizard Game

Closing the Circle

That’s the energy I want to carry forward into the final version of this project. I’m imagining an interactive desk where you can place cards representing different countries and instantly see their CO₂ emission data visualized. For this prototype, I’m keeping it simple and focused—Austria only, using the dataset I already processed. But this vision could easily scale: more countries, more visual styles, more ways to explore and compare.

Alongside developing the interaction concept, I also took time to refine the visualization itself. In earlier versions, the particle behavior and data mapping were more abstract and experimental—interesting, but sometimes a bit chaotic. For this version, I wanted it to be clearer and more readable without losing that expressive quality. I adjusted the look of the CO₂ particles to feel more alive and organic, giving them color variation, slight flickering, and softer movement. These small changes helped shift the visual language from a data sketch to something that feels more atmospheric and intentional. It’s still messy in a good way, but now it communicates more directly what’s at stake.

Image Reference

Image 1 (NFC Tag): https://www.als-uk.com/news-and-blog/the-future-of-nfc-tags/

Image 2 (Wizard Game): https://www.roadtovr.com/the-wizards-spellcasting-vr-combat-game-early-access-launch-trailer-release-date/

#4 Alright… Now What?

So far, I’ve soldered things together (mentally, not literally), tested sensors, debugged serial communication, and got Arduino and Processing talking to each other. That in itself feels like a win. But now comes the real work: What do I actually do with this setup?

At this stage, I started combining the two main inputs (the proximity sensor and the potentiometer) into a single, working system. The potentiometer became a kind of manual timeline scrubber, letting me move through 13 steps along a line, as a first test for a potential timeline. The proximity sensor added a sense of presence, acting like a trigger that wakes the system up when someone approaches. Together, they formed a simple but functional prototype of a prototype, a rough sketch of the interaction I’m aiming for. It helped me think through how the data might be explored, not just visually, but physically, with gestures and motion. This phase was more about testing interaction metaphors than polishing visuals—trying to understand how something as abstract as historical emissions can be felt through everyday components like a knob and a distance sensor. This task showed me how important testing and ideation can be: they help you understand your own thoughts better and form a more precise picture of your plan.

Small Prototype to connect sensors in one file

Things about to get serious

Building on the knowledge I gained during the ideation phase, I connected my working sensor system (a potentiometer and a proximity sensor) to the Processing sketch I had developed during design week. That earlier version already included interaction through Makey Makey and homemade aluminum foil buttons, which made for a playful and tactile experience. In my opinion, the transfer to Arduino technology made the whole setup easier to handle and much cleaner—fewer cables, more direct control, and better integration with the Processing environment. The potentiometer now controls the timeline of Austria’s CO₂ emissions, while the proximity sensor acts as a simple trigger to activate the visualization. This transition from foil to microcontroller reflects how the project evolved from rough experimentation into a more stable, cohesive prototype.
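To give a rough idea of how such a combination could look in code, here is a minimal sketch in the spirit of this prototype (not the exact code used in the project). The pin choices are assumptions: the potentiometer is assumed on A0, and the distance sensor is replaced by a plain digital input as a stand-in, since the real reading depends on the sensor library.

// Sketch of the combined prototype: potentiometer as a 13-step timeline scrubber
// plus a presence trigger, both streamed over serial to Processing.
// Pin choices and the presence stand-in are assumptions, not the original wiring.

const int potPin = A0;        // assumed analog pin for the potentiometer
const int presencePin = 2;    // stand-in digital input for the distance sensor
int lastStep = -1;
bool lastPresent = false;

void setup() {
  Serial.begin(9600);
  pinMode(presencePin, INPUT_PULLUP);
}

void loop() {
  // divide the 0–1023 potentiometer range into 13 timeline steps (0–12)
  int step = analogRead(potPin) / 79;   // 1024 / 13 ≈ 79
  if (step > 12) step = 12;
  if (step != lastStep) {
    Serial.print("step:");
    Serial.println(step);
    lastStep = step;
  }

  // replace this digital read with the actual distance sensor reading
  bool present = (digitalRead(presencePin) == LOW);
  if (present != lastPresent) {
    Serial.println(present ? "true" : "false");
    lastPresent = present;
  }

  delay(20);
}

On the Processing side, the "step:" prefix (an assumed message format, not necessarily the one used here) would make it easy to tell timeline updates apart from the true/false presence messages.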

#3 Serial Communication Between Arduino and Processing

By this point, I had some sensors hooked up and was starting to imagine how my prototype might interact with Processing. But getting data from the physical world into my visuals? That’s where serial communication came in! On the Arduino side, I used “Serial.begin(9600)” to start the connection, and “Serial.println()” to send sensor values. In my case, it was messages like “true” when a hand moved close to the distance sensor, and “false” when it moved away. On the Processing side, I used the Serial library to open the port and listen for data. Every time a new message came in, I could check if it was “true” or “false”, and change what was being shown on screen — red background, green background, whatever. So I was prototyping the prototype, you could say.

Why this is so fascinating and helpful 🤯

I wanted to build something quick, easy to use and reactive—and serial communication made it possible to prototype fast without diving into WiFi, Bluetooth, or custom protocols. It lets me test ideas in minutes: turn a knob, wave a hand, watch the screen respond. And for something as conceptual and messy as visualizing CO₂ history with simple and fast coding, that immediacy is everything.

Imagine you’re at an interactive museum exhibit about climate change. As a visitor approaches a screen, a hidden distance sensor detects their presence. The Arduino sends “true” to Processing, which triggers a cinematic fade-in of historical CO₂ data and a narration starts playing. When the visitor steps away, the system fades back into a passive state, waiting for the next interaction. That whole experience? Driven by serial communication. One cable. A few lines of code. Huge impact.

Some helpful links for those who are interested in serial communication:

https://learn.sparkfun.com/tutorials/connecting-arduino-to-processing/all

#2 First Steps with Arduino

So my initial project about CO₂ emissions in Austria had 13 steps on a timeline you could loop through with key controls. So I was thinking: how am I going to set up my Arduino parts to work with my existing concept? This blog post should tell you about my first steps in trying to figure that out: connecting the parts and working towards my existing concept.

My thoughts creating this code were pretty loose at first. I just wanted to get some kind of input from the potentiometer, without fully knowing what I’d do with it yet. I had the concept of a CO2 visualization in the back of my mind, and I knew I had split the data into 13 time periods earlier, so I figured I’d map the potentiometer to 13 steps and see what happens. It was more about testing how I could interact with the data physically, using whatever tools I had lying around. The code itself is super basic—it just checks if the current step has changed and then sends that info over serial. It felt like a small but useful first step.
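A rough reconstruction of that first test could look like the sketch below. It is only an illustration: the potentiometer is assumed on A0, and its analog range is simply divided into 13 bins.

// Illustrative reconstruction, not the original code: potentiometer assumed on A0,
// its 0–1023 range divided into 13 steps, and the step index sent only on change.
const int potPin = A0;
int lastStep = -1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int step = analogRead(potPin) / 79;   // 1024 / 13 ≈ 79, giving indices 0–12
  if (step > 12) step = 12;             // guard the very top of the range

  if (step != lastStep) {               // only send when the step actually changes
    Serial.println(step);
    lastStep = step;
  }
  delay(20);
}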

I also integrated a distance Modulino, already thinking about how I could use it for my prototype.

Using a very basic setup from the library to read the sensor input, I wrote a sketch that just triggers true or false when I move my hand over the sensor. I am thinking about my very first idea from the design week: triggering an interaction/visualisation when I step on a plate with the shape of the country whose emission data I want to see. Maybe I can go in this direction this time? I want to give you another picture to show you what I mean.

Of course, this will not be realizable right now, but the map interaction could be a good concept within the technological boundaries set by the parts I got from the FH.

16 Morse Code with Arduino Summary + Video

Over the semester, I built a simple Arduino-based Morse code prototype. It started with just three buttons to create Morse code messages, which were then turned into sound. I quickly realized that keeping it on one device didn’t make much sense, so I connected the Arduino to Wi-Fi and used OSC to send messages to my laptop. From there, I added a decoding function that translated Morse into readable text. In the final step, I built a basic web interface where you could type a message, send it to the Arduino, and see it displayed on an LED matrix. My idea is to use this setup to teach kids about encryption in a playful way. Along the way, I learned a lot about Arduino syntax, using libraries, and how to build Wi-Fi and web-based interfaces—opening up tons of new creative possibilities for me.

15 Creating a Web Interface for Arduino

As I teased in the last blog post, I came across a YouTube video that showed how to create a web interface for an Arduino. This has a number of use cases: live sensor monitoring, remote control, live system feedback, or interactive installations. It makes it possible to shape how the user interacts with an Arduino through a platform they already know.

When the Arduino connects to a WiFi network, it gets an IP address and starts a tiny web server that can talk to web browsers. When you open that IP address in your browser, the browser sends a request to the Arduino. The Arduino responds with a simple web page, in my case a form in which you can write Morse code. If you type something and click “Submit,” the browser sends the text back to the Arduino. The Arduino reads the sent information and can react accordingly. This way, the Arduino works like a tiny website, letting you interact with it through any browser.
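To make that request and response cycle concrete, a stripped-down version of such a sketch could look like the one below. It is only an illustration: it assumes a board supported by the WiFiS3 library (such as the UNO R4 WiFi), uses placeholder WiFi credentials, and simply echoes the submitted text back instead of decoding Morse.

#include <WiFiS3.h>

const char ssid[] = "YOUR_SSID";      // placeholder – replace with your network name
const char pass[] = "YOUR_PASSWORD";  // placeholder – replace with your WiFi password

WiFiServer server(80);

void setup() {
  Serial.begin(9600);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(1000);                      // keep trying until the board joins the network
  }
  server.begin();
  Serial.print("Open this address in a browser: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  WiFiClient client = server.available();            // wait for a browser to connect
  if (!client) return;

  String requestLine = client.readStringUntil('\n'); // e.g. "GET /?msg=... HTTP/1.1"
  while (client.available()) client.read();          // discard the rest of the request

  // pull the value of the "msg" form field out of the request line, if present
  // (the value is still URL-encoded, so spaces arrive as '+')
  String msg = "";
  int start = requestLine.indexOf("msg=");
  if (start >= 0) {
    int end = requestLine.indexOf(' ', start);
    if (end < 0) end = requestLine.length();
    msg = requestLine.substring(start + 4, end);
  }

  // answer with a minimal page containing the input form
  client.println("HTTP/1.1 200 OK");
  client.println("Content-Type: text/html");
  client.println("Connection: close");
  client.println();
  client.println("<html><body>");
  client.println("<form action=\"/\" method=\"get\">");
  client.println("<input name=\"msg\"><input type=\"submit\" value=\"Submit\">");
  client.println("</form>");
  if (msg.length() > 0) {
    client.print("<p>Received: ");
    client.print(msg);
    client.println("</p>");
  }
  client.println("</body></html>");
  client.stop();
}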

I once again started with an example I found in Arduino’s WiFiS3 library, “SimpleWebServerWiFi”. This code generates a very simple website with which you can turn an LED on the Arduino on and off. Using this as my starting point, I first wanted to expand the web interface, which took a little longer than it would usually take, since I had to upload the code multiple times and tweak it until it finally looked “good enough” for this prototype. But the interface itself was just the easy part.

Before
After

Next I wanted to give my simple interface some functionality. Therefore, the form I created on the Arduino needed to send the user’s input back to the Arduino, so it could understand and, for now, translate it. And I have to be honest: I really tried to understand the code but just couldn’t figure out how it worked, so I asked ChatGPT to help me out. Using its infinite wisdom, it created a short piece of code that converted the user’s input into a string that could be understood by the code I had written before.

The next step was easy: I added the code for decoding the message, which I created last week and explained in the last blog post. Now I just needed the Arduino to display the message after it received one, which was easy enough by adding an “if” statement that only adds extra content to the website if a message has been received. And like that, I finished the next version of my chaotic Morse code prototype.

Now that I’ve built this basic version, I’ve been thinking about how this kind of setup could be used in different contexts. For example, it could be adapted for interactive museum exhibits, where visitors can type a message into a browser on their phone and trigger lights, sounds, or physical movements in an installation. It could also be used for DIY home automation, like controlling lights. Or it might become a learning tool for kids, where they can experiment with inputs and immediately see results, helping them understand communication systems like Morse code in a playful, hands-on way.

Instructions

If you want to try it out yourself, here is what you need:

  • An Arduino
  • a Laptop or any other device that can use a Browser ;D

This time it is even simpler: plug in the Arduino, change the SSID and password to match your home WiFi network, and upload the sketch. In the Serial Monitor you will then see the Arduino’s IP address; open a browser and type in that address. You should now see the simple interface I created, and the only thing left to do is write some Morse code and let the Arduino decode it.

2.3. Exploring Technology for My Lo-Fi Phygital Prototype

In my previous post, I explored phygital experiences that connect visitors to cultural content through tactile and digital storytelling. Now, I’m moving into the prototyping phase, and to bring these kinds of interactions to life, I’m turning to microcontrollers.

At the same time, I’ve been thinking more about the story I want my prototype to tell. Since my focus is on history and cultural heritage, and because I’m still fairly new to Graz, I saw this project as a unique opportunity to explore the city through this design challenge. My initial idea was to highlight the city’s well-known landmarks, but that felt too predictable. Instead, I want to uncover the hidden, quirky, and lesser-known places that give Graz its unique character. My goal is to create a lo-fi prototype that invites people to touch and listen, triggering short sounds or spoken fragments linked to unusual locations and landmarks in Graz.

Why Microcontrollers?

Microcontrollers offer a way to bridge physical input (like touch or proximity) with digital output (like sound, light, or video). They’re lightweight, flexible, and ideal for low-fidelity prototypes, the kind that let me quickly explore how interaction feels without fully building the final experience.

For a museum-like experience or an interactive city artifact, microcontrollers allow subtle, intuitive interactions, like triggering a sound when you place your hand on a surface, or activating a voice from an object when you stand near it. They’re perfect for phygital storytelling rooted in emotion, mystery, and place.

What My Prototype Needs to Do

To support this narrative direction, I want to create an experience that allows people to uncover hidden details about Graz through sound. Each interaction will trigger a short audio response that reveals something unexpected or overlooked.

Technically, it needs to:

  • Input: Detect touch or proximity
  • Output: Play short audio clips
  • Interaction: Simple, screen-free feedback
  • Portability: USB- or battery-powered
  • Expandability: Easy to add more spots and sounds

Why Sound?

For this project, sound will serve as the main storytelling layer. 

Each interaction might trigger:

  • A whispered story or urban myth
  • A short audio poem or phrase
  • Field recordings from that specific location
  • A strange or surreal audio cue (like an echo, animal noise, or machine hum)

Unlike visuals or text, sound allows for immediacy and interpretation. People don’t just hear, they imagine. And that makes it ideal for revealing the hidden soul of a place like Graz.

Microcontroller Options

Arduino UNO
+ Compatible with sensors and DFPlayer Mini, well supported.
– Requires extra components for audio, more setup.

Touch Board (Bare Conductive)
+ 12 built-in capacitive touch sensors, MP3 playback from microSD, perfect for touch-based sound triggers.
– Slightly bulkier and more expensive, fewer I/O pins.

Makey Makey
+ Very fast and beginner-friendly.
– Needs a computer, limited interaction types, not standalone.

Raspberry Pi
+ Great for future audio-visual expansion.
– Too complex for lo-fi prototyping, more fragile.

What’s Next

After this research, I’ve decided to use the Touch Board for my first prototype. It’s specifically designed for sound-triggered, touch-based interactions, making it ideal for what I want to create: a playful and poetic interface that reveals hidden stories through sound. Its built-in MP3 playback and capacitive touch support mean I can keep my setup compact and focus on designing the experience, not just wiring the tech.

My first test setup will include:

  • Input: Touch sensor (built into the board)
  • Output: MP3 sound through speaker/headphones
  • Feedback: A single LED to show when a sound is playing
  • Goal: When someone touches a marked location on the map, a sound plays, revealing part of Graz that’s normally overlooked.

This early version will help me test the feeling of the interaction before I scale up to a full map or multi-point layout.