SOFTWARE AND DATA PIPELINE

  1. Data Flow Overview

The data pipeline is structured in three phases: acquisition, post-processing, and sonification. Acquisition covers the independent capture of audio (Zoom H4n, contact microphone), motion (x-IMU3), and video/audio (GoPro Hero 3). Post-processing then uses the x-IMU3 SDK to decode the recorded data, which is sent via OSC to Pure Data and split there into its individual parameters.

Sonification and audio transformation are likewise carried out in Pure Data.

This architecture supports a reliable workflow and straightforward synchronization in post-production.

  2. Motion Data Acquisition

Motion data was recorded onboard the x-IMU3 device. After each session, files were extracted using the x-IMU3 GUI and decoded into CSV files containing accelerometer, gyroscope, and orientation values with timestamps (x-io Technologies, 2024). Python scripts parsed the data and prepared OSC messages for transmission to Pure Data. Timing offsets between devices are handled by aligning prominent rotation or acceleration events that appear across all devices during the long recording (Wright et al., 2001).
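A minimal sketch of such a parse-and-stream script is shown below; the CSV column names, port, and OSC addresses are assumptions for illustration, since the exact layout depends on the x-IMU3 export settings.

    # Sketch: read a decoded x-IMU3 CSV and replay it to Pure Data as OSC.
    # Column names, port, and addresses are illustrative assumptions.
    import csv
    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)   # Pd listening on port 9000

    with open("session_inertial.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    prev_t = None
    for row in rows:
        t = float(row["Time (s)"])
        if prev_t is not None:
            time.sleep(max(0.0, t - prev_t))      # replay at recorded pace
        prev_t = t
        client.send_message("/imu/accel", [float(row["Accelerometer X (g)"]),
                                           float(row["Accelerometer Y (g)"]),
                                           float(row["Accelerometer Z (g)"])])
        client.send_message("/imu/gyro",  [float(row["Gyroscope X (deg/s)"]),
                                           float(row["Gyroscope Y (deg/s)"]),
                                           float(row["Gyroscope Z (deg/s)"])])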

The audio recorded from the contact mic is a simple mono WAV file that is processed in Pure Data and later in DaVinci Resolve for the final audio/video cut. The recordings consist primarily of strong impact sounds, board vibrations, water interactions, and movements of the surfer. They are used directly for the sound design of the film. During its main part, when the surfer stands on the board, this audio is additionally modulated using the motion data of the sensor, reflecting gestures and board dynamics (Puckette, 2007; Roads, 2001).

  3. Video and Sync Reference

Because all of these files are recorded on independent clocks, exact synchronization is a central question. A test was therefore conducted, explained in more detail in section 10, SURF SKATE SIMULATION AND TEST RECORDINGS. The surfing movement was simulated on a surf skateboard with a contact microphone mounted on the bottom of the deck and the motion sensor placed next to it. With the image and the two sound sources (contact microphone and the audio of the Sony camera), both recordings could be synchronized in post-production using DaVinci Resolve. The key findings were the importance of thorough track labeling and clear documentation of each recording. During the final recordings on the surfboard, the GoPro Hero 3 will act as an important tool to synchronize all the different files in the end, and its audio track serves as an additional backup for a more stable synchronization workflow. Test runs on the skateboard are essential for managing all the files in post-production later (Watkinson, 2013).

The motion data recorded on the x-IMU3 sensor is replayed in the sensor's GUI, which can then send the data via OSC to Pure Data. Parameters such as pitch, roll, and vertical acceleration can then be mapped to variables such as grain density, stereo width, or filter cutoff frequency (Puckette, 2007).
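One possible mapping layer between that OSC stream and Pure Data is sketched below in Python; the address names and value ranges are assumptions, not the project's final mapping, and Pd is assumed to receive the messages with objects such as [netreceive] plus [oscparse].

    # Sketch of a mapping layer: rescale incoming sensor values and
    # forward them to Pd. Addresses and ranges are illustrative only.
    from pythonosc.udp_client import SimpleUDPClient

    pd = SimpleUDPClient("127.0.0.1", 9001)       # Pd's OSC input port

    def scale(value, in_lo, in_hi, out_lo, out_hi):
        """Linearly rescale a sensor value and clamp it to the target range."""
        t = (value - in_lo) / (in_hi - in_lo)
        t = max(0.0, min(1.0, t))
        return out_lo + t * (out_hi - out_lo)

    def on_motion(pitch_deg, roll_deg, accel_z_g):
        # pitch (-90..90 deg)     -> grain density (2..80 grains/s)
        pd.send_message("/grain/density", scale(pitch_deg, -90, 90, 2, 80))
        # roll (-90..90 deg)      -> stereo width (0..1)
        pd.send_message("/stereo/width", scale(roll_deg, -90, 90, 0.0, 1.0))
        # vertical accel (0..4 g) -> filter cutoff (200..8000 Hz)
        pd.send_message("/filter/cutoff", scale(abs(accel_z_g), 0, 4, 200, 8000))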

  4. Tools and Compatibility

All tools were selected based on compatibility and the ability to record under these special conditions. The toolchain includes:

  • x-IMU3 SDK and GUI (macOS) for sensor decoding
  • Python 3 for OSC streaming and data parsing
  • Pure Data for audio synthesis
  • DaVinci Resolve for editing and timeline alignment

This architecture forms the basic groundwork of the project setup and can still be expanded with additional software or Python code to individualize the different steps of the process (McPherson & Zappi, 2015).

  5. Synchronization Strategy

Looking deeper into the synchronization part of the project, challenges arise. Because there is no global clock shared by all devices, they have to run independently and then be synchronized in post-production. Good documentation and clear labeling of each track help to keep an overview. The motion sensor data in particular carries a lot of information and needs to be time-aligned with the audio. Synchronizing audio and video, however, is a smaller challenge thanks to the multiple audio sources and the GoPro footage. A big impact or a strong turn of the board can then be matched across the audio and video timelines. The advantage of one long recording of a 30-minute surf session is that the probability of such an event increases over time. Tests with the skateboard, external video, and audio from the contact microphone were already successful.
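As a sketch of how such an event could be located automatically, the strongest impact can be found in both the accelerometer magnitude and the audio waveform, and the time difference used as an offset; the file names, column names, and single-impact assumption below are placeholders only.

    # Sketch: estimate the IMU-to-audio offset from one strong impact.
    # File names, column names, and the single-event assumption are
    # placeholders; a real session may need windowing around a slate hit.
    import numpy as np
    import soundfile as sf

    def imu_impact_time(csv_path):
        # assumed columns: time_s, ax, ay, az
        data = np.genfromtxt(csv_path, delimiter=",", names=True)
        mag = np.sqrt(data["ax"]**2 + data["ay"]**2 + data["az"]**2)
        return data["time_s"][np.argmax(mag)]

    def audio_impact_time(wav_path):
        samples, rate = sf.read(wav_path)
        if samples.ndim > 1:                      # mix to mono if needed
            samples = samples.mean(axis=1)
        return np.argmax(np.abs(samples)) / rate

    offset = audio_impact_time("contact_mic.wav") - imu_impact_time("imu_log.csv")
    print(f"shift the IMU data by {offset:+.3f} s to match the audio")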


The image shows the setup in DaVinci Resolve with the synchronization of the contact microphone (pink) and the external audio of the Sony Alpha 7 III (green). Here the skateboard was hit against the floor in a rhythmical pattern, creating these noticeable spikes in the audio of both devices. The same rhythmical movement can also be seen in the x-IMU3 sensor data.

13 Adding UDP/OSC to Arduino

If you have read my previous blog post, the next step comes pretty naturally. Just having one device creating and displaying Morse code defeats the purpose of this early way of communication. So I sat down to set up a network communication between the Arduino and my laptop, which sounded easier than it was.

Since I had used OSC messages in countless classes before, I wanted to come up with a sketch that could send those messages. Searching for a way to send these messages over WiFi, I started by looking at the examples already provided by Arduino, and I found something! As part of the WiFiS3 library, there was a sketch that showed how to send & receive UDP messages. Great! I uploaded the sketch and tried sending a message to the Arduino using a simple Max patch. The message was received, although the response message wasn’t.

As you can see on the screenshot above, Max received a message, but it wouldn’t display its contents. Since I had no idea what went wrong, I tried to adjust the message so it would be a multiple of four, just like Max asked. But I just got another error message:

Still having no idea what this error message was supposed to mean, I kept trying. I reduced the length of the message name string, but without any success. I still got the same error message as before, even though an even shorter message name wouldn’t have made any sense.
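In hindsight, the multiple-of-four complaint most likely points at the OSC format itself: every string in an OSC packet is null-terminated and padded to a 4-byte boundary, and the arguments follow a type tag string, so a plain UDP text string is not a valid OSC packet. A minimal sketch of a well-formed message, in Python since the byte layout is language-independent:

    # Hand-rolled minimal OSC message, to show where the multiples
    # of four come from. Address and payload are just examples.
    def osc_pad(s: bytes) -> bytes:
        """Null-terminate and pad to the next multiple of 4 bytes."""
        s += b"\x00"
        return s + b"\x00" * (-len(s) % 4)

    def osc_message(address: str, text: str) -> bytes:
        packet = osc_pad(address.encode())    # e.g. b"/morse\x00\x00"
        packet += osc_pad(b",s")              # type tags: one string argument
        packet += osc_pad(text.encode())      # the argument itself
        return packet

    print(osc_message("/morse", "SOS").hex(" "))  # every part is 4-byte aligned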

Defeated, I went to class the next day and talked about my problem with a fellow student. He brought to my attention that Daniel Fabry had shared an example for the same thing last semester, which I knew worked, since I had tried it in class; I had just forgotten about it. So I took a look at his sketch, which used an entirely different library. The code syntax was nearly identical, but the library was different. With my new knowledge, I adapted my code again, and this time it worked!

Now my Max patch could receive strings from the Arduino, great! As a next step, I updated my patch to actually replay the received Morse code message. And my new version was done! Now messages can actually be sent wirelessly to other devices, making real communication possible.

This little detour into OSC & WiFi with Arduino really got me interested in exploring this topic further. I am excited to find out what is possible with this technology.

Instructions

For the second version, you need:

  • an Arduino (capable of using Modulinos)
  • the three button Modulino
  • a Laptop with Max

Before uploading the sketch to the Arduino, you need to go into the “secrets.h” tab and enter your WiFi SSID (name) and password. After this, go to the “sendIP” variable and change the IP address to target your laptop. After applying these changes, upload the sketch & build a simple UDP receive logic in Max, similar to the one you can see on my screenshots.
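If you want to double-check that the messages actually arrive before wiring up the Max patch, a tiny Python listener can print them; the port below is an assumption and has to match whatever the sketch sends to.

    # Optional sanity check outside Max: print every incoming OSC message.
    # The port (8000) is an assumption; use the one your sketch targets.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(lambda addr, *args: print(addr, args))

    server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
    print("listening for OSC from the Arduino on port 8000 ...")
    server.serve_forever()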