Our original objective was simple on paper: write a Python utility that would stream real-time heart-rate-variability (HRV) data from a live ECG feed and let a digital-audio workstation turn those numbers into music. The script was expected to read 0.1-second ECG chunks, detect new R-peaks, derive the seven canonical HRV metrics (HR, SDNN, RMSSD, VLF, LF, HF, LF/HF) and hand fresh values to the sound engine every tick. In theory that would allow a performer’s physiology to shape the soundtrack moment-by-moment.
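The time-domain members of that list are plain statistics over RR intervals. As a reference point, here is a minimal stdlib-only sketch of the canonical definitions (the function name and the sample values are ours, not from the project code):

```python
import math
from statistics import mean, stdev

def time_domain_metrics(rr_ms):
    """Time-domain HRV metrics from a list of RR intervals in milliseconds.

    HR    - mean heart rate, beats per minute
    SDNN  - standard deviation of the RR intervals
    RMSSD - root mean square of successive RR differences
    """
    hr = 60_000.0 / mean(rr_ms)                       # 60 000 ms per minute
    sdnn = stdev(rr_ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(mean(d * d for d in diffs))
    return hr, sdnn, rmssd

# Five near-1000 ms intervals: HR just under 60 bpm, RMSSD exactly 12.5 ms.
print(time_domain_metrics([1000, 1010, 990, 1000, 1005]))
```

The spectral metrics (VLF, LF, HF, LF/HF) come from a power spectrum of the interpolated RR series and need far longer windows, which is where the trouble described below begins.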
Phase 1 – “print-only” prototype
The first proof-of-concept program did nothing but compute the metrics and dump them to the terminal. Because the offline reference library (hrv_plot.py) already produced correct numbers, the live version mimicked its algorithm line-for-line, apart from using a sliding buffer instead of reading the whole file at once. The console printed something every 0.1 s, CPU usage looked negligible, and we assumed we were done.
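The sliding buffer is the only structural difference from the offline script. A minimal sketch of that mechanism, with the real metric pipeline replaced by a stand-in (mean amplitude) since only the buffering is the point here:

```python
from collections import deque
from statistics import mean

FS = 500                    # sampling rate, Hz
WINDOW_S = 10               # analysis window, s

# A deque with maxlen keeps the window at exactly WINDOW_S seconds:
# appending past capacity silently drops the oldest samples, which is
# all "sliding buffer" has to mean here.
buffer = deque(maxlen=FS * WINDOW_S)

def on_chunk(samples):
    """Called every 0.1 s with the newest 50 ECG samples."""
    buffer.extend(samples)
    # Stand-in for the real metric pipeline.
    return mean(buffer)

# Feed 11 s of a constant signal: the buffer never exceeds 10 s.
for _ in range(110):
    on_chunk([1.0] * 50)
print(len(buffer))   # capped at FS * WINDOW_S = 5000 samples
```

As Phase 3 shows, keeping the buffer bounded is necessary but not sufficient: the cost of what runs *inside* each tick matters just as much.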
Phase 2 – OSC transport and the DAW deterrent
To let a synthesiser listen, we wrapped the same loop with an Open Sound Control sender. The data reached Ableton Live but required an external bridge, custom track routing and an additional plug-in to map OSC to parameters. Latency was fine, usability was not: every DAW session began with ten minutes of cabling virtual ports. We dropped OSC and decided to embed the information in ordinary MIDI where any host can record or map it instantly.
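The project used an OSC library for the actual sending; the stdlib sketch below only encodes a single-float message by hand, per the OSC 1.0 wire format, to show what the DAW-side bridge has to unpack — which is exactly the extra mapping layer that made OSC tedious in Ableton. The address pattern "/hrv/hr" is our illustrative choice:

```python
import struct

def osc_message(address, value):
    """Encode a one-float OSC message per the OSC 1.0 spec: each string is
    NUL-terminated and padded to a multiple of four bytes; the type tag
    string is ',f'; the float is big-endian IEEE 754 single precision."""
    def pad(s):
        b = s.encode() + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)

packet = osc_message("/hrv/hr", 72.0)
# Sent over UDP, e.g. socket(AF_INET, SOCK_DGRAM).sendto(packet, (host, port))
```

MIDI, by contrast, is something every host records and maps natively, which is why the project switched.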
Phase 3 – first real-time MIDI attempt, and the “melting clock”
The next script converted each 0.1-s update into a drum hit on one of sixteen cells; HR controlled velocity, SDNN and RMSSD drove two CC lanes. For the first five seconds everything grooved, then tempo collapsed. Profiling showed peak detection becoming slower and slower: the buffer was supposed to stay ten seconds long, but every tick the detector re-scanned an ever-growing vector of accumulated samples. With 500 Hz sampling and ten ticks per second, the scan cost soon climbed past half a million points per second.
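A quick back-of-the-envelope model makes the failure mode concrete. Counting the samples the detector touches per second of runtime, an unbounded re-scan grows without limit while a properly bounded window stays flat (the function and its parameters are our illustration, not project code):

```python
FS = 500            # sampling rate, Hz
TICK = 0.1          # update period, s

def scanned_per_second(t_seconds, window_s=None):
    """Samples the detector touches during one second of runtime at time t,
    if each tick re-scans either the whole history or a bounded window."""
    ticks_per_s = int(1 / TICK)
    history = int(t_seconds * FS)
    if window_s is None:                       # unbounded: whole history
        return ticks_per_s * history
    return ticks_per_s * min(history, int(FS * window_s))

print(scanned_per_second(10))        # after 10 s:  50,000 samples/s, rising
print(scanned_per_second(100))       # after 100 s: 500,000 samples/s
print(scanned_per_second(100, 10))   # bounded 10-s window: steady 50,000
```

The groove collapsing after a few seconds is exactly this curve crossing the CPU budget of a 0.1-s tick.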
Phase 4 – carving the code into independent daemons
We split responsibilities into two stand-alone scripts:
• hrv_live_print.py – generate just the metrics once every 0.1 s
• ecg_to_drum_online.py – listen to the stream and build MIDI
The HRV process still lagged behind wall-time, so the music engine received late packets and eventually starved.
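When the producer lags behind wall-time, the consumer's cheapest defence is to stamp every metrics packet and drop stale ones instead of queueing them; otherwise the music engine plays an ever-older past. A minimal sketch of that check (the 50 ms tolerance is our illustrative choice, not a project constant):

```python
import time

LATE_S = 0.05    # tolerance before a metrics packet counts as late

def is_late(packet_ts, now=None):
    """A packet stamped earlier than now - LATE_S arrived too late for the
    current tick; the music engine should drop it rather than queue it."""
    now = time.monotonic() if now is None else now
    return now - packet_ts > LATE_S

# With explicit clocks for the example:
print(is_late(packet_ts=1.0, now=1.2))    # stamped 0.2 s ago -> late
print(is_late(packet_ts=1.0, now=1.03))   # stamped 30 ms ago -> fresh
```

Dropping late packets keeps the engine alive, but it cannot manufacture the fresh values the lagging HRV process fails to produce, hence the starvation.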
Phase 5 – “peak-triggered” optimisation and the HR crisis
To cut CPU we tried a different paradigm: compute HRV only when the detector reported a new R-peak; between peaks resend the previous values. That immediately fixed the frame-rate problem but created another: HR oscillated between plausible and absurd numbers (e.g. 40 → 140 → 55 bpm within one second). The fault was a race condition—the timestamp of the newest peak was sometimes outside the analysis window because the buffer indices were updated before the metric window limits. After correcting the order of operations HR stabilised within ±2 bpm of the offline truth. Unfortunately SDNN, RMSSD and especially the spectral powers still diverged by factors of two to ten.
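The peak-triggered scheme and the ordering fix can be sketched as follows (class and method names are ours, the detector is stubbed out, and only instantaneous HR is shown). The essential point is that the window limit is computed *before* the new peak is committed, so the newest peak always lies inside the window that judges it:

```python
class PeakTriggeredHRV:
    """Recompute only when a new R-peak arrives; resend old values between
    peaks.  Window limits are advanced BEFORE the peak index is committed,
    which is the order-of-operations fix described above."""

    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.peaks = []           # R-peak timestamps, seconds
        self.last = None          # last emitted HR, bpm

    def on_peak(self, t):
        lo = t - self.window_s                        # 1. window limit first
        self.peaks.append(t)                          # 2. then commit peak
        self.peaks = [p for p in self.peaks if p >= lo]
        if len(self.peaks) >= 2:
            rr = self.peaks[-1] - self.peaks[-2]
            self.last = 60.0 / rr                     # instantaneous HR
        return self.last

    def on_tick(self):
        return self.last          # between peaks: resend previous value

h = PeakTriggeredHRV()
for t in (0.0, 1.0, 2.0):        # beats exactly one second apart
    h.on_peak(t)
print(h.on_tick())               # 60.0 bpm, stable across ticks
```

With the window advanced first, HR stays pinned to the newest RR pair; reversing the two steps is what produced the 40 → 140 → 55 bpm swings.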
Why the divergence never vanished
- Time-domain spread metrics need at least fifty RR intervals; at a resting heart rate that is close to a full minute of data, which our 10-s buffer could never deliver, hence wildly inflated variance.
- Spectral power below 0.15 Hz requires windows ≥ 120 s; shortening the segment erases low-frequency bins, so LF and VLF shrink to almost zero or explode at random.
- Re-detecting peaks on a moving buffer changes the set of RR intervals at every tick, adding jitter no post-hoc analysis has to face.
- Any attempt to enlarge the window brings back the original slowdown; shrinking it further removes the physiological meaning altogether.
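The arithmetic behind the first two bullets is worth making explicit, because it is a hard limit, not an implementation detail. Collecting n RR intervals takes n·60/HR seconds, and a window of T seconds gives DFT bins spaced 1/T Hz apart:

```python
def min_window_for_intervals(n_intervals, hr_bpm):
    """Seconds needed to collect n RR intervals at a given heart rate."""
    return n_intervals * 60.0 / hr_bpm

def freq_resolution(window_s):
    """Spacing of DFT bins for a window of window_s seconds, in Hz."""
    return 1.0 / window_s

print(min_window_for_intervals(50, 60))   # 50.0 s  -> ~a minute at rest
print(freq_resolution(10))                # 0.1 Hz  -> the LF band
                                          # (0.04-0.15 Hz) spans ~one bin
print(freq_resolution(120))               # ~0.0083 Hz -> dozens of LF bins
```

A 10-s window therefore cannot resolve the LF band at all: the whole band falls inside roughly one spectral bin, so its measured power is essentially noise.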
Take-away – offline wins, HRV is not a live control signal
After weeks of profiling, refactoring and cheating (full-track pre-detection disguised as streaming) the conclusion is clear: classic HRV statistics are intrinsically retrospective. They trade temporal resolution for statistical power. In a concert-length performance the musician would have to wait one–three minutes before a genuine LF/HF change manifests, which is musically useless. Short windows restore immediacy but turn the output into coloured noise and defeat the scientific basis of HRV.
A narrow exception
Plain heart rate—computed from the last RR pair—can be broadcast at 10 Hz with negligible latency, so HR alone might still serve as a slow modulator. It is, however, a single scalar that drifts over tens of seconds; by itself it is too static to drive anything more than subtle filter sweeps.
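That exception is one line of arithmetic, which is exactly why it survives the latency budget (the function name is ours):

```python
def instantaneous_hr(rr_s):
    """Heart rate, in bpm, from the single most recent RR interval (seconds)."""
    return 60.0 / rr_s

print(instantaneous_hr(1.0))    # 60.0 bpm
print(instantaneous_hr(0.75))   # 80.0 bpm
```

Broadcast at every 0.1-s tick the value only changes when a beat actually lands, so the stream is step-shaped: usable as a slow modulator, and nothing more.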
Final verdict
For faithful physiology-driven sound the only robust path is to run a full offline analysis first or to invent new, deliberately short-window descriptors instead of relying on canonical HRV. The experiment showed that real-time HRV is a conceptual mismatch rather than a coding glitch, and that insight will save us from chasing the same mirage in future projects.