One of the most rewarding parts of this phase was the technical deep dive into the Apple ecosystem. From Swift and AVAudioEngine to sensor fusion and FFT algorithms, I learned how to architect complex audio apps natively.
Challenges included:
- Managing multichannel audio in real time
- Implementing head tracking across threads
- Creating reactive user interfaces with SwiftUI
- Performing spectral deconvolution on mobile hardware
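To give a flavor of the real-time audio and spectral work involved, here is a minimal sketch of one common pattern on this stack: tapping `AVAudioEngine`'s input and computing a magnitude spectrum per buffer with Accelerate's vDSP FFT. The class and names (`SpectrumTap`, `fftSize`) are illustrative, not taken from the original project.

```swift
import AVFoundation
import Accelerate

// Illustrative sketch: stream microphone buffers and emit a magnitude
// spectrum for each one, the kind of per-buffer DSP described above.
final class SpectrumTap {
    private let engine = AVAudioEngine()
    private let fftSize = 1024
    private let log2n: vDSP_Length
    private let fftSetup: FFTSetup

    init() {
        log2n = vDSP_Length(log2(Float(fftSize)))
        fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!
    }

    func start(onSpectrum: @escaping ([Float]) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0,
                         bufferSize: AVAudioFrameCount(fftSize),
                         format: format) { [self] buffer, _ in
            guard let channel = buffer.floatChannelData?[0],
                  Int(buffer.frameLength) >= fftSize else { return }
            var real = [Float](repeating: 0, count: fftSize / 2)
            var imag = [Float](repeating: 0, count: fftSize / 2)
            real.withUnsafeMutableBufferPointer { rp in
                imag.withUnsafeMutableBufferPointer { ip in
                    var split = DSPSplitComplex(realp: rp.baseAddress!,
                                                imagp: ip.baseAddress!)
                    // Pack the samples into split-complex form, run an
                    // in-place real FFT, then take bin magnitudes.
                    channel.withMemoryRebound(to: DSPComplex.self,
                                              capacity: fftSize / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(fftSize / 2))
                    }
                    vDSP_fft_zrip(fftSetup, &split, 1, log2n,
                                  FFTDirection(FFT_FORWARD))
                    var mags = [Float](repeating: 0, count: fftSize / 2)
                    vDSP_zvabs(&split, 1, &mags, 1, vDSP_Length(fftSize / 2))
                    onSpectrum(mags)
                }
            }
        }
        try engine.start()
    }

    deinit { vDSP_destroy_fftsetup(fftSetup) }
}
```

Note that the tap callback runs on a real-time audio thread, which is exactly why the cross-thread coordination mentioned above (e.g. for head tracking and UI updates) becomes a design problem in its own right.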
These skills are transferable to other platforms — Unity, Unreal, WebXR — but more importantly, they changed my understanding of how sound design tools are built. Not just for artists, but by artists who code.
