Prototyping XI: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Smart Sound Selection: Modes and Filters

1. Modes: Random vs. Best Result

  • Best Result Mode (Quality-Focused)
    The system prioritizes sounds with the highest ratings and download counts, ensuring professional-grade audio quality. It progressively relaxes standards (e.g., from 4.0+ to 2.5+ ratings) if no perfect match is found, guaranteeing a usable sound for every tag.
  • Random Mode (Diverse Selection)
    In this mode, the tool ignores quality filters and simply returns the first valid sound for each tag. This is ideal for quick experiments, for when unpredictability is desired, or for making sure each run produces different results.

2. Filters: Rating vs. Downloads

Users can further refine searches with two filter preferences:

  • Rating > Downloads
    Favors sounds with the highest user ratings, even if they have fewer downloads. This prioritizes subjective quality (e.g., clean recordings, well-edited clips).
    Example: A rare, pristine “tiger growl” with a 4.8/5 rating might be chosen over a popular but noisy alternative.
  • Downloads > Rating
    Prioritizes widely downloaded sounds, which often indicate reliability or broad appeal. This is useful for finding “standard” effects (e.g., a typical phone ring).
    Example: A generic “clock tick” with 10,000 downloads might be selected over a niche, high-rated vintage clock sound.

If no matching sound is found with either the rating or the download preference, the system falls back to the provided hierarchy table and generalizes the tag, for example turning “maple” into “tree”.
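To make the selection logic concrete, here is a minimal sketch in Python of how the two modes, the two filter preferences, and the hierarchy fallback could fit together. The candidate sounds are assumed to be dicts with avg_rating and num_downloads fields, and the hierarchy a simple child-to-parent mapping; the field names and thresholds are illustrative rather than the exact values used in the tool.

```python
# Progressively relaxed rating thresholds for "best result" mode
RATING_STEPS = [4.0, 3.5, 3.0, 2.5]

def pick_sound(candidates, mode="best", prefer="rating"):
    """Select one sound from a list of candidate dicts (illustrative fields)."""
    if not candidates:
        return None
    if mode == "random":
        return candidates[0]              # first valid result, no quality filter
    for threshold in RATING_STEPS:        # best-result mode: relax standards step by step
        pool = [s for s in candidates if s["avg_rating"] >= threshold]
        if pool:
            key = "avg_rating" if prefer == "rating" else "num_downloads"
            return max(pool, key=lambda s: s[key])
    return max(candidates, key=lambda s: s["num_downloads"])  # last resort

def pick_with_fallback(tag, search_fn, parents):
    """Climb the hierarchy (e.g. 'maple' -> 'tree') until a sound is found."""
    while tag:
        sound = pick_sound(search_fn(tag))
        if sound:
            return sound
        tag = parents.get(tag)            # parent tag, or None at the root
    return None
```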

Intelligent Frequency Management

The audio engine now implements Bark Scale Filtering, which represents a significant improvement over the previous FFT peaks approach. By dividing the frequency spectrum into 25 critical bands spanning 20Hz to 20kHz, the system now precisely mirrors human hearing sensitivity. This psychoacoustic alignment enables more natural spectral adjustments that maintain perceptual balance while processing audio content.
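As a rough illustration of this banding step, the sketch below computes per-band energies on a Bark-spaced grid between 20 Hz and 20 kHz using the Traunmüller approximation of the Bark scale; the exact band edges used by the tool may differ.

```python
import numpy as np

def hz_to_bark(f):
    """Traunmüller approximation of the Bark scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(b):
    return 1960.0 * (b + 0.53) / (26.28 - b)

def bark_band_energies(signal, sr, n_bands=25, f_lo=20.0, f_hi=20000.0):
    """Energy per Bark-spaced band, mirroring the 25-band layout described above."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    edges_bark = np.linspace(hz_to_bark(f_lo), hz_to_bark(f_hi), n_bands + 1)
    edges_hz = bark_to_hz(edges_bark)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges_hz[:-1], edges_hz[1:])
    ])
```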

For dynamic equalization, the system features adaptive EQ Activation that intelligently engages only during actual sound clashes. For instance, when two sounds compete at 570Hz, the EQ applies a precise -4.7dB reduction exclusively during the overlapping period.

To preserve audio quality, the system employs Conservative Processing principles. Frequency band reductions are strictly limited to a maximum of -6dB, preventing artificial-sounding results. Additionally, the use of wide Q values (1.0) ensures that EQ adjustments maintain the natural timbral characteristics of each sound source while effectively resolving masking issues.
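A minimal sketch of such a conservative cut, implemented here as a standard RBJ-cookbook peaking biquad with the gain clamped to -6 dB and a Q of 1.0; the tool’s actual filter design may differ.

```python
import numpy as np
from scipy.signal import lfilter

MAX_CUT_DB = -6.0   # conservative limit described above
DEFAULT_Q = 1.0     # wide bandwidth to preserve timbre

def peaking_cut(x, sr, center_hz, gain_db, q=DEFAULT_Q):
    """Apply a gentle peaking-EQ cut (RBJ cookbook biquad) at center_hz."""
    gain_db = max(gain_db, MAX_CUT_DB)            # never cut more than -6 dB
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * center_hz / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# e.g. resolve the 570 Hz clash mentioned above, only on the overlapping slice:
# mix[start:end] = peaking_cut(mix[start:end], sr, 570, -4.7)
```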

These core upgrades collectively transform Image Extender’s mixing capabilities, enabling professional-grade audio results while maintaining the system’s generative and adaptive nature. The improvements are particularly noticeable in complex soundscapes containing multiple overlapping elements with competing frequency content.

Visualization for a better overview

The newly implemented Timeline Visualization provides unprecedented insight into the mixing process through an intuitive graphical representation.
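As an illustration, a timeline of this kind can be drawn with matplotlib from a simple list of placement events; the event format below is hypothetical and only meant to show the idea.

```python
import matplotlib.pyplot as plt

def plot_timeline(events):
    """events: list of (label, start_s, duration_s, pan) tuples (illustrative format)."""
    fig, ax = plt.subplots(figsize=(10, 0.6 * len(events) + 1))
    for row, (label, start, dur, pan) in enumerate(events):
        ax.broken_barh([(start, dur)], (row - 0.4, 0.8))           # one bar per sound
        ax.text(start, row, f"{label} (pan {pan:+.1f})", va="center")
    ax.set_yticks(range(len(events)))
    ax.set_yticklabels([e[0] for e in events])
    ax.set_xlabel("time (s)")
    ax.set_title("Image Extender – mix timeline")
    plt.tight_layout()
    plt.show()
```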

Prototyping X: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Researching Automated Mixing Strategies for Clarity and Real-Time Composition

As the Image Extender project continues to evolve from a tagging-to-sound pipeline into a dynamic, spatially aware audio compositing system, this phase focused on surveying and evaluating recent methods in automated sound mixing. My aim was to understand how existing research handles spectral masking, spatial distribution, and frequency-aware filtering—especially in scenarios where multiple unrelated sounds are combined without a human in the loop.

This blog post synthesizes findings from several key research papers and explores how their techniques may apply to our use case: a generative soundscape engine driven by object detection and Freesound API integration. The next development phase will evaluate which of these methods can be realistically adapted into the Python-based architecture.

Adaptive Filtering Through Time–Frequency Masking Detection

A compelling solution to masking was presented by Zhao and Pérez-Cota (2024), who proposed a method for adaptive equalization driven by masking analysis in both time and frequency. By calculating short-time Fourier transforms (STFT) for each track, their system identifies where overlap occurs and evaluates the masking directionality—determining whether a sound acts as a masker or a maskee over time.

These interactions are quantified into masking matrices that inform the design of parametric filters, tuned to reduce only the problematic frequency bands, while preserving the natural timbre and dynamics of the source sounds. The end result is a frequency-aware mixing approach that adapts to real masking events rather than applying static or arbitrary filtering.

Why this matters for Image Extender:
Generated mixes often feature overlapping midrange content (e.g., engine hums, rustling leaves, footsteps). By applying this masking-aware logic, the system can avoid blunt frequency cuts and instead respond intelligently to real-time spectral conflicts.

Implementation possibilities (a rough code sketch follows the list):

  • STFTs: librosa.stft
  • Masking matrices: pairwise multiplication and normalization (NumPy)
  • EQ curves: second-order IIR filters via scipy.signal.iirfilter
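Following the list above, here is a rough sketch of the STFT-based masking-matrix idea using librosa and NumPy. It is a loose interpretation of the pairwise multiplication-and-normalization step, not the exact formulation from Zhao and Pérez-Cota (2024).

```python
import numpy as np
import librosa

def masking_matrix(track_a, track_b, n_fft=2048, hop=512):
    """Rough time-frequency overlap measure between two tracks (sketch)."""
    Sa = np.abs(librosa.stft(track_a, n_fft=n_fft, hop_length=hop))
    Sb = np.abs(librosa.stft(track_b, n_fft=n_fft, hop_length=hop))
    frames = min(Sa.shape[1], Sb.shape[1])
    Sa, Sb = Sa[:, :frames], Sb[:, :frames]
    overlap = Sa * Sb                        # large where both tracks are loud
    overlap /= overlap.max() + 1e-12         # normalize to [0, 1]
    return overlap                           # shape: (freq_bins, frames)

# Frequency rows of `overlap` with a high mean would then be candidates for the
# targeted second-order IIR filters mentioned in the last bullet.
```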

“This information is then systematically used to design and apply filters… improving the clarity of the mix.”
— Zhao and Pérez-Cota (2024)

Iterative Mixing Optimization Using Psychoacoustic Metrics

Another strong candidate emerged from Liu et al. (2024), who proposed an automatic mixing system based on iterative masking minimization. Their framework evaluates masking using a perceptual model derived from PEAQ (ITU-R BS.1387) and adjusts mixing parameters—equalization, dynamic range compression, and gain—through iterative optimization.

The system’s strength lies in its objective function: it not only minimizes total masking but also seeks to balance masking contributions across tracks, ensuring that no source is disproportionately buried. The optimization process runs until a minimum is reached, using a harmony search algorithm that continuously tunes each effect’s parameters for improved spectral separation.

Why this matters for Image Extender:
This kind of global optimization is well-suited for multi-object scenes, where several detected elements contribute sounds. It supports a wide range of source content and adapts mixing decisions to preserve intelligibility across diverse sonic elements.

Implementation path (a rough code sketch follows the list):

  • Masking metrics: critical band energy modeling on the Bark scale
  • Optimization: scipy.optimize.differential_evolution or other derivative-free methods
  • EQ and dynamics: Python wrappers (pydub, sox, or raw filter design via scipy.signal)
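A sketch of the optimization step suggested in the list above. SciPy has no harmony-search implementation, so differential evolution stands in here, and the objective is a crude gain-only proxy for the PEAQ-based masking metric and the balancing term described by Liu et al. (2024).

```python
import numpy as np
from scipy.optimize import differential_evolution

def total_masking(gains_db, band_energies):
    """band_energies: (n_tracks, n_bands) Bark-band energies; gains_db: per-track gain."""
    gains = 10 ** (np.asarray(gains_db) / 20.0)
    E = band_energies * gains[:, None] ** 2
    total = E.sum(axis=0)
    # per track: how much energy from the other tracks sits in its active bands
    masked = np.maximum(total[None, :] - E, 0.0) * (E > 0)
    per_track = masked.sum(axis=1)
    # minimize overall masking and its spread across tracks
    return per_track.sum() + per_track.std()

# n_tracks = band_energies.shape[0]
# result = differential_evolution(total_masking, bounds=[(-12, 0)] * n_tracks,
#                                 args=(band_energies,), maxiter=50)
# best_gains_db = result.x
```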

“Different audio effects… are applied via an iterative Harmony searching algorithm that aims to minimize the masking.”
— Liu et al. (2024)

Comparative Analysis

| Method | Core Approach | Integration Potential | Implementation Effort |
| --- | --- | --- | --- |
| Time–Frequency Masking (Zhao) | Analyze masking via STFT; apply targeted EQ | High — per-event conflict resolution | Medium |
| Iterative Optimization (Liu) | Minimize masking metric via parametric search | High — global mix clarity | High |

Both methods offer significant value. Zhao’s system is elegant in its directness—its per-pair analysis supports fine-grained filtering on demand, suitable for real-time or batch processes. Liu’s framework, while computationally heavier, offers a holistic solution that balances all tracks simultaneously, and may serve as a backend “refinement pass” after initial sound placement.

Looking Ahead

This research phase provided the theoretical and technical groundwork for the next evolution of Image Extender’s audio engine. The next development milestone will explore hybrid strategies that combine these insights:

  • Implementing a masking matrix engine to detect conflicts dynamically
  • Building filter generation pipelines based on frequency overlap intensity
  • Testing iterative mix refinement using masking as an objective metric
  • Measuring the perceived clarity improvements across varied image-driven scenes

References

Zhao, Wenhan, and Fernando Pérez-Cota. “Adaptive Filtering for Multi-Track Audio Based on Time–Frequency Masking Detection.” Signals 5, no. 4 (2024): 633–641. https://doi.org/10.3390/signals5040035

Liu, Xiaojing, Angeliki Mourgela, Hongwei Ai, and Joshua D. Reiss. “An Automatic Mixing Speech Enhancement System for Multi-Track Audio.” arXiv preprint arXiv:2404.17821 (2024). https://arxiv.org/abs/2404.17821

Prototyping IX: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Advanced Automated Sound Mixing with Hierarchical Tag Handling and Spectral Awareness

The Image Extender project continues to evolve in scope and sophistication. What began as a relatively straightforward pipeline connecting object recognition to the Freesound.org API has now grown into a rich, semi-intelligent audio mixing system. This recent development phase focused on enhancing both the semantic accuracy and the acoustic quality of generated soundscapes, tackling two significant challenges: how to gracefully handle missing tag-to-sound matches, and how to intelligently mix overlapping sounds to avoid auditory clutter.

Sound Retrieval Meets Semantic Depth

One of the core limitations of the original approach was its dependence on exact tag matches. If no sound was found for a detected object, that tag simply went silent. To address this, I introduced a multi-level fallback system based on a custom-built CSV ontology inspired by Google’s AudioSet.

This ontology now contains hundreds of entries, organized into logical hierarchies that progress from broad categories like “Entity” or “Animal” to highly specific leaf nodes like “White-tailed Deer,” “Pickup Truck,” or “Golden Eagle.” When a tag fails, the system automatically climbs upward through this tree, selecting a more general fallback—moving from “Tiger” to “Carnivore” to “Mammal,” and finally to “Animal” if necessary.
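A compact sketch of how this CSV-based fallback could be traversed, assuming each row of the ontology stores a child/parent pair (the real CSV layout may differ).

```python
import csv

def load_ontology(path):
    """Build a child -> parent map from the CSV ontology (assumed columns: child,parent)."""
    parents = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            parents[row["child"].strip().lower()] = row["parent"].strip().lower()
    return parents

def fallback_chain(tag, parents):
    """e.g. 'tiger' -> ['tiger', 'carnivore', 'mammal', 'animal']"""
    chain, tag = [], tag.lower()
    while tag and tag not in chain:      # guard against cycles
        chain.append(tag)
        tag = parents.get(tag)
    return chain
```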

Implementation of temporal composition

Initial versions of Image Extender merely stacked sounds on top of each other, using only spatial composition in the form of panning. Now, the mixing system behaves more like a simplified DAW (Digital Audio Workstation). Key improvements introduced in this iteration include (a code sketch follows the list):

  • Random temporal placement: Shorter sound files are distributed at randomized time positions across the duration of the mix, reducing sonic overcrowding and creating a more natural flow.
  • Automatic fade-ins and fade-outs: Each sound is treated with short fades to eliminate abrupt onsets and offsets, improving auditory smoothness.
  • Mix length based on longest sound: Instead of enforcing a fixed duration, the mix now adapts to the length of the longest inserted file, which is always placed at the beginning to anchor the composition.
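With pydub, the placement and fade behaviour described above can be sketched roughly as follows; durations are in milliseconds and the parameters are illustrative.

```python
import random
from pydub import AudioSegment

def build_mix(files, fade_ms=100):
    """Anchor the longest sound at 0 and scatter the rest at random offsets (sketch)."""
    sounds = [AudioSegment.from_file(f).fade_in(fade_ms).fade_out(fade_ms) for f in files]
    sounds.sort(key=len, reverse=True)
    mix = sounds[0]                                   # longest sound defines the mix length
    for s in sounds[1:]:
        offset = random.randint(0, max(len(mix) - len(s), 0))
        mix = mix.overlay(s, position=offset)         # random temporal placement
    return mix

# build_mix(["tiger.wav", "leaves.wav", "footsteps.wav"]).export("scene.wav", format="wav")
```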

These changes give each generated audio scene a sense of temporal structure and stereo space, making them more immersive and cinematic.

Frequency-Aware Mixing: Avoiding Spectral Masking

A standout feature developed during this phase was automatic spectral masking avoidance. When multiple sounds overlap in time and occupy similar frequency bands, they can mask each other, causing a loss of clarity. To mitigate this, the system performs the following steps (sketched in code after the list):

  1. Before placing a sound, the system extracts the portion of the mix it will overlap with.
  2. Both the new sound and the overlapping mix segment are analyzed via FFT (Fast Fourier Transform) to determine their dominant frequency bands.
  3. If the analysis detects significant overlap in frequency content, the system takes one of two corrective actions:
    • Attenuation: The new sound is reduced in volume (e.g., -6 dB).
    • EQ filtering: Depending on the nature of the conflict, a high-pass or low-pass filter is applied to the new sound to move it out of the way spectrally.
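A simplified sketch of this overlap check: the dominant bins of the new sound and of the overlapping mix segment are compared, and a corrective action is chosen from the result. The thresholds and the 100 Hz bucketing are illustrative, not the tool’s exact values.

```python
import numpy as np

def dominant_bins(x, sr, n_top=5, n_fft=4096):
    """Return the dominant frequencies of a signal, bucketed to 100 Hz (sketch)."""
    spec = np.abs(np.fft.rfft(x[:n_fft], n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    top = freqs[np.argsort(spec)[-n_top:]]
    return set(np.round(top / 100.0) * 100.0)

def resolve_clash(new_sound, mix_segment, sr):
    """Decide on ('attenuate', gain_db), ('filter', 'highpass'|'lowpass') or (None, None)."""
    shared = dominant_bins(new_sound, sr) & dominant_bins(mix_segment, sr)
    if not shared:
        return None, None                     # no significant spectral overlap
    if len(shared) >= 3:
        return "attenuate", -6.0              # heavy overlap: duck the new sound
    # light overlap: push the new sound away from the contested region
    return "filter", ("highpass" if min(shared) < 1000 else "lowpass")
```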

This spectral awareness doesn’t reach the complexity of advanced mixing, but it significantly reduces the most obvious masking effects in real-time-generated content—without user input.

Spectrogram Visualization of the Final Mix

As part of this iteration, I also added a spectrogram visualization of the final mix. This visual feedback provides a frequency-time representation of the soundscape and highlights which parts of the spectrum have been affected by EQ filtering.

  • Vertical dashed lines indicate the insertion time of each new sound.
  • Horizontal lines mark the dominant frequencies of the added sound segments. These often coincide with spectral areas where notch filters have been applied to avoid collisions with the existing mix.

This visualization allows for easier debugging, improved understanding of frequency interactions, and serves as a useful tool when tuning mixing parameters or filter behaviors.
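A sketch of how such a spectrogram view can be produced with librosa and matplotlib, mirroring the dashed insertion markers and dominant-frequency lines described above; the actual plotting code in the tool may differ.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

def plot_mix_spectrogram(mix, sr, insert_times, dominant_freqs):
    """Spectrogram with dashed insertion markers and dominant-frequency lines (sketch)."""
    D = librosa.amplitude_to_db(np.abs(librosa.stft(mix)), ref=np.max)
    fig, ax = plt.subplots(figsize=(12, 5))
    img = librosa.display.specshow(D, sr=sr, x_axis="time", y_axis="hz", ax=ax)
    for t in insert_times:                       # vertical dashed lines: insertion points
        ax.axvline(t, linestyle="--", color="white", alpha=0.7)
    for f in dominant_freqs:                     # horizontal lines: dominant frequencies
        ax.axhline(f, color="red", alpha=0.5)
    fig.colorbar(img, ax=ax, format="%+2.0f dB")
    ax.set_title("Final mix – spectrogram with EQ annotations")
    plt.tight_layout()
    plt.show()
```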

Looking Ahead

As the architecture matures, future milestones are already on the horizon. We aim to implement:

  • Visual feedback: A real-time timeline that shows audio placement, duration, and spectral content.
  • Advanced loudness control: Integration of dynamic range compression and LUFS-based normalization for output consistency.

Prototyping VIII: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Sound-Image Matching via Semantic Tag Comparison

Continuing development on the Image Extender project, I’ve been exploring how to improve the connection between recognized visual elements and the sounds selected to represent them. A key question in this phase has been: How do we determine if a sound actually fits an image, not just technically but meaningfully?

Testing the Possibilities

I initially looked into using large language models to evaluate the fit between sound descriptions and the visual content of an image. Various API-based models showed potential in theory, particularly for generating a numerical score representing how well a sound matched the image content. However, many of these options required paid access or more complex setup than suited this early prototyping phase. I also explored frameworks like LangChain to help with integration, but these too proved a bit unstable for the lightweight, quick feedback loops I was aiming for.

A More Practical Approach: Semantic Comparison

To keep things moving forward, I’ve shifted toward a simpler method using semantic comparison between the image content and the sound description. In this system, the objects recognized in an image are merged into a combined tag string, which is then compared against the sound’s description using a classifier that evaluates their semantic relatedness.

Rather than returning a simple yes or no, this method provides a score that reflects how well the description aligns with the image’s content. If the score falls below a certain threshold, the sound is skipped — keeping the results focused and relevant without needing manual curation.
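As one possible realization of this kind of semantic comparison, the sketch below uses a small sentence-embedding model from sentence-transformers and cosine similarity; the classifier and threshold actually used in the project may differ.

```python
from sentence_transformers import SentenceTransformer, util

# Small general-purpose embedding model (illustrative choice, not the project's model)
model = SentenceTransformer("all-MiniLM-L6-v2")

def relatedness(image_tags, sound_description):
    tag_string = ", ".join(image_tags)            # e.g. "tiger, tree, river"
    emb = model.encode([tag_string, sound_description])
    return float(util.cos_sim(emb[0], emb[1]))    # score in roughly [-1, 1]

def accept_sound(image_tags, sound_description, threshold=0.35):
    """Skip the sound if the semantic score falls below the (illustrative) threshold."""
    return relatedness(image_tags, sound_description) >= threshold
```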

Why It Works (for Now)

This tag-based comparison system is easy to implement, doesn’t rely on external APIs, and integrates cleanly into the current audio selection pipeline. It allows for quick iteration, which is key during the early design and testing stages. While it doesn’t offer the nuanced understanding of a full-scale LLM, it provides a surprisingly effective filter to catch mismatches between sounds and images.

In the future, I may revisit the idea of using larger models once a more stable or affordable setup is in place. But for this phase, the focus is on building a clear and functional base — and semantic tag matching gives just enough structure to support that.

Prototyping VII: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Mixing of the automatically searched audio files into one combined stereo file:

In this latest update, I’ve implemented several new features to create the first layer of an automated sound mixing system for the object recognition tool. The tool now automatically adjusts pan values and applies attenuation to ensure a balanced stereo mix, while seamlessly handling multiple tracks. This helps avoid overload and guarantees smooth audio mixing.

Check of the automatically searched and downloaded files + the automatically generated combined audio file

A key new feature is the addition of a sound_pannings array, which holds unique panning values for each sound based on the position of the object’s bounding box within an image. This ensures that each sound associated with a recognized object gets an individualized panning, calculated from its horizontal position within the image, for a more dynamic and immersive experience.
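The mapping behind these panning values can be written in a few lines; this sketch shows the underlying idea (a linear mapping of the bounding-box centre to [-1, 1]) rather than the exact formula used in the notebook.

```python
def bbox_to_pan(x_min, x_max, image_width):
    """Map a bounding box's horizontal centre to a pan value in [-1, 1]."""
    centre = (x_min + x_max) / 2.0
    return round(2.0 * centre / image_width - 1.0, 2)

# bbox_to_pan(0, 200, 1000)    -> -0.8  (object on the far left)
# bbox_to_pan(800, 1000, 1000) ->  0.8  (object on the far right)
```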

display of the sound panning values [-1 left, 1 right]

I’ve also introduced a system to automatically download sound files directly into Google Colab’s file system. This eliminates the need for managing local folders. Users can now easily preview audio within the notebook, which adds interactivity and helps visualize the results instantly.

The sound downloading process has also been revamped. The search filters can now be saved with a button click and are applied to the search and download of the audio files. Currently, 10 sounds are preloaded per tag, and each sound is selected randomly to avoid duplication while still allowing the same tag to appear multiple times. A sound is only downloaded if it hasn’t been used before. If all sound options for a tag are exhausted, no sound will be downloaded for that tag.

Additionally, I’ve added the ability to create a ZIP file that includes all the downloaded sounds as well as the final mixed audio output. This makes it easy to download and share the files. To keep things organized, I’ve also introduced a delete button that removes all downloaded files once they are no longer needed. The interface now includes buttons for controlling the download, file cleanup, and audio playback, simplifying the process for users.

Looking ahead, I plan to continue refining the system by working on better mixing techniques, focusing on aspects like spectrum, frequency, and the overall importance of the sounds. Future updates will also look at integrating volume control and, further in the future, an LLM that can check whether the title of a found file actually matches the detected object.

Prototyping VI: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

New features in the object recognition and test run for images:

Since the initial freesound.org and Gemini API setup, I have added several improvements.
You can now choose between different object recognition models and adjust settings like the number of detected objects and the minimum confidence threshold.

GUI for the settings of the model

I also created a detailed testing matrix, using a wide range of images to evaluate detection accuracy. Because of the results, the model might be changed later on: the Gemini API seems to offer only a fairly basic pool of tags and is not equally well trained in every category.

Test of images for the object recognition

It is still reliable for basic tags like “bird”, “car”, “tree”, etc. For these tags it also doesn’t really matter if there’s a lot of shadow, if you only see half of the object, or even if it’s blurry. But because of the lack of specific tags, I will look into models or APIs that offer more fine-grained recognition.

Coming up: I’ll be working on whether to auto-play or download the selected audio files, including layering sounds, adjusting volumes, and experimenting with EQ and filtering, all to make the playback more natural and immersive. I will also think about categorization and moving the tags into a layer system. Besides that, I am going to check other object recognition models, but I might stick with the Gemini API for prototyping a bit longer and change the model later.

Prototyping V: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Integration of AI-Object Recognition in the automated audio file search process:

After setting up the initial interface for the freesound.org API and confirming everything works with test tags and basic search filters, the next major milestone is now in motion: AI-based object recognition using the Gemini API.

The idea is to feed in an image (or a batch of them), let the AI detect what’s in it, and then use those recognized tags to trigger an automated search for corresponding sounds on freesound.org. The integration already loads the detected tags into an array, which is then automatically passed on to the sound search. This allows the system to dynamically react to the content of an image and search for matching audio files — no manual tagging needed anymore.

So far, the detection is working pretty reliably for general categories like “bird”, “car”, “tree”, etc. But I’m looking into models or APIs that offer more fine-grained recognition. For instance, instead of just “bird”, I’d like it to say “sparrow”, “eagle”, or even specific songbird species if possible. This would make the whole sound mapping feel much more tailored and immersive.

A list of test images will be prepared, but there’s already a testing matrix covering different objects, situations, scenery, and technical differences.

On the freesound side, I’ve got the basic query parameters set up: tag search, sample rate, file type, license, and duration filters. There’s room to expand this with additional parameters like rating, bit depth, and maybe even a random selection toggle to avoid repetition when the same tag comes up multiple times.

Coming up: I’ll be working on whether to auto-play or download the selected audio files, and starting to test how the AI-generated tags influence the mood and quality of the soundscape. The long-term plan includes layering sounds, adjusting volumes, experimenting with EQ and filtering — all to make the playback more natural and immersive.

Prototyping IV: Image Extender – Image sonification tool for immersive perception of sounds from images and new creation possibilities

Tests on automated audio file search via freesound.org api:

For further use in the automated audio file search for the recognized objects, I tested the freesound.org API and programmed a first interface for testing purposes. The first thing I had to do was request an API key from freesound.org. After that I noticed an interesting point to consider for my project: it is open for 5000 requests per year, but I will research possibilities for using it more. For testing, 5000 is more than enough.

The current code already searches with a few test tags and offers options to filter the searches by sample rate, duration, license, and file type. More filter options might be added next, such as rating, bit depth, and maybe random file selection, so the result won’t always be the same for each tag.
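For reference, a minimal text-search request against the Freesound API could look like the sketch below; the filter and field names should be double-checked against the official API documentation, and the API key placeholder is of course hypothetical.

```python
import requests

API_KEY = "YOUR_FREESOUND_API_KEY"           # requested from freesound.org
SEARCH_URL = "https://freesound.org/apiv2/search/text/"

def search_sounds(tag, max_duration=10):
    """Text search using the filter fields mentioned above (sketch)."""
    params = {
        "query": tag,
        "filter": f"type:wav samplerate:44100 duration:[1 TO {max_duration}]",
        "fields": "id,name,previews,license,avg_rating,num_downloads",
        "token": API_KEY,
    }
    response = requests.get(SEARCH_URL, params=params)
    response.raise_for_status()
    return response.json().get("results", [])
```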

Next steps also include either downloading the files or playing them back automatically. Then there will be tests on using the tags from the AI image recognition code for this automated search. Later in the process I have to figure out the playback of multiple files, volume staging, and filtering or EQ methods for masking effects, etc.

Test GUI for automated sound searching via the freesound.org API

IRCAM Forum Workshops 2025 – Promenade Sonore

Sound Meets the City: Nadine Schütz’s Promenade Sonore Transforms a Footbridge into a Living Instrument

We first encountered Nadine Schütz’s fascinating work during her presentation at the IRCAM Forum Workshops 2025, where she introduced her project Promenade Sonore: Vent, Soleil, Pluie (“Wind, Sun, Rain”). The talk offered deep insights into her creative process and the technical and ecological thinking behind the installation.

In the heart of Saint-Denis, just north of Paris, Swiss sound artist Nadine Schütz has reimagined the way we move through and experience urban space. Her project Promenade Sonore: Vent, Soleil, Pluie (“Wind, Sun, Rain”) is not just a public art installation—it’s a multi-sensory experience that turns an ordinary walk across a footbridge into an acoustic encounter with the environment.

Commissioned by Plaine Commune and developed in close collaboration with architect-engineer Marc Mimram, the installation is located on the Pleyel footbridge, a key link between the neighborhoods of Pleyel and La Plaine. Rather than adding passive sound or music, Schütz has embedded three sculptural sound instruments directly into the architecture of the bridge, each one activated by a different natural element: wind, sun, and rain.

These instruments aren’t just symbolic; they actually respond to the environment in real time. Wind passes through a metal structure that produces soft, organ-like tones. When sunlight hits specific points, it activates solar-powered chimes or sound emitters. During rainfall, the structure becomes percussive, resonating with the rhythm of droplets. The bridge becomes a living, breathing instrument that reacts to weather conditions, turning nature into both performer and composer.

What makes Promenade Sonore truly compelling is how seamlessly it blends technology, ecology, and design. It’s not loud or intrusive—it doesn’t drown out the urban soundscape. Instead, it subtly enhances the auditory experience of the city, encouraging passersby to slow down and listen. It transforms a utilitarian space into a space of poetic reflection.

Schütz’s work is rooted in the idea that sound can deepen our connection to place. In this project, she brings attention to the sonic qualities of weather and architecture—things we often overlook in our fast-paced, screen-driven lives. The soundscape is never the same twice: it shifts with the wind, the angle of the sun, or the mood of the rain. Every walk across the bridge is a unique composition.

More than just an artistic gesture, Promenade Sonore is part of a broader vision of “land-sound” design—a practice Schütz has pioneered that treats sound as an essential component of landscape and urban planning. In doing so, she challenges traditional boundaries between art, science, and infrastructure.

Visit of the Pleyel footbridge

We had the chance to visit the Pleyel footbridge ourselves—and it was a one-of-a-kind experience. Walking across the bridge, immersed in the subtle interplay of environmental sound and sculptural form, was both meditative and inspiring. While on site, we also conducted our own field recordings to capture the dynamic soundscape as it unfolded in real time. Listening through headphones, the bridge became even more alive—each gust of wind, each shifting light pattern, each ambient tone weaving into a delicate, ever-changing composition.

IRCAM Forum Workshops 2025 – ACIDS

From the 26th to the 28th of March, we (the sound design master’s, second semester) had the incredible opportunity to visit IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris as part of a student excursion. For anyone passionate about sound, music technology, and AI, IRCAM is like stepping into new fields of research and discussion and seeing prototypes in action. One of my personal highlights was learning about the ACIDS team (Artificial Creative Intelligence and Data Science) and their research projects: RAVE (Real-time Audio Variational autoEncoder) and AFTER (Audio Features Transfer and Exploration in Real-time).

ACIDS – Team

The ACIDS team is a multidisciplinary group of researchers working at the intersection of machine learning, sound synthesis, and real-time audio processing. Their name stands for Artificial Creative Intelligence and Data Science, reflecting their broad focus on computational audio research. During our visit, they gave us an inside look at their latest developments, including demonstrations from the IRCAM Forum Workshop (March 26–28, 2025), where they showcased some of their most exciting advancements. Besides their really good and catchy (and also a bit funny) presentation, I want to showcase two projects.

RAVE (Real-Time Neural Audio Synthesis)

One of the most impressive projects we explored was RAVE (Real-time Audio Variational autoEncoder), a deep learning model for high-quality audio synthesis and transformation. Unlike traditional digital signal processing, RAVE uses a latent space representation of sound, allowing for intuitive and expressive real-time manipulation.

Overall architecture of the proposed approach. Blocks in blue are the only ones optimized, while blocks in grey are fixed or frozen operations.

Key Innovations

  1. Two-Stage Training:
    • Stage 1: Learns compact latent representations using a spectral loss.
    • Stage 2: Fine-tunes the decoder with adversarial training for ultra-realistic audio.
  2. Blazing Speed:
    • Runs 20× faster than real-time on a laptop CPU, thanks to a multi-band decomposition technique.
  3. Precision Control:
    • Post-training latent space analysis balances reconstruction quality vs. compactness.
    • Enables timbre transfer and signal compression (2048:1 ratio).

Performance

  • Outperforms NSynth and SING in audio quality (MOS: 3.01 vs. 2.68/1.15) with fewer parameters (17.6M).
  • Handles polyphonic music and speech, unlike many restricted models.

You can explore RAVE’s code and research on their GitHub repository and learn more about its applications on the IRCAM website.

AFTER

While many AI audio tools focus on raw sound generation, what sets AFTER (Audio Features Transfer and Exploration in Real-time) apart is its sophisticated control mechanisms, a priority highlighted in recent research from the ACIDS team. As their paper states:

“Deep generative models now synthesize high-quality audio signals, shifting the critical challenge from audio quality to control capabilities. While text-to-music generation is popular, explicit control and example-based style transfer better capture the intents of artists.”

How AFTER Achieves Precision

The team’s breakthrough lies in separating local and global audio information:

  • Global (timbre/style): Captured from a reference sound (e.g., a vintage synth’s character).
  • Local (structure): Controlled via MIDI, text prompts, or another audio’s rhythm/melody.

This is enabled by a diffusion autoencoder that builds two disentangled representation spaces, enforced through:

  1. Adversarial training to prevent overlap between timbre and structure.
  2. A two-stage training strategy for stability.
Detailed overview of our method. Input signal(s) are passed to structure and timbre encoders, which provide semantic encodings that are further disentangled through confusion maximization. These are used to condition a latent diffusion model to generate the output signal. Input signals are identical during training but distinct at inference.

Why Musicians Care

In tests, AFTER outperformed existing models in:

  • One-shot timbre transfer (e.g., making a piano piece sound like a harp).
  • MIDI-to-audio generation with precise stylistic control.
  • Full “cover version” generation—transforming a classical piece into jazz while preserving its melody.

Check out AFTER’s progress on GitHub and stay updated via IRCAM’s research page.

References

Caillon, Antoine, and Philippe Esling. “RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis.” arXiv preprint arXiv:2111.05011 (2021). https://arxiv.org/abs/2111.05011.

Demerle, Nils, Philippe Esling, Guillaume Doras, and David Genova. “Combining Audio Control and Style Transfer Using Latent Diffusion.”