
TADA! Tuning Audio Diffusion Models through Activation Steering

February 12, 2026
Authors: Łukasz Staniszewski, Katarzyna Zaleska, Mateusz Modrzejewski, Kamil Deja
cs.AI

Abstract

Audio diffusion models can synthesize high-fidelity music from text, yet their internal mechanisms for representing high-level concepts remain poorly understood. In this work, we use activation patching to demonstrate that distinct semantic musical concepts, such as the presence of specific instruments, vocals, or genre characteristics, are controlled by a small, shared subset of attention layers in state-of-the-art audio diffusion architectures. We then demonstrate that applying Contrastive Activation Addition and Sparse Autoencoders in these layers enables more precise control over the generated audio, indicating a direct benefit of this layer specialization. By steering activations of the identified layers, we can alter specific musical elements with high precision, such as modulating tempo or changing a track's mood.
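The abstract does not spell out the steering mechanism, but Contrastive Activation Addition generally works by computing a direction vector as the difference between mean activations collected from prompts that do and do not express a target concept, then adding a scaled copy of that vector to the layer's activations at generation time. The sketch below illustrates this idea with NumPy on synthetic data; the function names (`contrastive_direction`, `steer`) and the scaling parameter `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def contrastive_direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Steering vector: mean activation with the concept minus mean without it.

    pos_acts, neg_acts: (num_samples, hidden_dim) activations captured at the
    identified attention layer for concept-positive / concept-negative prompts.
    """
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def steer(activations: np.ndarray, direction: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the scaled steering vector to every position's activation at that layer."""
    return activations + alpha * direction

# Synthetic stand-in for activations captured from a diffusion model's layer.
rng = np.random.default_rng(0)
hidden_dim = 8
pos = rng.normal(loc=0.5, scale=1.0, size=(16, hidden_dim))   # e.g. "with vocals"
neg = rng.normal(loc=-0.5, scale=1.0, size=(16, hidden_dim))  # e.g. "instrumental"

v = contrastive_direction(pos, neg)
acts = rng.normal(size=(4, hidden_dim))       # activations during a new generation
steered = steer(acts, v, alpha=2.0)           # push the generation toward the concept
```

In practice the hook would be registered only on the small subset of attention layers identified by activation patching, since steering elsewhere has little effect on the concept.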
