Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models

April 3, 2025
Authors: Mateusz Pach, Shyamgopal Karthik, Quentin Bouniot, Serge Belongie, Zeynep Akata
cs.AI

Abstract

Sparse Autoencoders (SAEs) have recently been shown to enhance interpretability and steerability in Large Language Models (LLMs). In this work, we extend the application of SAEs to Vision-Language Models (VLMs), such as CLIP, and introduce a comprehensive framework for evaluating monosemanticity in vision representations. Our experimental results reveal that SAEs trained on VLMs significantly enhance the monosemanticity of individual neurons while also exhibiting hierarchical representations that align well with expert-defined structures (e.g., the iNaturalist taxonomy). Most notably, we demonstrate that applying SAEs to intervene on a CLIP vision encoder directly steers the output of multimodal LLMs (e.g., LLaVA) without any modification to the underlying model. These findings emphasize the practicality and efficacy of SAEs as an unsupervised approach for enhancing both the interpretability and control of VLMs.
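For intuition, here is a minimal sketch of the two ingredients the abstract describes: a sparse autoencoder trained on frozen CLIP vision features, and a feature-clamping intervention that steers the features fed to a multimodal LLM such as LLaVA. The ReLU-plus-L1 architecture, the `steer` helper, and its `feature_idx`/`strength` arguments are illustrative assumptions, not the paper's verified implementation.

```python
# Minimal sketch, assuming a standard ReLU SAE with an L1 sparsity
# penalty over frozen CLIP vision features; the paper's exact
# architecture and hyperparameters are not given in the abstract.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Overcomplete dictionary: d_hidden >> d_model, one latent
        # per candidate (ideally monosemantic) concept.
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))   # sparse code
        x_hat = self.decoder(z)           # reconstruction
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coeff: float = 1e-3):
    # Reconstruction fidelity plus an L1 penalty that pushes
    # most latents to zero on any given input.
    recon = (x - x_hat).pow(2).mean()
    sparsity = z.abs().mean()
    return recon + l1_coeff * sparsity

# Steering sketch: clamp one SAE latent on the CLIP features before
# they reach the multimodal LLM. `feature_idx` and `strength` are
# hypothetical knobs for illustration only.
@torch.no_grad()
def steer(sae: SparseAutoencoder, clip_features: torch.Tensor,
          feature_idx: int, strength: float) -> torch.Tensor:
    z = torch.relu(sae.encoder(clip_features))
    z[..., feature_idx] = strength        # intervene on one unit
    return sae.decoder(z)                 # steered features
```

Under these assumptions, the steered reconstruction would replace the original CLIP features at the vision-to-LLM interface, leaving the weights of both the encoder and the LLM untouched.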
