SwimBird: Eliciting Switchable Reasoning Mode in Hybrid Autoregressive MLLMs
February 5, 2026
Authors: Jintao Tong, Shilin Yan, Hongwei Xue, Xiaojun Tang, Kunyu Shi, Guannan Zhang, Ruixuan Li, Yixiong Zou
cs.AI
Abstract
Multimodal Large Language Models (MLLMs) have made remarkable progress in multimodal perception and reasoning by bridging vision and language. However, most existing MLLMs perform reasoning primarily with textual chain-of-thought (CoT), which limits their effectiveness on vision-intensive tasks. Recent approaches inject a fixed number of continuous hidden states as "visual thoughts" into the reasoning process, improving visual performance but often at the cost of degraded text-based logical reasoning. We argue that the core limitation lies in a rigid, pre-defined reasoning pattern that cannot adaptively choose the most suitable thinking modality for different user queries. We introduce SwimBird, a reasoning-switchable MLLM that dynamically switches among three reasoning modes conditioned on the input: (1) text-only reasoning, (2) vision-only reasoning (continuous hidden states as visual thoughts), and (3) interleaved vision-text reasoning. To enable this capability, we adopt a hybrid autoregressive formulation that unifies next-token prediction for textual thoughts with next-embedding prediction for visual thoughts, and design a systematic reasoning-mode curation strategy to construct SwimBird-SFT-92K, a diverse supervised fine-tuning dataset covering all three reasoning patterns. By enabling flexible, query-adaptive mode selection, SwimBird preserves strong textual logic while substantially improving performance on vision-dense tasks. Experiments across diverse benchmarks covering textual reasoning and challenging visual understanding demonstrate that SwimBird achieves state-of-the-art results and robust gains over prior fixed-pattern multimodal reasoning methods.
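To make the hybrid autoregressive formulation concrete, the sketch below shows how discrete next-token prediction (textual thoughts) and continuous next-embedding prediction (visual thoughts) can coexist in one decoding loop. This is not the authors' implementation: the toy model, the control token `BOV`, the fixed-length visual segment, and all sizes are hypothetical and chosen only for illustration.

```python
# Minimal sketch of hybrid autoregressive decoding (assumed, not the paper's code):
# discrete next-token prediction for textual thoughts, continuous next-embedding
# prediction for visual thoughts, switched by a hypothetical control token.

import torch
import torch.nn as nn

VOCAB = 1000     # hypothetical vocabulary size
D = 64           # hidden / embedding dimension
BOV, EOS = 1, 2  # assumed control tokens: begin-visual-thought, end-of-sequence


class HybridARModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)                    # discrete tokens -> embeddings
        layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D, VOCAB)                     # next-token head (text thoughts)
        self.emb_head = nn.Linear(D, D)                        # next-embedding head (visual thoughts)

    def forward(self, seq_embeds):
        # causal mask so each position attends only to the past
        L = seq_embeds.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(L)
        h = self.backbone(seq_embeds, mask=mask)
        return h[:, -1]                                        # hidden state at the last position


@torch.no_grad()
def generate(model, prompt_ids, max_steps=32, num_visual=4):
    """Decode while switching between text mode and visual mode.

    Emitting the assumed BOV token enters continuous next-embedding
    prediction for num_visual steps, then decoding returns to text mode.
    """
    seq = model.embed(prompt_ids)          # (1, T, D): the running sequence of embeddings
    out_tokens, mode, visual_left = [], "text", 0
    for _ in range(max_steps):
        h = model(seq)                     # (1, D)
        if mode == "text":
            tok = model.lm_head(h).argmax(-1)      # greedy next token
            out_tokens.append(tok.item())
            if tok.item() == EOS:
                break
            if tok.item() == BOV:                  # switch into visual-thought mode
                mode, visual_left = "visual", num_visual
            nxt = model.embed(tok)                 # re-embed the chosen token
        else:
            nxt = model.emb_head(h)                # continuous visual thought, fed back directly
            visual_left -= 1
            if visual_left == 0:
                mode = "text"                      # return to text mode after the segment
        seq = torch.cat([seq, nxt.unsqueeze(1)], dim=1)
    return out_tokens


if __name__ == "__main__":
    model = HybridARModel().eval()
    prompt = torch.randint(3, VOCAB, (1, 5))       # random prompt ids outside the control range
    print(generate(model, prompt))
```

In the actual model, the decision to enter or leave the visual-thought mode would presumably be learned from SwimBird-SFT-92K rather than following the fixed segment length used in this toy loop, which is what allows query-adaptive selection among the three reasoning modes.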