

DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models

December 1, 2025
作者: Wanpeng Zhang, Ye Wang, Hao Luo, Haoqi Yuan, Yicheng Feng, Sipeng Zheng, Qin Jin, Zongqing Lu
cs.AI

Abstract

Vision-Language-Action (VLA) models trained with flow matching have demonstrated impressive capabilities on robotic manipulation tasks. However, their performance often degrades under distribution shift and on complex multi-step tasks, suggesting that the learned representations may not robustly capture task-relevant semantics. We introduce DiG-Flow, a principled framework that enhances VLA robustness through geometric regularization. Our key insight is that the distributional discrepancy between observation and action embeddings provides a meaningful geometric signal: lower transport cost indicates compatible representations, while higher cost suggests potential misalignment. DiG-Flow computes a discrepancy measure between empirical distributions of observation and action embeddings, maps it to a modulation weight via a monotone function, and applies residual updates to the observation embeddings before flow matching. Crucially, this intervention operates at the representation level without modifying the flow matching path or target vector field. We provide theoretical guarantees showing that discrepancy-guided training provably decreases the training objective, and that guided inference refinement converges with contraction. Empirically, DiG-Flow integrates into existing VLA architectures with negligible overhead and consistently improves performance, with particularly pronounced gains on complex multi-step tasks and under limited training data.
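To make the described mechanism concrete, here is a minimal sketch of a discrepancy-guided residual update on observation embeddings. The abstract does not specify the discrepancy measure, the monotone map, or the exact residual form, so the choices below (a mean-feature distance as a stand-in for transport cost, a sigmoid as the monotone function, and a step toward the action-embedding mean) are illustrative assumptions, not the paper's implementation.

```python
import torch


def discrepancy_guided_update(obs_emb: torch.Tensor,
                              act_emb: torch.Tensor,
                              alpha: float = 0.1):
    """Hypothetical sketch of DiG-Flow's representation-level intervention.

    obs_emb: (B, D) observation embeddings
    act_emb: (B, D) action embeddings
    Returns refined observation embeddings and the scalar discrepancy.
    """
    # Discrepancy between the empirical distributions of the two embedding
    # sets. A simple mean-feature (MMD-like) distance is used here as a
    # placeholder for the transport-cost-style measure described in the paper.
    disc = torch.norm(obs_emb.mean(dim=0) - act_emb.mean(dim=0), p=2)

    # Monotone map from discrepancy to a modulation weight in (0, 1):
    # higher discrepancy -> stronger correction.
    w = torch.sigmoid(disc)

    # Residual update applied to the observation embeddings before flow
    # matching; the flow-matching path and target vector field are untouched.
    correction = act_emb.mean(dim=0, keepdim=True) - obs_emb
    obs_emb_refined = obs_emb + alpha * w * correction
    return obs_emb_refined, disc
```

In this sketch, a low discrepancy yields a small weight and leaves the observation embeddings nearly unchanged, while a high discrepancy nudges them toward the action-embedding statistics, mirroring the paper's claim that the intervention operates purely at the representation level.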