Do What? Teaching Vision-Language-Action Models to Reject the Impossible
August 22, 2025
Authors: Wen-Han Hsieh, Elvis Hsieh, Dantong Niu, Trevor Darrell, Roei Herzig, David M. Chan
cs.AI
Abstract
Recently, Vision-Language-Action (VLA) models have demonstrated strong
performance on a range of robotic tasks. These models rely on multimodal
inputs, with language instructions playing a crucial role -- not only in
predicting actions, but also in robustly interpreting user intent, even when
the requests are impossible to fulfill. In this work, we investigate how VLAs
can recognize, interpret, and respond to false-premise instructions: natural
language commands that reference objects or conditions absent from the
environment. We propose Instruct-Verify-and-Act (IVA), a unified framework that
(i) detects when an instruction cannot be executed due to a false premise, (ii)
engages in language-based clarification or correction, and (iii) grounds
plausible alternatives in perception and action. Towards this end, we construct
a large-scale instruction tuning setup with structured language prompts and
train a VLA model capable of handling both accurate and erroneous requests. Our
approach leverages a contextually augmented, semi-synthetic dataset containing
paired positive and false-premise instructions, enabling robust detection and
natural language correction. Our experiments show that IVA improves false
premise detection accuracy by 97.56% over baselines, while increasing
successful responses in false-premise scenarios by 50.78%.
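To make the verify-then-act idea concrete, below is a minimal, hypothetical Python sketch of the control flow the abstract describes: check an instruction's premise against perception, execute when it holds, and otherwise refuse with a natural-language clarification and a grounded alternative. The `PairedInstruction` record, the `iva_respond` function, and the toy `scene` set are illustrative assumptions, not the paper's actual data format or implementation.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Mimics the Instruct-Verify-and-Act idea: verify an instruction's premise
# against perception before acting, and respond in language when it fails.

from dataclasses import dataclass

@dataclass
class PairedInstruction:
    """A positive instruction and its false-premise counterpart (assumed pairing format)."""
    positive: str       # e.g. "pick up the red mug"
    false_premise: str  # e.g. "pick up the blue mug" when no blue mug exists
    target_object: str  # object the false-premise variant wrongly references

def iva_respond(instruction: str, referenced_object: str, scene_objects: set[str]) -> str:
    """Verify the premise, then either act or clarify in natural language."""
    if referenced_object in scene_objects:
        # Premise holds: hand off to the low-level action policy (stubbed here).
        return f"EXECUTE: {instruction}"
    # Premise fails: refuse and, if possible, propose a grounded alternative.
    alternatives = sorted(scene_objects)
    if alternatives:
        return (f"I don't see a {referenced_object} in the scene. "
                f"Did you mean the {alternatives[0]}?")
    return f"I can't do that: there is no {referenced_object} here."

if __name__ == "__main__":
    scene = {"red mug", "spoon"}
    pair = PairedInstruction(
        positive="pick up the red mug",
        false_premise="pick up the blue mug",
        target_object="blue mug",
    )
    print(iva_respond(pair.positive, "red mug", scene))               # executes
    print(iva_respond(pair.false_premise, pair.target_object, scene)) # clarifies
```

In this reading, the paired positive/false-premise examples supervise both branches: the model learns when to pass an instruction through to action and when to respond with a correction instead.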