Covo-Audio Technical Report
February 10, 2026
Authors: Wenfu Wang, Chenxing Li, Liqiang Zhang, Yiyang Zhao, Yuxiang Zou, Hanzhao Li, Mingyu Cui, Hao Zhang, Kun Wei, Le Xu, Zikang Huang, Jiajun Xu, Jiliang Hu, Xiang He, Zeyu Xie, Jiawen Kang, Youjun Chen, Meng Yu, Dong Yu, Rilin Chen, Linlin Di, Shulin Feng, Na Hu, Yang Liu, Bang Wang, Shan Yang
cs.AI
Abstract
In this work, we present Covo-Audio, a 7B-parameter end-to-end large audio language model (LALM) that directly processes continuous audio inputs and generates audio outputs within a single unified architecture. Through large-scale curated pretraining and targeted post-training, Covo-Audio achieves state-of-the-art or competitive performance among models of comparable scale across a broad spectrum of tasks, including speech-text modeling, spoken dialogue, speech understanding, audio understanding, and full-duplex voice interaction. Extensive evaluations demonstrate that the pretrained foundation model exhibits strong speech-text comprehension and semantic reasoning capabilities on multiple benchmarks, outperforming representative open-source models of comparable scale. Furthermore, Covo-Audio-Chat, the dialogue-oriented variant, demonstrates strong spoken conversational abilities, including understanding, contextual reasoning, instruction following, and generating contextually appropriate and empathetic responses, validating its applicability to real-world conversational assistant scenarios. Covo-Audio-Chat-FD, the full-duplex variant, achieves substantial improvements in both spoken dialogue capability and full-duplex interaction behavior, demonstrating its robustness in practical use. To reduce the cost of deploying end-to-end LALMs in natural conversational systems, we propose an intelligence-speaker decoupling strategy that separates dialogue intelligence from voice rendering, enabling flexible voice customization with minimal text-to-speech (TTS) data while preserving dialogue performance. Overall, our results highlight the strong potential of 7B-scale models to integrate sophisticated audio intelligence with high-level semantic reasoning, and suggest a scalable path toward more capable and versatile LALMs.
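The intelligence-speaker decoupling strategy mentioned above separates dialogue intelligence from voice rendering. The Python sketch below illustrates one plausible way such a split could be organized; the class names and interfaces (DialogueLALM, VoiceRenderer, SemanticResponse) are assumptions for illustration only, not the Covo-Audio implementation.

# Hypothetical sketch of an intelligence-speaker decoupled pipeline.
# All names and interfaces here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SemanticResponse:
    # Speaker-agnostic dialogue output, e.g. response text or semantic tokens.
    text: str

class DialogueLALM:
    # Stand-in for the end-to-end model that handles understanding,
    # reasoning, and response planning (the "intelligence" half).
    def respond(self, user_audio: bytes) -> SemanticResponse:
        raise NotImplementedError

class VoiceRenderer:
    # Stand-in for a lightweight voice-rendering (TTS) module that could be
    # adapted to a new voice with a small amount of TTS data.
    def __init__(self, speaker_id: str) -> None:
        self.speaker_id = speaker_id

    def render(self, response: SemanticResponse) -> bytes:
        raise NotImplementedError

def converse(lalm: DialogueLALM, renderer: VoiceRenderer, user_audio: bytes) -> bytes:
    # Because intelligence and voice rendering are decoupled, swapping the
    # renderer changes the output voice without retraining the dialogue model.
    semantic = lalm.respond(user_audio)
    return renderer.render(semantic)

Under this assumed structure, customizing the assistant's voice would only require fitting a new VoiceRenderer, while the dialogue model and its conversational performance remain untouched.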