

Jan-nano Technical Report

June 28, 2025
Authors: Alan Dao, Dinh Bach Vu
cs.AI

Abstract

Most language models face a fundamental tradeoff: powerful capabilities require substantial computational resources. We shatter this constraint with Jan-nano, a 4B-parameter language model that redefines efficiency through radical specialization: instead of trying to know everything, it masters the art of finding anything instantly. Fine-tuned from Qwen3-4B using our novel multi-stage RLVR system, which completely eliminates reliance on next-token-prediction training (SFT), Jan-nano achieves 83.2% on the SimpleQA benchmark with MCP integration while running on consumer hardware. With a 128K context length, Jan-nano proves that intelligence isn't about scale; it's about strategy.
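The abstract's defining claim is that training uses RLVR (reinforcement learning with verifiable rewards) rather than supervised next-token prediction. The core of any RLVR setup is a reward that can be checked programmatically instead of scored by a learned model. The sketch below illustrates that idea with a simple normalized exact-match check; the normalization rule and function names are illustrative assumptions, not Jan-nano's actual reward implementation.

```python
import re


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for robust matching.

    This normalization is an illustrative assumption, not the paper's method.
    """
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return " ".join(text.split())


def verifiable_reward(model_answer: str, gold_answer: str) -> float:
    """Return 1.0 if the model's answer matches the reference, else 0.0.

    Unlike a learned reward model, this check is deterministic and
    programmatically verifiable -- the defining property of RLVR.
    """
    return 1.0 if normalize(model_answer) == normalize(gold_answer) else 0.0
```

In a short-answer benchmark setting like SimpleQA, a binary reward of this kind can directly drive a policy-gradient update, with no human preference data or learned reward model in the loop.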