

Jan-nano Technical Report

June 28, 2025
Authors: Alan Dao, Dinh Bach Vu
cs.AI

Abstract

Most language models face a fundamental tradeoff: powerful capabilities require substantial computational resources. We shatter this constraint with Jan-nano, a 4B-parameter language model that redefines efficiency through radical specialization: instead of trying to know everything, it masters the art of finding anything instantly. Fine-tuned from Qwen3-4B using our novel multi-stage RLVR system, which completely eliminates reliance on next-token prediction training (SFT), Jan-nano achieves 83.2% on the SimpleQA benchmark with MCP integration while running on consumer hardware. With a 128K context length, Jan-nano proves that intelligence isn't about scale; it's about strategy.