

T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground

December 11, 2025
Authors: Dmitrii Stoianov, Danil Taranets, Olga Tsymboi, Ramil Latypov, Almaz Dautov, Vladislav Kruglikov, Nikita Surkov, German Abramov, Pavel Gein, Dmitry Abulkhanov, Mikhail Gashkov, Viktor Zelenkovskiy, Artem Batalov, Aleksandr Medvedev, Anatolii Potapov
cs.AI

Abstract

We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.
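As a concrete illustration of how the released artifacts can be used, the sketch below loads the open-weight checkpoint from Hugging Face with the `transformers` library and toggles between the reasoning and non-reasoning modes. The repository id `t-tech/T-pro-it-2.0` and the `enable_thinking` chat-template flag are assumptions for illustration, not details taken from the abstract; consult the official model card for the exact identifiers.

```python
# Minimal sketch (assumptions noted): loading an open-weight checkpoint from
# Hugging Face and querying it in reasoning or direct-answer mode.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "t-tech/T-pro-it-2.0"  # assumed repository id; see the official model card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

messages = [{"role": "user", "content": "Сколько простых чисел меньше 20?"}]

# Hybrid-reasoning models commonly switch modes via a chat-template flag;
# `enable_thinking` is an assumed name for that flag.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,   # True -> emit a reasoning trace; False -> answer directly
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The released EAGLE draft weights are intended to be paired with a speculative-decoding-capable inference stack, such as the authors' adapted pipeline, to obtain the latency reductions described in the abstract.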