

Proxy Compression for Language Modeling

February 4, 2026
Authors: Lin Zheng, Xinyu Li, Qian Liu, Xiachong Feng, Lingpeng Kong
cs.AI

Abstract

Modern language models are trained almost exclusively on token sequences produced by a fixed tokenizer, an external lossless compressor that typically operates on UTF-8 byte sequences, thereby coupling the model to that compressor. This work introduces proxy compression, an alternative training scheme that preserves the efficiency benefits of compressed inputs while providing an end-to-end, raw-byte interface at inference time. During training, a single language model is jointly trained on raw byte sequences and compressed views generated by external compressors; through this process, the model learns to internally align compressed sequences and raw bytes. This alignment enables strong transfer between the two formats, even when training predominantly on compressed inputs, which are discarded at inference. Extensive experiments on code language modeling demonstrate that proxy compression substantially improves training efficiency and significantly outperforms pure byte-level baselines under fixed compute budgets. As model scale increases, these gains become more pronounced, and proxy-trained models eventually match or rival tokenizer-based approaches, all while operating solely on raw bytes and retaining the inherent robustness of byte-level modeling.
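To make the training scheme concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how one training stream could interleave raw-byte and compressed views of the same text under a single shared vocabulary. The names `toy_compress`, `COMPRESS_RATIO`, and the byte-ID offset of 256 are assumptions for illustration; in the paper's setting the external compressor would be a real tokenizer-style compressor rather than the toy word-level stand-in used here.

```python
"""Sketch of proxy-compression-style data construction (assumptions labeled below)."""
import random

BYTE_VOCAB = 256       # raw UTF-8 bytes occupy IDs 0-255 (assumed layout)
COMPRESS_RATIO = 0.9   # assumed fraction of sequences drawn from the compressed view

def toy_compress(text: str) -> list[int]:
    """Hypothetical external compressor: maps whitespace-split words to IDs.
    A real setup would use an actual lossless compressor such as a BPE tokenizer."""
    vocab: dict[str, int] = {}
    return [vocab.setdefault(word, len(vocab)) for word in text.split()]

def byte_view(text: str) -> list[int]:
    """Raw-byte view: one ID per UTF-8 byte, in [0, 255]."""
    return list(text.encode("utf-8"))

def compressed_view(text: str) -> list[int]:
    """Compressed view: external-compressor IDs shifted past the byte range,
    so both views share one vocabulary for a single model."""
    return [BYTE_VOCAB + tid for tid in toy_compress(text)]

def make_training_sequence(text: str, rng: random.Random) -> list[int]:
    """Sample one training sequence: mostly the compressed view, occasionally
    raw bytes, so one model sees both formats and can align them internally."""
    if rng.random() < COMPRESS_RATIO:
        return compressed_view(text)
    return byte_view(text)

if __name__ == "__main__":
    rng = random.Random(0)
    doc = "def add(a, b):\n    return a + b\n"
    for _ in range(3):
        seq = make_training_sequence(doc, rng)
        print(len(seq), seq[:8])
    # At inference time only byte_view() is used; the compressed view is discarded.
```

The shared-vocabulary layout here is one simple way to let a single model consume both formats; the key point from the abstract is that the compressed view serves only as a training-time proxy, while inference operates end-to-end on raw bytes.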