Proxy Compression for Language Modeling
February 4, 2026
Authors: Lin Zheng, Xinyu Li, Qian Liu, Xiachong Feng, Lingpeng Kong
cs.AI
Abstract
Modern language models are trained almost exclusively on token sequences produced by a fixed tokenizer, an external lossless compressor typically operating over UTF-8 byte sequences, thereby coupling the model to that compressor. This work introduces proxy compression, an alternative training scheme that preserves the efficiency benefits of compressed inputs while providing an end-to-end, raw-byte interface at inference time. During training, a single language model is jointly trained on raw byte sequences and on compressed views generated by external compressors; through this process, the model learns to internally align compressed sequences with raw bytes. This alignment enables strong transfer between the two formats, even when training predominantly on compressed inputs, which are discarded at inference. Extensive experiments on code language modeling demonstrate that proxy compression substantially improves training efficiency and significantly outperforms pure byte-level baselines under fixed compute budgets. As model scale increases, these gains become more pronounced, and proxy-trained models eventually match or rival tokenizer-based approaches, all while operating solely on raw bytes and retaining the inherent robustness of byte-level modeling.
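To make the training scheme concrete, below is a minimal sketch of how a single training example might be assembled under proxy compression. Everything here is illustrative rather than drawn from the paper: `toy_compress`, its byte-pair packing scheme, and the 90% compressed-view mixing ratio are assumptions standing in for the external compressors and data mixture actually used.

```python
# Hypothetical sketch of proxy-compression data assembly.
# `toy_compress` and the 0.9 mixing ratio are illustrative assumptions,
# not the paper's actual compressor or training mixture.
import random

BYTE_VOCAB = 256  # ids 0-255 are raw bytes; compressed "tokens" get ids >= 256


def toy_compress(data: bytes) -> list[int]:
    """Toy lossless compressor: pack each consecutive byte pair into one id >= 256."""
    out, i = [], 0
    while i + 1 < len(data):
        out.append(BYTE_VOCAB + data[i] * 256 + data[i + 1])
        i += 2
    if i < len(data):  # an odd trailing byte stays as a raw-byte id
        out.append(data[i])
    return out


def make_training_example(text: str, p_compressed: float = 0.9) -> list[int]:
    """Return either the compressed view or the raw-byte view of the same text.

    Training mostly on the compressed view keeps sequences short (efficiency),
    while the occasional raw-byte view lets the model align the two formats.
    """
    raw = text.encode("utf-8")
    if random.random() < p_compressed:
        return toy_compress(raw)  # compressed view: a training-time proxy only
    return list(raw)              # raw-byte view: the inference-time interface


if __name__ == "__main__":
    example = make_training_example("def add(a, b):\n    return a + b\n")
    print(len(example), example[:10])
```

The point the sketch tries to capture is that the compressed view is only a training-time proxy: the model's vocabulary covers both formats, but at inference it consumes raw bytes alone, so the external compressor can be discarded.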