Nacrith: Neural Lossless Compression via Ensemble Context Modeling and High-Precision CDF Coding
February 23, 2026
Author: Roberto Tacconelli
cs.AI
Abstract
We present Nacrith, a lossless compression system that combines a 135M-parameter transformer language model (SmolLM2-135M) with an ensemble of lightweight online predictors and a 32-bit arithmetic coder. Beyond the base LLM-plus-arithmetic-coding paradigm, Nacrith introduces eight contributions: (1) a CDF precision upgrade from 2^16 to 2^24 that eliminates ~75% of the quantization overhead caused by minimum-probability floors in large vocabularies; (2) a token-level N-gram model for fast local predictions; (3) an adaptive log-space bias head that corrects per-document LLM errors via online gradient descent; (4) confidence-based LLM skipping that accelerates highly predictable tokens; (5) a hybrid binary format (NC06) extending neural compression to arbitrary binary files -- to our knowledge a first among LLM-based compressors; (6) a llama.cpp inference backend achieving ~7x faster single-token decoding than PyTorch; (7) parallel multi-GPU compression across up to 8 workers; and (8) a native KV-cache sliding window that reduces per-slide compute cost by ~37x. The system requires only ~500 MB of GGUF weights and ~1.2 GB of VRAM per worker, and runs on consumer GPUs.
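Contribution (1) addresses a structural cost of arithmetic coding: every symbol in the quantized CDF must keep a nonzero integer frequency, so with a large vocabulary the rare tokens' floors steal probability mass from the token the model is actually confident about. The sketch below (a minimal illustration, not Nacrith's actual coder; the quantization routine and the toy distribution are my own) shows how the overhead shrinks when the CDF precision rises from 2^16 to 2^24 over a ~49k-entry vocabulary, the size used by SmolLM2:

```python
import numpy as np

def quantize_cdf(probs, precision_bits):
    """Quantize a probability vector into an integer CDF with a
    minimum frequency of 1 per symbol (arithmetic coders require
    every symbol to retain nonzero mass)."""
    total = 1 << precision_bits
    freqs = np.maximum((probs * total).astype(np.int64), 1)
    # Renormalize so frequencies sum exactly to 2**precision_bits,
    # taking the correction from the most probable symbol.
    freqs[np.argmax(freqs)] += total - freqs.sum()
    return np.concatenate(([0], np.cumsum(freqs)))

def coding_overhead_bits(probs, cdf, precision_bits):
    """Expected extra bits per token from quantization: the KL
    divergence between the true and the quantized distribution."""
    q = np.diff(cdf) / (1 << precision_bits)
    return float(np.sum(probs * (np.log2(probs) - np.log2(q))))

# Toy skewed distribution: one dominant token plus ~49k rare
# tokens, mimicking a confident LLM prediction step.
vocab = 49152
probs = np.full(vocab, 1e-7)
probs[0] = 1.0 - probs[1:].sum()

for bits in (16, 24):
    cdf = quantize_cdf(probs, bits)
    print(bits, round(coding_overhead_bits(probs, cdf, bits), 4))
```

At 16 bits the ~49k floors consume most of the 65,536 available frequency units, so the dominant token is forced far below its true probability and the per-token overhead is large; at 24 bits the floors are negligible relative to the 2^24 budget and the overhead collapses.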
On alice29.txt (Canterbury Corpus, 152 KB), Nacrith achieves 0.918 bits per byte (bpb)--outperforming gzip by 3.1x, bzip2 by 2.5x, CMIX v21 by 44%, and ts_zip by 20%, while compressing below the 0th-, 1st-, and 2nd-order byte-level Shannon entropy bounds. On enwik8 (100 MB), Nacrith achieves 0.9389 bpb (11.74%), surpassing ts_zip (~1.11 bpb) by 15% and FineZip (1.024 bpb) by 8% despite using a 60x smaller model with no fine-tuning. An out-of-distribution evaluation on a document published after the model's training cutoff confirms these gains are not memorization artifacts, achieving 0.723 bpb on unseen text.
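Contribution (3), the adaptive log-space bias head, can be illustrated with a minimal sketch. The abstract does not specify Nacrith's exact update rule; the additive per-vocabulary bias, the learning rate, and the toy setup below are assumptions showing only the general mechanism of online gradient descent on each decoded token's cross-entropy:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class LogBiasHead:
    """Hypothetical sketch: a per-document additive bias b on the
    LLM's logits, updated online after each decoded token."""
    def __init__(self, vocab_size, lr=0.02):
        self.b = np.zeros(vocab_size)
        self.lr = lr

    def predict(self, llm_logits):
        return softmax(llm_logits + self.b)

    def update(self, llm_logits, target):
        # Gradient of -log p[target] w.r.t. b is (p - onehot).
        p = self.predict(llm_logits)
        grad = p.copy()
        grad[target] -= 1.0
        self.b -= self.lr * grad

# Usage: a base model that persistently underestimates token 3 in
# this document; the head learns the correction online.
head = LogBiasHead(vocab_size=8)
logits = np.zeros(8)              # uniform base prediction
before = head.predict(logits)[3]
for _ in range(200):
    head.update(logits, target=3)
after = head.predict(logits)[3]
print(round(before, 3), round(after, 3))
```

Because the correction lives in log space, it composes with the LLM's logits at no extra inference cost and resets per document, matching the "per-document" framing in the abstract.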