
Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence

April 8, 2024
Authors: Bo Peng, Daniel Goldstein, Quentin Anthony, Alon Albalak, Eric Alcaide, Stella Biderman, Eugene Cheah, Teddy Ferdinan, Haowen Hou, Przemysław Kazienko, Kranthi Kiran GV, Jan Kocoń, Bartłomiej Koptyra, Satyapriya Krishna, Ronald McClelland Jr., Niklas Muennighoff, Fares Obeid, Atsushi Saito, Guangyu Song, Haoqin Tu, Stanisław Woźniak, Ruichong Zhang, Bingchen Zhao, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu
cs.AI

Abstract

We present Eagle (RWKV-5) and Finch (RWKV-6), sequence models improving upon the RWKV (RWKV-4) architecture. Our architectural design advancements include multi-headed matrix-valued states and a dynamic recurrence mechanism that improve expressivity while maintaining the inference efficiency characteristics of RNNs. We introduce a new multilingual corpus with 1.12 trillion tokens and a fast tokenizer based on greedy matching for enhanced multilinguality. We trained four Eagle models, ranging from 0.46 to 7.5 billion parameters, and two Finch models with 1.6 and 3.1 billion parameters, and found that they achieve competitive performance across a wide variety of benchmarks. We release all our models on HuggingFace under the Apache 2.0 license.

Models at: https://huggingface.co/RWKV
Training code at: https://github.com/RWKV/RWKV-LM
Inference code at: https://github.com/RWKV/ChatRWKV
Time-parallel training code at: https://github.com/RWKV/RWKV-infctx-trainer
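The matrix-valued-state recurrence is easiest to see in code. Below is a minimal single-head NumPy sketch of the kind of linear recurrence the abstract describes: the state is a D×D matrix updated by a rank-1 outer product and decayed per channel. The function and parameter names (r, k, v for receptance/key/value, w for the per-channel decay, u for the current-token bonus) are illustrative assumptions drawn from the RWKV line of work, not the paper's optimized kernel; in Eagle the decay w is a learned constant per channel, while Finch's dynamic recurrence would make it input-dependent at each step.

```python
import numpy as np

def eagle_head_recurrence(r, k, v, w, u):
    """Pedagogical single-head sketch of a matrix-valued-state
    linear recurrence in the style of Eagle (RWKV-5).

    r, k, v : (T, D) receptance / key / value sequences for one head.
    w       : (D,)  per-channel decay (static here; an input-dependent
                    w per timestep would correspond to Finch's
                    dynamic recurrence).
    u       : (D,)  bonus weighting applied to the current token.
    Returns : (T, D) outputs.
    """
    T, D = r.shape
    S = np.zeros((D, D))            # matrix-valued recurrent state
    out = np.zeros((T, D))
    for t in range(T):
        kv = np.outer(k[t], v[t])   # rank-1 update, shape (D, D)
        # Read out: bonus-weighted current token plus accumulated state.
        out[t] = r[t] @ (np.diag(u) @ kv + S)
        # Decay the old state per channel, then absorb the new update.
        S = np.diag(w) @ S + kv
    return out
```

The abstract also credits a fast tokenizer based on greedy matching. The sketch below shows only the greedy longest-match principle; the function name and dictionary-based lookup are hypothetical stand-ins, and the released tokenizer is an optimized implementation this does not attempt to reproduce.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization sketch.

    vocab: dict mapping token strings to ids. Assumes every single
    character appears in vocab, so the scan always advances.
    """
    max_len = max(len(tok) for tok in vocab)
    ids, i = [], 0
    while i < len(text):
        # Try the longest candidate first, shrinking until a match.
        for L in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + L]
            if piece in vocab:
                ids.append(vocab[piece])
                i += L
                break
    return ids

# Example: single characters plus one multi-character token.
vocab = {c: i for i, c in enumerate("abc ")}
vocab["ab"] = len(vocab)
print(greedy_tokenize("ab c a b", vocab))  # "ab" consumed as one token
```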