Gemma: Open Models Based on Gemini Research and Technology
March 13, 2024
Authors: Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Pier Giuseppe Sessa, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu-hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, Kathleen Kenealy
cs.AI
Abstract
This work introduces Gemma, a family of lightweight, state-of-the-art open models built from the research and technology used to create Gemini models. Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.
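The abstract states that both pretrained and fine-tuned checkpoints are released at 2B and 7B parameter sizes. As a rough illustration of how such released checkpoints are typically consumed, the minimal sketch below loads one with the Hugging Face transformers library; the model ID "google/gemma-2b" and the availability of the weights on the Hugging Face Hub are assumptions made for illustration, not details taken from the abstract itself.

# Minimal sketch: loading an assumed Gemma checkpoint with Hugging Face transformers.
# The model ID below is an assumption for illustration; the paper itself does not
# specify distribution channels or identifiers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed ID for the 2B pretrained checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a prompt.
inputs = tokenizer("The Gemma models were trained to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same pattern would apply to the 7B and fine-tuned variants by swapping the checkpoint identifier, assuming they are distributed in the same way.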