Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI
February 24, 2025
Authors: Syed Abdul Gaffar Shakhadri, Kruthika KR, Kartik Basavaraj Angadi
cs.AI
Abstract
We introduce Shakti VLM, a family of vision-language models at 1B and 4B
parameter scales, designed to address data-efficiency challenges in
multimodal learning. While recent VLMs achieve strong performance through
extensive training data, Shakti models leverage architectural innovations to
attain competitive results with fewer tokens. Key advancements include
QK-Normalization for attention stability, hybrid normalization techniques, and
enhanced positional encoding. A three-stage training strategy further optimizes
learning efficiency. Evaluations show that Shakti-VLM-1B and
Shakti-VLM-4B excel in document understanding, visual reasoning, OCR
extraction, and general multimodal reasoning. Our results highlight that high
performance can be achieved through model design and training strategy rather
than sheer data volume, making Shakti an efficient solution for
enterprise-scale multimodal tasks.
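The abstract credits QK-Normalization with stabilizing attention. As a rough illustration only, the sketch below shows a generic QK-normalized multi-head attention layer in PyTorch; the module name QKNormAttention, the choice of per-head LayerNorm, and all dimensions are assumptions, since the paper's actual layer design is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    """Generic multi-head self-attention with normalized queries and keys.

    Illustrative sketch only: the norm variant (LayerNorm here), its
    placement, and all dimensions are assumptions, not the paper's design.
    """

    def __init__(self, dim: int, num_heads: int) -> None:
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        # QK-Normalization: normalize queries and keys per head so the
        # attention logits stay bounded, which helps training stability.
        self.q_norm = nn.LayerNorm(self.head_dim)
        self.k_norm = nn.LayerNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.unbind(dim=2)            # each: (b, n, heads, head_dim)
        q, k = self.q_norm(q), self.k_norm(k)  # normalize before the dot product
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)  # softmax(q k^T / sqrt(d)) v
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)
```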