

TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning

February 21, 2025
作者: Giuseppe Paolo, Abdelhakim Benechehab, Hamza Cherkaoui, Albert Thomas, Balázs Kégl
cs.AI

Abstract

Hierarchical organization is fundamental to biological systems and human societies, yet artificial intelligence systems often rely on monolithic architectures that limit adaptability and scalability. Current hierarchical reinforcement learning (HRL) approaches typically restrict hierarchies to two levels or require centralized training, which limits their practical applicability. We introduce the TAME Agent Framework (TAG), a framework for constructing fully decentralized hierarchical multi-agent systems. TAG enables hierarchies of arbitrary depth through a novel LevelEnv concept, which abstracts each hierarchy level as the environment for the agents above it. This approach standardizes information flow between levels while preserving loose coupling, allowing for seamless integration of diverse agent types. We demonstrate the effectiveness of TAG by implementing hierarchical architectures that combine different RL agents across multiple levels, achieving improved performance over classical multi-agent RL baselines on standard benchmarks. Our results show that decentralized hierarchical organization enhances both learning speed and final performance, positioning TAG as a promising direction for scalable multi-agent systems.
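The core idea in the abstract is that each hierarchy level is wrapped so the agents above it see it as an ordinary environment. A minimal sketch of that idea, assuming a Gym-style reset/step interface; all class and method names here are illustrative assumptions, not TAG's actual API:

```python
# Hypothetical sketch of the LevelEnv concept: a lower level (agent + its
# environment) is wrapped behind a standard reset/step interface, so a
# higher-level agent can treat the entire level below as its world.
# All names are illustrative, not taken from the TAG codebase.

class CountingEnv:
    """Toy ground environment: state is a counter, reward equals the action."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action
        reward = float(action)
        done = self.state >= 10
        return self.state, reward, done


class LowerAgent:
    """Toy low-level agent that simply executes the goal handed down."""
    def act(self, obs, goal):
        return goal


class LevelEnv:
    """Wraps a lower level as an environment for the level above: one
    high-level action becomes a goal the lower agent pursues for a fixed
    number of primitive steps, with rewards accumulated upward."""
    def __init__(self, lower_agent, lower_env, steps_per_goal=3):
        self.lower_agent = lower_agent
        self.lower_env = lower_env
        self.steps_per_goal = steps_per_goal

    def reset(self):
        return self.lower_env.reset()

    def step(self, goal):
        total_reward, done, obs = 0.0, False, None
        for _ in range(self.steps_per_goal):
            action = self.lower_agent.act(obs, goal)
            obs, reward, done = self.lower_env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done


# Usage: a two-level hierarchy. Because LevelEnv itself exposes reset/step,
# another LevelEnv could be stacked on top, giving arbitrary depth.
level1 = LevelEnv(LowerAgent(), CountingEnv(), steps_per_goal=3)
obs = level1.reset()
obs, reward, done = level1.step(2)  # high-level action = goal "2"
print(obs, reward, done)            # prints: 6 6.0 False
```

Because the wrapper and the ground environment share the same interface, each level stays loosely coupled to the ones above and below it, which is the property the abstract attributes to LevelEnv.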


February 25, 2025