Decentralized multi-agent systems have shown promise in enabling autonomous collaboration among LLM-based agents. While AgentNet demonstrated the feasibility of fully decentralized coordination through dynamic DAG topologies, several limitations remain: poor scalability to large agent populations, high communication overhead, a lack of privacy guarantees, and suboptimal resource allocation. We propose AgentNet++, a hierarchical decentralized framework that extends AgentNet with multilevel agent organization, privacy-preserving knowledge sharing via differential privacy and secure aggregation, adaptive resource management, and theoretical convergence guarantees. Our approach introduces cluster-based hierarchies in which agents self-organize into specialized groups, enabling efficient task routing and knowledge distillation while maintaining full decentralization. We provide a formal analysis of convergence properties and privacy bounds, and demonstrate through extensive experiments on complex multi-agent tasks that AgentNet++ achieves a 23% higher task completion rate and a 40% reduction in communication overhead while maintaining strong privacy guarantees, compared to AgentNet and other baselines. Our framework scales effectively to 1000+ agents while preserving the emergent intelligence properties of the original AgentNet.