Recent advances in AI call for a paradigm shift from bit-centric communication to goal- and semantics-oriented architectures, paving the way for AI-native 6G networks. In this context, we address a key open challenge: enabling heterogeneous AI agents to exchange compressed latent-space representations while mitigating semantic noise and preserving task-relevant meaning. We cast this challenge as learning both the communication topology and the alignment maps that govern information exchange among agents, yielding a learned network sheaf equipped with orthogonal maps. This learning process is further supported by a semantic denoising and compression module that constructs a shared global semantic space and derives sparse, structured representations of each agent's latent space. This corresponds to a nonconvex dictionary learning problem solved iteratively with closed-form updates. Experiments with multiple AI agents pre-trained on real image data show that semantic denoising and compression facilitates the alignment of AI agents and the extraction of semantic clusters, while preserving high accuracy in downstream tasks. The resulting communication network provides new insights into semantic heterogeneity across agents, highlighting the interpretability of our methodology.
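As a minimal illustration of the kind of orthogonal alignment maps with closed-form updates mentioned above, the sketch below aligns two agents' latent spaces using the classical orthogonal Procrustes solution (via SVD). This is a hedged toy example, not the paper's full sheaf-learning algorithm; the variable names and dimensions are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): align agent B's latent
# space to agent A's with an orthogonal map, computed in closed form via
# the orthogonal Procrustes problem. Dimensions and noise level are
# arbitrary choices for the demo.

rng = np.random.default_rng(0)

d = 16   # latent dimension (assumed)
n = 200  # number of shared anchor samples (assumed)

# Agent A's latent codes; agent B observes a rotated, slightly noisy view.
Z_a = rng.standard_normal((n, d))
Q_true, _ = np.linalg.qr(rng.standard_normal((d, d)))   # ground-truth rotation
Z_b = Z_a @ Q_true + 0.01 * rng.standard_normal((n, d))

# Closed-form update: argmin_Q ||Z_a Q - Z_b||_F  s.t.  Q^T Q = I
# Solution: Q = U V^T, where U S V^T = SVD(Z_a^T Z_b).
U, _, Vt = np.linalg.svd(Z_a.T @ Z_b)
Q = U @ Vt

err = np.linalg.norm(Z_a @ Q - Z_b) / np.linalg.norm(Z_b)
print(f"relative alignment error: {err:.4f}")
```

In a network sheaf, one such orthogonal restriction map would sit on each edge of the learned communication topology, with the maps and the topology optimized jointly rather than solved pairwise as in this toy example.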