Network tomography aims to infer hidden network states, such as link performance, traffic load, and topology, from external observations. Most existing methods solve these problems separately and rely on narrow, task-specific signals, which limits generalization and interpretability. We present PLATONT, a unified framework that models different network indicators (e.g., delay, loss, bandwidth) as projections of a shared latent network state. Guided by the Platonic Representation Hypothesis, PLATONT learns this latent state through multimodal alignment and contrastive learning. By training multiple tomography tasks within a shared latent space, it builds compact, structured representations that improve cross-task generalization. Experiments on synthetic and real-world datasets show that PLATONT consistently outperforms existing methods in link estimation, topology inference, and traffic prediction, achieving higher accuracy and stronger robustness under varying network conditions.
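To make the alignment idea concrete, the sketch below shows one plausible way to train per-indicator encoders (e.g., for delay and loss observations) into a shared latent space with an InfoNCE-style contrastive loss. This is an illustrative assumption, not the authors' implementation; all module names, dimensions, and the choice of loss are hypothetical.

```python
# Minimal sketch (assumed, not PLATONT's actual code): align two indicator
# encoders into a shared latent space via an InfoNCE-style contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IndicatorEncoder(nn.Module):
    """Maps one observed indicator (e.g., path delays) into the shared latent space."""
    def __init__(self, in_dim: int, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so the dot product below is a cosine similarity.
        return F.normalize(self.net(x), dim=-1)

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Contrastive loss: observations of the same network state attract, others repel."""
    logits = z_a @ z_b.t() / tau                 # pairwise similarities
    targets = torch.arange(z_a.size(0))          # i-th delay view pairs with i-th loss view
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: 32 network snapshots, each observed via 10 delay and 10 loss measurements.
delay_enc, loss_enc = IndicatorEncoder(10), IndicatorEncoder(10)
delay_obs, loss_obs = torch.randn(32, 10), torch.randn(32, 10)
loss = info_nce(delay_enc(delay_obs), loss_enc(loss_obs))
loss.backward()  # gradients pull both modalities toward a common latent state
```

Downstream tomography heads (link estimation, topology inference, traffic prediction) would then read from this shared latent representation, which is what enables the cross-task generalization claimed in the abstract.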