For deep learning problems on graph-structured data, pooling layers are important for downsampling, reducing computational cost, and minimizing overfitting. We define a pooling layer, nervePool, for data structured as simplicial complexes, which are generalizations of graphs that include higher-dimensional simplices beyond vertices and edges; this structure allows for greater flexibility in modeling higher-order relationships. The proposed simplicial coarsening scheme is built upon partitions of vertices, which allow us to generate hierarchical representations of simplicial complexes, collapsing information in a learned fashion. NervePool builds on the learned vertex cluster assignments and extends to coarsening of higher-dimensional simplices in a deterministic fashion. While in practice the pooling operations are computed via a series of matrix operations, the topological motivation is a set-theoretic construction based on unions of stars of simplices and the nerve complex.
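To illustrate the matrix-operation view of cluster-based pooling, the sketch below coarsens a graph (the 1-skeleton of a complex) with a vertex cluster assignment matrix, in the style of DiffPool-type pooling: $A' = S^\top A S$. This is a hypothetical illustration of the general idea, not the paper's exact nervePool construction, and the function name `coarsen_adjacency` is our own.

```python
import numpy as np

def coarsen_adjacency(A, S):
    """Pool adjacency A (n x n) with a cluster assignment S (n x k).

    Entry (i, j) of the result accumulates edge weight between
    cluster i and cluster j; diagonal entries accumulate
    intra-cluster edge weight (counted twice for undirected A).
    """
    return S.T @ A @ S

# Toy example: a path graph on 4 vertices with edges (0-1), (1-2), (2-3).
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Hard assignment: vertices {0, 1} -> cluster 0, vertices {2, 3} -> cluster 1.
# In a learned setting, S would come from a network layer (e.g. a softmax
# over cluster scores), giving a soft partition instead of 0/1 entries.
S = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 1],
], dtype=float)

A_pooled = coarsen_adjacency(A, S)
print(A_pooled)
# [[2. 1.]
#  [1. 2.]]
```

Here the off-diagonal entry 1 records the single edge (1-2) crossing the two clusters, while each diagonal entry 2 is the doubled count of the one intra-cluster edge. A full simplicial scheme such as nervePool additionally propagates the vertex partition to higher-dimensional simplices, which this graph-level sketch does not cover.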