
DEEP GRAPH ATTENTION ADVERSARIAL VARIATIONAL AUTOENCODER


    Abstract: Existing graph autoencoders ignore the differences among a node's neighbors and the underlying data distribution of the graph. To improve the embedding ability of graph autoencoders, a graph attention adversarial variational autoencoder (AAVGA-d) is proposed. The method introduces attention into the encoder and applies an adversarial mechanism during embedding training. The graph attention encoder adaptively assigns weights to neighbor nodes, and the adversarial regularization pushes the distribution of the embeddings generated by the encoder toward the true data distribution. To allow deeper stacks of graph attention layers, a random edge deletion technique (RDEdge) tailored to attention networks is designed, which reduces the over-smoothing information loss caused by excessively deep layers. Experimental results show that the graph embedding capability of AAVGA-d is competitive with currently popular graph autoencoders.
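    The abstract does not give implementation details for RDEdge; a minimal sketch of random edge deletion in the spirit of RDEdge/DropEdge, assuming each edge of the training graph is dropped independently with probability p before each attention forward pass (the function name `drop_edges` and the edge-list representation are illustrative, not from the paper):

    ```python
    import random

    def drop_edges(edge_list, p, seed=0):
        """Keep each edge independently with probability 1 - p.

        edge_list: list of (u, v) pairs describing the graph.
        p: drop probability in [0, 1]; p=0 keeps all edges, p=1 removes all.
        A fixed seed makes the sampling reproducible for this sketch.
        """
        rng = random.Random(seed)
        # rng.random() is uniform on [0, 1), so an edge survives iff the draw >= p.
        return [edge for edge in edge_list if rng.random() >= p]

    # Toy graph: a 4-cycle with one chord.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    sparser = drop_edges(edges, p=0.4)  # a random subset of the original edges
    ```

    At each training epoch a fresh subset would be sampled, so deep attention layers aggregate over slightly different neighborhoods each time, which is what mitigates over-smoothing.
    
    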

     
