Self-supervised learning (SSL) has been demonstrated to be effective in pre-training models that can be generalized to various downstream tasks. Graph Autoencoder (GAE), an increasingly popular SSL approach on graphs, has been widely explored to learn node representations without ground-truth labels. However, recent studies show that existing GAE methods perform well only on link prediction tasks, while their performance on classification tasks is rather limited. This limitation casts doubt on the generalizability and adoption of GAE. In this paper, for the first time, we show that GAE can generalize well to both link prediction and classification scenarios, including node-level and graph-level tasks, by redesigning its critical building blocks from the graph masking perspective. Our proposal is called Self-Supervised Graph Autoencoder (S2GAE), which unleashes the power of GAEs with minimal yet nontrivial effort. Specifically, instead of reconstructing the whole input structure, we randomly mask a portion of edges and learn to reconstruct these missing edges with an effective masking strategy and an expressive decoder network. Moreover, we theoretically prove that S2GAE can be regarded as an edge-level contrastive learning framework, providing insight into why it generalizes well. Empirically, we conduct extensive experiments on 21 benchmark datasets across link prediction and node and graph classification tasks. The results validate the superiority of S2GAE over state-of-the-art generative and contrastive methods. This study demonstrates the potential of GAE as a universal representation learner on graphs.
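To make the masked-edge-reconstruction idea concrete, here is a minimal PyTorch sketch of the core recipe described above: randomly mask a portion of edges, encode the graph from the visible edges, and train a decoder to score the held-out edges against random negatives. All names here (`mask_edges`, `EdgeDecoder`, `reconstruction_loss`) are illustrative assumptions, not the authors' actual implementation, and the encoder producing the node embeddings `z` is left abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.5):
    """Randomly split a 2 x E edge_index into visible edges (encoder input)
    and masked edges (reconstruction targets)."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    num_masked = int(mask_ratio * num_edges)
    masked = edge_index[:, perm[:num_masked]]    # edges to reconstruct
    visible = edge_index[:, perm[num_masked:]]   # edges the encoder sees
    return visible, masked

class EdgeDecoder(nn.Module):
    """Scores a candidate edge from its two endpoint embeddings
    (an MLP over the elementwise product, one simple choice of
    'expressive decoder')."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, z, edges):
        return self.mlp(z[edges[0]] * z[edges[1]]).squeeze(-1)

def reconstruction_loss(decoder, z, masked_edges, num_nodes):
    """Binary cross-entropy: masked edges are positives,
    uniformly sampled node pairs serve as negatives."""
    neg_edges = torch.randint(0, num_nodes, masked_edges.shape)
    pos_logits = decoder(z, masked_edges)
    neg_logits = decoder(z, neg_edges)
    return (
        F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))
        + F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits))
    )
```

In a training loop, one would recompute the mask each epoch, run any GNN encoder on the visible edges to obtain `z`, and minimize `reconstruction_loss`; treating masked edges as positives and sampled pairs as negatives is also what makes the edge-level contrastive interpretation mentioned above plausible.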