Today we introduce a graph adversarial learning resource project on GitHub, maintained by the graph learning team at Sun Yat-sen University. The project focuses on security problems on graphs and collects papers on graph adversarial attacks, graph defenses, robustness certification, and related areas. Building on this collection, the team has also released a survey of graph adversarial learning on GitHub.

 

The project homepage groups papers into graph attacks, graph defenses, robustness certification, and model stability, sorted by year of publication. The project also thoughtfully provides several alternative listings, including:

  • Papers sorted by title
  • All papers sorted by year of publication
  • Papers sorted by publication venue
  • Papers with publicly available code

In addition, so that users following the project can track updates in real time, it also provides a list of papers added within the last 30 days.

3 ⚔ Attack

3.1 2022

  • Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem, 📝WSDM [Code]
  • Inference Attacks Against Graph Neural Networks, 📝USENIX Security [Code]

3.2 2021

  • Stealing Links from Graph Neural Networks, 📝USENIX Security
  • PATHATTACK: Attacking Shortest Paths in Complex Networks, 📝arXiv
  • Structack: Structure-based Adversarial Attacks on Graph Neural Networks, 📝ACM Hypertext [Code]
  • Optimal Edge Weight Perturbations to Attack Shortest Paths, 📝arXiv
  • GReady for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack, 📝Information Sciences
  • Graph Adversarial Attack via Rewiring, 📝KDD [Code]
  • Membership Inference Attack on Graph Neural Networks, 📝arXiv
  • BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection, 📝arXiv
  • Graph Backdoor, 📝USENIX Security
  • TDGIA: Effective Injection Attacks on Graph Neural Networks, 📝KDD [Code]
  • Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge, 📝arXiv
  • Adversarial Attack on Large Scale Graph, 📝TKDE [Code]
  • Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense, 📝arXiv
  • Joint Detection and Localization of Stealth False Data Injection Attacks in Smart Grids using Graph Neural Networks, 📝arXiv
  • Universal Spectral Adversarial Attacks for Deformable Shapes, 📝CVPR
  • SAGE: Intrusion Alert-driven Attack Graph Extractor, 📝KDD Workshop [Code]
  • Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models, 📝arXiv [Code]
  • VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning, 📝PAKDD [Code]
  • Explainability-based Backdoor Attacks Against Graph Neural Networks, 📝arXiv
  • GraphAttacker: A General Multi-Task GraphAttack Framework, 📝arXiv [Code]
  • Attacking Graph Neural Networks at Scale, 📝AAAI workshop
  • Node-Level Membership Inference Attacks Against Graph Neural Networks, 📝arXiv
  • Reinforcement Learning For Data Poisoning on Graph Neural Networks, 📝arXiv
  • DeHiB: Deep Hidden Backdoor Attack on Semi-Supervised Learning via Adversarial Perturbation, 📝AAAI
  • Graphfool: Targeted Label Adversarial Attack on Graph Embedding, 📝arXiv
  • Towards Revealing Parallel Adversarial Attack on Politician Socialnet of Graph Structure, 📝Security and Communication Networks
  • Network Embedding Attack: An Euclidean Distance Based Method, 📝MDATA
  • Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation, 📝arXiv
  • Jointly Attacking Graph Neural Network and its Explanations, 📝arXiv
  • Graph Stochastic Neural Networks for Semi-supervised Learning, 📝arXiv [Code]
  • Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings, 📝arXiv [Code]
  • Single-Node Attack for Fooling Graph Neural Networks, 📝KDD Workshop [Code]
  • The Robustness of Graph k-shell Structure under Adversarial Attacks, 📝arXiv
  • Poisoning Knowledge Graph Embeddings via Relation Inference Patterns, 📝ACL [Code]
  • A Hard Label Black-box Adversarial Attack Against Graph Neural Networks, 📝CCS
  • GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking, 📝DATE Conference
  • Single Node Injection Attack against Graph Neural Networks, 📝CIKM [Code]
  • Spatially Focused Attack against Spatiotemporal Graph Neural Networks, 📝arXiv
  • Derivative-free optimization adversarial attacks for graph convolutional networks, 📝PeerJ
  • Projective Ranking: A Transferable Evasion Attack Method on Graph Neural Networks, 📝CIKM
  • Query-based Adversarial Attacks on Graph with Fake Nodes, 📝arXiv
  • Time-aware Gradient Attack on Dynamic Network Link Prediction, 📝TKDE
  • Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning, 📝arXiv
  • Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications, 📝arXiv
  • Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks, 📝arXiv
  • Watermarking Graph Neural Networks based on Backdoor Attacks, 📝arXiv
  • Robustness of Graph Neural Networks at Scale, 📝NeurIPS [Code]
  • Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness, 📝NeurIPS
  • Graph Structural Attack by Spectral Distance, 📝arXiv
  • Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models, 📝IJCAI [Code]
  • Adversarial Attacks on Graph Classification via Bayesian Optimisation, 📝NeurIPS [Code]
  • Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods, 📝EMNLP [Code]
  • COREATTACK: Breaking Up the Core Structure of Graphs, 📝arXiv
  • UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction, 📝ICCAD [Code]

3.3 2020

  • A Graph Matching Attack on Privacy-Preserving Record Linkage, 📝CIKM
  • Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection, 📝arXiv
  • Adaptive Adversarial Attack on Graph Embedding via GAN, 📝SocialSec
  • Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers, 📝arXiv
  • One Vertex Attack on Graph Neural Networks-based Spatiotemporal Forecasting, 📝ICLR OpenReview
  • Near-Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem, 📝ICLR OpenReview
  • Adversarial Attacks on Deep Graph Matching, 📝NeurIPS
  • Attacking Graph-Based Classification without Changing Existing Connections, 📝ACSAC
  • Cross Entropy Attack on Deep Graph Infomax, 📝IEEE ISCAS
  • Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization, 📝arXiv
  • Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation, 📝ICLR [Code]
  • Towards More Practical Adversarial Attacks on Graph Neural Networks, 📝NeurIPS [Code]
  • Adversarial Label-Flipping Attack and Defense for Graph Neural Networks, 📝ICDM [Code]
  • Exploratory Adversarial Attacks on Graph Neural Networks, 📝ICDM [Code]
  • A Targeted Universal Attack on Graph Convolutional Network, 📝arXiv [Code]
  • Query-free Black-box Adversarial Attacks on Graphs, 📝arXiv
  • Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs, 📝arXiv
  • Efficient Evasion Attacks to Graph Neural Networks via Influence Function, 📝arXiv
  • Backdoor Attacks to Graph Neural Networks, 📝ICLR OpenReview
  • Link Prediction Adversarial Attack Via Iterative Gradient Attack, 📝IEEE Trans
  • Adversarial Attack on Hierarchical Graph Pooling Neural Networks, 📝arXiv
  • Adversarial Attack on Community Detection by Hiding Individuals, 📝WWW [Code]
  • Manipulating Node Similarity Measures in Networks, 📝AAMAS
  • A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, 📝AAAI [Code]
  • Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks, 📝BigData
  • Non-target-specific Node Injection Attacks on Graph Neural Networks: A Hierarchical Reinforcement Learning Approach, 📝WWW
  • An Efficient Adversarial Attack on Graph Structured Data, 📝IJCAI Workshop
  • Practical Adversarial Attacks on Graph Neural Networks, 📝ICML Workshop
  • Adversarial Attacks on Graph Neural Networks: Perturbations and their Patterns, 📝TKDD
  • Adversarial Attacks on Link Prediction Algorithms Based on Graph Neural Networks, 📝Asia CCS
  • Scalable Attack on Graph Data by Injecting Vicious Nodes, 📝ECML-PKDD
  • Attackability Characterization of Adversarial Evasion Attack on Discrete Data, 📝KDD
  • MGA: Momentum Gradient Attack on Network, 📝arXiv
  • Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria, 📝arXiv
  • Adversarial Perturbations of Opinion Dynamics in Networks, 📝arXiv
  • Network disruption: maximizing disagreement and polarization in social networks, 📝arXiv [Code]
  • Adversarial attack on BC classification for scale-free networks, 📝AIP Chaos

3.4 2019

  • Attacking Graph Convolutional Networks via Rewiring, 📝arXiv
  • Unsupervised Euclidean Distance Attack on Network Embedding, 📝arXiv
  • Structured Adversarial Attack Towards General Implementation and Better Interpretability, 📝ICLR [Code]
  • Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling, 📝arXiv
  • Vertex Nomination, Consistent Estimation, and Adversarial Modification, 📝arXiv
  • PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks, 📝ICLR [Code]
  • Network Structural Vulnerability A Multi-Objective Attacker Perspective, 📝IEEE Trans
  • Multiscale Evolutionary Perturbation Attack on Community Detection, 📝arXiv
  • αCyber: Enhancing Robustness of Android Malware Detection System against Adversarial Attacks on Heterogeneous Graph based Model, 📝CIKM
  • Adversarial Attacks on Node Embeddings via Graph Poisoning, 📝ICML [Code]
  • GA Based Q-Attack on Community Detection, 📝TCSS
  • Data Poisoning Attack against Knowledge Graph Embedding, 📝IJCAI
  • Adversarial Attacks on Graph Neural Networks via Meta Learning, 📝ICLR [Code]
  • Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, 📝IJCAI [Code]
  • Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, 📝IJCAI [Code]
  • A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning, 📝NeurIPS [Code]
  • Attacking Graph-based Classification via Manipulating the Graph Structure, 📝CCS

3.5 2018

  • Fake Node Attacks on Graph Convolutional Networks, 📝arXiv
  • Data Poisoning Attack against Unsupervised Node Embedding Methods, 📝arXiv
  • Fast Gradient Attack on Network Embedding, 📝arXiv
  • Attack Tolerance of Link Prediction Algorithms: How to Hide Your Relations in a Social Network, 📝arXiv
  • Adversarial Attacks on Neural Networks for Graph Data, 📝KDD [Code]
  • Hiding Individuals and Communities in a Social Network, 📝Nature Human Behavior
  • Attacking Similarity-Based Link Prediction in Social Networks, 📝AAMAS
  • Adversarial Attack on Graph Structured Data, 📝ICML [Code]

3.6 2017

  • Practical Attacks Against Graph-based Clustering, 📝CCS
  • Adversarial Sets for Regularising Neural Link Predictors, 📝UAI [Code]

4 🛡 Defense

4.1 2021

  • Learning to Drop: Robust Graph Neural Network via Topological Denoising, 📝WSDM [Code]
  • How effective are Graph Neural Networks in Fraud Detection for Network Data?, 📝arXiv
  • Graph Sanitation with Application to Node Classification, 📝arXiv
  • Understanding Structural Vulnerability in Graph Convolutional Networks, 📝IJCAI [Code]
  • A Robust and Generalized Framework for Adversarial Graph Embedding, 📝arXiv [Code]
  • Integrated Defense for Resilient Graph Matching, 📝ICML
  • Unveiling Anomalous Nodes Via Random Sampling and Consensus on Graphs, 📝ICASSP
  • Robust Network Alignment via Attack Signal Scaling and Adversarial Perturbation Elimination, 📝WWW
  • Adversarial Graph Augmentation to Improve Graph Contrastive Learning, 📝arXiv
  • Information Obfuscation of Graph Neural Network, 📝ICML [Code]
  • Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs, 📝arXiv
  • On Generalization of Graph Autoencoders with Adversarial Training, 📝ECML
  • DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs, 📝ECML
  • Elastic Graph Neural Networks, 📝ICML [Code]
  • Robust Counterfactual Explanations on Graph Neural Networks, 📝arXiv
  • Node Similarity Preserving Graph Convolutional Networks, 📝WSDM [Code]
  • Enhancing Robustness and Resilience of Multiplex Networks Against Node-Community Cascading Failures, 📝IEEE TSMC
  • NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data, 📝TKDE [Code]
  • Robust Graph Learning Under Wasserstein Uncertainty, 📝arXiv
  • Towards Robust Graph Contrastive Learning, 📝arXiv
  • Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks, 📝ICML
  • UAG: Uncertainty-Aware Attention Graph Neural Network for Defending Adversarial Attacks, 📝AAAI
  • Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks, 📝AAAI
  • Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering, 📝AAAI [Code]
  • Personalized privacy protection in social networks through adversarial modeling, 📝AAAI
  • Interpretable Stability Bounds for Spectral Graph Filters, 📝arXiv
  • Randomized Generation of Adversary-Aware Fake Knowledge Graphs to Combat Intellectual Property Theft, 📝AAAI
  • Unified Robust Training for Graph Neural Networks against Label Noise, 📝arXiv
  • An Introduction to Robust Graph Convolutional Networks, 📝arXiv
  • E-GraphSAGE: A Graph Neural Network based Intrusion Detection System, 📝arXiv
  • Spatio-Temporal Sparsification for General Robust Graph Convolution Networks, 📝arXiv
  • Robust graph convolutional networks with directional graph adversarial training, 📝Applied Intelligence
  • Detection and Defense of Topological Adversarial Attacks on Graphs, 📝AISTATS
  • Unveiling the potential of Graph Neural Networks for robust Intrusion Detection, 📝arXiv [Code]
  • Adversarial Robustness of Probabilistic Network Embedding for Link Prediction, 📝arXiv
  • EGC2: Enhanced Graph Classification with Easy Graph Compression, 📝arXiv
  • LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis, 📝arXiv
  • Structure-Aware Hierarchical Graph Pooling using Information Bottleneck, 📝IJCNN
  • Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights, 📝arXiv
  • CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph, 📝arXiv
  • Releasing Graph Neural Networks with Differential Privacy Guarantees, 📝arXiv
  • Speedup Robust Graph Structure Learning with Low-Rank Information, 📝CIKM
  • A Lightweight Metric Defence Strategy for Graph Neural Networks Against Poisoning Attacks, 📝ICICS [Code]
  • Node Feature Kernels Increase Graph Convolutional Network Robustness, 📝arXiv [Code]
  • On the Relationship between Heterophily and Robustness of Graph Neural Networks, 📝arXiv
  • Distributionally Robust Semi-Supervised Learning Over Graphs, 📝ICLR
  • Robustness of Graph Neural Networks at Scale, 📝NeurIPS [Code]
  • Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation, 📝arXiv
  • Not All Low-Pass Filters are Robust in Graph Convolutional Networks, 📝NeurIPS [Code]
  • Towards Robust Reasoning over Knowledge Graphs, 📝arXiv

4.2 2020

  • Ricci-GNN: Defending Against Structural Attacks Through a Geometric Approach, 📝ICLR OpenReview
  • Provable Overlapping Community Detection in Weighted Graphs, 📝NeurIPS
  • Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings, 📝NeurIPS [Code]
  • Graph Random Neural Networks for Semi-Supervised Learning on Graphs, 📝NeurIPS [Code]
  • Reliable Graph Neural Networks via Robust Aggregation, 📝NeurIPS [Code]
  • Towards Robust Graph Neural Networks against Label Noise, 📝ICLR OpenReview
  • Graph Adversarial Networks: Protecting Information against Adversarial Attacks, 📝ICLR OpenReview [Code]
  • A Novel Defending Scheme for Graph-Based Classification Against Graph Structure Manipulating Attack, 📝SocialSec
  • Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings, 📝NeurIPS [Code]
  • Node Copying for Protection Against Graph Neural Network Topology Attacks, 📝arXiv
  • Community detection in sparse time-evolving graphs with a dynamical Bethe-Hessian, 📝NeurIPS
  • Unsupervised Adversarially-Robust Representation Learning on Graphs, 📝arXiv
  • A Feature-Importance-Aware and Robust Aggregator for GCN, 📝CIKM [Code]
  • Anti-perturbation of Online Social Networks by Graph Label Transition, 📝arXiv
  • Graph Information Bottleneck, 📝NeurIPS [Code]
  • Adversarial Detection on Graph Structured Data, 📝PPMLP
  • Graph Contrastive Learning with Augmentations, 📝NeurIPS [Code]
  • Learning Graph Embedding with Adversarial Training Methods, 📝IEEE Transactions on Cybernetics
  • I-GCN: Robust Graph Convolutional Network via Influence Mechanism, 📝arXiv
  • Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks, 📝AAAI
  • Smoothing Adversarial Training for GNN, 📝IEEE TCSS
  • Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks, 📝None [Code]
  • RoGAT: a robust GNN combined revised GAT with adjusted graphs, 📝arXiv
  • ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks, 📝arXiv
  • Adversarial Perturbations of Opinion Dynamics in Networks, 📝arXiv
  • Adversarial Privacy Preserving Graph Embedding against Inference Attack, 📝arXiv [Code]
  • Robust Graph Learning From Noisy Data, 📝IEEE Trans
  • GNNGuard: Defending Graph Neural Networks against Adversarial Attacks, 📝NeurIPS [Code]
  • Transferring Robustness for Graph Neural Network Against Poisoning Attacks, 📝WSDM [Code]
  • All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, 📝WSDM [Code]
  • How Robust Are Graph Neural Networks to Structural Noise?, 📝DLGMA
  • Robust Detection of Adaptive Spammers by Nash Reinforcement Learning, 📝KDD [Code]
  • Graph Structure Learning for Robust Graph Neural Networks, 📝KDD [Code]
  • On The Stability of Polynomial Spectral Graph Filters, 📝ICASSP [Code]
  • On the Robustness of Cascade Diffusion under Node Attacks, 📝WWW [Code]
  • Friend or Faux: Graph-Based Early Detection of Fake Accounts on Social Networks, 📝WWW
  • Towards an Efficient and General Framework of Robust Training for Graph Neural Networks, 📝ICASSP
  • Robust Graph Representation Learning via Neural Sparsification, 📝ICML
  • Robust Training of Graph Convolutional Networks via Latent Perturbation, 📝ECML-PKDD
  • Robust Collective Classification against Structural Attacks, 📝Preprint
  • Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters, 📝CIKM [Code]
  • Topological Effects on Attacks Against Vertex Classification, 📝arXiv
  • Tensor Graph Convolutional Networks for Multi-relational and Robust Learning, 📝arXiv
  • DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder, 📝arXiv [Code]
  • Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning, 📝arXiv
  • AANE: Anomaly Aware Network Embedding For Anomalous Link Detection, 📝ICDM
  • Provably Robust Node Classification via Low-Pass Message Passing, 📝ICDM

4.3 2019

  • Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure, 📝TKDE [Code]
  • Bayesian graph convolutional neural networks for semi-supervised classification, 📝AAAI [Code]
  • Target Defense Against Link-Prediction-Based Attacks via Evolutionary Perturbations, 📝arXiv
  • Examining Adversarial Learning against Graph-based IoT Malware Detection Systems, 📝arXiv
  • Adversarial Embedding: A robust and elusive Steganography and Watermarking technique, 📝arXiv
  • Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning, 📝arXiv [Code]
  • Adversarial Defense Framework for Graph Neural Network, 📝arXiv
  • GraphSAC: Detecting anomalies in large-scale graphs, 📝arXiv
  • Edge Dithering for Robust Adaptive Graph Convolutional Networks, 📝arXiv
  • Can Adversarial Network Attack be Defended?, 📝arXiv
  • GraphDefense: Towards Robust Graph Convolutional Networks, 📝arXiv
  • Adversarial Training Methods for Network Embedding, 📝WWW [Code]
  • Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, 📝IJCAI [Code]
  • Improving Robustness to Attacks Against Vertex Classification, 📝MLG@KDD
  • Adversarial Robustness of Similarity-Based Link Prediction, 📝ICDM
  • αCyber: Enhancing Robustness of Android Malware Detection System against Adversarial Attacks on Heterogeneous Graph based Model, 📝CIKM
  • Batch Virtual Adversarial Training for Graph Convolutional Networks, 📝ICML [Code]
  • Latent Adversarial Training of Graph Convolution Networks, 📝LRGSD@ICML [Code]
  • Characterizing Malicious Edges targeting on Graph Neural Networks, 📝ICLR OpenReview [Code]
  • Comparing and Detecting Adversarial Attacks for Graph Deep Learning, 📝RLGM@ICLR
  • Virtual Adversarial Training on Graph Convolutional Networks in Node Classification, 📝PRCV
  • Robust Graph Convolutional Networks Against Adversarial Attacks, 📝KDD [Code]
  • Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications, 📝NAACL [Code]
  • Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, 📝IJCAI [Code]
  • Robust Graph Data Learning via Latent Graph Convolutional Representation, 📝arXiv

4.4 2018

  • Adversarial Personalized Ranking for Recommendation, 📝SIGIR [Code]

4.5 2017

  • Adversarial Sets for Regularising Neural Link Predictors, 📝UAI [Code]

5 🔐 Certification

  • Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation, 📝KDD'2021 [Code]
  • Collective Robustness Certificates, 📝ICLR'2021
  • Adversarial Immunization for Improving Certifiable Robustness on Graphs, 📝WSDM'2021
  • Certifying Robustness of Graph Laplacian Based Semi-Supervised Learning, 📝ICLR OpenReview'2021
  • Robust Certification for Laplace Learning on Geometric Graphs, 📝MSML’2021
  • Improving the Robustness of Wasserstein Embedding by Adversarial PAC-Bayesian Learning, 📝AAAI'2020
  • Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks, 📝NeurIPS'2020 [Code]
  • Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing, 📝WWW'2020
  • Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More, 📝ICML'2020 [Code]
  • Abstract Interpretation based Robustness Certification for Graph Convolutional Networks, 📝ECAI'2020
  • Certifiable Robustness of Graph Convolutional Networks under Structure Perturbation, 📝KDD'2020 [Code]
  • Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing, 📝GLOBECOM'2020
  • Certifiable Robustness and Robust Training for Graph Convolutional Networks, 📝KDD'2019 [Code]
  • Certifiable Robustness to Graph Perturbations, 📝NeurIPS'2019 [Code]

6 ⚖ Stability

  • Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data, 📝arXiv'2021
  • Stability of Graph Convolutional Neural Networks to Stochastic Perturbations, 📝arXiv'2021
  • Graph and Graphon Neural Network Stability, 📝arXiv'2020
  • On the Stability of Graph Convolutional Neural Networks under Edge Rewiring, 📝arXiv'2020
  • Stability of Graph Neural Networks to Relative Perturbations, 📝ICASSP'2020
  • Graph Neural Networks: Architectures, Stability and Transferability, 📝arXiv'2020
  • Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method, 📝arXiv'2020
  • Stability Properties of Graph Neural Networks, 📝arXiv'2019
  • Stability and Generalization of Graph Convolutional Neural Networks, 📝KDD'2019 [Code]
  • When Do GNNs Work: Understanding and Improving Neighborhood Aggregation, 📝IJCAI Workshop'2019 [Code]
  • Towards a Unified Framework for Fair and Stable Graph Representation Learning, 📝UAI'2021 [Code]
  • Training Stable Graph Neural Networks Through Constrained Learning, 📝arXiv'2021

7 🚀 Others

  • Perturbation Sensitivity of GNNs, 📝cs224w'2019
  • Generating Adversarial Examples with Graph Neural Networks, 📝UAI'2021
  • SIGL: Securing Software Installations Through Deep Graph Learning, 📝USENIX'2021
  • FLAG: Adversarial Data Augmentation for Graph Neural Networks, 📝arXiv'2020 [Code]
  • Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning, 📝arXiv'2020
  • Watermarking Graph Neural Networks by Random Graphs, 📝arXiv'2020
  • Training Robust Graph Neural Network by Applying Lipschitz Constant Constraint, 📝CentraleSupélec'2020 [Code]
  • CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks, 📝arXiv'2021

8 📃 Survey

  • Graph Neural Networks: Methods, Applications, and Opportunities, 📝arXiv'2021
  • Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies, 📝SIGKDD Explorations'2021
  • Deep Graph Structure Learning for Robust Representations: A Survey, 📝IJCAI Survey track'2021
  • Graph Neural Networks: Taxonomy, Advances and Trends, 📝arXiv'2020
  • A Survey of Adversarial Learning on Graph, 📝arXiv'2020
  • Adversarial Attacks and Defenses in Images, Graphs and Text: A Review, 📝arXiv'2019
  • Adversarial Attack and Defense on Graph Data: A Survey, 📝arXiv'2018
  • Deep Learning on Graphs: A Survey, 📝arXiv'2018

9 ⚙ Toolbox

  • DeepRobust: a Platform for Adversarial Attacks and Defenses, 📝AAAI'2021, DeepRobust (usage sketch below)
  • GraphWar: A graph adversarial learning toolbox based on PyTorch and DGL, 📝arXiv'2022, GraphWar
  • Evaluating Graph Vulnerability and Robustness using TIGER, 📝arXiv'2021, TIGER
  • Graph Robustness Benchmark: Rethinking and Benchmarking Adversarial Robustness of Graph Neural Networks, 📝NeurIPS OpenReview'2021, Graph Robustness Benchmark (GRB)
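
To give a concrete feel for these toolboxes, below is a minimal sketch of how DeepRobust is typically used: load a benchmark graph, apply a simple structure-poisoning baseline, then train and evaluate a GCN victim model on the perturbed graph. It assumes DeepRobust is installed (pip install deeprobust) and follows the style of its documented Cora examples; exact names and defaults may vary across versions.

```python
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Random

# Load the Cora citation graph (downloaded on first use).
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

# Simple poisoning baseline: randomly flip 500 edges in the adjacency matrix.
attacker = Random()
attacker.attack(adj, n_perturbations=500, type='flip')
perturbed_adj = attacker.modified_adj

# Train a GCN victim on the perturbed graph and report test accuracy.
gcn = GCN(nfeat=features.shape[1], nhid=16,
          nclass=labels.max().item() + 1, device='cpu')
gcn.fit(features, perturbed_adj, labels, idx_train, idx_val)
gcn.test(idx_test)
```

Swapping Random for one of the gradient-based attacks shipped with the toolbox, or GCN for one of its defense models, is how attack/defense comparisons are typically set up with DeepRobust.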

10 🔗 Resource

  • Awesome Adversarial Learning on Recommender System Link
  • Awesome Graph Attack and Defense Papers Link
  • Graph Adversarial Learning Literature Link
  • A Complete List of All (arXiv) Adversarial Example Papers 🌐Link
  • Adversarial Attacks and Defenses: Frontiers, Advances and Practice
