Deep Learning on Graphs and Graph Representation Learning

Graphs are essential representations for many kinds of real-world data, such as social networks, the World Wide Web, and molecular graphs. To facilitate downstream tasks on graph-structured data, such as node classification, link prediction, graph classification, and graph generation, it is of great importance to develop advanced algorithms for both node-level and graph-level representation learning on graphs. Given the representation learning power of deep learning techniques, it is highly promising to adopt them for graphs as well. However, graphs are inherently different from regular grid-like data: nodes in graphs are unordered, and each node may connect to a different number of other nodes. Many efforts have been made to generalize deep learning techniques to graphs. Among them, Graph Neural Networks (GNNs) are the most popular methods, and they have demonstrated their effectiveness in many areas. In this project, we conduct research on deep learning on graphs with a specific focus on GNNs. In particular, we develop novel algorithms that fundamentally push the area forward. We also apply these algorithms to applications such as recommendation to make practical impact. In addition, we have published a book, Deep Learning on Graphs, which comprehensively covers this topic, from fundamentals and methodologies to applications and advances.
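The core idea behind GNNs is that each node aggregates features from its neighbors rather than from a fixed grid. The sketch below shows one GCN-style propagation layer on a toy three-node graph; the adjacency matrix, features, and weights are illustrative assumptions, not trained values.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN-style propagation step: normalized neighborhood
    aggregation followed by a linear transform and a ReLU."""
    # Add self-loops so each node keeps its own features.
    a_hat = adj + np.eye(adj.shape[0])
    # Symmetric degree normalization: D^{-1/2} A_hat D^{-1/2}.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weight, 0.0)

# Toy graph: 3 nodes with edges (0-1) and (1-2).
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
w = np.ones((2, 2))      # illustrative weights, not trained
h = gcn_layer(adj, x, w)
print(h.shape)           # (3, 2): one hidden vector per node
```

Because each node's output depends only on its local neighborhood, the same layer applies to graphs of any size and node ordering, which is exactly what makes GNNs suitable for irregular, non-grid data.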

Adversarial Attacks and Defenses

Recent studies show that machine learning models can be easily fooled by carefully crafted adversarial examples. This raises serious concerns when machine learning models are deployed in safety-critical tasks. Our group studies the behavior of adversarial attacks and their potential risks in various machine learning scenarios, as well as reliable defense algorithms that improve model safety and robustness against adversarial attacks. For a further look at related work in this field, please refer to our surveys, "Adversarial Attacks and Defenses in Images, Graphs and Text: A Review" and "Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study", our GitHub repository, and the KDD tutorial, which were built by our group members Han Xu, Xiaorui Liu, Yaxin Li, Wei Jin and Jiliang Tang.
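As a concrete illustration of how easily such examples can be crafted, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression model. The fixed weights and inputs are assumptions for illustration; real attacks target trained deep networks.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: move the input one step of size eps in the sign
    direction of the loss gradient to increase the loss."""
    return x + eps * np.sign(grad)

# Toy logistic-regression "model" with fixed (untrained) weights.
w, b = np.array([1.5, -2.0]), 0.3

def loss_grad(x, y):
    # Gradient of the logistic loss with respect to the input x.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

x, y = np.array([0.2, 0.4]), 1.0
x_adv = fgsm_perturb(x, loss_grad(x, y))
print(np.abs(x_adv - x).max())  # perturbation bounded by eps = 0.1
```

The perturbation is imperceptibly small (bounded by eps in each coordinate) yet aimed precisely at increasing the model's loss, which is why such examples are so hard to detect by inspection.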

Recommendations

Recommender systems are intelligent e-commerce applications. They assist users in their information-seeking tasks by suggesting items (products, services, or information) that best fit their needs and preferences. Recommender systems have become increasingly popular in recent years and have been applied in a variety of domains, including movies, music, books, search queries, and social tags. Typically, the recommendation procedure can be modeled as interactions between users and the recommender system, consisting of two phases: user model construction and recommendation generation. During the interaction, the recommender agent builds a user model that captures users' preferences based on their personal information or historical behaviors. Then, the agent generates a list of items that best match those preferences. In our recommendation project, we employ advanced techniques from the academic and industrial communities, including reinforcement learning, automated machine learning, and graph neural networks, to address the challenges in real-world recommender systems. We have also collected well-known recommendation datasets and created a condensed repository for research purposes. This repository contains a list of public and compatible datasets, points to other major repositories hosting newer and popular real-world datasets, and provides references to sample code for the respective recommendation tasks.
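The two-phase procedure above can be sketched in a few lines. The item embeddings, identifiers, and mean-profile user model below are hypothetical simplifications; in practice the representations would be learned, for example by matrix factorization or a GNN.

```python
import numpy as np

# Hypothetical item embeddings (item id -> feature vector).
items = {
    "i1": np.array([1.0, 0.0]),
    "i2": np.array([0.9, 0.1]),
    "i3": np.array([0.0, 1.0]),
}

def build_user_model(history):
    """Phase 1: construct the user model as the mean embedding
    of the items the user interacted with."""
    return np.mean([items[i] for i in history], axis=0)

def recommend(profile, k=1, exclude=()):
    """Phase 2: rank unseen items by similarity to the profile."""
    scores = {i: float(profile @ v)
              for i, v in items.items() if i not in exclude}
    return sorted(scores, key=scores.get, reverse=True)[:k]

history = ["i1", "i2"]
profile = build_user_model(history)
print(recommend(profile, k=1, exclude=history))  # ['i3']
```

Real systems replace the mean profile with richer user models (sequential, reinforcement-learning, or graph-based), but the construction/generation split remains the same.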

Fairness in Machine Learning and AI

The elimination of discrimination is an important issue that modern society faces. By learning from human behaviors, AI systems have been shown to inherit human prejudices. Since such AI systems are integrated into many human-related applications, it is important to ensure that they do not make biased decisions against particular groups of people or individuals. In this research direction, we work on detecting bias in machine learning models as well as developing fair algorithms. Our approaches are applicable to various domains, such as natural language processing, recommender systems, and education.
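One simple way to detect bias is to compare a classifier's positive-prediction rates across groups, i.e., its demographic parity gap. The predictions and group labels below are made-up illustrative data; demographic parity is only one of several fairness criteria.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two
    groups; a gap near zero suggests demographic parity holds."""
    preds, groups = np.asarray(preds), np.asarray(groups)
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical binary predictions and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # |0.75 - 0.25| = 0.5
```

Metrics like this turn a vague concern about bias into a quantity that can be audited and, in fair-learning algorithms, constrained or penalized during training.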

Learning from Small Data

The great successes achieved by deep learning models in recent years can be partially attributed to the availability of various large-scale labeled datasets, as many deep learning models require sufficient labeled data to train their complex architectures, with their large numbers of parameters, in a supervised way. However, due to practical constraints such as budgets and resources, obtaining such large-scale labeled training sets may be impossible in many real-world applications, such as education and healthcare. Without sufficient training data, supervised deep learning models easily run into over-fitting problems and yield sub-optimal solutions. Hence, both academia and industry have paid increasing attention to the small-data learning problem, which aims either to find practical ways to alleviate the shortage of labeled training data or to design effective learning models that achieve good performance with small amounts of training data. Our group has long followed the small-data learning problem with great interest and has proposed several effective solutions to mitigate the negative impacts of limited data. For example, we have presented novel generative models based on generative adversarial networks (GANs) and variational autoencoders (VAEs) to produce realistic synthetic labeled data that enrich the training data under different scenario settings, such as imbalanced or incomplete data; we have also developed effective deep models that learn directly from small amounts of data with crowdsourced labels, handling the limited-data and noisy-label cases simultaneously.
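To make the imbalanced-data setting concrete, the sketch below balances a tiny labeled set by generating noisy copies of minority-class samples. This is a deliberately simple stand-in for the GAN/VAE-based synthesis described above; the data and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_minority(x, y, minority=1, noise=0.05):
    """Balance a labeled set by synthesizing noisy copies of
    minority samples (a simple proxy for GAN/VAE synthesis)."""
    x, y = np.asarray(x, float), np.asarray(y)
    need = (y != minority).sum() - (y == minority).sum()
    pool = x[y == minority]
    idx = rng.integers(0, len(pool), size=need)
    synth = pool[idx] + rng.normal(0.0, noise, size=(need, x.shape[1]))
    return (np.vstack([x, synth]),
            np.concatenate([y, np.full(need, minority)]))

# 3 majority samples (label 0) and 1 minority sample (label 1).
x = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0]]
y = [0, 0, 0, 1]
x_bal, y_bal = augment_minority(x, y)
print((y_bal == 0).sum(), (y_bal == 1).sum())  # 3 3
```

Deep generative models play the same role at scale: instead of jittered copies, they sample new, realistic labeled examples from a learned data distribution.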

Distributed Optimization

Modern machine learning and data science applications rely heavily on large-scale training, which highlights the importance of distributed optimization. Our research in this direction focuses on distributed optimization under the following challenges: 1) large-scale and heterogeneous data; 2) general communication topologies; and 3) limited communication bandwidth. We provide not only novel algorithm designs but also rigorous theoretical analyses with performance guarantees. Moreover, we aim to implement practical system architectures for large-scale applications in graph neural networks, information retrieval, secure machine learning, and beyond. Representative works include 1) A Double Residual Compression Algorithm for Efficient Distributed Learning, and 2) Linear Convergent Decentralized Optimization with Compression.
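The bandwidth challenge is usually addressed by compressing gradients before communication. The sketch below shows top-k sparsification, one common scheme, on a made-up gradient vector; the cited works use more sophisticated residual/difference compression, so this is only an illustration of the general idea.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient,
    zeroing the rest, so far fewer values need to be transmitted."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.9, -0.05, 0.02, -1.2, 0.3])
print(topk_compress(g, 2))  # [ 0.9  0.   0.  -1.2  0. ]
```

In a full algorithm, each worker would also accumulate the dropped residual locally and add it back in later rounds, which is what keeps compressed methods convergent.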