Representation and Learning in Graph Neural Networks
Graph Neural Networks (GNNs) have become a popular tool for learning representations of graph-structured inputs, with applications in computational chemistry, recommendation systems, pharmacology, reasoning, and many other areas.
After a brief introduction to Graph Neural Networks, this talk will present recent results on representational power and learning in GNNs. First, we study the discriminative power of message-passing networks as a function of architectural choices, and, in the process, find that some popular architectures cannot learn to distinguish certain simple graph structures. Second, while many network architectures can represent a given task, some learn it better than others. Using reasoning tasks as an example, we formalize the interaction between the network architecture and the structure of the task, and probe its effect on learning. Third, we analyze learning via new generalization bounds for GNNs.
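As a minimal sketch of the first result (an illustration under this editor's assumptions, not material from the talk): mean and max neighborhood aggregation discard multiplicities in the multiset of neighbor features, so two nodes with different neighborhoods can receive identical messages, whereas sum aggregation, as used in injective-aggregation architectures such as GIN, keeps them apart. The feature values below are hypothetical and chosen only to exhibit the collapse.

    import numpy as np

    # Hypothetical neighbor-feature multisets for two nodes: {1.0, 1.0} vs. {1.0}.
    neighbors_a = np.array([1.0, 1.0])
    neighbors_b = np.array([1.0])

    # Mean aggregation maps both multisets to the same message, so a
    # message-passing layer built on it cannot tell these nodes apart.
    print(neighbors_a.mean(), neighbors_b.mean())  # -> 1.0 1.0

    # Sum aggregation preserves multiplicity and separates the two multisets.
    print(neighbors_a.sum(), neighbors_b.sum())    # -> 2.0 1.0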
Bio: Stefanie Jegelka is an X-Window Consortium Career Development Associate Professor in the Department of EECS at MIT, and a member of the Computer Science and AI Lab (CSAIL). Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, and a Best Paper Award at ICML. Her research interests span the theory and practice of algorithmic machine learning, including discrete and continuous optimization, discrete probability, and learning with structured data.