KDD 2022 Deep Learning Day

The impact of deep learning on the world has been nothing short of transformative. Powered by the surge in modern compute capacity, widespread data availability, and advances in software frameworks, deep neural networks are now ubiquitous. Deep methods yield state-of-the-art performance across domains (e.g., computer vision, speech recognition and generation, natural language processing and understanding, recommendation systems), and their lead continues to widen as more advanced methods are developed and new applications emerge. There is currently strong interest in both theoretical and practical aspects of deep learning: interpretable and transparent theory and models that help explain the empirical successes enjoyed by many real-world applications, as well as data and compute efficiency, reliability, robustness and safety, privacy, ethical considerations, and more.

At KDD 2022, the Deep Learning Day is a key event dedicated to the impact of deep learning on data science. Its goal is to provide a broad overview of recent developments in deep learning, including emerging topics that deserve more attention. The Deep Learning Day will feature plenary keynotes by thought leaders, short presentations by junior researchers from the deep learning and data mining communities, and half-day workshops focusing on emerging deep learning topics at KDD.

The KDD Deep Learning Day 2022 will take place on Aug 15, 8:00AM-5:00PM ET. On behalf of the Deep Learning Day and KDD 2022 organizing committees, we welcome you all to attend this event!

Final Schedule

  • 8:00AM - Opening remarks
  • 8:05AM - Jennifer Neville (Microsoft Research / Purdue University), TBD
  • 8:40AM - George Karypis (AWS AI / University of Minnesota), Graph Neural Network Research at AWS AI
  • 9:15AM - Junior researcher spotlights by
    • Mengdi Huai (Iowa State University), Building Trust in Machine Learning via Automatic and Robust Explanations
    • Han Xu (Michigan State University), Towards Fairness in Adversarial Robust DNNs
    • Lu Lin (Penn State University), Trustworthy Machine Learning On Graph-Structured Data
  • 9:30AM - Coffee Break
  • 10:00AM - Panel
    • Panelists: George Karypis, Yujia Li, Marinka Zitnik
    • Moderator: Jennifer Neville
  • 10:45AM - Marinka Zitnik (Harvard University), Infusing Structure and Knowledge into Biomedical AI
  • 11:20AM - Yujia Li (DeepMind), Competitive Programming with AlphaCode
  • 11:55AM - Closing Remarks (for the plenary session)
  • 12:00PM - Lunch
  • 1:00PM - 5:00PM - Deep Learning Day Workshops
    • The 3rd KDD Workshop on Deep Learning for Spatiotemporal Data, Applications, and Systems (DeepSpatial’22)
    • The 4th Workshop on Adversarial Learning Methods for Machine Learning and Data Mining

Speakers

Jennifer Neville
Bio: Jennifer is a Senior Principal Researcher at Microsoft Research in Redmond and the Samuel Conte Chair Professor of Computer Science and Statistics at Purdue University. Her research focuses on developing data mining and machine learning techniques for complex relational and network domains, including social, information, and physical networks. This work has produced over 100 publications with 10K citations. Her awards include an NSF CAREER Award (2012), ICDM Best Paper (2009), and selection as one of IEEE Intelligent Systems’ “AI’s 10 to Watch” (2008). She was an elected member of the AAAI Executive Council from 2015 to 2018 and a member of the DARPA Computer Science Study Group in 2007-2008. She was PC chair of the SIAM International Conference on Data Mining in 2019 and the ACM International Conference on Web Search and Data Mining in 2016.
George Karypis
Bio: George Karypis is a Senior Principal Scientist at AWS AI and a Distinguished McKnight University Professor and ADC Chair of Digital Technology in the Department of Computer Science & Engineering at the University of Minnesota. His research interests span data mining, machine learning, high performance computing, information retrieval, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. His research has resulted in software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), parallel Cholesky factorization (PSPASES), collaborative filtering-based recommendation algorithms (SUGGEST), clustering of high-dimensional datasets (CLUTO), finding frequent patterns in diverse datasets (PAFI), and protein secondary structure prediction (YASSPP). He has coauthored over 300 papers on these topics and two books, “Introduction to Protein Structure Prediction: Methods and Algorithms” (Wiley, 2010) and “Introduction to Parallel Computing” (Addison-Wesley, 2nd edition, 2003). In addition, he serves on the program committees of many conferences and workshops on these topics, and on the editorial boards of IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Knowledge Discovery from Data, Data Mining and Knowledge Discovery, Social Network Analysis and Mining, International Journal of Data Mining and Bioinformatics, Current Proteomics, Advances in Bioinformatics, and Biomedicine and Biotechnology. He is a Fellow of the IEEE.
Marinka Zitnik
Bio: Marinka Zitnik is an Assistant Professor at Harvard University, with appointments in the Department of Biomedical Informatics, the Broad Institute of MIT and Harvard, and Harvard Data Science. Dr. Zitnik investigates graph representation learning, including pre-trained, self-supervised, multi-purpose, and multi-modal models trained on broad data at scale. Dr. Zitnik has published extensively in top ML venues and leading scientific journals. Her research has won best paper and research awards from the International Society for Computational Biology, the Bayer Early Excellence in Science Award, an Amazon Faculty Research Award, the Roche Alliance with Distinguished Scientists Award, a Rising Star Award in EECS, and a Next Generation in Biomedicine recognition, making her the only young scientist to receive such recognition in both EECS and Biomedicine. Dr. Zitnik has organized numerous workshops and meetings at the nexus of deep learning, drug discovery, and biomedical AI at NeurIPS, ICLR, ICML, ISMB, AAAI, IJCAI, and WWW, where she also serves on organizing committees.
Yujia Li
Bio: Yujia Li is a research scientist at DeepMind. He received his Ph.D. from the University of Toronto and joined DeepMind in 2016. His research interests cover structured models for structured data, for example graph neural networks, and more recently large language models, in particular for code. He led DeepMind's recent AlphaCode project, in which a code generation model reached the level of the average human competitor in programming competitions.
Mengdi Huai
Bio: Mengdi Huai is an Assistant Professor in the Department of Computer Science at Iowa State University. She received her PhD in Computer Science from the University of Virginia in 2022. Her research interests lie in machine learning and data mining, with a current focus on developing novel techniques to build trustworthy learning systems that are explainable, robust, private, and fair. Mengdi is also interested in designing effective machine learning and data mining algorithms for complex data, with both strong empirical performance and theoretical guarantees. Her work has been published in top-tier venues such as KDD, AAAI, IJCAI, NeurIPS, and TKDD. Mengdi was selected as a Rising Star in EECS by MIT and a Rising Star in Data Science by UChicago in 2021.
Han Xu
Bio: Han Xu is a Ph.D. student in Computer Science and Engineering at Michigan State University. Before joining MSU, he received his master’s degree in Applied Statistics from the University of Michigan. His current research interests lie in adversarial attacks and defenses and their applications to various deep learning tasks. He is one of the main contributors to DeepRobust, a PyTorch library for adversarial learning that supports researchers in the field. He has several publications in top conferences and journals such as ICML, NeurIPS, SDM, and KDD Explorations.
Lu Lin
Bio: Lu Lin will join the College of Information Sciences and Technology at Penn State University as a tenure-track assistant professor in Fall 2022. She received her doctorate in computer science from the University of Virginia in 2022, during which time she also interned at LinkedIn and Pinterest. She holds bachelor’s and master's degrees in computer science from Beihang University. Her research interests broadly include machine learning and data mining, with an emphasis on modeling relational data such as graphs and networks. Her current focus is trustworthy self-supervised machine learning on large-scale graph-structured data, covering multiple aspects including robustness, fairness, and efficiency. Her papers have been published in high-impact machine learning, data mining, and AI venues such as ICML, AISTATS, KDD, WWW, WSDM, and TKDE.

Organizers

Chandan Reddy
Virginia Tech
Danai Koutra
University of Michigan / Amazon

Workshops



The 3rd KDD Workshop on Deep Learning for Spatiotemporal Data, Applications, and Systems (DeepSpatial’22)

Organizers: Zhe Jiang, Liang Zhao, Xun Zhou, Robert Stewart, Junbo Zhang, Shashi Shekhar, Jieping Ye
Description: Significant advances in software and hardware technologies have fueled rapid progress in both spatial computing and deep learning. Recent deep learning breakthroughs have shown outstanding performance on data with spatial and temporal structure in domains such as images, audio, and video. Meanwhile, advances in sensing and data collection have accumulated spatiotemporal data at large scale over the years, creating unprecedented opportunities to discover macro- and micro-scale spatiotemporal phenomena accurately and precisely. The complementary strengths and challenges of spatiotemporal data computing and deep learning call for bringing together experts from these two domains at prestigious venues, something that has so far been missing.

This workshop will provide a premier platform for both researchers and industry practitioners to exchange ideas on the opportunities, challenges, and cutting-edge techniques of deep learning for spatiotemporal data, applications, and systems.



The 4th Workshop on Adversarial Learning Methods for Machine Learning and Data Mining

Organizers: Pin-Yu Chen, Cho-Jui Hsieh, Bo Li, Sijia Liu
Description: In recent years, adversarial learning methods have been shown to be a key technique behind exciting breakthroughs, and new challenges, in many machine learning and data mining tasks. Examples include improved training of generative models (e.g., generative adversarial nets), adversarial robustness of machine learning systems in different domains (e.g., adversarial attacks, defenses, and property verification), and robust representation learning (e.g., adversarial losses for learning embeddings), to name a few. Generally speaking, the idea of “learning with an adversary” is crucial for expanding learning capability, ensuring trustworthy decision making, and enhancing the generalizability of machine learning and data mining methods.
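
For intuition, the following minimal sketch shows one common flavor of “learning with an adversary”: an adversarial-training step that perturbs each input with an FGSM-style signed-gradient attack before computing the training loss. It assumes PyTorch; the model, optimizer, data batch, and epsilon are illustrative placeholders rather than part of the workshop material.

```python
# Minimal adversarial-training sketch (PyTorch assumed; model, optimizer, and epsilon are placeholders).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example with one signed-gradient (FGSM-style) step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid input range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One 'learning with an adversary' update: train the model on adversarially perturbed inputs."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()          # discard gradients accumulated while crafting the perturbation
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, stronger multi-step attacks (e.g., PGD) are typically used for adversarial training, but the structure, an inner perturbation step followed by an outer model update, stays the same.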

This workshop also aims to bridge theory and practice by encouraging theoretical studies motivated by adversarial ML/DM problems, such as robust (minimax) optimization and game theory.