Organizing Committee
- Ben Adcock, Simon Fraser University
- Simone Brugiapaglia, Concordia University
- Anders Hansen, University of Cambridge
- Clayton Webster, University of Texas
Abstract
Deep learning is profoundly reshaping the research directions of entire scientific communities across mathematics, computer science, and statistics, as well as the physical, biological, and medical sciences. Yet, despite its indisputable success, deep neural networks are known to be universally unstable: small, almost undetectable changes in the input can produce significant changes in the output. This happens in applications such as image recognition and classification, speech and audio recognition, automatic diagnosis in medicine, image reconstruction and medical imaging, as well as inverse problems in general. The phenomenon is now very well documented and yields non-human-like behaviour of neural networks in the cases where they replace humans, and unexpected and unreliable behaviour where they replace standard algorithms in the sciences.
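To make the instability concrete, here is a minimal sketch (not from the workshop materials) using the standard fast gradient sign method of Goodfellow et al.: a perturbation bounded by a small eps in the sup-norm, computed from one gradient of the loss, is often enough to change a network's prediction. The toy model, input dimension, and budget are placeholder assumptions.

```python
# Hedged sketch: a tiny, norm-bounded input perturbation can change a
# network's prediction (FGSM construction; all sizes are illustrative).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))  # toy classifier
x = torch.randn(1, 20, requires_grad=True)       # a "clean" input
pred = model(x).argmax(dim=1)                    # the model's prediction on x

loss = nn.CrossEntropyLoss()(model(x), pred)     # loss at the predicted label
loss.backward()                                  # gradient w.r.t. the input

eps = 0.1                                        # sup-norm perturbation budget
x_adv = x + eps * x.grad.sign()                  # one ascent step on the loss
print(pred.item(), model(x_adv).argmax(dim=1).item())  # frequently disagree
```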
The many examples produced in recent years demonstrate the intricacy of this complex problem, and questions about the safety and security of deep learning have become crucial. Moreover, the ubiquity of instability, combined with the lack of interpretability of deep neural networks, puts the reproducibility of scientific results based on deep learning at stake.
For these reasons, the development of mathematical foundations aimed at improving the safety and security of deep learning is of key importance. The goal of this workshop is to bring together experts from mathematics, computer science, and statistics in order to accelerate the exploration of emerging mathematical ideas and potential breakthroughs in this area.
This workshop is fully funded by a Simons Foundation Targeted Grant to Institutes.
Confirmed Speakers & Participants
Talks will be presented virtually or in-person as indicated in the schedule below.
- Ben Adcock, Simon Fraser University
- Ibrahim Olalekan Alabi, Boise State University
- Elie Alhajjar, US Military Academy
- Genevera Allen, Rice University
- Jenny Baglivo, Boston College
- Nicola Bastianello, University of Padova
- Getachew Befekadu, Morgan State University
- Aaron Berk, University of British Columbia
- Alex Bespalov, University of Birmingham
- Ghanshyam Bhatt, Tennessee State University
- Shivam Bhatt, University of Toronto
- Ralph Bording, Alabama A&M University
- Nicolas Boulle, University of Oxford
- Elisa Bravo, Florida International University
- Simone Brugiapaglia, Concordia University
- Emmanuel Candes, Stanford University
- Aaron Charous, MIT
- Ke Chen, University of Liverpool
- Janhavi Chitale, Florida International University
- Matthew Colbrook, University of Cambridge
- Rachel Cummings, Columbia University
- Marta D'Elia, Sandia National Laboratories, NM
- Anil Damle, Cornell University
- Ronald DeVore, Texas A&M University
- Nick Dexter, Simon Fraser University
- George Dulikravich, Florida International University
- Thomas Fel, Artificial and Natural Intelligence Toulouse Institute, Brown University
- Marija Furdek, Chalmers University of Technology
- Gustavo Gasperazzo, Federal University of Rio de Janeiro
- Horacio Gomez-Acevedo, University of Arkansas for Medical Sciences
- Pedro González Rodelas, University of Granada
- Zach Grey, National Institute of Standards and Technology
- Suman Guha, Presidency University
- Anders Hansen, University of Cambridge
- Fengxiang He, The University of Sydney
- Kaveh Heidary, Alabama A&M University
- Fred Hickernell, Illinois Institute of Technology
- James Hyman, Tulane University
- Rajesh Jha, Florida International University
- Frederic Jurie, SAFRAN
- Avleen Kaur, University of Manitoba
- Abdul Khaliq, Middle Tennessee State University
- Tamara Kolda, Sandia National Labs
- Boris Krämer, University of California San Diego
- Amit Kumar, Indian Institute of Technology Kharagpur
- Gitta Kutyniok, LMU Munich
- Henry Kvinge, Pacific Northwest National Lab
- Christopher Lehnig, San Diego State University
- Wenyuan Liao, University of Calgary
- En-Bing Lin, Central Michigan University
- Jie Long, Middle Tennessee State University
- Kathryn Lund, Unaffiliated
- Aleksander Madry, Massachusetts Institute of Technology
- Jodi Mead, Boise State University
- Kevin Miller, University of California, Los Angeles
- Pablo Moriano, Oak Ridge National Laboratory
- Marshall Mueller, Tufts University
- Reshma Munbodh, Alpert Medical School of Brown University
- Basim Mustafa, University of Granada
- Evangelos Nastas, SUNY
- Linda Ness, Rutgers University
- Maksym Neyra-Nesterenko, Simon Fraser University
- Shobhit Nigam, Pandit Deendayal Petroleum University Gandhinagar
- Evi Ofekeze, Boise State University
- Helcio Orlande, Federal University of Rio de Janeiro (UFRJ)
- Jun Sur Park, University of Iowa
- Vivak Patel, University of Wisconsin-Madison
- Sandhya Prabhakaran, Moffitt Cancer Center
- Jing Qin, University of Kentucky
- Jason Quinones, Gallaudet University
- Viktor Reshniak, Oak Ridge National Laboratory
- Jacob Rezac, National Institute of Standards and Technology
- Cynthia Rudin, Duke University
- Quratulan Sabir, National College of Business Administration and Economics
- Tarik Sahin, Bundeswehr University Munich
- Giovanni Samaey, KU Leuven
- Ruchi Sandilya, TIFR Centre for Applicable Mathematics
- Thomas Serre, Brown University
- Qin Sheng, Baylor University
- Yeonjong Shin, Brown University
- Mansi Sood, Carnegie Mellon University
- Varsha Srivastava, Quantum Integrators Group LLC
- Ömer Sümer, University of Tübingen
- Li-yeng Sung, Louisiana State University
- Thomas Torku, Middle Tennessee State University
- Alex Townsend, Cornell University
- Ivan Tyukin, University of Leicester
- Marilyn Vazquez Landrove, Ohio State University
- Abhinav Verma, The University of Texas at Austin
- Clayton Webster, University of Texas
- Colby Wight, Pacific Northwest National Laboratory
- Joab Winkler, Sheffield University
- Eliyas Woldegeorgis, University of Leicester
- Karamatou Yacoubou Djima, Amherst College
- Masanao Yajima, Boston University
- Ming Yan, Michigan State University
- Yunan Yang, New York University
- Vladimir Yushutin, University of Maryland
- Vasilis Zafiris, University of Houston-Downtown
- Longbin Zhang, KTH Royal Institute of Technology
- Bentuo Zheng, University of Memphis
- Philipp Zilk, Bundeswehr University Munich
Workshop Schedule
Saturday, April 10, 2021
- 10:00 - 10:15 am EDT: Welcome (Virtual)
- Brendan Hassett, ICERM/Brown University
- 10:15 - 10:55 am EDT: An Information Theoretic Approach to Validate Deep Learning-Based Algorithms (Virtual)
- Speaker: Gitta Kutyniok, LMU Munich
- Session Chair: Simone Brugiapaglia, Concordia University (Virtual)
Abstract
In this talk, we provide a theoretical framework for interpreting neural network decisions by formalizing the problem in a rate-distortion framework. The solver of the associated optimization problem, which we coin Rate-Distortion Explanation (RDE), is then amenable to mathematical analysis. We will discuss theoretical results as well as present numerical experiments showing that our algorithmic approach outperforms established methods, in particular for sparse explanations of neural network decisions.
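As a hedged illustration of the rate-distortion idea behind RDE (the actual algorithm analyzed in the talk is more refined; the toy network, the Gaussian fill-in distribution, and the weight lam are assumptions), one can relax the binary mask and trade off output distortion against mask sparsity, the "rate":

```python
# Hedged sketch of a rate-distortion-style explanation: find a sparse
# mask s so that keeping only the masked inputs (noise elsewhere)
# barely changes the network output.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))  # stand-in network
x = torch.randn(20)
target = model(x).detach()                        # output to be preserved

s = torch.full((20,), 0.5, requires_grad=True)    # relaxed mask in [0, 1]
opt = torch.optim.Adam([s], lr=0.05)
lam = 0.05                                        # rate-distortion trade-off
for _ in range(300):
    opt.zero_grad()
    m = s.clamp(0, 1)
    obfuscated = m * x + (1 - m) * torch.randn(20)       # noise outside the mask
    distortion = (model(obfuscated) - target).pow(2).sum()
    (distortion + lam * m.sum()).backward()              # distortion + "rate" penalty
    opt.step()

print(s.clamp(0, 1).detach().round())             # near-binary: which inputs mattered
```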
- 11:55 am - 1:30 pm EDT: Lunch/Free Time (Virtual)
- 1:30 - 2:10 pm EDT: Data Matters in Robust ML (Virtual)
- Speaker: Aleksander Madry, Massachusetts Institute of Technology
- Session Chair: Clayton Webster, University of Texas (Virtual)
- 2:20 - 3:00 pm EDT: Breaking into a Deep Learning box (Virtual)
- Speaker: Ivan Tyukin, University of Leicester
- Session Chair: Clayton Webster, University of Texas (Virtual)
Abstract
The last decade has brought explosive progress in the application of Machine Learning and data-driven Artificial Intelligence (AI) to real-life problems across sectors. Autonomous cars and automated passport control are examples of the new reality. Deep Learning models, or more generally models with multiple learnable processing stages, constitute a large class of models to which a significant part of these recent successes has been apportioned. Notwithstanding these successes, there are emerging challenges too. In this talk we will discuss a set of vulnerabilities which may typically arise in large Deep Learning models. These vulnerabilities are extreme sensitivities of the models to perturbations of data or structure. We will present a formal theoretical framework for assessing and analysing two classes of such vulnerabilities. The first class is linked with adversarial examples. Vulnerabilities of the second class are linked with purposeful malicious structure perturbations which may, with high probability, be undetectable through input-output validation. We name these perturbations “stealth attacks”. We will show how to construct stealth attacks on Deep Learning models that are hard to spot unless the validation set is made exponentially large. For both classes of attacks, the high dimensionality of the AI’s decision-making space appears to be a major contributor to the AI’s vulnerability. We conclude with recommendations for how vulnerability to malicious perturbations of data and structure can be mitigated by ensuring that the data dimensionality at relevant processing stages in Deep Learning models is kept sufficiently small.
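To make the high-dimensionality point concrete, here is a hedged NumPy sketch in the spirit of a stealth attack (the construction in the talk may differ; dimensions and sample sizes are illustrative): a single added ReLU neuron is silent on every validation input yet fires on the attacker's trigger, precisely because random high-dimensional unit vectors are nearly orthogonal.

```python
# Hedged sketch: one implanted neuron, invisible to input-output validation.
import numpy as np

rng = np.random.default_rng(1)
d = 200                                            # high input dimension
V = rng.normal(size=(10_000, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)      # validation inputs on the unit sphere
t = rng.normal(size=d)
t /= np.linalg.norm(t)                             # attacker's trigger input

# Implanted neuron: relu(<t, x> - b), with b between the largest
# correlation of any validation point with t (small when d is large)
# and <t, t> = 1.
b = ((V @ t).max() + 1.0) / 2.0
relu = lambda z: np.maximum(z - b, 0.0)

print(relu(V @ t).max())                           # 0.0: silent on all validation data
print(relu(t @ t))                                 # > 0: fires on the trigger
```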
- 3:10 - 4:00 pm EDT: Gathertown Afternoon Coffee Break (Virtual)
Sunday, April 11, 2021
- 9:10 - 9:50 am EDT: Deep Learning and Neural Networks: The Mathematical View (Virtual)
- Speaker: Ronald DeVore, Texas A&M University
- Session Chair: Anders Hansen, University of Cambridge (Virtual)
Abstract
Deep Learning is much publicized and has had great empirical success on challenging problems in learning. Yet there are no quantifiable proofs of performance or certified guarantees for these methods. This talk will give an overview of Deep Learning from the viewpoint of mathematics and numerical computation.
- 10:00 - 10:30 am EDT: Gathertown Morning Coffee Break (Virtual)
- 10:30 - 11:10 am EDT: Can we design deep learning models that are inherently interpretable? (Virtual)
- Speaker: Cynthia Rudin, Duke University
- Session Chair: Anders Hansen, University of Cambridge (Virtual)
Abstract
Black box deep learning models are difficult to troubleshoot. In practice, it can be difficult to tell whether their reasoning process is correct, and "explanations" have repeatedly been shown to be ineffective. In this talk I will discuss two possible approaches to creating deep learning methods that are inherently interpretable. The first is to use case-based reasoning, through a neural architecture called ProtoPNet, where an extra "prototype" layer in the network allows it to reason about an image based on how similar it looks to other images (the network says "this looks like that"). Second, I will describe "concept whitening," a method for disentangling the latent space of a neural network by decorrelating concepts in the latent space and aligning them along the axes.
References:
- This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS (spotlight), 2019. https://arxiv.org/abs/1806.10574
- Concept Whitening for Interpretable Image Recognition. Nature Machine Intelligence, 2020. https://rdcu.be/cbOKj
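As a hedged sketch of the prototype idea (heavily simplified from the ProtoPNet paper referenced above; the feature extractor is omitted and all dimensions are placeholders), the layer below scores an image by the similarity of its best-matching patch to each learned prototype, so every class score is traceable to a "this looks like that" comparison:

```python
# Hedged sketch of a ProtoPNet-style prototype layer.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, n_protos, dim):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_protos, dim))

    def forward(self, feats):                      # feats: (batch, patches, dim)
        d2 = ((feats.unsqueeze(2) - self.prototypes) ** 2).sum(-1)  # squared distances
        sim = torch.log((d2 + 1) / (d2 + 1e-4))    # large when a patch is close
        return sim.max(dim=1).values               # best-matching patch per prototype

feats = torch.randn(4, 49, 128)                    # e.g. a 7x7 grid of conv features
scores = PrototypeLayer(n_protos=10, dim=128)(feats)
logits = nn.Linear(10, 5)(scores)                  # prototype evidence -> class scores
print(logits.shape)                                # torch.Size([4, 5])
```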
- 11:20 am - 12:00 pm EDT: Differential privacy, deep learning, and synthetic data generation (Virtual)
- Speaker: Rachel Cummings, Columbia University
- Session Chair: Anders Hansen, University of Cambridge (Virtual)
Abstract
Differential privacy is a parameterized notion of database privacy that gives a mathematically rigorous worst-case bound on the maximum amount of information that can be learned about an individual's data from the output of a computation. Recent work has provided tools for differentially private stochastic gradient descent, which enables differentially private deep learning. These in turn enable differentially private synthetic data generation: providing synthetic versions of sensitive datasets that share statistical properties with the original data while additionally providing formal privacy guarantees for the training dataset. This talk will first give an introduction to differential privacy and then survey recent advances in differentially private deep learning and its application to synthetic data generation.
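A hedged sketch of the DP-SGD update the abstract alludes to (Abadi et al.'s construction; the clip norm, noise multiplier, and learning rate below are illustrative, and a real implementation would also track the cumulative privacy loss): clip each per-example gradient, average, and add Gaussian noise calibrated to the clip norm.

```python
# Hedged sketch of one differentially private SGD step.
import torch

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    clipped = []
    for grads in per_example_grads:                    # grads: one tensor per parameter
        total = torch.sqrt(sum((g ** 2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
        clipped.append([g * scale for g in grads])     # per-example norm <= clip_norm
    n = len(per_example_grads)
    with torch.no_grad():
        for j, p in enumerate(params):
            avg = sum(c[j] for c in clipped) / n
            noise = noise_mult * clip_norm / n * torch.randn_like(p)
            p -= lr * (avg + noise)                    # noisy, clipped gradient step

w = torch.zeros(3, requires_grad=True)                 # toy parameter
grads = [[torch.tensor([3.0, 0.0, 0.0])], [torch.tensor([0.0, 2.0, 0.0])]]
dp_sgd_step([w], grads)
print(w)
```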
- 12:10 - 1:30 pm EDT: Lunch/Free Time (Virtual)
- 1:30 - 2:10 pm EDT: Reliability, Robustness and Minipatch Learning (Virtual)
- Speaker: Genevera Allen, Rice University
- Session Chair: Ben Adcock, Simon Fraser University (Virtual)
Abstract
Many have noted and lamented a reproducibility crisis in science, with more recent discussion and interest focusing on the reproducibility and reliability of data science and machine learning techniques. In this talk, I will introduce the Four R's, a tiered framework for discussing and assessing the reproducibility, replicability, reliability, and robustness of a data science or machine learning pipeline. Then, I will introduce a new minipatch learning framework that helps to improve the reliability and robustness of machine learning procedures. Inspired by stability approaches from high-dimensional statistics, random forests, and dropout training in deep learning, minipatch learning is an ensemble approach where we train on very tiny, randomly or adaptively chosen subsets of both observations and features or parameters. Beyond the obvious computational and memory-efficiency advantages, we show that minipatch learning also yields more reliable and robust solutions by providing implicit regularization.
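A minimal sketch of the minipatch idea under stated assumptions (the base learner, patch sizes, and synthetic data are placeholder choices, and the adaptive sampling mentioned above is omitted): train many tiny learners on random subsets of both observations and features, then ensemble.

```python
# Hedged sketch of minipatch learning: ensemble over tiny random
# row-and-column subsets ("minipatches") of the data matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # synthetic labels

def fit_minipatches(X, y, n_patches=200, n_obs=30, n_feat=10):
    models = []
    for _ in range(n_patches):
        rows = rng.choice(X.shape[0], n_obs, replace=False)   # tiny observation subset
        cols = rng.choice(X.shape[1], n_feat, replace=False)  # tiny feature subset
        models.append((cols, LogisticRegression().fit(X[np.ix_(rows, cols)], y[rows])))
    return models

def predict(models, Xnew):
    probs = np.mean([m.predict_proba(Xnew[:, cols])[:, 1] for cols, m in models], axis=0)
    return (probs > 0.5).astype(int)

models = fit_minipatches(X, y)
print((predict(models, X) == y).mean())            # ensemble accuracy on training data
```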
- 2:20 - 3:00 pm EDT: Reliable predictions? Counterfactual predictions? Equitable treatment? Some recent progress in predictive inference (Virtual)
- Speaker: Emmanuel Candes, Stanford University
- Session Chair: Ben Adcock, Simon Fraser University (Virtual)
Abstract
Recent progress in machine learning provides us with many potentially effective tools to learn from datasets of ever-increasing sizes and make useful predictions. How do we know that these tools can be trusted in critical and high-sensitivity systems? If a learning algorithm predicts the GPA of a prospective college applicant, what guarantees do I have concerning the accuracy of this prediction? How do we know that it is not biased against certain groups of applicants? This talk introduces statistical ideas to ensure that the learned models satisfy some crucial properties, especially reliability and fairness (in the sense that the models need to apply to individuals in an equitable manner). To achieve these important objectives, we shall not “open up the black box” and try to understand its underpinnings. Rather, we discuss broad methodologies that can be wrapped around any black box to produce results that can be trusted and are equitable. We also show how our ideas can inform predictive causal inference; for instance, we will answer counterfactual prediction problems, i.e., predict what the outcome of a treatment would have been given that the patient was actually not treated.
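One family of "wrap around any black box" methodologies in this spirit is conformal prediction; below is a hedged sketch of split conformal prediction intervals (the specific methods discussed in the talk may differ; the quadratic model and synthetic data are placeholders). Whatever the underlying predictor, the interval covers the truth with probability about 1 - alpha under exchangeability.

```python
# Hedged sketch of split conformal prediction around an arbitrary regressor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=1000)
y = X ** 2 + rng.normal(scale=0.3, size=1000)

fit, cal = slice(0, 500), slice(500, 1000)         # fitting / calibration split
coef = np.polyfit(X[fit], y[fit], deg=2)           # any black-box model would do
predict = lambda x: np.polyval(coef, x)

alpha = 0.1                                        # target 90% coverage
scores = np.abs(y[cal] - predict(X[cal]))          # calibration residuals
k = int(np.ceil((1 - alpha) * (scores.size + 1)))  # conformal quantile index
q = np.sort(scores)[k - 1]

x0 = 1.0
print(predict(x0) - q, predict(x0) + q)            # distribution-free interval at x0
```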
- 3:10 - 4:00 pm EDT: Gathertown Afternoon Coffee Break (Virtual)
All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).