Organizing Committee
Anna Gilbert, Yale University
Roy Lederman, Yale University
Gilad Lerman, University of Minnesota
Per-Gunnar Martinsson, University of Texas at Austin
Andrea Nahmod, University of Massachusetts Amherst
Kirill Serkh, University of Toronto
Christoph Thiele, University of Bonn
Sijue Wu, University of Michigan
Abstract
The mathematical and computational toolbox for modern experimental and engineering problems has become more diverse than ever before, with a flurry of new challenges in inverse problems and successful practical solutions that present further theoretical questions. In the spirit of the 2012 “Challenges in Geometry, Analysis, and Computation: High-Dimensional Synthesis” workshop at Yale, the “Modern Applied and Computational Analysis” workshop will be a celebration of different perspectives on inverse problems, models, inference, and harmonic analysis and a debate about the challenges and opportunities in the next decade of applied analysis. The topics include inverse problems, randomized linear algebra, machine learning in applied analysis, and tensor networks.
The organizers would like to thank James Bremer, Ronald Coifman, Jingfang Huang, Peter Jones, Mauro Maggioni, Yair Minsky, Vladimir Rokhlin, Wilhelm Schlag, John Schotland, Amit Singer, Stefan Steinerberger, and Mark Tygert for their help.
Confirmed Speakers & Participants
Talks will be presented virtually or in-person as indicated in the schedule below.

Ian Adelstein
Yale University

Yariv Aizenbud
Yale University

Bradley Alpert
NIST

Noah Amsel
New York University

Travis Askham
New Jersey Institute of Technology

Amir Averbuch
Tel Aviv University

Francoise Axel
Yale University

Demba Ba
Harvard University

Ronen Basri
Weizmann Institute of Science

Gregory Beylkin
University of Colorado Boulder

Chris Bishop
Stony Brook University

Carlos Borges
University of Central Florida

Leon Bottou
Meta

Jeffrey Brock
Yale University

James Brofos
SESCO Enterprises LLC

Satish Chandran
University of California Riverside

Dongwei Chen
Clemson University

Xiuyuan Cheng
Duke University

Alexander Cloninger
University of California at San Diego

Ronald Coifman
Yale University

Mark Comerford
University of Rhode Island

Zizai Cui
Yale University

Steve Damelin
Mathematical Scientist, Ann Arbor, MI

Guy David
Université Paris-Saclay

Edward De Brouwer
Yale University

Tiago dos Santos Domingues
Yale University

Ran Duan
PIMCO

Dominique Duncan
University of Southern California

Fariba Fahroo
AFOSR

Dami Fasina
Yale University

Anna Gilbert
Yale University

Zydrunas Gimbutas
NIST

Alex Gittens
Rensselaer Polytechnic Institute

Alexandra Golovin
Duke University

María González
Universidad de Cádiz

Tristan Goodwill
Courant Institute of Mathematical Sciences

Abinand Gopal
Yale University

Sigal Gottlieb
University of Massachusetts Dartmouth

Leslie Greengard
New York University

Philip Greengard
Columbia University

F. Alberto Grünbaum
University of California, Berkeley

Diana Halikias
Cornell University

Ashlin Harris
Brown University

Jeremy Hoskins
University of Chicago

Jingfang Huang
University of North Carolina at Chapel Hill

Jonas Katona
Yale University

Isay Katsman
Yale University

Yannis Kevrekidis
Johns Hopkins University

Bahram Khalichi
New York University

Yuehaw Khoo
The University of Chicago

Yuval Kluger
Yale University

Brian Knight
UC Davis

Dhruv Kohli
University of California San Diego

Shahar Kovalsky
Duke University

Smita Krishnaswamy
Yale University

Dan Kushnir
Bell Laboratories, Nokia

Boris Landa
Yale University

Roy Lederman
Yale University

William Leeb
University of Minnesota, Twin Cities

Gilad Lerman
University of Minnesota

Sivan Leviyang
Georgetown University

Lin Lin
University of California, Berkeley

Ya-Wei Eileen Lin
Technion - Israel Institute of Technology

Ofir Lindenbaum
Bar-Ilan University

Sheng Liu
New York University

Yiping Lu
Stanford University

Paul MacManus
Charles River Development

Mauro Maggioni
Johns Hopkins University

Nicholas Marshall
Oregon State University

Per-Gunnar Martinsson
University of Texas at Austin

Ian Oliver McPherson
Johns Hopkins University

Michelle Michelle
Purdue University

Eric Michielssen
University of Michigan

Gal Mishne
University of California San Diego

Martin Mohlenkamp
Ohio University

Caroline Moosmüller
University of North Carolina at Chapel Hill

Jason Morris
SUNY Brockport

Arje Nachman
Air Force Office of Scientific Research

Sachin Natesh
University of Colorado Boulder

Kevin O'Neill
Yale University

Andrei Osipov
Two Sigma

Maria Pereyra
University of New Mexico

Erez Peterfreund
Yale University

Jacques Peyrière
Université Paris-Saclay

Jill Pipher
Brown University / ICERM

Matthew Piselli
Yale University

Qing Qu
University of Michigan

Neta Rabin
Tel Aviv University

Manas Rachh
Flatiron Institute

Vladimir Rokhlin
Yale University

Amir Sagiv
Columbia University

Naoki Saito
University of California, Davis

John Schotland
Yale University

Raanan Schul
Stony Brook University

Kirill Serkh
University of Toronto

Sarswati Shah
Universidad Nacional Autónoma de México

Uri Shaham
Bar-Ilan University

Zhaiming Shen
University of Georgia

Zewen Shen
University of Toronto

Nan Sheng
University of Chicago

David Silva Sánchez
Yale University

Amit Singer
Princeton University

Stanislav Smirnov
Université de Genève

Stefan Steinerberger
University of Washington

Ronen Talmon
Technion - Israel Institute of Technology

Christoph Thiele
University of Bonn

Bogdan Toader
Yale University

Mark Tygert
Meta Platforms, Inc.

Ignacio Uriarte-Tuero
University of Toronto

Shravan Veerapaneni
University of Michigan

Adrian Vladu
CNRS

Peng Wang
The University of Michigan

Li Wang
University of Minnesota

Zhongjian Wang
The University of Chicago

Haiyang Wang
Flatiron Institute

Tony Wong
ICERM

Bobbie Wu
University of Massachusetts Lowell

Hau-Tieng Wu
Duke University

Xingchi Yan
Harvard University

Fengyu Yang
University of North Carolina at Chapel Hill

Ruiyi Yang
Princeton University

Zhuolun Yang
Brown University

Joon-Hyeok Yim
Yale University

Yukun Yue
Carnegie Mellon University

Hanwen Zhang
Yale University

Wenjun Zhao
Brown University

Mohan Zhao
University of Toronto

Valery Zheludev
Tel Aviv University

Ming Zhong
Johns Hopkins University
Workshop Schedule
Monday, June 26, 2023

8:30 - 8:50 am EDT | Check In | 11th Floor Collaborative Space

8:50 - 9:00 am EDT | Welcome | 11th Floor Lecture Hall
 Brendan Hassett, ICERM/Brown University

9:00 - 9:45 am EDT | Weil-Petersson curves, traveling salesman theorems, and minimal surfaces | 11th Floor Lecture Hall
 Speaker
 Chris Bishop, Stony Brook University
 Session Chairs
 Amir Sagiv, Columbia University
 Raanan Schul, Stony Brook University
Abstract
Weil-Petersson curves are a class of rectifiable closed curves in the plane, defined as the closure of the smooth curves with respect to the Weil-Petersson metric defined by Takhtajan and Teo in 2006. Their work solved a problem from string theory by making the space of closed loops into a Hilbert manifold, but the same class of curves also arises naturally in complex analysis, geometric measure theory, probability theory, knot theory, computer vision, and other areas. No geometric description of Weil-Petersson curves was known until 2019, but there are now more than twenty equivalent conditions. One involves inscribed polygons and can be explained to a calculus student. Another is a strengthening of Peter Jones's traveling salesman condition characterizing rectifiable curves. A third says a curve is Weil-Petersson iff it bounds a minimal surface in hyperbolic 3-space that has finite total curvature. I will discuss these and several other characterizations and sketch why they are all equivalent to each other.
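As an editorial gloss on the traveling-salesman connection (stated in the standard β-number notation; the precise normalizations are assumed here and should be checked against the talk and references): Jones's theorem controls the length of a rectifiable curve by a weighted sum of local flatness numbers, while the Weil-Petersson class is characterized by convergence of the unweighted sum:

    \ell(\Gamma) \asymp \mathrm{diam}(\Gamma) + \sum_{Q} \beta_\Gamma(Q)^2 \,\mathrm{diam}(Q) \quad \text{(rectifiability)}, \qquad \sum_{Q} \beta_\Gamma(Q)^2 < \infty \quad \text{(Weil-Petersson)},

where the sums run over dyadic squares Q and \beta_\Gamma(Q) measures, in a scale-invariant way, how far \Gamma near Q deviates from a straight line.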

10:00 - 10:30 am EDT | Coffee Break | 11th Floor Collaborative Space

10:30 - 11:15 am EDT | New perspectives on inverse problems: stochasticity and Monte Carlo method | 11th Floor Lecture Hall
 Speaker
 Li Wang, University of Minnesota
 Session Chairs
 Amir Sagiv, Columbia University
 Raanan Schul, Stony Brook University
Abstract
In this talk, we introduce two new aspects of inverse problems formulated as PDE-constrained optimization. Firstly, while current approaches assume deterministic parameters, many real-world problems exhibit stochastic behavior. We present a novel approach that treats the PDE solver as a pushforward map to recover the full distribution of unknown random parameters. We introduce a gradient-flow equation to estimate the ground-truth parameter probability distribution. Secondly, as problem dimensions increase, Monte Carlo methods regain relevance. However, directly applying them to gradient-based PDE-constrained optimization poses challenges due to the product of forward and adjoint solutions involving Dirac deltas. We propose strategies to rescue Monte Carlo methods and make them compatible with gradient-based optimization.
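As a toy editorial illustration of the pushforward viewpoint (the forward map F and all parameters below are hypothetical stand-ins, not the speaker's method): particles representing the parameter distribution are moved by a gradient flow so that their pushforward under the "solver" matches the observed output distribution, with the mismatch measured by a kernel MMD.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical forward map standing in for a PDE solver; it pushes the law
    # of the random parameter theta forward to the law of F(theta).
    F = lambda t: t + 0.3 * np.sin(t)

    theta_true = rng.normal(1.0, 0.3, size=100)
    y = F(theta_true)                          # observed output samples

    def mmd2(u, v, h=0.5):
        # Squared Gaussian-kernel maximum mean discrepancy between sample sets.
        k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h**2))
        return k(u, u).mean() + k(v, v).mean() - 2 * k(u, v).mean()

    # Particle gradient flow on the mismatch between the pushforward of the
    # particles' empirical law and the data (finite differences for clarity).
    theta = rng.normal(0.0, 1.0, size=100)
    step, eps = 5.0, 1e-4
    for _ in range(50):
        grad = np.empty_like(theta)
        for i in range(theta.size):
            tp = theta.copy(); tp[i] += eps
            tm = theta.copy(); tm[i] -= eps
            grad[i] = (mmd2(F(tp), y) - mmd2(F(tm), y)) / (2 * eps)
        theta -= step * grad

    print("recovered mean/std:", theta.mean(), theta.std())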

11:30 am - 12:15 pm EDT | Quantum Signal Processing | 11th Floor Lecture Hall
 Speaker
 Lin Lin, University of California, Berkeley
 Session Chairs
 Amir Sagiv, Columbia University
 Raanan Schul, Stony Brook University
Abstract
Quantum Signal Processing (QSP) is a revolutionary technique that uses a product of unitary matrices to represent polynomials, with numerous applications in quantum computing. In this talk, I will introduce QSP in a fashion that does not require prior knowledge of quantum computing. We introduce optimization-based algorithms that can efficiently find the "phase factors" used to represent a given polynomial. We also identify a surprising connection between the smoothness of the target function and the decay properties of a specific branch of the phase factors. References: Y. Dong, L. Lin, H. Ni, J. Wang, Infinite quantum signal processing, arXiv:2209.10162; J. Wang, Y. Dong, L. Lin, On the energy landscape of symmetric quantum signal processing, Quantum 6, 850 (2022); Y. Dong, X. Meng, K. B. Whaley, L. Lin, Efficient phase factor evaluation in quantum signal processing, Phys. Rev. A 103, 042419 (2021).
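As an editorial illustration of the construction (using one common convention for the signal unitary; a sketch, not the speaker's code): the product of single-qubit rotations below realizes a degree-d polynomial in x as a matrix element, and all-zero phase factors reproduce the Chebyshev polynomial T_d.

    import numpy as np

    def qsp_poly(phis, x):
        # Phase rotations e^{i phi Z} interleaved with the signal unitary W(x).
        e_z = lambda phi: np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
        W = np.array([[x, 1j * np.sqrt(1 - x**2)],
                      [1j * np.sqrt(1 - x**2), x]])
        U = e_z(phis[0])
        for phi in phis[1:]:
            U = U @ W @ e_z(phi)
        return U[0, 0]   # <0|U(x)|0>: a polynomial of degree len(phis)-1 in x

    # Sanity check: zero phase factors give the Chebyshev polynomial T_d(x).
    d, x = 5, 0.37
    print(qsp_poly(np.zeros(d + 1), x).real, np.cos(d * np.arccos(x)))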

12:30 - 2:30 pm EDT | Lunch/Free Time

2:30 - 3:15 pm EDT | Nonlocal PDEs and Quantum Optics | 11th Floor Lecture Hall
 Speaker
 John Schotland, Yale University
 Session Chairs
 Per-Gunnar Martinsson, University of Texas at Austin
 Manas Rachh, Flatiron Institute
Abstract
Quantum optics is the quantum theory of the interaction of light and matter. In this talk, I will describe a realspace formulation of quantum electrodynamics with applications to many body problems. The goal is to understand the transport of nonclassical states of light in random media. In this setting, there is a close relation to kinetic equations for nonlocal PDEs with random coefficients.

3:30 - 4:00 pm EDT | Coffee Break | 11th Floor Collaborative Space

4:00 - 4:45 pm EDT | Wigner-Smith Methods for Computational Electromagnetics and Acoustics | 11th Floor Lecture Hall
 Speaker
 Eric Michielssen, University of Michigan
 Session Chairs
 Per-Gunnar Martinsson, University of Texas at Austin
 Manas Rachh, Flatiron Institute
Abstract
Wigner-Smith (WS) time delay concepts have been used extensively in quantum mechanics to characterize delays experienced by particles interacting with a potential well. This presentation will formally extend WS time delay theory to Maxwell's equations and explore its potential applications in electromagnetics. The WS time delay matrix relates a lossless and reciprocal system's scattering matrix to its frequency derivative and allows for the construction of modes that experience well-defined group delays when interacting with the system. The matrix entries for guiding, scattering, and radiating systems are energy-like overlap integrals of the electric and/or magnetic fields that arise upon excitation of the system via its ports. Numerous applications in electromagnetics will be highlighted, including the characterization of group delays in multiport systems, the description of electromagnetic fields in terms of elementary scattering processes, and the characterization of frequency sensitivities of fields and multiport antenna impedance matrices. Extensions of WS methods toward lossy and dispersive systems will be analyzed as well, and avenues for leveraging WS concepts in computational electromagnetics will be discussed.
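As a minimal editorial sketch of the central object (sign conventions for the WS matrix vary in the literature, and the toy system below is hypothetical):

    import numpy as np

    def ws_time_delay(S, omega, d_omega=1e-6):
        # Wigner-Smith time-delay matrix Q = -i S^H dS/domega, with the
        # frequency derivative taken by a central finite difference.
        dS = (S(omega + d_omega) - S(omega - d_omega)) / (2 * d_omega)
        return -1j * S(omega).conj().T @ dS

    # Toy single-port system: S(omega) = exp(i*omega*tau) delays by tau,
    # so Q should come out as [[tau]].
    tau = 2.5
    S = lambda w: np.array([[np.exp(1j * w * tau)]])
    print(ws_time_delay(S, omega=1.0))

Eigenvectors of Q then give the modes with well-defined group delays (the eigenvalues) described in the abstract.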

5:00 - 6:30 pm EDT | Reception | 11th Floor Collaborative Space
Tuesday, June 27, 2023

9:00 - 9:45 am EDT | Of Crystals and Corals | 11th Floor Lecture Hall
 Speaker
 Stanislav Smirnov, Université de Genève
 Session Chairs
 Gilad Lerman, University of Minnesota
 Gal Mishne, University of California San Diego
Abstract
There are many real-world processes exhibiting fractal growing shapes, from mineral deposition and coral growth to lightning strikes, and in many of them growth is related to diffusion properties. We will discuss two seminal models: Diffusion Limited Aggregation (DLA), introduced by Witten and Sander in 1981, and its generalization, the Dielectric Breakdown Model of Niemeyer et al., which followed shortly afterwards. Numerically, these models approximate a wide range of physical phenomena very well. However, despite a very simple definition (a DLA cluster grows by attaching particles undergoing Brownian motion when they hit the aggregate), very little is understood today, and even less is known rigorously: essentially, only the famous Harry Kesten upper bound on the DLA growth. We will try to convey the flavor of these models and present some new results. Based on a joint preprint with Ilya Losev and some further work.
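The "very simple definition" fits in a few lines of code; here is an unoptimized editorial sketch of lattice DLA (the launch and kill radii are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 301
    c = N // 2
    grid = np.zeros((N, N), dtype=bool)
    grid[c, c] = True                          # seed of the aggregate
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    rmax = 1.0

    for _ in range(500):                       # attach 500 walkers
        stuck = False
        while not stuck:
            ang = rng.uniform(0, 2 * np.pi)    # launch just outside the cluster
            i = int(c + (rmax + 5) * np.cos(ang))
            j = int(c + (rmax + 5) * np.sin(ang))
            while True:
                di, dj = moves[rng.integers(4)]
                i += di; j += dj
                if np.hypot(i - c, j - c) > rmax + 20:
                    break                      # wandered too far: relaunch
                if grid[i - 1:i + 2, j - 1:j + 2].any():
                    grid[i, j] = True          # stick on contact with the cluster
                    rmax = max(rmax, np.hypot(i - c, j - c))
                    stuck = True
                    break

    rows, cols = np.nonzero(grid)
    print("particles:", grid.sum(), "radius ~", np.hypot(rows - c, cols - c).max())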

10:00 - 10:30 am EDT | Coffee Break | 11th Floor Collaborative Space

10:30 - 11:15 am EDT | On the Connectivity of Chord-Arc Curves | 11th Floor Lecture Hall
 Speaker
 María González, Universidad de Cádiz
 Session Chairs
 Gilad Lerman, University of Minnesota
 Gal Mishne, University of California San Diego
Abstract
A chord-arc curve is a locally rectifiable curve satisfying the property that the length of the shortest arc on the curve joining any two points is comparable to the distance between these two points. In this talk, we will introduce the open problem of the connectivity of the manifold of chord-arc curves, already mentioned by G. David in his thesis in 1981, and present some recent results that allow us to transfer the connectivity problem to a problem involving the spectrum of a Beurling-type operator on a particular weighted space.
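In symbols (restating the definition above): a locally rectifiable curve \Gamma is chord-arc if there is a constant C \ge 1 such that

    \ell_\Gamma(z_1, z_2) \le C\,|z_1 - z_2| \quad \text{for all } z_1, z_2 \in \Gamma,

where \ell_\Gamma(z_1, z_2) denotes the length of the shorter arc of \Gamma joining z_1 and z_2.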

11:30 am - 12:15 pm EDT | Multiscale Diffusion Geometry for Learning Manifolds, Flows and Optimal Transport | 11th Floor Lecture Hall
 Speaker
 Smita Krishnaswamy, Yale University
 Session Chairs
 Gilad Lerman, University of Minnesota
 Gal Mishne, University of California San Diego
Abstract
In this talk, we show how to learn the underlying geometry of data using multiscale data diffusion, and then combine this with deep learning for prediction and inference in several different settings. First, we look at capturing graphs using multiscale diffusion-based geometric scattering within neural frameworks. We show how to make such networks end-to-end differentiable in order to learn rich representation spaces from which to classify and generate graphs. We then show how to extend this type of analysis to manifolds, where point clouds of data can be similarly featurized using cascades of wavelets on data graphs to create a manifold scattering transform. Next, we show how to derive Wasserstein distances between point clouds of such data using multiscale diffusion distances. Finally, we move from static to dynamic optimal transport using neural ODEs in order to learn dynamic trajectories from static snapshot data, a key problem in inference from single-cell data. Throughout the talk, we present examples of these techniques being applied to massively high-throughput and high-dimensional datasets from biology and medicine.
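As an editorial sketch of the basic diffusion-geometry ingredient (the classical diffusion-map construction of Coifman and Lafon, not the speaker's pipeline; bandwidth and data are arbitrary toy choices):

    import numpy as np

    def diffusion_map(X, eps, t=1, k=2):
        # Gaussian affinities, row-normalized into a Markov (diffusion) matrix.
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        P = np.exp(-D2 / eps)
        P /= P.sum(axis=1, keepdims=True)
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        vals, vecs = vals.real[order], vecs.real[:, order]
        # Skip the trivial constant eigenvector; scale coordinates by lambda^t.
        return vecs[:, 1:k + 1] * vals[1:k + 1] ** t

    # Usage: a noisy circle embeds into clean diffusion coordinates.
    rng = np.random.default_rng(0)
    th = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    X = np.c_[np.cos(th), np.sin(th)] + 0.05 * rng.normal(size=(200, 2))
    Y = diffusion_map(X, eps=0.1)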

12:30 - 2:00 pm EDT | Lunch/Free Time

2:00 - 3:00 pm EDT | Poster Session Blitz (Lightning Talks) | 11th Floor Lecture Hall
 Session Chair
 Bogdan Toader, Yale University

3:30 - 5:30 pm EDT | Poster Session / Coffee Break | 11th Floor Collaborative Space

4:30 - 4:45 pm EDT | Remarks - Peter Jones, pt 1 | 11th Floor Lecture Hall
 Jill Pipher, Brown University / ICERM

4:45 - 5:00 pm EDT | Remarks - Peter Jones, pt 2 | 11th Floor Lecture Hall
 Ronald Coifman, Yale University
Wednesday, June 28, 2023

9:00 - 9:45 am EDT | Some old and some newer perspectives on data-driven modeling of complex systems | 11th Floor Lecture Hall
 Speaker
 Yannis Kevrekidis, Johns Hopkins University
 Session Chairs
 James Brofos, SESCO Enterprises LLC
 Shira Golovin, Duke University
Abstract
I will discuss avenues in data-driven modeling of complex systems (for my group and myself) that my interaction with Raphy Coifman has enabled, from "variable free" latent space dynamics twenty years ago to "learning what to learn" and "backward in time" today.

10:00 - 10:30 am EDT | Coffee Break | 11th Floor Collaborative Space

10:30 - 11:15 am EDT | How will we plan our stakes in deep haystacks? Science in the AI Spring | 11th Floor Lecture Hall
 Speaker
 Demba Ba, Harvard University
 Session Chairs
 James Brofos, SESCO Enterprises LLC
 Shira Golovin, Duke University
Abstract
To elucidate the basic laws that govern processes around us, scientists ask questions and, often, collect data that will let them answer these questions. The answers typically rely on the solutions to so-called inverse problems, namely algorithms for mapping data to latent variables the scientist can interpret. Physical/statistical models often constrain the latent variables and how they relate to the data we acquire. In recent years, deep learning algorithms, largely myopic to these constraints, have become a popular method for solving inverse problems. I will argue that the cost of collecting data in science, the need for interpretability, and the dynamic nature of scientific data make vanilla artificial neural networks (ANNs) unsuitable, at best, in scientific settings. I will argue for sparsity of latent representations as a mild form of inductive bias for ANN models of scientific data, which can let us enjoy both the interpretability of traditional methods for solving inverse problems and the expressive power of ANNs. I will demonstrate that ANNs designed in this fashion make powerful interpretable tools for elucidating the principles of neural computation, and for solving a wide range of inverse problems in imaging, physics, and beyond, particularly in the data-scarce/limited regime that characterizes many scientific settings.
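A standard concrete instance of the sparsity bias advocated here is soft-thresholded iterative reconstruction; unrolling such iterations into layers is a common route to interpretable, sparsity-biased networks. An editorial sketch (generic ISTA, not the speaker's model):

    import numpy as np

    def ista(A, y, lam, n_iter=200):
        # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps.
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    # Usage: recover a 3-sparse code from a random dictionary.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 120)) / np.sqrt(50)
    x0 = np.zeros(120); x0[[3, 40, 77]] = [1.5, -2.0, 1.0]
    print(np.flatnonzero(np.abs(ista(A, A @ x0, lam=0.05)) > 0.1))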

11:30 am - 12:15 pm EDT | On the Connection between Deep Neural Networks and Kernel Methods | 11th Floor Lecture Hall
 Speaker
 Ronen Basri, Weizmann Institute of Science
 Session Chairs
 James Brofos, SESCO Enterprises LLC
 Shira Golovin, Duke University
Abstract
Recent theoretical work has shown that under certain conditions, massively overparameterized neural networks are equivalent to kernel regressors with a family of kernels called Neural Tangent Kernels (NTKs). My work in this subject aims to better understand the properties of NTK for various network architectures and relate them to the inductive bias of real neural networks. In particular, I will argue that for input data distributed uniformly on the sphere, NTK favors low-frequency predictions over high-frequency ones, potentially explaining why overparameterized networks can generalize even when they perfectly fit their training data. I will further discuss the behavior of NTK when data is distributed non-uniformly and show that NTK (with ReLU activation) is tightly related to the classical Laplace kernel, which has a simple closed form. Finally, I will discuss our analysis of NTK for convolutional networks, which indicates that these networks are biased toward learning low-frequency target functions, with any higher frequencies concentrated in local regions. Overall, our results suggest that much insight about neural networks can be obtained from the analysis of NTK.
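For readers who want to experiment with the Laplace-kernel connection, here is a generic kernel ridge regression sketch (an editorial addition; bandwidth and data are toy assumptions):

    import numpy as np

    def kernel_ridge(X, y, Xq, h=1.0, reg=1e-6):
        # Laplace kernel k(x, z) = exp(-||x - z|| / h), the classical kernel
        # the talk relates to the ReLU NTK.
        dist = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        K = np.exp(-dist(X, X) / h)
        alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)
        return np.exp(-dist(Xq, X) / h) @ alpha

    # Fit a low-frequency target on the sphere (the regime NTK favors).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
    Xq = rng.normal(size=(10, 3)); Xq /= np.linalg.norm(Xq, axis=1, keepdims=True)
    y = X[:, 0]                                # a degree-1 spherical harmonic
    print(np.abs(kernel_ridge(X, y, Xq) - Xq[:, 0]).max())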

12:25 - 12:30 pm EDT | Group Photo (Immediately After Talk) | 11th Floor Lecture Hall

12:30 - 2:30 pm EDT | Open Problem Session Lunch

2:30 - 3:30 pm EDT | Panel Part 1 (Introductions / Presentations) | 11th Floor Lecture Hall
 Moderator
 Anna Gilbert, Yale University
 Panelists
 Ronald Coifman, Yale University
 Guy David, Université Paris-Saclay
 Fariba Fahroo, AFOSR
 Leslie Greengard, New York University
 F. Alberto Grünbaum, University of California, Berkeley
 Yannis Kevrekidis, Johns Hopkins University
 Arje Nachman, Air Force Office of Scientific Research
 Vladimir Rokhlin, Yale University

3:30 - 4:00 pm EDT | Coffee Break | 11th Floor Collaborative Space

4:00 - 5:00 pm EDT | Panel Part 2 | 11th Floor Lecture Hall
 Moderator
 Anna Gilbert, Yale University
 Panelists
 Ronald Coifman, Yale University
 Guy David, Université Paris-Saclay
 Fariba Fahroo, AFOSR
 Leslie Greengard, New York University
 F. Alberto Grünbaum, University of California, Berkeley
 Yannis Kevrekidis, Johns Hopkins University
 Arje Nachman, Air Force Office of Scientific Research
 Vladimir Rokhlin, Yale University
Thursday, June 29, 2023

9:00 - 9:45 am EDT | Some estimation problems for high-dimensional stochastic dynamical systems with structure | 11th Floor Lecture Hall
 Speaker
 Mauro Maggioni, Johns Hopkins University
 Session Chairs
 Yariv Aizenbud, Yale University
 Ronen Talmon, Technion - Israel Institute of Technology
Abstract
We consider several estimation problems for stochastic dynamical systems from observations of trajectories.

First, let A be a linear dynamical system on a graph G, where both A and G are unknown; we observe a small number of entries of A, A^2, ..., A^T, and we wish to estimate A. We study when this problem is well-posed, introduce an estimator of A based on matrix completion of a low-rank structured block-Hankel matrix, obtain results that capture some of the trade-offs between sampling in space and time, and finally show that this estimator can be constructed by a fast algorithm that provably converges locally quadratically to A. We verify this numerically on a variety of examples [C. Kuemmerle, MM, S. Tang].

Second, we consider nonlinear dynamical systems modeling interacting agents. The laws of interaction between the agents are often simple, e.g. they depend only on a function of pairwise interactions. Given observations along trajectories of the agents, we construct statistically and computationally efficient estimators for the laws of interaction, in a nonparametric fashion, and give conditions guaranteeing that the problem is well-posed [F. Lu, MM, J. Feng, P. Martin, J. Miller, S. Tang and M. Zhong].

Third, we consider model reduction of fast-slow high-dimensional stochastic systems with a low-dimensional slow manifold M. The fast modes are not assumed to be small, nor orthogonal to M. Both the dynamics and M are unknown; given access to a black-box simulator from which short bursts of simulations can be obtained, we estimate the manifold M, an effective stochastic process on M, and a simulator thereof, adapted to the dimension of M and with time steps dependent on the regularity of the effective process. The estimation may be performed on the fly, for efficient exploration. We demonstrate the simulation of paths of the effective dynamics, and estimation of crucial features, including the stationary distribution, metastable states, residence times and transition rates [MM, X.F. Ye, S. Yang].
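As a toy editorial illustration of the second problem (a drastically simplified 1D stand-in with a polynomial basis chosen for brevity; not the estimators of the cited works): the interaction kernel enters the velocities linearly, so it can be fit by least squares on observed trajectories.

    import numpy as np

    rng = np.random.default_rng(0)

    # First-order system dx_i/dt = (1/N) sum_j phi(|x_j - x_i|)(x_j - x_i)
    # with an unknown interaction kernel phi (here a hypothetical exp(-r)).
    N, T, dt = 20, 200, 0.01
    phi_true = lambda r: np.exp(-r)
    basis = [lambda r: np.ones_like(r), lambda r: r, lambda r: r**2, lambda r: r**3]

    x = rng.normal(size=N)
    rows, vels = [], []
    for _ in range(T):
        diff = x[None, :] - x[:, None]         # x_j - x_i
        r = np.abs(diff)
        v = (phi_true(r) * diff).mean(axis=1)  # observed velocities
        # Features: velocities obtained by substituting each basis element for phi.
        rows.append(np.stack([(b(r) * diff).mean(axis=1) for b in basis], axis=1))
        vels.append(v)
        x = x + dt * v                         # step along the trajectory

    coef, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(vels), rcond=None)
    r_test = np.linspace(0, 2, 5)
    phi_hat = sum(c * b(r_test) for c, b in zip(coef, basis))
    print(np.c_[phi_true(r_test), phi_hat])    # true vs estimated kernel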

10:00 - 10:30 am EDT | Coffee Break | 11th Floor Collaborative Space

10:30 - 11:15 am EDT | Low-distortion embeddings with bottom-up manifold learning | 11th Floor Lecture Hall
 Speaker
 Gal Mishne, University of California San Diego
 Session Chairs
 Yariv Aizenbud, Yale University
 Ronen Talmon, Technion - Israel Institute of Technology
Abstract
Manifold learning algorithms aim to map high-dimensional data into lower dimensions while preserving local and global structure. In this talk, I present Low Distortion Local Eigenmaps (LDLE), a bottom-up manifold learning framework that constructs low-distortion local views of a dataset in lower dimensions and registers them to obtain a global embedding. Motivated by Jones, Maggioni, and Schul (2008), LDLE constructs local views by selecting subsets of the global eigenvectors of the graph Laplacian such that they are locally orthogonal. The global embedding is obtained by rigidly aligning these local views; the alignment is solved iteratively. Our global alignment formulation enables tearing manifolds so as to embed them in their intrinsic dimension, including manifolds without boundary and non-orientable manifolds. We define strong and weak notions of global distortion to evaluate embeddings in low dimensions. We show that Riemannian Gradient Descent (RGD) converges to an embedding with guaranteed low global distortion. Compared to competing manifold learning and data visualization approaches, we demonstrate that LDLE achieves the lowest local and global distortion on real and synthetic datasets.
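The registration step can be illustrated in a few lines; here is an editorial sketch of rigidly aligning two overlapping local views by orthogonal Procrustes (a generic ingredient, not the full LDLE algorithm):

    import numpy as np

    def rigid_align(view_a, view_b, overlap):
        # Best rotation + translation taking view_b onto view_a on shared points.
        A, B = view_a[overlap], view_b[overlap]
        Ac, Bc = A - A.mean(0), B - B.mean(0)
        U, _, Vt = np.linalg.svd(Bc.T @ Ac)    # orthogonal Procrustes solution
        R = U @ Vt
        return (view_b - B.mean(0)) @ R + A.mean(0)

    # Usage: view_b is view_a rotated and shifted; alignment undoes that.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(30, 2))
    t = 0.7
    Q = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    view_b = pts @ Q.T + np.array([2.0, -1.0])
    print(np.abs(rigid_align(pts, view_b, overlap=slice(0, 10)) - pts).max())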

11:30 am - 12:15 pm EDT | Curvature on Combinatorial Graphs | 11th Floor Lecture Hall
 Speaker
 Stefan Steinerberger, University of Washington
 Session Chairs
 Yariv Aizenbud, Yale University
 Ronen Talmon, Technion - Israel Institute of Technology
Abstract
Curvature is one of the fundamental ingredients in differential geometry. It is interesting to think of combinatorial graphs as manifolds, and a number of different notions of curvature have been proposed. I will introduce some of the existing ideas and then propose a new notion based on a simple and completely explicit linear system of equations. This notion satisfies a surprisingly large number of desirable properties; connections to game theory (especially the von Neumann Minimax Theorem) and potential theory will be sketched. I will also sketch some curious related open problems. No prior knowledge of differential geometry (or graphs) is required.
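An editorial sketch of what a "curvature from a linear system" can look like (the exact definition and normalization used in the talk are not given in the abstract; the system D k = n*1 in the graph distance matrix below is this editor's assumption and should be checked against the talk):

    import numpy as np
    from scipy.sparse.csgraph import shortest_path

    def graph_curvature(adj):
        # Assumed notion: solve D k = n * 1 with D the matrix of pairwise
        # graph distances; k then plays the role of a vertex curvature.
        D = shortest_path(adj, unweighted=True)
        n = D.shape[0]
        return np.linalg.solve(D, n * np.ones(n))

    # Usage: the cycle C_5, which by symmetry gets constant curvature.
    n = 5
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
    print(graph_curvature(adj))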

12:30 - 2:30 pm EDT | Networking Lunch | 11th Floor Collaborative Space

2:30 - 3:15 pm EDT | Reduced label complexity for tight linear regression | 11th Floor Lecture Hall
 Speaker
 Alex Gittens, Rensselaer Polytechnic Institute
 Session Chairs
 Kirill Serkh, University of Toronto
 Amit Singer, Princeton University
Abstract
The success of modern supervised machine learning is predicated on the existence of large labeled data sets, but in many domains it is cost-prohibitive to generate a large number of high-quality labels. It is more feasible to first collect a large unlabeled data set, then decide to invest in labeling a subset of this data set. This motivates the consideration of label complexity: for a given supervised learning problem and unlabeled data set, what is the minimal number of labels required so that a model fitted using the labeled subset has almost as much predictive power as would a model fitted on the entire data set if all the labels were available? We present algorithmic results on the label complexity of linear regression: given n data points, how many samples must be labeled to obtain a model with near-optimal in-sample prediction error? Existing approaches to reducing the label complexity of linear regression, including various approaches from the randomized numerical linear algebra community such as coresets and iterative algorithms that touch one data point per iteration, are applicable when a constant-factor approximation is acceptable. New approaches are needed to enter the regime where the approximation factor decreases with the size of the data set. In this setting, we provide a polynomial-time algorithm that reduces the label complexity by O(sqrt(n)) additively. The algorithm is based on a tight analysis of the regression error incurred by forming a coreset using backward selection.
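For contrast with the backward-selection coreset described above, here is the classical constant-factor-regime baseline from randomized numerical linear algebra, leverage-score sampling (an editorial sketch, not the talk's algorithm):

    import numpy as np

    def leverage_score_regression(X, y, m, rng):
        # Sample m rows with probability proportional to leverage scores and
        # solve the importance-weighted least-squares problem on that subset.
        Q, _ = np.linalg.qr(X)
        lev = (Q ** 2).sum(axis=1)             # leverage scores; sum = rank(X)
        p = lev / lev.sum()
        idx = rng.choice(len(X), size=m, replace=True, p=p)
        w = 1.0 / np.sqrt(m * p[idx])
        coef, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)
        return coef

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 10))
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=5000)
    print(leverage_score_regression(X, y, m=200, rng=rng))  # labels only 200 rows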

3:30 - 4:15 pm EDT | Randomized algorithms for linear algebraic computations | 11th Floor Lecture Hall
 Speaker
 Per-Gunnar Martinsson, University of Texas at Austin
 Session Chairs
 Kirill Serkh, University of Toronto
 Amit Singer, Princeton University
Abstract
The talk will describe how randomized algorithms can effectively, accurately, and reliably solve linear algebraic problems that are omnipresent in scientific computing and in data analysis. We will focus on techniques for low rank approximation, since these methods are particularly simple and powerful. The talk will also briefly survey a number of other randomized algorithms for tasks such as solving linear systems, estimating matrix norms, and computing full matrix factorizations.
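The basic low-rank technique the talk centers on fits in a dozen lines; here is an editorial sketch of the standard randomized range finder followed by an SVD of the projected matrix (the oversampling parameter is a conventional choice):

    import numpy as np

    def randomized_svd(A, k, p=10, rng=None):
        # Sketch the range of A with a Gaussian test matrix, orthonormalize,
        # then take the SVD of the small projected matrix Q^T A.
        rng = rng or np.random.default_rng(0)
        G = rng.normal(size=(A.shape[1], k + p))
        Q, _ = np.linalg.qr(A @ G)             # approximate basis for range(A)
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ U_small)[:, :k], s[:k], Vt[:k]

    # Usage: a rank-30 matrix is recovered to near machine precision.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(500, 30)) @ rng.normal(size=(30, 400))
    U, s, Vt = randomized_svd(A, k=30)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))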

5:00 - 7:00 pm EDT | Banquet (off-site)
Friday, June 30, 2023

9:00 - 9:45 am EDT | Project and Forget: Solving Large-Scale Metric Constrained Problems | 11th Floor Lecture Hall
 Speaker
 Anna Gilbert, Yale University
 Session Chairs
 Boris Landa, Yale University
 John Schotland, Yale University
Abstract
Many important machine learning problems can be formulated as highly constrained convex optimization problems. One important example is metric constrained problems. In this paper, we show that standard optimization techniques cannot be used to solve metric constrained problems. To solve such problems, we provide a general active-set framework, called Project and Forget, and several variants thereof that use Bregman projections. Project and Forget is a general-purpose method that can be used to solve highly constrained convex problems with many (possibly exponentially many) constraints. We provide a theoretical analysis of Project and Forget and prove that our algorithms converge to the global optimal solution and have a linear rate of convergence. We demonstrate that using our method, we can solve large problem instances of general weighted correlation clustering, metric nearness, information-theoretic metric learning, and quadratically regularized optimal transport; in each case, outperforming the state-of-the-art methods with respect to CPU times and problem sizes. Joint work with Rishi Sonthalia (UCLA).
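To convey the flavor of projecting onto metric constraints, here is an editorial toy: plain cyclic Euclidean projections onto violated triangle inequalities, omitting the Bregman corrections and the active-set "forgetting" that make the actual algorithm scale.

    import numpy as np

    def metric_repair(D, n_sweeps=50):
        # Repeatedly project the symmetric dissimilarity matrix onto each
        # violated halfspace {d_ij <= d_ik + d_kj}.
        D = D.copy()
        n = D.shape[0]
        for _ in range(n_sweeps):
            for i in range(n):
                for j in range(i + 1, n):
                    for k in range(n):
                        if k in (i, j):
                            continue
                        v = D[i, j] - D[i, k] - D[k, j]
                        if v > 0:              # violated: project onto the constraint
                            D[i, j] -= v / 3; D[j, i] = D[i, j]
                            D[i, k] += v / 3; D[k, i] = D[i, k]
                            D[k, j] += v / 3; D[j, k] = D[k, j]
        return D

    # Usage: repair a random symmetric dissimilarity into a near-metric.
    rng = np.random.default_rng(0)
    A = rng.uniform(1, 10, size=(8, 8)); A = (A + A.T) / 2
    np.fill_diagonal(A, 0)
    M = metric_repair(A)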

10:00 - 10:30 am EDT | Coffee Break | 11th Floor Collaborative Space

10:30 - 11:15 am EDT | Randomized tensor-network algorithms for random data in high dimensions | 11th Floor Lecture Hall
 Speaker
 Yuehaw Khoo, The University of Chicago
 Session Chairs
 Boris Landa, Yale University
 John Schotland, Yale University
Abstract
Tensor-network ansätze have long been employed to solve the high-dimensional Schrödinger equation, demonstrating linear complexity scaling with respect to dimensionality. Recently, this ansatz has found applications in various machine learning scenarios, including supervised learning and generative modeling, where the data originates from a random process. In this talk, we present a new perspective on randomized linear algebra, showcasing its use in estimating a density as a tensor network from i.i.d. samples of a distribution, without the curse of dimensionality and without the use of optimization techniques. Moreover, we illustrate how this concept can combine the strengths of particle and tensor-network methods for solving high-dimensional PDEs, resulting in enhanced flexibility for both approaches. (Based on joint works with Yian Chen, Jeremy Hoskins, YoonHaeng Hur, Michael Lindsey, Yifan Peng, Miles Stoudenmire, Xun Tang, and Lexing Ying.)
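For orientation, here is the deterministic baseline that such randomized estimators build on: the classical TT-SVD, which factors a tensor into a tensor train by sequential truncated SVDs (an editorial sketch; the talk's method replaces these deterministic SVDs with randomized sketches of empirical samples):

    import numpy as np

    def tt_svd(T, max_rank, tol=1e-12):
        # Sweep left to right, reshaping and truncating an SVD at each mode.
        dims = T.shape
        cores, r_prev = [], 1
        M = T.reshape(dims[0], -1)
        for mu in range(len(dims) - 1):
            U, s, Vt = np.linalg.svd(M.reshape(r_prev * dims[mu], -1),
                                     full_matrices=False)
            r = max(1, min(max_rank, int((s > tol * s[0]).sum())))
            cores.append(U[:, :r].reshape(r_prev, dims[mu], r))
            M = s[:r, None] * Vt[:r]
            r_prev = r
        cores.append(M.reshape(r_prev, dims[-1], 1))
        return cores

    # Usage: a separable (product) density compresses to TT rank 1.
    x = np.linspace(0, 1, 10)
    T = np.einsum('i,j,k->ijk', np.exp(-x), np.exp(-x), np.exp(-x))
    print([c.shape for c in tt_svd(T, max_rank=5)])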

11:30 am - 12:15 pm EDT | Integral equations and singular waveguides | 11th Floor Lecture Hall
 Speaker
 Jeremy Hoskins, University of Chicago
 Session Chairs
 Boris Landa, Yale University
 John Schotland, Yale University

12:30 - 4:00 pm EDT | Lunch / Free Time for Collaboration
All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).