Abstract

The mathematical and computational toolbox for modern experimental and engineering problems has become more diverse than ever before, with a flurry of new challenges in inverse problems and successful practical solutions that present further theoretical questions. In the spirit of the 2012 “Challenges in Geometry, Analysis, and Computation: High-Dimensional Synthesis” workshop at Yale, the “Modern Applied and Computational Analysis” workshop will be a celebration of different perspectives on inverse problems, models, inference, and harmonic analysis, and a debate about the challenges and opportunities in the next decade of applied analysis. The topics include inverse problems, randomized linear algebra, machine learning in applied analysis, and tensor networks.

The organizers would like to thank James Bremer, Ronald Coifman, Jingfang Huang, Peter Jones, Mauro Maggioni, Yair Minsky, Vladimir Rokhlin, Wilhelm Schlag, John Schotland, Amit Singer, Stefan Steinerberger, and Mark Tygert for their help.


Confirmed Speakers & Participants

Talks will be presented virtually or in-person as indicated in the schedule below.

  • Speaker
  • Poster Presenter
  • Attendee
  • Virtual Attendee

Workshop Schedule

Monday, June 26, 2023
  • 8:30 - 8:50 am EDT
    Check In
    11th Floor Collaborative Space
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    Weil-Petersson curves, traveling salesman theorems, and minimal surfaces
    11th Floor Lecture Hall
    • Speaker
    • Chris Bishop, Stony Brook University
    • Session Chairs
    • Amir Sagiv, Columbia University
    • Raanan Schul, Stony Brook University
    Abstract
    Weil-Petersson curves are a class of rectifiable closed curves in the plane, defined as the closure of the smooth curves with respect to the Weil-Petersson metric defined by Takhtajan and Teo in 2006. Their work solved a problem from string theory by making the space of closed loops into a Hilbert manifold, but the same class of curves also arises naturally in complex analysis, geometric measure theory, probability theory, knot theory, computer vision, and other areas. No geometric description of Weil-Petersson curves was known until 2019, but there are now more than twenty equivalent conditions. One involves inscribed polygons and can be explained to a calculus student. Another is a strengthening of Peter Jones's traveling salesman condition characterizing rectifiable curves. A third says a curve is Weil-Petersson iff it bounds a minimal surface in hyperbolic 3-space that has finite total curvature. I will discuss these and several other characterizations and sketch why they are all equivalent to each other.
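    For orientation, here is a schematic form of the two conditions mentioned above, with β_Γ(Q) denoting Jones' beta number: the deviation of Γ near the dyadic square Q from its best-fitting line, measured in units of the side length of Q. This is only a sketch; constants and normalizations vary between references.
\[
\ell(\Gamma) \;\asymp\; \operatorname{diam}(\Gamma) \;+\; \sum_{Q\ \mathrm{dyadic}} \beta_\Gamma(3Q)^2\,\operatorname{diam}(Q) \qquad \text{(traveling salesman: rectifiability)},
\]
\[
\Gamma \ \text{is Weil-Petersson} \;\iff\; \sum_{Q\ \mathrm{dyadic}} \beta_\Gamma(3Q)^2 \;<\; \infty \qquad \text{(the dimensionless strengthening)}.
\]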
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    New perspectives on inverse problems: stochasticity and Monte Carlo method
    11th Floor Lecture Hall
    • Speaker
    • Li Wang, University of Minnesota
    • Session Chairs
    • Amir Sagiv, Columbia University
    • Raanan Schul, Stony Brook University
    Abstract
    In this talk, we introduce two new aspects of inverse problems formulated as PDE-constrained optimization. Firstly, while current approaches assume deterministic parameters, many real-world problems exhibit stochastic behavior. We present a novel approach that treats the PDE solver as a push-forward map to recover the full distribution of unknown random parameters. We introduce a gradient-flow equation to estimate the ground-truth parameter probability distribution. Secondly, as problem dimensions increase, Monte Carlo methods regain relevance. However, directly applying them to gradient-based PDE-constrained optimization poses challenges due to the product of forward and adjoint solutions involving Dirac deltas. We propose strategies to rescue Monte Carlo methods and make them compatible with gradient-based optimization.
  • 11:30 am - 12:15 pm EDT
    Quantum Signal Processing
    11th Floor Lecture Hall
    • Speaker
    • Lin Lin, University of California - Berkeley
    • Session Chairs
    • Amir Sagiv, Columbia University
    • Raanan Schul, Stony Brook University
    Abstract
    Quantum Signal Processing (QSP) is a revolutionary technique that uses a product of unitary matrices to represent polynomials, with numerous applications in quantum computing. In this talk, I will introduce QSP in a fashion that does not require prior knowledge of quantum computing. We introduce optimization-based algorithms that can efficiently find the "phase factors" used to represent a given polynomial. We also identify a surprising connection between the smoothness of the target function and the decay properties of a specific branch of the phase factors.
    References: Y. Dong, L. Lin, H. Ni, J. Wang, Infinite quantum signal processing, arXiv:2209.10162; J. Wang, Y. Dong, L. Lin, On the energy landscape of symmetric quantum signal processing, Quantum 6, 850 (2022); Y. Dong, X. Meng, K. B. Whaley, L. Lin, Efficient phase factor evaluation in quantum signal processing, Phys. Rev. A 103, 042419 (2021).
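    As a small, hedged illustration of the QSP representation (written in one common convention; this is not code from the talk, and the function names are illustrative), the following sketch evaluates the polynomial determined by a list of phase factors:
```python
import numpy as np

def rz(phi):
    """e^{i * phi * Z} for the Pauli Z matrix (diagonal, so trivial to exponentiate)."""
    return np.diag(np.exp(1j * phi * np.array([1.0, -1.0])))

def qsp_value(phases, x):
    """Upper-left entry of U(x) = e^{i phi_0 Z} * prod_j [ W(x) e^{i phi_j Z} ].

    Here W(x) = [[x, i*sqrt(1-x^2)], [i*sqrt(1-x^2), x]].  The value P(x) = <0|U(x)|0>
    is a polynomial in x of degree at most d = len(phases) - 1.
    """
    s = np.sqrt(1.0 - x * x)
    W = np.array([[x, 1j * s], [1j * s, x]])
    U = rz(phases[0])
    for phi in phases[1:]:
        U = U @ W @ rz(phi)
    return U[0, 0]

# Sanity check: with all phase factors equal to zero, P(x) reduces to the
# Chebyshev polynomial T_d(x) = cos(d * arccos(x)).
d, x = 4, 0.3
print(qsp_value(np.zeros(d + 1), x), np.cos(d * np.arccos(x)))
```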
  • 12:30 - 2:30 pm EDT
    Lunch/Free Time
  • 2:30 - 3:15 pm EDT
    Nonlocal PDEs and Quantum Optics
    11th Floor Lecture Hall
    • Speaker
    • John Schotland, Yale University
    • Session Chairs
    • Per-Gunnar Martinsson, University of Texas at Austin
    • Manas Rachh, Flatiron Institute
    Abstract
    Quantum optics is the quantum theory of the interaction of light and matter. In this talk, I will describe a real-space formulation of quantum electrodynamics with applications to many body problems. The goal is to understand the transport of nonclassical states of light in random media. In this setting, there is a close relation to kinetic equations for nonlocal PDEs with random coefficients.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Wigner-Smith Methods for Computational Electromagnetics and Acoustics
    11th Floor Lecture Hall
    • Speaker
    • Eric Michielssen, University of Michigan
    • Session Chairs
    • Per-Gunnar Martinsson, University of Texas at Austin
    • Manas Rachh, Flatiron Institute
    Abstract
    Wigner-Smith (WS) time delay concepts have been used extensively in quantum mechanics to characterize delays experienced by particles interacting with a potential well. This presentation will formally extend WS time delay theory to Maxwell's equations and explore its potential applications in electromagnetics. The WS time delay matrix relates a lossless and reciprocal system's scattering matrix to its frequency derivative and allows for the construction of modes that experience well-defined group delays when interacting with the system. The matrix entries for guiding, scattering, and radiating systems are energy-like overlap integrals of the electric and/or magnetic fields that arise upon excitation of the system via its ports. Numerous applications in electromagnetics will be highlighted, including the characterization of group delays in multiport systems, the description of electromagnetic fields in terms of elementary scattering processes, and the characterization of frequency sensitivities of fields and multiport antenna impedance matrices. Extensions of WS methods toward lossy and dispersive systems will be analyzed as well, and avenues for leveraging WS concepts in computational electromagnetics will be discussed.
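    For reference, the central object is the WS time delay matrix, built from the scattering matrix S(ω) of a lossless, reciprocal system; up to a sign convention tied to the assumed time dependence, it reads
\[
\mathbf{Q} \;=\; i\,\mathbf{S}^{\dagger}\,\frac{\partial \mathbf{S}}{\partial \omega}.
\]
    Since S is unitary for a lossless system, Q is Hermitian; its eigenvectors define incident excitations ("WS modes") and its eigenvalues are the associated group delays.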
  • 5:00 - 6:30 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, June 27, 2023
  • 9:00 - 9:45 am EDT
    Of Crystals and Corals
    11th Floor Lecture Hall
    • Speaker
    • Stanislav Smirnov, Université de Genève
    • Session Chairs
    • Gilad Lerman, University of Minnesota
    • Gal Mishne, University of California San Diego
    Abstract
    There are many real-world processes that exhibit growing fractal shapes - from mineral deposition and coral growth to lightning strikes - and in many of them growth is related to diffusion properties. We will discuss two seminal models: Diffusion Limited Aggregation (DLA), introduced by Witten and Sander in 1981, and its generalization, the Dielectric Breakdown Model of Niemeyer et al., which followed shortly afterwards. Numerically they approximate a wide range of physical phenomena very well. However, despite a very simple definition (a DLA cluster grows by attaching particles undergoing Brownian motion when they hit the aggregate), very little is understood today, and even less is known rigorously - essentially, only Harry Kesten's famous upper bound on DLA growth. We will try to convey the flavor of these models and present some new results. Based on a joint preprint with Ilya Losev and some further work.
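    To make the DLA growth rule concrete, a minimal and unoptimized on-lattice simulation might look like the sketch below (parameters and the launch/kill radii are illustrative choices; serious studies use far larger clusters and more careful boundary handling):
```python
import numpy as np

def dla(n_particles=200, seed=0):
    """Minimal on-lattice DLA sketch: random walkers stick when they touch the aggregate."""
    rng = np.random.default_rng(seed)
    occupied = {(0, 0)}                                   # seed particle at the origin
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    r_max = 1.0                                           # current cluster radius
    for _ in range(n_particles):
        r_launch = r_max + 5                              # launch safely outside the cluster
        theta = rng.uniform(0, 2 * np.pi)
        x, y = int(r_launch * np.cos(theta)), int(r_launch * np.sin(theta))
        while True:
            dx, dy = steps[rng.integers(4)]               # one lattice step of the walk
            x, y = x + dx, y + dy
            if x * x + y * y > (r_launch + 20) ** 2:      # wandered too far: relaunch
                theta = rng.uniform(0, 2 * np.pi)
                x, y = int(r_launch * np.cos(theta)), int(r_launch * np.sin(theta))
                continue
            if any((x + dx2, y + dy2) in occupied for dx2, dy2 in steps):
                occupied.add((x, y))                      # stick on contact with the aggregate
                r_max = max(r_max, np.hypot(x, y))
                break
    return occupied

cluster = dla(200)
print(len(cluster), "particles aggregated")
```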
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    On the Connectivity of Chord-arc Curves
    11th Floor Lecture Hall
    • Speaker
    • María González, Universidad de Cádiz
    • Session Chairs
    • Gilad Lerman, University of Minnesota
    • Gal Mishne, University of California San Diego
    Abstract
    A chord-arc curve is a locally rectifiable curve satisfying the property that the length of the shortest arc on the curve joining any two points is comparable to the distance between these two points. In this talk, we will introduce the open problem of the connectivity of the manifold of chord-arc curves, already mentioned by G. David in his thesis in 1981, and present some recent results that allow us to transfer the connectivity problem to a problem involving the spectrum of a Beurling-type operator on a particular weighted space.
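    In symbols, Γ is a chord-arc curve with constant C ≥ 1 if
\[
\ell\big(\gamma(z_1, z_2)\big) \;\le\; C\,|z_1 - z_2| \qquad \text{for all } z_1, z_2 \in \Gamma,
\]
    where γ(z_1, z_2) denotes the shorter arc of Γ joining z_1 and z_2.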
  • 11:30 am - 12:15 pm EDT
    Multiscale Diffusion Geometry for Learning Manifolds, Flows and Optimal Transport
    11th Floor Lecture Hall
    • Speaker
    • Smita Krishnaswamy, Yale University
    • Session Chairs
    • Gilad Lerman, University of Minnesota
    • Gal Mishne, University of California San Diego
    Abstract
    In this talk we show how to learn the underlying geometry of data using multiscale data diffusion, and then combine this with deep learning for prediction and inference in several different settings. First we look at capturing graphs using multiscale diffusion-based geometric scattering within neural frameworks. We show how to make such networks end-to-end differentiable in order to learn rich representation spaces from which to classify and generate graphs. We then show how to extend this type of analysis to manifolds, where point clouds of data can be similarly featurized using cascades of wavelets on data graphs to create a manifold scattering transform. Next we show how to derive Wasserstein distances between point clouds of such data using multiscale diffusion distances. Finally, we move from static to dynamic optimal transport using neural ODEs in order to learn dynamic trajectories from static snapshot data, a key problem in inference from single-cell data. Throughout the talk, we present examples of such techniques being applied to massively high-throughput and high-dimensional datasets from biology and medicine.
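    A minimal sketch of the data-diffusion building block underlying such methods (a generic diffusion-maps construction, not the specific architectures described in the talk; the function name and parameters are illustrative):
```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    """Minimal diffusion-maps sketch: Gaussian affinities -> Markov operator -> eigen-embedding."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-D2 / eps)                                  # affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                   # row-normalize: diffusion operator
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    # skip the trivial constant eigenvector; scale by eigenvalue^t (diffusion time t)
    return evecs[:, 1:n_coords + 1] * (evals[1:n_coords + 1] ** t)

# Example: a noisy circle embeds into two diffusion coordinates
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(200, 2)
print(diffusion_map(X, eps=0.1).shape)   # (200, 2)
```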
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 3:00 pm EDT
    Poster Session Blitz
    Lightning Talks - 11th Floor Lecture Hall
    • Session Chair
    • Bogdan Toader, Yale University
  • 3:30 - 5:30 pm EDT
    Poster Session / Coffee Break
    Poster Session - 11th Floor Collaborative Space
  • 4:30 - 4:45 pm EDT
    Remarks - Peter Jones pt 1
    11th Floor Lecture Hall
    • Jill Pipher, Brown University - ICERM
  • 4:45 - 5:00 pm EDT
    Remarks - Peter Jones pt 2
    11th Floor Lecture Hall
    • Ronald Coifman, Yale University
Wednesday, June 28, 2023
  • 9:00 - 9:45 am EDT
    Some old and some newer perspectives on data-driven modeling of complex systems
    11th Floor Lecture Hall
    • Speaker
    • Yannis Kevrekidis, Johns Hopkins University
    • Session Chairs
    • James Brofos, SESCO Enterprises LLC
    • Shira Golovin, Duke University
    Abstract
    I will discuss avenues in data-driven modeling of complex systems (for my group and myself) that my interaction with Raphy Coifman has enabled - from "variable free" latent space dynamics twenty years ago to "learning what to learn" and "backward in time" today.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    How will we plan our stakes in deep haystacks? Science in the AI Spring
    11th Floor Lecture Hall
    • Speaker
    • Demba Ba, Harvard University
    • Session Chairs
    • James Brofos, SESCO Enterprises LLC
    • Shira Golovin, Duke University
    Abstract
    To elucidate the basic laws that govern processes around us, scientists ask questions and, often, collect data that will let them answer these questions. The answers typically rely on the solutions to so-called inverse problems, namely algorithms for mapping data to latent variables the scientist can interpret. Physical/statistical models often constrain the latent variables and how they relate to the data we acquire. In recent years, deep learning algorithms, largely myopic to these constraints, have become a popular method for solving inverse problems. I will argue that the cost of collecting data in science, the need for interpretability, and the dynamic nature of scientific data make vanilla artificial neural networks (ANNs) unsuitable, at best, in scientific settings. I will argue for sparsity of latent representations as a mild form of inductive bias for DNN models of scientific data, which can let us enjoy both the interpretability of traditional methods for solving inverse problems and the expressive power of ANNs. I will demonstrate that ANNs designed in this fashion make powerful interpretable tools for elucidating the principles of neural computation, and for solving a wide range of inverse problems in imaging, physics, and beyond, particularly in the data-scarce/limited regime that characterizes many scientific settings.
  • 11:30 am - 12:15 pm EDT
    On the Connection between Deep Neural Networks and Kernel Methods
    11th Floor Lecture Hall
    • Speaker
    • Ronen Basri, Weizmann Institute of Science
    • Session Chairs
    • James Brofos, SESCO Enterprises LLC
    • Shira Golovin, Duke University
    Abstract
    Recent theoretical work has shown that under certain conditions, massively overparameterized neural networks are equivalent to kernel regressors with a family of kernels called Neural Tangent Kernels (NTKs). My work in this subject aims to better understand the properties of NTK for various network architectures and relate them to the inductive bias of real neural networks. In particular, I will argue that for input data distributed uniformly on the sphere, NTK favors low-frequency predictions over high-frequency ones, potentially explaining why overparameterized networks can generalize even when they perfectly fit their training data. I will further discuss the behavior of NTK when data is distributed nonuniformly and show that NTK (with ReLU activation) is tightly related to the classical Laplace kernel, which has a simple closed form. Finally, I will discuss our analysis of NTK for convolutional networks, which indicates that these networks are biased toward learning low-frequency target functions, with any higher frequencies concentrated in local regions. Overall, our results suggest that much insight about neural networks can be obtained from the analysis of NTK.
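    As a hedged illustration of the object behind these results, the empirical (finite-width) NTK is simply the Gram matrix of parameter gradients of the network at its random initialization; a minimal sketch for a one-hidden-layer ReLU network with scalar output (names and scalings are illustrative) is:
```python
import numpy as np

def empirical_ntk(X, width=4096, seed=0):
    """Empirical NTK of f(x) = sum_j v_j * relu(w_j . x) at a random initialization.

    K(x, x') = <grad_theta f(x), grad_theta f(x')>, computed analytically.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((width, d)) / np.sqrt(d)      # hidden-layer weights
    v = rng.standard_normal(width) / np.sqrt(width)       # output weights
    pre = X @ W.T                                         # (n, width) pre-activations
    act = np.maximum(pre, 0.0)                            # relu(w_j . x): gradient w.r.t. v_j
    gate = (pre > 0).astype(float) * v                    # v_j * 1[w_j . x > 0]
    # gradient w.r.t. W for sample i is outer(gate_i, x_i); its inner products give:
    K_w = (gate @ gate.T) * (X @ X.T)
    K_v = act @ act.T
    return K_w + K_v

X = np.random.randn(5, 3)
print(empirical_ntk(X).shape)   # (5, 5) kernel matrix
```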
  • 12:25 - 12:30 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:30 - 2:30 pm EDT
    Open Problem Session Lunch
    Lunch/Free Time
  • 2:30 - 3:30 pm EDT
    Panel Part 1 (Introductions / Presentations)
    Panel Discussion - 11th Floor Lecture Hall
    • Moderator
    • Anna Gilbert, Yale University
    • Panelists
    • Ronald Coifman, Yale University
    • Guy David, Université Paris-Saclay
    • Fariba Fahroo, AFOSR
    • Leslie Greengard, New York University
    • F. Alberto Grünbaum, University of California, Berkeley
    • Yannis Kevrekidis, Johns Hopkins University
    • Arje Nachman, Air Force Office of Scientific Research
    • Vladimir Rokhlin, Yale University
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 5:00 pm EDT
    Panel Part 2
    Panel Discussion - 11th Floor Lecture Hall
    • Moderator
    • Anna Gilbert, Yale University
    • Panelists
    • Ronald Coifman, Yale University
    • Guy David, Université Paris-Saclay
    • Fariba Fahroo, AFOSR
    • Leslie Greengard, New York University
    • F. Alberto Grünbaum, University of California, Berkeley
    • Yannis Kevrekidis, Johns Hopkins University
    • Arje Nachman, Air Force Office of Scientific Research
    • Vladimir Rokhlin, Yale University
Thursday, June 29, 2023
  • 9:00 - 9:45 am EDT
    Some estimation problems for high-dimensional stochastic dynamical systems with structure
    11th Floor Lecture Hall
    • Speaker
    • Mauro Maggioni, Johns Hopkins University
    • Session Chairs
    • Yariv Aizenbud, Yale University
    • Ronen Talmon, Technion - Israel Institute of Technology
    Abstract
    We consider several estimation problems for stochastic dynamical systems from observations of trajectories. (1) Let A be a linear dynamical system on a graph G. Both A and G are unknown; we observe a small number of entries of A, A^2, …, A^T, and we wish to estimate A. We study when this problem is well-posed, introduce an estimator of A based on matrix completion of a low-rank structured block-Hankel matrix, obtain results that capture some of the trade-offs between sampling in space and time, and finally show that this estimator can be constructed by a fast algorithm that provably converges locally quadratically to A. We verify this numerically on a variety of examples [C. Kuemmerle, MM, S. Tang]. (2) We consider nonlinear dynamical systems modeling interacting agents. The laws of interactions between the agents are often simple, e.g. they depend only on a function of pairwise interactions. Given observations along trajectories of the agents, we construct statistically and computationally efficient estimators for the laws of interactions, in a nonparametric fashion, and give conditions guaranteeing that the problem is well-posed [F. Lu, MM, J. Feng, P. Martin, J. Miller, S. Tang and M. Zhong]. (3) We consider model reduction of fast-slow high-dimensional stochastic systems with a low-dimensional slow manifold M. The fast modes are not assumed to be small, nor orthogonal to M. Both the dynamics and M are unknown; given access to a black-box simulator from which short bursts of simulations can be obtained, we estimate the manifold M, an effective stochastic process on M, and a simulator thereof, adapted to the dimension of M and with time steps dependent on the regularity of the effective process. The estimation may be performed on the fly, for efficient exploration. We demonstrate the simulation of paths of the effective dynamics, and estimation of crucial features, including the stationary distribution, metastable states, residence times, and transition rates [MM, X.-F. Ye, S. Yang].
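    Schematically, and with the indexing simplified relative to the talk, the low-rank structure exploited in problem (1) comes from arranging the observed powers of A into a block-Hankel matrix that factors through A:
\[
H=\begin{pmatrix} A & A^{2} & \cdots & A^{k}\\ A^{2} & A^{3} & \cdots & A^{k+1}\\ \vdots & & & \vdots \\ A^{k} & A^{k+1} & \cdots & A^{2k-1}\end{pmatrix}
=\begin{pmatrix} I \\ A \\ \vdots \\ A^{k-1}\end{pmatrix} A \begin{pmatrix} I & A & \cdots & A^{k-1}\end{pmatrix},
\]
    so rank(H) ≤ rank(A), and observing a few entries of A, A², …, A^T amounts to observing a few entries of a low-rank structured matrix, which is what makes a matrix-completion estimator plausible.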
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Low distortion embeddings with bottom-up manifold learning
    11th Floor Lecture Hall
    • Speaker
    • Gal Mishne, University of California San Diego
    • Session Chairs
    • Yariv Aizenbud, Yale University
    • Ronen Talmon, Technion - Israel Institute of Technology
    Abstract
    Manifold learning algorithms aim to map high-dimensional data into lower dimensions while preserving local and global structure. In this talk, I present Low Distortion Local Eigenmaps (LDLE), a bottom-up manifold learning framework that constructs low-distortion local views of a dataset in lower dimensions and registers them to obtain a global embedding. Motivated by Jones, Maggioni, and Schul (2008), LDLE constructs local views by selecting subsets of the global eigenvectors of the graph Laplacian such that they are locally orthogonal. The global embedding is obtained by rigidly aligning these local views, which is solved iteratively. Our global alignment formulation enables tearing manifolds so as to embed them into their intrinsic dimension, including manifolds without boundary and non-orientable manifolds. We define strong and weak notions of global distortion to evaluate embeddings in low dimensions. We show that Riemannian Gradient Descent (RGD) converges to an embedding with guaranteed low global distortion. Compared to competing manifold learning and data visualization approaches, we demonstrate that LDLE achieves the lowest local and global distortion on real and synthetic datasets.
  • 11:30 am - 12:15 pm EDT
    Curvature on Combinatorial Graphs
    11th Floor Lecture Hall
    • Speaker
    • Stefan Steinerberger, University of Washington
    • Session Chairs
    • Yariv Aizenbud, Yale University
    • Ronen Talmon, Technion - Israel Institute of Technology
    Abstract
    Curvature is one of the fundamental ingredients in differential geometry. It is interesting to think of combinatorial graphs as manifolds and a number of different notions of curvature have been proposed. I will introduce some of the existing ideas and then propose a new notion based on a simple and completely explicit linear system of equations. This notion satisfies a surprisingly large number of desirable properties -- connections to game theory (especially the von Neumann Minimax Theorem) and potential theory will be sketched. I will also sketch some curious related open problems. No prior knowledge of differential geometry (or graphs) is required.
  • 12:30 - 2:30 pm EDT
    Networking Lunch
    Lunch/Free Time - 11th Floor Collaborative Space
  • 2:30 - 3:15 pm EDT
    Reduced label complexity for tight linear regression
    11th Floor Lecture Hall
    • Speaker
    • Alex Gittens, Rensselaer Polytechnic Institute
    • Session Chairs
    • Kirill Serkh, University of Toronto
    • Amit Singer, Princeton University
    Abstract
    The success of modern supervised machine learning is predicated on the existence of large labeled data sets, but in many domains, it is cost prohibitive to generate a large number of high quality labels. It is more feasible to first collect a large unlabeled data set, then decide to invest in labeling a subset of this data set. This motivates the consideration of label complexity: for a given supervised learning problem and unlabeled data set, what is the minimal number of labels required so that a model fitted using the labeled subset has almost as much predictive power as would a model fitted on the entire data set if all the labels were available? We present algorithmic results on the label complexity of linear regression: given n data points, how many samples must be labeled to obtain a model with near-optimal in-sample prediction error? Existing approaches to reducing the label complexity of linear regression -- including various approaches from the randomized numerical linear algebra community such as core-sets and the use of iterative algorithms that touch one data point per iteration -- are applicable when a constant factor approximation is acceptable. New approaches are needed to enter the regime where the approximation factor decreases with the size of the data set. In this setting, we provide a polynomial time algorithm that reduces the label complexity by O(sqrt(n)) additively. The algorithm is based on a tight analysis of the regression error incurred by forming a core-set using backward selection.
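    For contrast with the new regime, here is a minimal sketch of the classical randomized-NLA core-set route mentioned above (leverage-score row sampling, which yields constant-factor approximations; this is not the backward-selection algorithm of the talk, and the function name and weighting are standard textbook choices):
```python
import numpy as np

def leverage_score_subset(X, k, seed=0):
    """Sample k rows of X with probability proportional to their statistical leverage.

    Only the selected rows need labels; solving the reweighted subsampled least-squares
    problem gives a constant-factor approximation to the full regression fit.
    """
    rng = np.random.default_rng(seed)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    lev = (U ** 2).sum(axis=1)                 # leverage scores; they sum to rank(X)
    p = lev / lev.sum()
    idx = rng.choice(X.shape[0], size=k, replace=True, p=p)
    weights = 1.0 / np.sqrt(k * p[idx])        # importance-sampling reweighting
    return idx, weights

# Usage: label only rows idx, then solve the weighted least-squares problem
#   beta_hat = lstsq(weights[:, None] * X[idx], weights * y[idx]).
X = np.random.randn(1000, 10)
idx, w = leverage_score_subset(X, 50)
print(idx.shape, w.shape)
```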
  • 3:30 - 4:15 pm EDT
    Randomized algorithms for linear algebraic computations
    11th Floor Lecture Hall
    • Speaker
    • Per-Gunnar Martinsson, University of Texas at Austin
    • Session Chairs
    • Kirill Serkh, University of Toronto
    • Amit Singer, Princeton University
    Abstract
    The talk will describe how randomized algorithms can effectively, accurately, and reliably solve linear algebraic problems that are omnipresent in scientific computing and in data analysis. We will focus on techniques for low rank approximation, since these methods are particularly simple and powerful. The talk will also briefly survey a number of other randomized algorithms for tasks such as solving linear systems, estimating matrix norms, and computing full matrix factorizations.
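    A minimal sketch of the randomized range-finder idea at the heart of these low-rank methods, in its standard textbook form (parameters k, p, q are illustrative choices, not tuned code):
```python
import numpy as np

def randomized_svd(A, k, p=10, q=1, seed=0):
    """Basic randomized low-rank SVD: sample the range of A, then factor a small matrix.

    k: target rank, p: oversampling, q: power iterations for slowly decaying spectra.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))   # random test matrix
    Y = A @ Omega                                      # sample the range of A
    for _ in range(q):                                 # power iterations sharpen the sample
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis for the sampled range
    B = Q.T @ A                                        # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# A has exact rank 15, so the rank-15 factorization is accurate to roundoff.
A = np.random.randn(500, 15) @ np.random.randn(15, 200)
U, s, Vt = randomized_svd(A, k=15)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```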
  • 5:00 - 7:00 pm EDT
    Banquet (offsite)
Friday, June 30, 2023
  • 9:00 - 9:45 am EDT
    Project and Forget: Solving Large-Scale Metric Constrained Problems
    11th Floor Lecture Hall
    • Speaker
    • Anna Gilbert, Yale University
    • Session Chairs
    • Boris Landa, Yale University
    • John Schotland, Yale University
    Abstract
    Many important machine learning problems can be formulated as highly constrained convex optimization problems. One important example is metric-constrained problems. In this work, we show that standard optimization techniques cannot be used to solve metric-constrained problems. To solve such problems, we provide a general active-set framework, called Project and Forget, and several variants thereof that use Bregman projections. Project and Forget is a general-purpose method that can be used to solve highly constrained convex problems with many (possibly exponentially many) constraints. We provide a theoretical analysis of Project and Forget and prove that our algorithms converge to the global optimal solution and have a linear rate of convergence. We demonstrate that using our method, we can solve large problem instances of general weighted correlation clustering, metric nearness, information-theoretic metric learning, and quadratically regularized optimal transport, in each case outperforming state-of-the-art methods with respect to CPU time and problem size. Joint work with Rishi Sonthalia (UCLA).
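    A toy sketch of the projection flavor of such metric-constrained problems: cyclically projecting a dissimilarity matrix onto violated triangle inequalities. This is illustrative only and is not the Project and Forget algorithm, which uses Bregman projections with dual corrections and active-set management of the (possibly exponentially many) constraints.
```python
import numpy as np
from itertools import permutations

def fix_triangles(D, n_sweeps=50):
    """Cyclically project a symmetric dissimilarity matrix onto triangle inequalities.

    For each ordered triple (i, j, k), the constraint is D[i,j] <= D[i,k] + D[k,j].
    The Euclidean projection onto a single violated constraint spreads the violation
    equally (v/3) over the three entries involved.
    """
    D = D.copy()
    n = D.shape[0]
    for _ in range(n_sweeps):
        for i, j, k in permutations(range(n), 3):
            v = D[i, j] - D[i, k] - D[k, j]
            if v > 0:                                   # constraint violated
                D[i, j] -= v / 3
                D[i, k] += v / 3
                D[k, j] += v / 3
                D[j, i], D[k, i], D[j, k] = D[i, j], D[i, k], D[k, j]  # keep symmetry
    return D

D = np.abs(np.random.randn(6, 6)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)
print(fix_triangles(D).round(2))
```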
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Randomized tensor-network algorithms for random data in high-dimensions
    11th Floor Lecture Hall
    • Speaker
    • Yuehaw Khoo, The University of Chicago
    • Session Chairs
    • Boris Landa, Yale University
    • John Schotland, Yale University
    Abstract
    The tensor-network ansatz has long been employed to solve the high-dimensional Schrödinger equation, with complexity that scales linearly in the dimension. Recently, this ansatz has found applications in various machine learning scenarios, including supervised learning and generative modeling, where the data originate from a random process. In this talk, we present a new perspective on randomized linear algebra, showcasing its use in estimating a density as a tensor network from i.i.d. samples of a distribution, without the curse of dimensionality and without the use of optimization techniques. Moreover, we illustrate how this concept can combine the strengths of particle and tensor-network methods for solving high-dimensional PDEs, resulting in enhanced flexibility for both approaches. (Based on joint works with Yian Chen, Jeremy Hoskins, YoonHaeng Hur, Michael Lindsey, Yifan Peng, Miles Stoudenmire, Xun Tang, and Lexing Ying.)
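    For context, the tensor-train (matrix product state) format referenced here writes a d-variate function or density through a chain of small three-index cores; schematically,
\[
p(x_1,\dots,x_d) \;\approx\; \sum_{\alpha_1,\dots,\alpha_{d-1}} G_1(x_1,\alpha_1)\, G_2(\alpha_1,x_2,\alpha_2)\cdots G_d(\alpha_{d-1},x_d),
\]
    so that storage and evaluation scale linearly in d (roughly O(d n r²) for ranks r and n states per variable) rather than as n^d.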
  • 11:30 am - 12:15 pm EDT
    Integral equations and singular waveguides
    11th Floor Lecture Hall
    • Speaker
    • Jeremy Hoskins, University of Chicago
    • Session Chairs
    • Boris Landa, Yale University
    • John Schotland, Yale University
  • 12:30 - 4:00 pm EDT
    Lunch/Free Time for Collaboration
    Lunch/Free Time

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).


Request Reimbursement

This section is for general purposes only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.

ORCID iD
As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
Acceptable Costs
  • 1 roundtrip between your home institute and ICERM
  • Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
  • Ground Transportation to and from airports and ICERM.
Unacceptable Costs
  • Flights on non-U.S. or non-E.U. airlines
  • Flights on U.K. airlines
  • Seats in economy plus, business class, or first class
  • Change ticket fees of any kind
  • Multi-use bus passes
  • Meals or incidentals
Advance Approval Required
  • Personal car travel to ICERM from outside New England
  • Multiple-destination plane ticket; does not include layovers to reach ICERM
  • Arriving or departing from ICERM more than a day before or day after the program
  • Multiple trips to ICERM
  • Rental car to/from ICERM
  • Flights on Swiss, Japanese, or Australian airlines
  • Arriving or departing from airport other than PVD/BOS or home institution's local airport
  • 2 one-way plane tickets to create a roundtrip (often purchased from Expedia, Orbitz, etc.)
Travel Maximum Contributions
  • New England: $350
  • Other contiguous US: $850
  • Asia & Oceania: $2,000
  • All other locations: $1,500
  • Note: these rates were updated in Spring 2023 and supersede any prior invitation rates. Any invitations without travel support will still not receive travel support.
Reimbursement Requests

Request Reimbursement with Cube

Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.

Reimbursement Tips
  • Scanned original receipts are required for all expenses
  • Airfare receipt must show full itinerary and payment
  • ICERM does not offer per diem or meal reimbursement
  • Allowable mileage is reimbursed at the prevailing IRS Business Rate; document the trip with a PDF of the Google Maps route
  • Keep all documentation until you receive your reimbursement!
Reimbursement Timing

6 - 8 weeks after all documentation is sent to ICERM. All reimbursement requests are reviewed by numerous central offices at Brown who may request additional documentation.

Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes; submissions after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program ends.