Abstract

This workshop focuses on connections between higher-order statistics and symmetric tensors, and their applications to machine learning, network science, and other domains. Higher-order statistics refers to the study of correlations between three or more covariates. This is in contrast to the usual mean and covariance, which are based on one and two covariates.

Higher-order statistics are needed to characterize complex data distributions, such as mixture models. Symmetric tensors, meanwhile, are multi-dimensional arrays. They generalize covariance matrices and affinity matrices and can be used to represent higher-order correlations. Tensor decompositions extend matrix factorizations from numerical linear algebra to multilinear algebra. Recently, tensor-based approaches have become more practical due to the availability of larger datasets and new algorithms.
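
As a concrete illustration of the objects involved, the following minimal numpy sketch (illustrative only, not part of the workshop materials) contrasts the covariance matrix with the symmetric third-order moment tensor of a dataset:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 5))   # 1000 samples, 5 covariates
    Xc = X - X.mean(axis=0)              # center the data

    # Covariance matrix: second-order moments, shape (5, 5)
    cov = Xc.T @ Xc / len(Xc)

    # Third-order moment tensor: shape (5, 5, 5), symmetric under any
    # permutation of its three indices
    M3 = np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / len(Xc)

    assert np.allclose(M3, M3.transpose(1, 0, 2))  # symmetry check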

The workshop brings together applied mathematicians, statisticians, probabilists, machine learning experts, and computational algebraic geometers. Presentations will show how symmetric tensors, together with nonlinear algebra and non-convex optimization, provide natural mathematical machinery for exploiting higher-order interactions. Topics include moment tensor decompositions; spectral methods for hypergraphs; and related random matrix theory.

Image: “Connecting Higher-Order Statistics and Symmetric Tensors” (credit: João M. Pereira)

Confirmed Speakers & Participants

Talks will be presented virtually or in person, as indicated in the schedule below.

Workshop Schedule

Monday, January 8, 2024
  • 8:50 - 9:00 am EST
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:30 am EST
    Workshop Introduction
    Opening Remarks - 11th Floor Lecture Hall
    • Speaker
    • Tamara Kolda, MathSci.ai
    • Session Chair
    • Joe Kileel, University of Texas at Austin
  • 9:45 - 10:15 am EST
    Higher-order graph algorithms: Hypergraphs vs. tensors
    11th Floor Lecture Hall
    • Speaker
    • David Gleich, Purdue University
    • Session Chair
    • Joe Kileel, University of Texas at Austin
    Abstract
    This talk surveys a variety of research commonly grouped under the phrase “higher-order graph algorithms.” We’ll begin by establishing well-known relationships among matrix, stochastic, and standard graph algorithms; for instance, the equivalence between finite-space, discrete-time Markov chains and weighted graphs. We’ll then look at what happens with higher-order generalizations of these ideas, where different generalizations often turn out to be more different from one another than their pairwise or matrix origins, with fewer connections among them.
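    As a concrete instance of the matrix-level equivalence mentioned in the abstract, the following minimal sketch (illustrative only) row-normalizes a weighted adjacency matrix to obtain the transition matrix of a discrete-time Markov chain:

      import numpy as np

      # Weighted adjacency matrix of a small graph on 3 nodes
      W = np.array([[0., 2., 1.],
                    [2., 0., 3.],
                    [1., 3., 0.]])

      # Row-normalizing the weights gives the transition matrix of a
      # finite-space, discrete-time Markov chain (a random walk on the graph)
      P = W / W.sum(axis=1, keepdims=True)

      assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability vector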
  • 10:15 - 10:45 am EST
    Coffee Break
    11th Floor Collaborative Space
  • 10:45 - 11:15 am EST
    Computational Lower Bounds for Tensor PCA
    11th Floor Lecture Hall
    • Speaker
    • Daniel Hsu, Columbia University
    • Session Chair
    • Joe Kileel, University of Texas at Austin
  • 11:15 am - 12:15 pm EST
    Group Activity
    11th Floor Lecture Hall
    • Session Chair
    • Joe Kileel, University of Texas at Austin
  • 12:15 - 1:45 pm EST
    Networking Lunch Part 1
    Working Lunch - 11th Floor Collaborative Space
  • 1:45 - 2:15 pm EST
    Thinking of symmetric tensors as homogeneous polynomials and applications
    11th Floor Lecture Hall
    • Speaker
    • João Pereira, Instituto de Matemática Pura e Aplicada
    • Session Chair
    • Tamara Kolda, MathSci.ai
  • 2:15 - 2:45 pm EST
    Symmetric tensor decompositions and syzygies
    11th Floor Lecture Hall
    • Speaker
    • Kristian Ranestad, University of Oslo
    • Session Chair
    • Tamara Kolda, MathSci.ai
    Abstract
    A symmetric tensor decomposition may be expressed using apolarity. In examples of quartic forms, I will show the role of syzygies in finding such decompositions, reporting on work with M. and G. Kapustka, H. Schenck, M. Stillman, and B. Yuan.
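    For context, the standard apolarity setup behind such decompositions can be stated in a few lines of LaTeX (textbook material, not specific to this talk):

      % Waring decomposition of a degree-d form f into powers of linear forms:
      f = \sum_{i=1}^{r} \lambda_i \, \ell_i^{\,d}
      % Apolarity Lemma: for a finite point set X = \{p_1, \dots, p_r\},
      % f admits such a decomposition supported on X if and only if
      I(X) \subseteq f^{\perp}
      % where f^\perp is the apolar ideal of f, i.e. the constant-coefficient
      % differential operators annihilating f.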
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:00 pm EST
    Symmetric Tensor Decompositions for Diagonal Gaussian Mixtures
    11th Floor Lecture Hall
    • Speaker
    • Zi Yang, University at Albany
    • Session Chair
    • Tamara Kolda, MathSci.ai
  • 4:00 - 5:00 pm EST
    Group Activity
    11th Floor Lecture Hall
    • Session Chair
    • Tamara Kolda, MathSci.ai
  • 5:00 - 6:30 pm EST
    Reception
    11th Floor Collaborative Space
Tuesday, January 9, 2024
  • 8:45 - 9:00 am EST
    Opening Remarks
    11th Floor Lecture Hall
    • Session Chair
    • João Pereira, Instituto de Matemática Pura e Aplicada
  • 9:00 - 9:30 am EST
    Covering number of real algebraic varieties and beyond: Improved bound and applications
    11th Floor Lecture Hall
    • Speaker
    • Yifan Zhang, University of Texas at Austin
    • Session Chair
    • João Pereira, Instituto de Matemática Pura e Aplicada
    Abstract
    We prove an upper bound on the covering number of real algebraic varieties, images of polynomial maps, and semialgebraic sets, which describe sets of low-rank tensors and structured tensor networks. The bound substantially improves on the best known general bound by Yomdin and Comte, and its proof is much more straightforward. As a consequence, our result gives new bounds on the volume of the tubular neighborhood of the image of a polynomial map and a semialgebraic set, where results for varieties by Lotz and Basu-Lerario are not directly applicable. In this talk, I will discuss the connection between this new result and tensor-based statistical learning methods. Time permitting, I will discuss applications of our theory to two main domains: sketching for (general) polynomial optimization problems and generalization error bounds for deep neural networks with rational or ReLU activations.
  • 9:45 - 10:15 am EST
    TBD
    11th Floor Lecture Hall
    • Speaker
    • Anna Seigal, University of Oxford
    • Session Chair
    • João Pereira, Instituto de Matemática Pura e Aplicada
  • 10:15 - 10:45 am EST
    Coffee Break
    11th Floor Collaborative Space
  • 10:45 - 11:15 am EST
    Nonlinear Meta-learning Can Guarantee Faster Rates
    11th Floor Lecture Hall
    • Speaker
    • Zhu Li, University College London
    • Session Chair
    • João Pereira, Instituto de Matemática Pura e Aplicada
    Abstract
    Many recent theoretical works on meta-learning aim to achieve guarantees in leveraging similar representational structures from related tasks towards simplifying a target task. The main aim of theoretical guarantees on the subject is to establish the extent to which convergence rates, in learning a common representation, may scale with the number N of tasks (as well as the number of samples per task). First steps in this setting demonstrate this property when both the shared representation amongst tasks and the task-specific regression functions are linear. This linear setting readily reveals the benefits of aggregating tasks, e.g., via averaging arguments. In practice, however, the representation is often highly nonlinear, introducing nontrivial biases in each task that cannot easily be averaged out as in the linear case. In the present work, we derive theoretical guarantees for meta-learning with nonlinear representations. In particular, assuming the shared nonlinearity maps to an infinite-dimensional RKHS, we show that additional biases can be mitigated with careful regularization that leverages the smoothness of task-specific regression functions, yielding improved rates that scale with the number of tasks as desired.
  • 11:15 - 11:20 am EST
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 11:20 am - 12:15 pm EST
    Group Activity
    11th Floor Collaborative Space
  • 12:15 - 1:45 pm EST
    Lunch/Free Time
  • 1:45 - 2:15 pm EST
    Covariance Loss and Privacy
    11th Floor Lecture Hall
    • Speaker
    • March Boedihardjo, Michigan State University
    • Session Chair
    • Eric Chi, Rice University
  • 2:30 - 3:00 pm EST
    Recovering hidden structures via tensor decomposition
    11th Floor Lecture Hall
    • Speaker
    • Bernard Mourrain, INRIA
    • Session Chair
    • Eric Chi, Rice University
    Abstract
    In several application domains, such as data analysis or signal processing, understanding the underlying structure or distribution of the data is a challenging problem. Often, statistics of order 2 are not sufficient to reveal this hidden structure, while statistics of order 3 and higher make it possible. We will illustrate this phenomenon and explain how tensor decomposition can be used to recover this structure. We will present tensor decomposition methods based on linear algebra tools and illustrate the recovery techniques on some examples.
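    As one classical example of a linear-algebra-based recovery technique (a generic illustration, not necessarily the speaker's method), Jennrich-style simultaneous diagonalization recovers the components of a low-rank symmetric tensor:

      import numpy as np

      rng = np.random.default_rng(1)
      n, r = 6, 3

      # Symmetric rank-3 tensor T = sum_i u_i (x) u_i (x) u_i
      U = rng.standard_normal((n, r))
      T = np.einsum('ir,jr,kr->ijk', U, U, U)

      # Two random contractions of the third mode: Ma = U diag(<u_i, a>) U^T
      a, b = rng.standard_normal(n), rng.standard_normal(n)
      Ma = np.einsum('ijk,k->ij', T, a)
      Mb = np.einsum('ijk,k->ij', T, b)

      # Eigenvectors of Ma Mb^+ recover the u_i up to sign and scale
      vals, vecs = np.linalg.eig(Ma @ np.linalg.pinv(Mb))
      idx = np.argsort(-np.abs(vals))[:r]
      U_hat = np.real(vecs[:, idx])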
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:00 pm EST
    Hierarchical nonnegative tensor factorizations and applications
    11th Floor Lecture Hall
    • Speaker
    • Jamie Haddock, Harvey Mudd College
    • Session Chair
    • Eric Chi, Rice University
    Abstract
    Nonnegative matrix factorization (NMF) has found many applications, including topic modeling and document analysis. Hierarchical NMF (HNMF) variants are able to learn topics at various levels of granularity and illustrate their hierarchical relationships. Recently, nonnegative tensor factorization (NTF) methods have been applied in a similar fashion in order to handle data sets with complex, multi-modal structure. Hierarchical NTF (HNTF) methods have been proposed; however, these methods often do not naturally generalize their matrix-based counterparts. This talk will survey some recent work developing HNTF models that generalize HNMF and implementing them in a neural network architecture, complete with a forward- and backward-propagation training technique.
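    For context, a minimal two-level hierarchical NMF sketch in the matrix setting (a generic construction, not the talk's HNTF models or training technique):

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(2)
      X = rng.random((100, 50))            # e.g., a document-term matrix

      # Level 1: X ≈ W1 @ H1 with 8 fine-grained topics
      nmf1 = NMF(n_components=8, init='nndsvda', max_iter=500, random_state=0)
      W1 = nmf1.fit_transform(X)
      H1 = nmf1.components_                # (8, 50) topic-term matrix

      # Level 2: group the 8 topics into 3 supertopics, H1 ≈ W2 @ H2,
      # so that X ≈ W1 @ W2 @ H2 and W2 encodes the topic hierarchy
      nmf2 = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
      W2 = nmf2.fit_transform(H1)
      H2 = nmf2.components_                # (3, 50) supertopic-term matrix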
  • 4:00 - 4:10 pm EST
    Goodness-of-Fit Tests for Linear Non-Gaussian Structural Equation Models
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Daniela Schkoda, Technical University of Munich
    • Session Chair
    • Eric Chi, Rice University
    Abstract
    Inferring causal relationships between variables solely from observational data is a central question in many scientific fields. Various algorithms have been developed to tackle this problem by leveraging different types of a priori assumptions. One prominent example is the assumption that the joint distribution of the observed variables follows a linear non-Gaussian structural equation model. In this talk, we will present a novel goodness-of-fit test that assesses the validity of this assumption in the basic setting without latent confounders as well as in extension to linear models that incorporate latent confounders. Our approach involves testing algebraic relations among second and higher moments that hold as a consequence of the linearity of the structural equations. Specifically, we show that the linearity implies rank constraints on matrices and tensors derived from moments. For a practical implementation of our tests, we consider a multiplier bootstrap method that uses incomplete U-statistics to estimate subdeterminants, as well as asymptotic approximations to the null distribution of singular values.
  • 4:10 - 4:20 pm EST
    Exact recovery in the Gaussian weighted stochastic block model: statistical and algorithmic thresholds
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Aaradhya Pandey, Princeton University
    • Session Chair
    • Eric Chi, Rice University
    Abstract
    In this article, we analyze the Gaussian weighted version of the stochastic block model with two symmetric communities. First, we provide the information-theoretic threshold in terms of the signal-to-noise ratio of the model and prove that when this ratio is less than one, no statistical estimator can exactly recover the community structure with probability approaching one. On the other hand, we show that when the ratio is larger than one, the maximum likelihood estimator itself succeeds in exactly recovering the community structure with probability approaching one. Then, we provide two algorithms for achieving exact recovery: the semidefinite relaxation as well as the spectral relaxation of the original maximum likelihood estimator are able to recover the community structure all the way down to the threshold. We also compare the problem of community detection with the problem of recovering a planted densely weighted community within a graph and provide strong evidence (along with proofs) that the exact recovery of two symmetric communities is a strictly easier problem than recovering a planted dense subgraph of size half the total number of nodes.
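    For illustration, here is a minimal sketch of the spectral-relaxation idea in this two-community setting (a generic version with hypothetical parameters, not the authors' exact estimator): cluster vertices by the sign of the leading eigenvector of the weight matrix.

      import numpy as np

      rng = np.random.default_rng(3)
      n, mu = 200, 0.5
      x = np.repeat([1.0, -1.0], n // 2)        # true community labels

      # Gaussian weighted graph: mean mu * x_i * x_j plus symmetric noise
      noise = rng.standard_normal((n, n))
      W = mu * np.outer(x, x) + (noise + noise.T) / np.sqrt(2)

      # Spectral relaxation: sign of the leading eigenvector of W
      vals, vecs = np.linalg.eigh(W)
      x_hat = np.sign(vecs[:, -1])              # labels up to a global flip

      accuracy = max(np.mean(x_hat == x), np.mean(-x_hat == x))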
  • 4:20 - 4:30 pm EST
    Randomized algorithms for tensor problems with factorized operator or data
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Paulina Hoyos Restrepo, The University of Texas at Austin
    • Session Chair
    • Eric Chi, Rice University
    Abstract
    Solving large-scale systems of linear equations or linear regressions has vital applications in nearly every field of data-driven science. Recently, Kaczmarz-type methods have been proposed for a variety of tensor linear systems and regression problems. In this talk, I will present a variant of the randomized Kaczmarz method for tensor regression problems where the data tensor is factorized by means of the t-product.
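    For context, a minimal sketch of the classical randomized Kaczmarz iteration in the matrix case (the talk's variant extends this idea to tensor regression with t-product-factorized data):

      import numpy as np

      def randomized_kaczmarz(A, b, iters=5000, seed=0):
          # Sample rows with probability proportional to their squared norms
          # and project the iterate onto the hyperplane <a_i, x> = b_i
          rng = np.random.default_rng(seed)
          probs = np.linalg.norm(A, axis=1) ** 2
          probs /= probs.sum()
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              i = rng.choice(A.shape[0], p=probs)
              x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
          return x

      A = np.random.default_rng(4).standard_normal((300, 20))
      x_true = np.ones(20)
      x_hat = randomized_kaczmarz(A, A @ x_true)   # converges to x_true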
  • 4:30 - 4:40 pm EST
    Generating Polynomial Method for Tensor Decomposition and Multi-view Learning
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Zequn Zheng, Louisiana State University
    • Session Chair
    • Eric Chi, Rice University
    Abstract
    Tensors, or multidimensional arrays, are higher-order generalizations of matrices. They are natural structures for expressing data with inherent higher-order structure. Tensor decompositions play an important role in learning those hidden structures. In this talk, we present a novel algorithm that finds tensor decompositions using generating polynomials. Under some conditions on the tensor's rank, we prove that the exact tensor decomposition can be found by our algorithm. The algorithm can also be applied to multi-view learning in data science.
Wednesday, January 10, 2024
  • 8:45 - 9:00 am EST
    Opening Remarks
    11th Floor Lecture Hall
  • 9:00 - 9:30 am EST
    Moment tensor and Gaussian mixture model
    11th Floor Lecture Hall
    • Speaker
    • Yihong Wu, Yale University
    • Session Chair
    • Jamie Haddock, Harvey Mudd College
    Abstract
    In this talk we explore the problem of learning a finite Gaussian location mixture model in high dimensions without separation conditions. Extending the one-dimensional result of Heinrich-Kahn and Wu-Yang, we determine the optimal rate of estimating the mixing distribution in Wasserstein distance, achieved by an estimator computable in polynomial time. Furthermore, we show that the mixture density can be estimated at the optimal parametric rate in Hellinger distance and provide a computationally efficient algorithm to achieve this rate in the special case of two components (for three or more components this is open). Both the theoretical and methodological development rely on a careful application of the method of moments. Central to our results is the observation that the information geometry of finite Gaussian mixtures is characterized by the moment tensors of the mixing distribution, whose low-rank structure can be exploited to obtain a sharp local entropy bound.
  • 9:45 - 10:15 am EST
    Möbius inversion and the bootstrap
    11th Floor Lecture Hall
    • Speaker
    • Florian Schaefer, Georgia Institute of Technology
    • Session Chair
    • Amit Singer, Princeton University
    Abstract
    Möbius inversion on the partition lattice converts moment products to cumulants. In this talk, we draw a novel connection to bootstrap resampling for bias correction by casting it as an iterative algorithm for Möbius inversion. Choosing appropriate step sizes, we obtain exact bias correction for moment polynomials with unknown coefficients.
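    As a concrete instance of the moments-to-cumulants conversion (standard combinatorics, not the talk's bootstrap algorithm), the following sketch evaluates the Möbius sum over the partition lattice, where mu(pi) = (-1)^(|pi|-1) (|pi|-1)!:

      from math import factorial
      from sympy.utilities.iterables import multiset_partitions

      def cumulant(m, n):
          # n-th cumulant from raw moments m[1..n]:
          # kappa_n = sum over partitions pi of [n] of mu(pi) * prod_B m[|B|]
          total = 0.0
          for pi in multiset_partitions(list(range(n))):
              mobius = (-1) ** (len(pi) - 1) * factorial(len(pi) - 1)
              prod = 1.0
              for block in pi:
                  prod *= m[len(block)]
              total += mobius * prod
          return total

      # Raw moments of a standard normal: cumulants beyond the second vanish
      m = {1: 0.0, 2: 1.0, 3: 0.0, 4: 3.0}
      print(cumulant(m, 2), cumulant(m, 3), cumulant(m, 4))  # 1.0 0.0 0.0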
  • 10:15 am - 12:15 pm EST
    Coffee Break / Poster Session
    Poster Session - 11th Floor Collaborative Space
  • 12:15 - 1:45 pm EST
    Lunch/Free Time
  • 1:45 - 2:45 pm EST
    Work Time
    11th Floor Lecture Hall
  • 2:45 - 3:45 pm EST
    Networking Activity with (Math-Inspired) Games
    11th Floor Collaborative Space
  • 3:45 - 5:00 pm EST
    Work Time
    11th Floor Lecture Hall
Thursday, January 11, 2024
  • 8:45 - 9:00 am EST
    Opening Remarks
    11th Floor Lecture Hall
  • 9:00 - 9:30 am EST
    Method of Moments in Cryo-EM
    11th Floor Lecture Hall
    • Speaker
    • Amit Singer, Princeton University
    • Session Chair
    • Tamara Kolda, MathSci.ai
    Abstract
    Single-Particle Electron Cryomicroscopy (cryo-EM) will soon become the leading technique for determining 3-D molecular structures at high resolution. In cryo-EM, the 3-D structure needs to be determined from many 2-D noisy tomographic projection images of the molecule taken at unknown viewing directions and positions. The maximum likelihood approach has been very successful in determining large molecular structures, but struggles with small molecules for which the signal-to-noise ratio (SNR) of the images is too low for accurate viewing direction estimation and detection. Motivated by the challenge of reconstructing small molecules by cryo-EM, we have been investigating the method of moments as an alternative statistical and computational framework. In particular, the method of moments provides theoretical insight about the sample complexity, that is, the number of images required for reconstruction. No prior knowledge of cryo-EM or the method of moments is necessary for this talk.
  • 9:45 - 10:15 am EST
    Memory-efficient modewise measurements for tensor compression and recovery
    11th Floor Lecture Hall
    • Speaker
    • Liza Rebrova, Princeton University
    • Session Chair
    • Tamara Kolda, MathSci.ai
    Abstract
    Data-oblivious measurements play an important role in low-rank data compression and recovery techniques, frequently used in streaming settings and within iterative algorithms. Typically, linear data-oblivious measurements involve some version of a random sketch that preserves the geometric properties of the data. When data is tensorial, a special challenge is to create a sketch with a structure that reflects tensor structure: this way, it can work similarly to a dense unstructured random matrix but can be applied faster and stored much more efficiently. I will discuss our recent work on developing flexible and provable modewise sketches for tensor data processing, including compressed CP rank fitting, modewise tensor iterative hard thresholding and direct recovery from leave-one-out modewise measurements for low Tucker-rank tensors.
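    For illustration, a minimal sketch of modewise Gaussian sketching (generic, not the paper's specific constructions): one small random matrix per mode replaces a single huge unstructured sketch.

      import numpy as np

      rng = np.random.default_rng(5)
      T = rng.standard_normal((50, 60, 70))      # data tensor

      # One small Gaussian sketch per mode, applied along that mode only
      S1 = rng.standard_normal((10, 50)) / np.sqrt(10)
      S2 = rng.standard_normal((10, 60)) / np.sqrt(10)
      S3 = rng.standard_normal((10, 70)) / np.sqrt(10)

      # Compressed tensor is 10 x 10 x 10; the S_i are stored separately,
      # never materialized as one k x (50*60*70) matrix
      G = np.einsum('ai,bj,ck,ijk->abc', S1, S2, S3, T)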
  • 10:15 - 10:45 am EST
    Coffee Break
    11th Floor Collaborative Space
  • 10:45 - 11:15 am EST
    Gauss-Newton for Symmetric Tensor Decomposition
    11th Floor Lecture Hall
    • Speaker
    • Eric Chi, Rice University
    • Session Chair
    • Tamara Kolda, MathSci.ai
    Abstract
    In this work we present a simple method for computing a symmetric CP decomposition based on a Gauss-Newton linearization. The algorithm requires solving a sequence of linear least squares problems. With modest modification our approach can be extended to compute symmetric decompositions under additional constraints such as nonnegativity and sparsity. We present empirical results highlighting the effectiveness of the approach. This is joint work with Jocelyn Chi.
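    A minimal sketch of the idea (using scipy's general least-squares solver, which linearizes the residual Gauss-Newton style, in place of the authors' implementation):

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(6)
      n, r = 5, 2
      A_true = rng.standard_normal((n, r))
      T = np.einsum('ir,jr,kr->ijk', A_true, A_true, A_true)

      def residual(a_flat):
          # Residual of the symmetric CP model T ≈ sum_j a_j (x) a_j (x) a_j
          A = a_flat.reshape(n, r)
          return (np.einsum('ir,jr,kr->ijk', A, A, A) - T).ravel()

      # Each step solves a linear least squares problem on the linearization
      sol = least_squares(residual, rng.standard_normal(n * r))
      A_hat = sol.x.reshape(n, r)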
  • 11:15 am - 12:15 pm EST
    Group Activity
    11th Floor Collaborative Space
  • 12:15 - 1:45 pm EST
    Lunch/Free Time
  • 1:45 - 2:15 pm EST
    Efficient Moment Methods and Mixture Models
    11th Floor Lecture Hall
    • Speaker
    • Joe Kileel, University of Texas at Austin
    • Session Chair
    • Amit Singer, Princeton University
  • 2:30 - 3:00 pm EST
    Using higher-order moments for subspace clustering
    11th Floor Lecture Hall
    • Speaker
    • David Hong, University of Delaware
    • Session Chair
    • Jamie Haddock, Harvey Mudd College
    Abstract
    Consider data points drawn from a union of subspaces (i.e., each point is drawn from one of several subspaces) with noise added. Given only these data points, the goal of subspace clustering is to cluster the points by the subspace to which they belong and to estimate the corresponding subspaces. There has been great progress in recent years on a variety of techniques for tackling this problem, but to the best of my knowledge, a method-of-moments approach has not yet been studied. Inspired by recent work on efficient symmetric tensor methods for estimating Gaussian Mixture Models, this talk will discuss work on developing a method-of-moments technique for subspace clustering.
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:00 pm EST
    Zero-Inflated Poisson Tensor Factorization for Multimodal Genomics Data
    11th Floor Lecture Hall
    • Speaker
    • Neriman Tokcan, University of Massachusetts Boston
    • Session Chair
    • Amit Singer, Princeton University
    Abstract
    Tensor factorizations (TF) are powerful tools for the efficient representation and analysis of multidimensional data. However, classic TF methods based on maximum likelihood estimation underperform when applied to zero-inflated count data, such as single-cell RNA sequencing (scRNA-seq) data. Additionally, the stochasticity inherent in TFs results in factors that vary across repeated runs, making interpretation and reproducibility of the results challenging. In this talk, we present Zero-Inflated Poisson Tensor Factorization (ZIPTF), a novel approach for the factorization of high-dimensional count data with excess zeros. To address the challenge of stochasticity, we introduce Consensus Zero-Inflated Poisson Tensor Factorization (C-ZIPTF), which combines ZIPTF with a consensus-based meta-analysis. We evaluate our proposed ZIPTF and C-ZIPTF on synthetic zero-inflated count data and synthetic and real scRNA-seq data. ZIPTF consistently outperforms baseline matrix and tensor factorization methods in terms of reconstruction accuracy for zero-inflated data and accuracy of the factorization. Furthermore, our method consistently recovers known and biologically meaningful gene expression programs in both synthetic and real scRNA-seq data. In addition to these findings, we will also delve into the potential applications of Bayesian tensor factorizations, showcasing their potential to drive discoveries and advancements across diverse domains by extracting valuable insights from complex high-dimensional datasets.
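    For context, a minimal sketch of the zero-inflated Poisson model underlying ZIPTF (the pmf is standard; the code is illustrative):

      import numpy as np
      from scipy.stats import poisson

      def zip_loglik(x, lam, pi):
          # Zero-inflated Poisson: P(0) = pi + (1 - pi) e^{-lam},
          #                        P(k) = (1 - pi) Pois(k; lam) for k > 0
          x = np.asarray(x)
          ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
          ll_pos = np.log(1 - pi) + poisson.logpmf(x, lam)
          return np.where(x == 0, ll_zero, ll_pos).sum()

      print(zip_loglik([0, 0, 3, 1, 0], lam=2.0, pi=0.4))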
  • 4:00 - 5:00 pm EST
    Group Activity
    Group Activity - 11th Floor Collaborative Space
Friday, January 12, 2024
  • 8:45 - 9:00 am EST
    Opening Remarks
    11th Floor Lecture Hall
    • Session Chair
    • David Hong, University of Delaware
  • 9:00 - 9:30 am EST
    BM-product-based Tensor Decompositions for Unsymmetric and Symmetric Third Order Tensors
    11th Floor Lecture Hall
    • Speaker
    • Misha Kilmer, Tufts University
    • Session Chair
    • David Hong, University of Delaware
    Abstract
    Given tensors A, B, C of size m x 1 x n, m x p x 1, and 1 x p x n, respectively, their Bhattacharya-Mesner (BM) product is a third-order tensor of dimension m x p x n and BM-rank 1 (Mesner and Bhattacharya, 1990). Decomposing a tensor into a sum of BM-rank-1 terms offers a representation of third-order tensors that is particularly amenable to the analysis and compression of temporal data, such as surveillance video. After introducing the general case of the BM decomposition (BMD) and its computation, we illustrate its effectiveness on a video data example. Then, we propose a constrained BMD formulation and a Gauss-Newton-type algorithm suitable for the decomposition of symmetric tensors, with the goal of soliciting input from the workshop participants on the practical appeal of such an approach.
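    For illustration, a BM-rank-1 product is a single einsum; the index convention below follows the general BM formula D[i,j,k] = sum_l A[i,l,k] B[i,j,l] C[l,j,k] specialized to l = 1 (this convention is an assumption for illustration):

      import numpy as np

      rng = np.random.default_rng(7)
      m, p, n = 4, 3, 5
      A = rng.standard_normal((m, 1, n))
      B = rng.standard_normal((m, p, 1))
      C = rng.standard_normal((1, p, n))

      # BM-rank-1 product: D[i,j,k] = A[i,0,k] * B[i,j,0] * C[0,j,k]
      D = np.einsum('ilk,ijl,ljk->ijk', A, B, C)   # shape (m, p, n)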
  • 9:45 - 10:15 am EST
    Functional Tensor Decomposition: High-order Singular Value Decomposition in Tensor Analysis
    11th Floor Lecture Hall
    • Speaker
    • Anru Zhang, Duke University
    • Session Chair
    • David Hong, University of Delaware
  • 10:15 - 10:45 am EST
    Coffee Break
    11th Floor Collaborative Space
  • 10:45 - 11:15 am EST
    Scalable symmetric Tucker tensor decomposition
    11th Floor Lecture Hall
    • Speaker
    • Ruhui Jin, University of Texas at Austin
    • Session Chair
    • David Hong, University of Delaware
    Abstract
    We study the best low-rank Tucker decomposition of symmetric tensors. The motivating application is decomposing higher-order multivariate moments. Moment tensors have special structure and are important to various data science problems. We advocate for the projected gradient descent (PGD) method and the higher-order eigenvalue decomposition (HOEVD) approximation as computation schemes. Most importantly, we develop scalable adaptations of the basic PGD and HOEVD methods to decompose sample moment tensors. With the help of implicit and streaming techniques, we avoid the overhead cost of building and storing the moment tensor. Such reductions make computing the Tucker decomposition feasible for large data instances in high dimensions. Numerical experiments demonstrate the efficiency of the algorithms and the applicability of moment tensor decompositions to real-world datasets. Finally, we study the convergence on the Grassmannian manifold and prove that the update sequence derived by the PGD solver achieves first- and second-order criticality.
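    For context, a minimal sketch of an HOEVD-type approximation for a symmetric Tucker decomposition (the basic scheme, not the paper's implicit or streaming adaptations):

      import numpy as np

      rng = np.random.default_rng(8)
      n, r = 8, 3

      # Symmetric test tensor, e.g. an empirical third moment
      X = rng.standard_normal((500, n))
      T = np.einsum('si,sj,sk->ijk', X, X, X) / 500

      # HOEVD: top-r eigenvectors of the Gram matrix of a mode unfolding
      M = T.reshape(n, n * n)
      vals, vecs = np.linalg.eigh(M @ M.T)
      U = vecs[:, -r:]                     # orthonormal factor, n x r

      # Symmetric Tucker approximation T ≈ core x1 U x2 U x3 U
      core = np.einsum('ia,jb,kc,ijk->abc', U, U, U, T)
      T_hat = np.einsum('ia,jb,kc,abc->ijk', U, U, U, core)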
  • 11:30 am - 12:00 pm EST
    Estimating Gaussian mixtures using sparse polynomial moment systems
    11th Floor Lecture Hall
    • Speaker
    • Julia Lindberg, University of Texas at Austin
    • Session Chair
    • Joe Kileel, University of Texas at Austin
    Abstract
    The method of moments is a statistical technique for density estimation that solves a system of moment equations to estimate the parameters of an unknown distribution. A fundamental question critical to understanding identifiability asks how many moment equations are needed to get finitely many solutions and how many solutions there are. We answer this question for classes of Gaussian mixture models using the tools of polyhedral geometry. Using these results, we present a homotopy method to perform parameter recovery, and therefore density estimation, for high dimensional Gaussian mixture models. The number of paths tracked in our method scales linearly in the dimension.
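    As a toy illustration of a moment system with finitely many solutions (not the talk's homotopy method), consider an equal-weight mixture of N(a, 1) and N(b, 1) in one dimension; two moment equations already determine the means up to a label swap:

      import sympy as sp

      a, b = sp.symbols('a b', real=True)

      # First two moment equations of the mixture, with hypothetical
      # observed moments m1 = 1 and m2 = 3
      eqs = [sp.Eq((a + b) / 2, 1),
             sp.Eq((a**2 + b**2) / 2 + 1, 3)]

      print(sp.solve(eqs, [a, b]))   # [(0, 2), (2, 0)]: finitely many solutions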
  • 12:15 - 1:45 pm EST
    Networking Lunch, Part 2
    Working Lunch - 11th Floor Collaborative Space
  • 1:45 - 3:00 pm EST
    Group Activity
    11th Floor Lecture Hall
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EST
    Group Activity
    11th Floor Collaborative Space

All event times are listed in ICERM local time in Providence, RI (Eastern Standard Time / UTC-5).

Request Reimbursement

This section is for general purposes only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.

ORCID iD
As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
Acceptable Costs
  • 1 roundtrip between your home institution and ICERM
  • Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
  • Ground transportation to and from airports and ICERM
Unacceptable Costs
  • Flights on non-U.S. or non-E.U. airlines
  • Flights on U.K. airlines
  • Seats in economy plus, business class, or first class
  • Change ticket fees of any kind
  • Multi-use bus passes
  • Meals or incidentals
Advance Approval Required
  • Personal car travel to ICERM from outside New England
  • Multiple-destination plane ticket; does not include layovers to reach ICERM
  • Arriving or departing from ICERM more than a day before or day after the program
  • Multiple trips to ICERM
  • Rental car to/from ICERM
  • Flights on Swiss, Japanese, or Australian airlines
  • Arriving or departing from airport other than PVD/BOS or home institution's local airport
  • 2 one-way plane tickets to create a roundtrip (often purchased from Expedia, Orbitz, etc.)
Travel Maximum Contributions
  • New England: $350
  • Other contiguous US: $850
  • Asia & Oceania: $2,000
  • All other locations: $1,500
  • Note these rates were updated in Spring 2023 and supersede any prior invitation rates. Any invitations without travel support will still not receive travel support.
Reimbursement Requests

Request Reimbursement with Cube

Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.

Reimbursement Tips
  • Scanned original receipts are required for all expenses
  • Airfare receipt must show full itinerary and payment
  • ICERM does not offer per diem or meal reimbursement
  • Allowable mileage is reimbursed at the prevailing IRS business rate, with the trip documented via a PDF of a Google Maps result
  • Keep all documentation until you receive your reimbursement!
Reimbursement Timing

6 - 8 weeks after all documentation is sent to ICERM. All reimbursement requests are reviewed by several central offices at Brown, which may request additional documentation.

Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes. Submissions after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program ends.