Spring 2020 Reunion Event

Institute for Computational and Experimental Research in Mathematics (ICERM)

May 23, 2022 - June 10, 2022
Monday, May 23, 2022
  • 3:50 - 4:00 pm EDT
    Welcome
    11th Floor Lecture Hall
    • Brendan Hassett, ICERM/Brown University
  • 4:00 - 5:30 pm EDT
    Welcome (Back!) Reception
    Reception - 11th Floor Collaborative Space
Tuesday, May 24, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, May 25, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Thursday, May 26, 2022
  • 11:00 - 11:45 am EDT
    A casual talk on electromagnetic wave propagation and quantum computing
    11th Floor Lecture Hall
    • Zhichao Peng, Michigan State University
    Abstract
    This talk has two parts: (1) a scalable, fast iterative solver for the frequency-domain Maxwell equations at high frequencies and (2) a practical exercise in characterizing and optimally controlling a real quantum computer. The frequency-domain Maxwell equations at high frequencies are indefinite, which makes them challenging for iterative linear solvers to invert efficiently. They also impose high resolution requirements, so the memory needed by standard multifrontal direct solvers may become prohibitive for practical high-frequency 3D problems. In the first part of the talk, we present EM-WaveHoltz, an efficient and scalable frequency-domain Maxwell solver built from scalable time-domain solvers. Due to its high potential, quantum computing has recently drawn a lot of attention from the scientific community. In the second part of the talk, we briefly introduce some of the main differences between a quantum computer and a classical computer. We also use deterministic and Bayesian methods to characterize a quantum computer from experimental data. Control pulses are designed based on the characterization results, and their performance is demonstrated through experimental validation.
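     To make the indefiniteness concrete, here is a small illustrative sketch (not from the talk): the 1D analogue of the frequency-domain operator, a discrete Laplacian shifted by $-k^2$, has eigenvalues of both signs once the wavenumber is large enough, which is what makes standard iterative solvers struggle. The grid size and wavenumber below are arbitrary choices.

# Illustrative sketch: indefiniteness of a 1D Helmholtz-type operator -u'' - k^2 u.
import numpy as np

n, k = 200, 20.0                                  # grid size and wavenumber (arbitrary)
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -np.ones(n - 1) / h**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # 1D Dirichlet Laplacian
A = L - k**2 * np.eye(n)                          # frequency-domain (Helmholtz-type) operator

eigs = np.linalg.eigvalsh(A)
print("negative eigenvalues:", int(np.sum(eigs < 0)))
print("positive eigenvalues:", int(np.sum(eigs > 0)))    # both signs: the matrix is indefinite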
  • 2:00 - 2:45 pm EDT
    Deep Learning for High-dimensional PDE
    11th Floor Lecture Hall
    • Min Wang, Duke University
    Abstract
    Solving high-dimensional PDEs is a long-standing challenge in scientific computing. Fortunately, studies have shown that neural networks can approximate certain classes of functions without the curse of dimensionality. It is therefore natural to expect that the solutions of high-dimensional PDEs can be accurately approximated by neural networks at controllable numerical cost. Along this line, three questions have been widely considered: 1) How can a PDE problem be formulated as an optimization problem so that it fits into the framework of deep learning? 2) How accurate will a neural network approximation be? 3) How can the training be conducted systematically so that we end up close to a global minimum? In this talk, I will briefly describe a few initial attempts I have made to answer these questions.
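     As an illustration of question (1), here is a minimal, hedged sketch (not the speaker's code) of recasting a PDE as an optimization problem for a neural network: a physics-informed residual loss for $-u''(x) = \pi^2 \sin(\pi x)$ on $(0,1)$ with $u(0)=u(1)=0$, whose exact solution is $\sin(\pi x)$. The architecture and training choices are illustrative.

# Illustrative sketch: a PDE recast as an optimization problem (PINN-style residual loss).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)             # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi ** 2 * torch.sin(torch.pi * x)
    pde_loss = (-d2u - f).pow(2).mean()                     # PDE residual
    bc_loss = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()   # boundary conditions
    loss = pde_loss + 10.0 * bc_loss                        # penalty weight chosen ad hoc
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", loss.item())                  # net(x) should approximate sin(pi x)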
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, May 27, 2022
  • 1:30 - 2:15 pm EDT
    $\mathcal{L}_2$-Optimal Reduced-Order Modeling
    11th Floor Lecture Hall
    • Petar Mlinarić, Virginia Tech
    Abstract
    Motivated by $\mathcal{H}_2$-optimal model order reduction (MOR) for non-parametric linear time-invariant (LTI) systems and $\mathcal{H}_2 \otimes \mathcal{L}_2$-optimal parametric MOR for parametric LTI systems, we investigate $\mathcal{L}_2$-optimal parametric MOR for parametric stationary problems arising from, e.g., discretizations of parametric stationary partial differential equations. We first develop gradients of the squared $\mathcal{L}_2$ error with respect to the reduced system operators, which leads to a gradient-based optimization method for MOR of parametric stationary problems. We also illustrate that the optimization can be carried out in a purely data-driven manner, using only samples of the quantities of interest without access to the full-order operators. Furthermore, we develop interpolatory conditions for optimal MOR of a class of parametric stationary problems. Finally, we discuss MOR methods based on (Petrov-)Galerkin projection and whether $\mathcal{L}_2$-optimal reduced-order models are necessarily of this type. We illustrate the theory via various numerical examples and compare our framework to standard projection-based approaches.
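     A minimal sketch of the data-driven side of this idea, under simple assumptions (a toy affine parametric problem, and scipy's L-BFGS with finite-difference gradients instead of the analytic gradients developed in the talk): only samples $y(\mu_i)$ of the quantity of interest are used, and the reduced operators are optimized to minimize the squared error over the parameter samples. All names and sizes are illustrative.

# Illustrative sketch: fit a reduced parametric stationary model to QoI samples only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, r = 30, 2                                      # full and reduced dimensions (toy sizes)
A0 = np.diag(np.linspace(1.0, 5.0, n))            # "truth" full-order operators
A1 = 0.02 * rng.standard_normal((n, n))
b, c = rng.standard_normal(n), rng.standard_normal(n)
mus = np.linspace(0.0, 1.0, 30)
y = np.array([c @ np.linalg.solve(A0 + mu * A1, b) for mu in mus])   # sampled quantity of interest

def unpack(p):                                    # reduced operators from the parameter vector
    k = r * r
    return p[:k].reshape(r, r), p[k:2*k].reshape(r, r), p[2*k:2*k+r], p[2*k+r:]

def squared_error(p):
    Ar0, Ar1, br, cr = unpack(p)
    try:
        yr = np.array([cr @ np.linalg.solve(Ar0 + mu * Ar1, br) for mu in mus])
    except np.linalg.LinAlgError:
        return 1e12                               # penalize singular candidates
    return np.sum((y - yr) ** 2)

p0 = np.concatenate([np.eye(r).ravel(), 0.01 * rng.standard_normal(r * r),
                     rng.standard_normal(r), rng.standard_normal(r)])
res = minimize(squared_error, p0, method="L-BFGS-B")      # finite-difference gradients
print("squared error, initial vs. fitted:", squared_error(p0), res.fun)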
  • 2:15 - 3:00 pm EDT
    Learning Dynamical Models via Identifying Suitable Quadratic-Embeddings
    11th Floor Lecture Hall
    • Pawan Goyal, Max Planck Institute for Dynamics of Complex Technical Systems
    Abstract
    Dynamical modeling of a process is essential for studying its dynamical behavior and for engineering studies such as control and optimization. With the increasing accessibility of data, learning models directly from data has recently drawn much attention. It is also desirable to construct simple models describing complex nonlinear dynamics for efficient simulations and engineering studies. The simplest model one can think of is a linear model, but linear models are often not expressive enough to capture complex dynamics. In this work, we propose \emph{McCormick-envelope}-inspired modeling of nonlinear dynamics and discuss a common framework for modeling nonlinear dynamic processes. The key idea, coming from the envelope, is that smooth nonlinear systems can be written as quadratic systems in appropriately lifted coordinates without any approximation. We utilize deep learning capabilities and discuss suitable neural network architectures to find such a coordinate system from data. We also discuss an extension to high-dimensional data whose singular values decay slowly. We showcase the approach using data from applications in engineering and biology.
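     A minimal sketch of the lifting idea, using a classical hand-constructed lifting rather than one learned by a neural network as in the talk: the cubic ODE $\dot{x} = x - x^3$ becomes exactly quadratic in the lifted coordinates $(x, w)$ with $w = x^2$, and simulating the lifted quadratic system reproduces the original trajectory.

# Illustrative sketch: a nonlinear ODE rewritten exactly as a quadratic system by lifting.
import numpy as np
from scipy.integrate import solve_ivp

def original(t, x):                       # x' = x - x^3
    return [x[0] - x[0] ** 3]

def lifted(t, z):                         # with w = x^2:  x' = x - x*w,  w' = 2*w - 2*w^2
    x, w = z
    return [x - x * w, 2.0 * w - 2.0 * w ** 2]

x0, T = 0.2, 10.0
t_eval = np.linspace(0.0, T, 200)
sol1 = solve_ivp(original, (0.0, T), [x0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
sol2 = solve_ivp(lifted, (0.0, T), [x0, x0 ** 2], t_eval=t_eval, rtol=1e-9, atol=1e-12)

print("max deviation between original and lifted trajectories:",
      np.max(np.abs(sol1.y[0] - sol2.y[0])))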
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, May 30, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Tuesday, May 31, 2022
  • 2:00 - 2:45 pm EDT
    From frequency response data to (balanced) reduced-order models: system-theoretical approaches
    11th Floor Lecture Hall
    • Ion Victor Gosea, Max Planck Institute for Dynamics of Complex Technical Systems
    Abstract
    In many applied-science applications, the underlying dynamics of the process under study may be inaccessible to direct modeling, or may be only partially known. However, with the increasing prevalence of data from practical experiments, it is highly relevant to include such measurements in the modeling process. Data corresponding to the underlying dynamical system are available in various formats, for example in the form of the frequency response. In such cases, one can construct a simplified empirical model of lower dimension that fits the measured data and hence accurately approximates the original system. This reduced-order system may then be used as a surrogate to predict behavior or derive control strategies. The main motivation of the methods discussed here is that measured system response data can be used beneficially without the need to access any prescribed realization of the original model. The balanced truncation (BT) method (introduced by Moore '81) and the Loewner-matrix (LM) methodology (introduced by Mayo/Antoulas '07) are common model reduction approaches; however, only the latter (LM) is purely data-driven. We show how to implement a data-driven counterpart of the classical BT approach using only frequency response data, without explicitly using the model (the system's matrices). This recent method (G./Gugercin/Beattie '22) is based on implicitly imposing quadrature approximations of the infinite Gramians and on constructing the reduced-order model by accessing transfer function values only.
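     For context, here is a minimal sketch of the classical, model-based square-root balanced truncation (Moore '81) that the data-driven method reproduces from frequency response samples; this sketch uses the system matrices explicitly, which is exactly what the data-driven counterpart avoids. The toy system and orders are illustrative.

# Illustrative sketch: classical (model-based) square-root balanced truncation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

n, r = 8, 3                                      # full and reduced orders (toy sizes)
A = -np.diag(np.arange(1.0, n + 1.0))            # a stable toy LTI system
B = np.ones((n, 1))
C = np.ones((1, n))

P = solve_continuous_lyapunov(A, -B @ B.T)       # controllability Gramian: A P + P A^T + B B^T = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)     # observability Gramian:  A^T Q + Q A + C^T C = 0
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)                        # s holds the Hankel singular values
W = Lq @ U[:, :r] / np.sqrt(s[:r])               # balancing/truncating projection matrices
V = Lp @ Vt[:r].T / np.sqrt(s[:r])
Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V         # balanced, truncated reduced-order model

print("Hankel singular values:", s)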
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, June 1, 2022
  • 11:00 - 11:45 am EDT
    Stabilizing Dynamical Systems in the Scarce Data Regime
    11th Floor Lecture Hall
    • Steffen Werner, Courant Institute, New York University
    Abstract
    Stabilizing dynamical systems in science and engineering is challenging, especially in edge cases and limit states where typically little data are available. In this work, we propose a data-driven approach that guarantees finding stabilizing controllers from as few data samples as the dimension of the unstable dynamics, which is typically orders of magnitude lower than the state dimension of the system. The key is learning stabilizing controllers directly from data, without learning models of the systems, which would require a larger number of data points. Numerical experiments with chemical reactors and fluid flows behind obstacles demonstrate that the proposed approach stabilizes systems after observing fewer than five data samples, even though the state dimension is orders of magnitude higher.
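     For comparison only, and not the talk's method (which learns the controller directly from a handful of data samples without ever building a model), the textbook model-based route to a stabilizing feedback is an LQR design; it needs the full system matrices, which is exactly the requirement the data-driven approach removes. The toy system below is an assumption.

# Reference point: classical model-based stabilization via LQR state feedback u = -K x.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.5, 1.0],                         # one unstable mode (eigenvalue 0.5)
              [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

P = solve_continuous_are(A, B, Q, R)              # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)                   # optimal feedback gain
print("open-loop eigenvalues:  ", np.linalg.eigvals(A))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))   # all in the left half-plane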
  • 2:00 - 3:00 pm EDT
    Tensor-tensor Algebra for Optimal Representation and Compression of Multiway Data
    11th Floor Lecture Hall
    • Elizabeth Newman, Emory University
    Abstract
    With ever-growing data resources and modern advancements of data-driven methods, it is imperative that we represent large datasets efficiently while preserving intrinsic features necessary for subsequent analysis. Traditionally, the primary workhorse for data dimensionality reduction and feature extraction has been the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. However, many data are natively multidimensional and can be more compressible when treated as tensors (i.e., multiway arrays). In this talk, we will provide a brief overview of a particular compressed tensor representation, the t-SVDM, which is formed under an algebraic tensor-tensor product. We will demonstrate that compressed representations obtained from the t-SVDM satisfy Eckart-Young-like optimality results. Moreover, we will show that an optimal t-SVDM representation is provably better than its matrix counterpart and two tensor-based analogs. We will support these theoretical findings with some empirical studies.
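     A minimal sketch of the special case in which the transform is the FFT along the third mode (the classical t-SVD); the t-SVDM discussed in the talk replaces the FFT by a general invertible transform M. Tensor sizes and truncation rank below are illustrative.

# Illustrative sketch: truncated t-SVD of a third-order tensor under the FFT-based t-product.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30, 8))             # a third-order data tensor
k = 5                                            # truncation rank

Ahat = np.fft.fft(A, axis=2)                     # move to the transform domain
Bhat = np.zeros_like(Ahat)
for i in range(A.shape[2]):                      # truncated SVD of each frontal slice
    U, s, Vt = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
    Bhat[:, :, i] = (U[:, :k] * s[:k]) @ Vt[:k, :]
Ak = np.real(np.fft.ifft(Bhat, axis=2))          # back to the original domain

rel_err = np.linalg.norm(A - Ak) / np.linalg.norm(A)
print("relative error of the truncated t-SVD approximation:", rel_err)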
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Thursday, June 2, 2022
  • 11:00 am - 12:00 pm EDT
    Reduced-order modeling and inversion for large-scale problems of geophysical exploration
    11th Floor Lecture Hall
    • Mikhail Zaslavsky, Schlumberger
    Abstract
    Geophysical exploration using electromagnetic and seismic methods involves large-scale forward and nonlinear inverse problems that often have to be solved in real time. We address this challenge by employing reduced-order models (ROMs) that represent low-dimensional surrogates of the full-scale problem. In this talk I will provide an overview of the model-driven ROMs we developed for forward modeling, as well as structure-preserving data-driven ROMs that are crucial for inverse problems. The latter enable constructing physics-preserving low-order models with direct access to the unknowns of the inverse problem and, consequently, an optimization-free inversion algorithm. I will show numerical examples confirming the advantage of our approaches compared to state-of-the-art algorithms.
    Contributors: Liliana Borcea, Vladimir Druskin, Alexander Mamonov, Shari Moskow, Jorn Zimmerling
  • 2:00 - 3:00 pm EDT
    Stein-based Preconditioners for Weak-constraint 4D-var
    11th Floor Lecture Hall
    • Davide Palitta, Alma Mater Studiorum, Università di Bologna
    • Jemima Tabeart, University of Edinburgh
    Abstract
    Algorithms for data assimilation try to predict the most likely state of a dynamical system by combining information from observations and prior models. One of the most successful data assimilation frameworks is the linearized weak-constraint four-dimensional variational assimilation problem (4D-Var), which can ultimately be seen as a minimization problem. One of the main challenges of such an approach is the solution of the large saddle-point linear systems arising as inner linear steps within the adopted nonlinear solver. The linear algebraic problem can be solved by means of a Krylov method, like MINRES or GMRES, which needs to be preconditioned to ensure fast convergence in terms of the number of iterations. In this talk we will illustrate novel, efficient preconditioning operators that involve the solution of certain Stein matrix equations. In addition to achieving better computational performance, this machinery allows us to derive tighter bounds for the eigenvalue distribution of the preconditioned saddle-point linear system. A panel of diverse numerical examples displays the effectiveness of the proposed methodology compared to current state-of-the-art approaches.
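     As a generic illustration of why the preconditioner matters (using an idealized block-diagonal preconditioner with the exact Schur complement, not the Stein-equation-based operators of the talk), the sketch below compares unpreconditioned and preconditioned GMRES on a toy saddle-point system. All sizes are illustrative.

# Illustrative sketch: preconditioning a toy saddle-point system for a Krylov solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 200, 60                                            # toy problem sizes
A = sp.diags(np.linspace(1.0, 100.0, n))                  # SPD (1,1) block
B = sp.random(m, n, density=0.1, random_state=1) + sp.eye(m, n)   # full-rank constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csr")          # saddle-point matrix
rhs = rng.standard_normal(n + m)

Ainv = sp.diags(1.0 / A.diagonal())
S = (B @ Ainv @ B.T).toarray()                            # Schur complement (A is diagonal here)
P = sp.block_diag([Ainv, sp.csr_matrix(np.linalg.inv(S))]).tocsr()   # ideal block preconditioner

x1, _ = spla.gmres(K, rhs, restart=25, maxiter=1)         # 25 unpreconditioned iterations
x2, _ = spla.gmres(K, rhs, M=P, restart=25, maxiter=1)    # 25 preconditioned iterations
print("relative residual, no preconditioner:   ", np.linalg.norm(rhs - K @ x1) / np.linalg.norm(rhs))
print("relative residual, block preconditioner:", np.linalg.norm(rhs - K @ x2) / np.linalg.norm(rhs))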
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, June 3, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, June 6, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Tuesday, June 7, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, June 8, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Thursday, June 9, 2022
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).
