Abstract

Solving systems of nonlinear equations and optimization problems is a pervasive challenge throughout the mathematical sciences, with applications in many areas. Acceleration and extrapolation methods have emerged as a key technology for solving these problems efficiently and robustly. The simple underlying idea of these methods is to recombine previous approximations in a sequence to determine the next term or approximation.

This approach has been applied repeatedly and from different angles to numerous problems over the last several decades. Important methods, including the epsilon algorithms and Anderson acceleration, were introduced throughout the early and mid-20th century and are now common in many applied fields, including optimization, machine learning, computational chemistry, materials science, and climate science. Within the last decade, theoretical advances on convergence, on acceleration mechanisms, and on unified frameworks for understanding these methods have come to light, yet our understanding remains incomplete. Fascinating links exist with methods such as Nesterov acceleration and other momentum-based approaches developed in optimization and machine learning in recent decades. These links, and connections with dynamical systems, appear to be promising directions for further insight that remain largely unexplored.

The goals of this workshop include assessing the state of the art and exploring connections between closely related methods that may have been developed independently; connecting theory to practice by fostering interaction between theorists and applied practitioners; and encouraging new and continuing collaborations between participants.
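To make the recombination idea above concrete, here is a minimal sketch of Anderson acceleration in Python. The window size, tolerance, and the toy fixed-point map are illustrative assumptions, not details taken from any of the talks below.

```python
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    """Minimal Anderson acceleration AA(m) for the fixed-point problem x = g(x).

    Keeps a sliding window of the last m iterate/residual differences and
    recombines previous approximations to produce the next one.
    """
    x = np.asarray(x0, dtype=float)
    X, F = [], []                              # histories of iterates and residuals
    for _ in range(iters):
        f = g(x) - x                           # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            break
        X.append(x.copy()); F.append(f.copy())
        if len(X) > m + 1:                     # sliding window: m differences
            X.pop(0); F.pop(0)
        if len(X) == 1:
            x = x + f                          # plain Picard step to start
        else:
            dF = np.column_stack([F[i+1] - F[i] for i in range(len(F) - 1)])
            dX = np.column_stack([X[i+1] - X[i] for i in range(len(X) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma      # recombined next approximation
    return x

# illustrative use on the scalar problem x = cos(x) (an assumption, not from the text)
print(anderson(np.cos, np.array([1.0]), m=3))
```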


Confirmed Speakers & Participants

Talks will be presented virtually or in-person as indicated in the schedule below.


Workshop Schedule

Monday, July 24, 2023
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    TBD
    11th Floor Lecture Hall
    • Speaker
    • Agnieszka Miedlar, Virginia Tech
    • Session Chair
    • David Gardner, Lawrence Livermore National Laboratory
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Asymptotic convergence speed of windowed Anderson acceleration: an overview of results and open problems
    11th Floor Lecture Hall
    • Speaker
    • Hans De Sterck, University of Waterloo
    • Session Chair
    • David Gardner, Lawrence Livermore National Laboratory
    Abstract
    Anderson acceleration is widely used to speed up the convergence of fixed-point iterative methods in scientific computing and optimization. Almost all implementations use a sliding-window approach with window size m. For many applications AA(m) dramatically improves the convergence speed, both when iteration counts are small and asymptotically for large iteration counts. Nevertheless, there are still no known results that bound or quantify the improvement in asymptotic convergence speed provided by windowed AA(m). In this talk I will give an overview of what is known about the asymptotic convergence speed of windowed AA(m). Numerical results show that the root-linear asymptotic convergence factor of AA(m) often depends strongly on the initial guess, and that the worst-case root-linear convergence factor is often substantially smaller (i.e., faster) than the convergence factor of the underlying fixed-point method that AA(m) accelerates. Analysis of AA(m) written as a fixed-point method, and of the continuity and differentiability of its fixed-point iteration function, provides some insight, but general results quantifying the asymptotic convergence acceleration remain elusive. In the linear case, windowed AA(m) is a Krylov method with some interesting properties, which lead to useful per-iteration convergence bounds, but it appears difficult to translate these into sharp asymptotic bounds. A recent result for the simplest non-trivial linear case, AA(1), provides for the first time a full characterization of the root-linear convergence factor of AA(1) as a function of the initial guess, which allows us to compute the average convergence-factor gain AA(1) provides over a distribution of initial conditions.
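As a rough companion to this talk's theme, the sketch below empirically estimates the root-linear convergence factor (||f_k||/||f_0||)^(1/k) of windowed AA(m) on a random linear fixed-point map; the map, the window size m = 2, and the stopping rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 2
M = rng.standard_normal((n, n))
M *= 0.6 / np.abs(np.linalg.eigvals(M)).max()      # spectral radius exactly 0.6
b = rng.standard_normal(n)
g = lambda x: M @ x + b                             # linear fixed-point map

def aa_root_factor(x0, iters=100):
    """Empirical root-linear factor (||f_k||/||f_0||)**(1/k) of windowed AA(m)."""
    x, X, F = x0.copy(), [], []
    f0 = np.linalg.norm(g(x0) - x0)
    for k in range(1, iters + 1):
        f = g(x) - x
        if np.linalg.norm(f) < 1e-12 * f0:          # stop before hitting round-off
            return (np.linalg.norm(f) / f0) ** (1.0 / k)
        X.append(x.copy()); F.append(f.copy())
        if len(X) > m + 1:
            X.pop(0); F.pop(0)
        if len(X) == 1:
            x = x + f
        else:
            dF = (np.array(F)[1:] - np.array(F)[:-1]).T
            dX = (np.array(X)[1:] - np.array(X)[:-1]).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma
    return (np.linalg.norm(g(x) - x) / f0) ** (1.0 / iters)

# the estimated factor typically beats the fixed-point factor 0.6 and varies
# from one random initial guess to the next
for _ in range(3):
    print(aa_root_factor(rng.standard_normal(n)))
```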
  • 11:30 am - 12:15 pm EDT
    Anderson-Pulay Acceleration: Convergence of Adaptive Algorithms and Applications to Quantum Chemistry
    11th Floor Lecture Hall
    • Speaker
    • Mi-Song Dupuy, Sorbonne University
    • Session Chair
    • David Gardner, Lawrence Livermore National Laboratory
    Abstract
    In this talk, a general class of algorithms for solving fixed-point problems, named Anderson-Pulay acceleration, is introduced. This family brings together the DIIS technique (Pulay, 1980), used to accelerate the convergence of self-consistent field procedures in quantum chemistry, the Anderson acceleration (Anderson, 1965), and their variations. Such methods aim to accelerate the convergence of fixed-point iterations by combining several of the successive approximations at each step to generate the next one. This extrapolation process is characterized by its depth, i.e., the number of previous approximations stored. While this parameter is decisive for the efficiency of the method, in practice the depth is fixed without any guarantee of convergence. In this presentation, we consider two mechanisms to vary the depth during the course of the method. The first is to let the depth grow until all stored approximations (except the last one) are rejected, and then restart the method. The second is to adapt the depth at each step by eliminating some of the less relevant approximations. In a general framework and under natural assumptions, the local convergence and acceleration of Anderson-Pulay acceleration methods can be proved. These algorithms are tested on the numerical solution of the Hartree-Fock equations and the Kohn-Sham model of DFT. The numerical experiments show faster convergence and significantly lower computational costs for the proposed approaches.
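A sketch of the first depth-varying mechanism described above (grow the depth, then restart), assuming an illustrative ill-conditioning test as the rejection trigger; this is not the precise criterion of the talk.

```python
import numpy as np

def aa_restart(g, x0, max_depth=10, cond_max=1e8, iters=100, tol=1e-10):
    """Anderson-type iteration whose depth grows until a restart is triggered.

    All stored approximations except the last are discarded when the depth
    cap is hit or the difference matrix becomes ill-conditioned (an
    illustrative rejection rule), and the method restarts.
    """
    x = np.asarray(x0, dtype=float)
    X, F = [x.copy()], [g(x) - x]
    for _ in range(iters):
        f = F[-1]
        if np.linalg.norm(f) < tol:
            break
        if len(X) == 1:
            x = x + f                                  # Picard step after restart
        else:
            dF = (np.array(F)[1:] - np.array(F)[:-1]).T
            dX = (np.array(X)[1:] - np.array(X)[:-1]).T
            if len(X) - 1 > max_depth or np.linalg.cond(dF) > cond_max:
                X, F = [X[-1]], [F[-1]]                # reject all but the last
                x = x + f
            else:
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                x = x + f - (dX + dF) @ gamma
        X.append(x.copy()); F.append(g(x) - x)
    return x

print(aa_restart(np.cos, np.array([1.0])))
```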
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Constrained Multimodal Data Mining using Coupled Matrix and Tensor Factorizations
    11th Floor Lecture Hall
    • Speaker
    • Evrim Acar Ataman, Simula Research Laboratory
    • Session Chair
    • David Gardner, Lawrence Livermore National Laboratory
    Abstract
    There is an emerging need to jointly analyze heterogeneous multimodal data sets and capture the underlying patterns in an interpretable way. For instance, joint analysis of omics measurements (e.g., of the metabolome, microbiome, and genome) holds the promise of providing a more complete picture of human health and revealing better stratifications of people, improving precision medicine and nutrition. Some of these measurements are dynamic and can be arranged as a higher-order tensor (e.g., subjects by metabolites by time) while others are static data sets in the form of matrices (e.g., subjects by features). Tensor factorizations have proved useful for revealing the underlying patterns in higher-order tensors, and have been extended to the joint analysis of data from multiple sources through coupled matrix and tensor factorizations (CMTF). While CMTF-based methods are effective for multimodal data mining, various challenges remain, in particular in capturing the underlying patterns in an interpretable way and understanding the temporal evolution of those patterns. In this talk, we first introduce a flexible algorithmic framework relying on Alternating Optimization (AO) and the Alternating Direction Method of Multipliers (ADMM) to facilitate the use of a variety of constraints, loss functions, and couplings with linear transformations when fitting CMTF models. Numerical experiments on simulated and real data demonstrate that the proposed AO-ADMM-based approach is accurate, flexible, and computationally efficient, with comparable or better performance than available CMTF algorithms. We then discuss the extension of the framework to the joint analysis of dynamic and static data sets by incorporating alternative tensor factorization approaches, which have shown promising performance in revealing evolving patterns in temporal data analysis.
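For orientation only, the sketch below fits a coupled matrix factorization with a shared subjects-mode factor by plain alternating least squares. It omits the constraints and ADMM subproblems that are the point of the AO-ADMM framework; all shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic coupled data: X ~ A B^T (subjects x metabolites) and
#                         Y ~ A C^T (subjects x microbiome features)
r, n_subj, n_met, n_mic = 3, 40, 20, 30
A0, B0, C0 = (rng.standard_normal((d, r)) for d in (n_subj, n_met, n_mic))
X, Y = A0 @ B0.T, A0 @ C0.T

A, B, C = (rng.standard_normal(M.shape) for M in (A0, B0, C0))
for _ in range(200):
    # the shared factor A solves the stacked least-squares problem
    # min_A ||X - A B^T||^2 + ||Y - A C^T||^2
    A = np.linalg.lstsq(np.vstack([B, C]), np.hstack([X, Y]).T, rcond=None)[0].T
    B = np.linalg.lstsq(A, X, rcond=None)[0].T
    C = np.linalg.lstsq(A, Y, rcond=None)[0].T

print(np.linalg.norm(X - A @ B.T), np.linalg.norm(Y - A @ C.T))
```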
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:15 pm EDT
    Predictive DFT Mixing: successes and opportunities
    11th Floor Lecture Hall
    • Speaker
    • Laurence Marks, Northwestern University
    • Session Chair
    • David Gardner, Lawrence Livermore National Laboratory
    Abstract
    A substantial fraction of the world's computational resources is occupied by density functional theory (DFT) calculations, and this is likely to continue. A key component of these is an iterative fixed-point problem that is far too large for numerical differentiation and for which analytic derivatives are not feasible. Historically, DFT codes have approached these problems with multisecant "Bad Broyden" methods; "Good Broyden" did not work, the opposite of classic mathematical thinking. Even today most codes have user-adjustable parameters (fudge factors), and only one or two have simple trust-region controls, which we introduced some years ago [1]. I will start with an outline of some general features of such problems, focusing on the WIEN2k code [2] used by more than 3000 groups internationally. I will explain how they compare to water in the Colorado River moving from La Poudre Pass in the Rocky Mountains (initial densities) to the Gulf of California (converged density). Sometimes the water (convergence) is fast; sometimes it hits walls (Grand Canyon) or traverses the Hoover Dam (phase transition). At other times it moves slowly and, today, may fade into the desert sands (fail to converge). I will then discuss a hybrid approach [3] which can smoothly transition between the limits of Bad and Good Broyden. Finally, I will describe a more recent predictive approach to trust-region and unpredicted-step control [4], which appears to handle problems that defeat other approaches. I will end by speculating that a predictive approach may have wider application, and by noting that there is still plenty to do and room for collaborations.
    References:
    1. Marks, L.D. and D.R. Luke, Robust mixing for ab initio quantum mechanical calculations. Physical Review B, 2008. 78(7): 075114. http://doi.org/10.1103/PhysRevB.78.075114
    2. Blaha, P., et al., WIEN2k: An APW+lo program for calculating the properties of solids. The Journal of Chemical Physics, 2020. 152(7): 074101. http://doi.org/10.1063/1.5143061
    3. Marks, L.D., Fixed-Point Optimization of Atoms and Density in DFT. J. Chem. Theory Comput., 2013. 9(6): 2786-2800. http://doi.org/10.1021/ct4001685
    4. Marks, L.D., Predictive Mixing for Density Functional Theory (and Other Fixed-Point Problems). J. Chem. Theory Comput., 2021. 17(9): 5715-5732. http://doi.org/10.1021/acs.jctc.1c00630
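A generic sketch of the multisecant "Bad Broyden" (Type-II) update discussed above, in the form that is algebraically equivalent to Anderson mixing; the toy map standing in for an SCF density update, the mixing parameter beta, and the history length are illustrative assumptions.

```python
import numpy as np

def bad_broyden_mixing(g, x0, m=5, beta=0.3, iters=100, tol=1e-10):
    """Multisecant 'Bad Broyden' (Type-II) mixing for the fixed point x = g(x).

    With S = [dx_i] and Z = [df_i] holding the last m secant pairs of
    f(x) = g(x) - x, the update x <- x + beta*f - (S + beta*Z)(Z^+ f) is
    algebraically the same as Anderson mixing with damping beta.
    """
    x = np.asarray(x0, dtype=float)
    xs, fs = [], []
    for _ in range(iters):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            break
        xs.append(x.copy()); fs.append(f.copy())
        if len(xs) == 1:
            x = x + beta * f                       # plain linear mixing to start
        else:
            S = (np.array(xs)[1:] - np.array(xs)[:-1]).T[:, -m:]
            Z = (np.array(fs)[1:] - np.array(fs)[:-1]).T[:, -m:]
            c = np.linalg.lstsq(Z, f, rcond=None)[0]
            x = x + beta * f - (S + beta * Z) @ c
    return x

# toy algebraic map standing in for an SCF density update (an assumption)
rng = np.random.default_rng(2)
W = 0.05 * rng.standard_normal((30, 30))
g = lambda x: np.tanh(W @ x) + 0.1
x = bad_broyden_mixing(g, np.zeros(30))
print(np.linalg.norm(g(x) - x))
```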
  • 4:30 - 6:00 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, July 25, 2023
  • 9:00 - 9:45 am EDT
    Filtering and residual bounds for Anderson acceleration
    11th Floor Lecture Hall
    • Speaker
    • Sara Pollock, University of Florida
    • Session Chair
    • Agnieszka Miedlar, Virginia Tech
    Abstract
    Anderson acceleration (AA) has become increasingly popular in recent years due to its efficacy on a wide range of problems, including optimization, machine learning, and complex multiphysics simulations. In this talk, we will discuss recent innovations in the theory and implementation of the algorithm. AA requires the storage of a (usually) small number of solution and update vectors, and the solution of an optimization problem that is generally posed as a least-squares problem and solved efficiently by a thin QR decomposition. On any given problem, the success of AA depends on the details of the implementation, including how many and which of the solution and update vectors are used. We will introduce a filtered variant of the algorithm that improves both numerical stability and convergence by selectively removing columns from the least-squares matrix at each iteration. We will discuss the theory behind the introduced filtering strategy and connect it to one-step residual bounds for AA using standard tools and techniques from numerical linear algebra. We will demonstrate the method on discretized nonlinear PDEs.
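To illustrate the flavor of such filtering, the sketch below solves the AA least-squares problem by a thin QR factorization while dropping history columns that are nearly linearly dependent on the kept ones. The drop criterion here is an illustrative stand-in, not necessarily the strategy of the talk.

```python
import numpy as np

def filtered_lstsq(dF, f, drop_tol=1e-4):
    """Solve min ||f - dF @ gamma|| by thin QR, filtering near-dependent columns.

    A column is dropped when its component orthogonal to the previously kept
    columns is tiny relative to its own norm (an illustrative criterion).
    """
    n, cols = dF.shape
    Q, kept = np.zeros((n, 0)), []
    for j in range(cols):
        v = dF[:, j] - Q @ (Q.T @ dF[:, j])     # orthogonalize against kept columns
        if np.linalg.norm(v) > drop_tol * np.linalg.norm(dF[:, j]):
            Q = np.column_stack([Q, v / np.linalg.norm(v)])
            kept.append(j)
    gamma = np.zeros(cols)
    R = Q.T @ dF[:, kept]                       # upper triangular by construction
    gamma[kept] = np.linalg.solve(R, Q.T @ f)
    return gamma, kept

# inside an AA step one would call: gamma, kept = filtered_lstsq(dF, f)
```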
  • 9:55 - 10:00 am EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 am - 12:30 pm EDT
    Poster Session
    11th Floor Collaborative Space
  • 12:30 - 2:00 pm EDT
    Post-Poster Session Discussions
    Working Lunch
  • 2:00 - 2:45 pm EDT
    Towards the use of Anderson Acceleration in Fusion and Combustion Simulations
    11th Floor Lecture Hall
    • Speaker
    • Katarzyna Swirydowicz, Pacific Northwest National Laboratory
    • Session Chair
    • Agnieszka Miedlar, Virginia Tech
    Abstract
    One of the most significant bottlenecks when implementing linear solvers such as GMRES in parallel is the cost of keeping the Krylov vectors orthogonal to each other. For example, modified Gram-Schmidt, as traditionally used in GMRES, requires k synchronizations when the k-th vector is orthogonalized. This cost can be reduced to one (or two) synchronizations per vector added by using so-called low-synchronization Gram-Schmidt variants. Since these methods were proposed, they have been applied in multiple areas, including Anderson acceleration. In my talk, I will introduce low-synchronization Gram-Schmidt variants and explain how they were extended for use in block Krylov methods, Krylov subspace recycling, and Anderson acceleration.
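The synchronization count can be seen already in serial code: modified Gram-Schmidt uses k separate inner products for the k-th vector (k global reductions in parallel), while a classical one-pass variant forms all coefficients in a single product Q^T v (one reduction), trading away the stability that the talk's low-synchronization variants are designed to recover. A minimal sketch, not the talk's algorithms:

```python
import numpy as np

def mgs_step(Q, v):
    """Modified Gram-Schmidt: one inner product per column => k reductions."""
    for j in range(Q.shape[1]):
        v = v - (Q[:, j] @ v) * Q[:, j]          # one global reduction each
    return v / np.linalg.norm(v)

def cgs_step(Q, v):
    """Classical Gram-Schmidt: all coefficients from one product Q^T v."""
    v = v - Q @ (Q.T @ v)                        # a single fused reduction
    return v / np.linalg.norm(v)

rng = np.random.default_rng(3)
A = rng.standard_normal((1000, 20))
for step in (mgs_step, cgs_step):
    Q = A[:, :1] / np.linalg.norm(A[:, 0])
    for k in range(1, A.shape[1]):
        Q = np.column_stack([Q, step(Q, A[:, k])])
    # loss of orthogonality ||Q^T Q - I||; low-sync variants aim to match
    # MGS-level stability at CGS-level communication cost
    print(step.__name__, np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))
```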
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:15 pm EDT
    Accelerated First-Order Optimization under Nonlinear Constraints
    11th Floor Lecture Hall
    • Speaker
    • Michael Muehlebach, Max Planck Institute for Intelligent Systems
    • Session Chair
    • Agnieszka Miedlar, Virginia Tech
    Abstract
    My talk will explore analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems in order to design a new class of accelerated first-order algorithms for constrained optimization. Unlike Frank-Wolfe or projected-gradient methods, these algorithms avoid optimization over the entire feasible set at each iteration. I will highlight various convergence results in convex and nonconvex settings and derive rates for the convex setting. An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions, which naturally leads to sparse, local, and convex approximations of the feasible set (even if the feasible set is nonconvex). Thus, the complexity tends to grow mildly in the number of decision variables and in the number of constraints, which makes the algorithms suitable for machine learning applications. To that end, I will discuss numerical results from applying our algorithms to compressed sensing and sparse regression problems, highlighting the fact that nonconvex ℓp constraints (p < 1) can be treated efficiently, while state-of-the-art performance is recovered for p = 1.
Wednesday, July 26, 2023
  • 9:00 - 9:45 am EDT
    Acceleration Methods for Solving Nonlinear Equations and Eigenvalue Problems
    11th Floor Lecture Hall
    • Speaker
    • Chao Yang, Lawrence Berkeley National Laboratory
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    We review the Anderson and Pulay acceleration methods, also known as direct inversion in the iterative subspace (DIIS) methods, for solving nonlinear equations, and show how they are used in electronic structure calculations to solve the Kohn-Sham equation and the coupled cluster equations. We present numerical examples comparing the performance of acceleration methods with conventional quasi-Newton and Newton-Krylov methods. We also show that these acceleration methods can be used effectively to refine eigenvector approximations in iterative methods for solving linear eigenvalue problems.
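For reference, a bare-bones DIIS extrapolation step in the classic Pulay form, solving the small bordered system for the mixing coefficients; this is a generic textbook sketch, not code from the talk.

```python
import numpy as np

def diis_combine(iterates, residuals):
    """One Pulay DIIS step: find c minimizing ||sum_i c_i r_i|| with sum_i c_i = 1.

    Solved via the bordered system with Gram matrix B_ij = <r_i, r_j> and a
    Lagrange multiplier enforcing the normalization constraint.
    """
    k = len(residuals)
    B = np.array([[ri @ rj for rj in residuals] for ri in residuals])
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = B
    A[k, :k] = A[:k, k] = 1.0
    rhs = np.zeros(k + 1); rhs[k] = 1.0
    c = np.linalg.solve(A, rhs)[:k]
    return sum(ci * xi for ci, xi in zip(c, iterates))   # extrapolated iterate
```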
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Acceleration and extrapolation methods for density functional theory
    11th Floor Lecture Hall
    • Speaker
    • Phanish Suryanarayana, Georgia Institute of Technology
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    Over the course of the past few decades, electronic structure calculations based on density functional theory (DFT) have become a cornerstone of materials research by virtue of the predictive power and fundamental insights they provide. The widespread use of the methodology can be attributed to its generality, simplicity, and high accuracy-to-cost ratio relative to other ab initio approaches. However, while less expensive than wavefunction-based methods, the solution of the DFT problem remains a formidable task. In this talk, the speaker will discuss various acceleration and extrapolation methods for reducing the time to solution in DFT simulations.
  • 11:30 am - 12:15 pm EDT
    Anderson Acceleration: Software, Storage, and a Multi-Physics Example
    11th Floor Lecture Hall
    • Speaker
    • Carl Kelley, NCSU
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    This talk is about three related things that arose from a book project. The first is new code in Julia for Anderson Acceleration; the implementation forces one to think about storage. The storage requirements for Anderson Acceleration are nontrivial (ask a physicist), and we look at some ways to keep the storage under control. Our code uses the Walker-Ni approach, and a simple-minded implementation of it needs to store 3m vectors, where m is the depth. Contrast this with the normal-equations approach, used in many physics codes, which needs 2m vectors. We discuss one way to fix this. Finally, we look at a problem in conductive radiative transport and report results for the non-contractive case.
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Solving Systems in Droplet Mechanics
    11th Floor Lecture Hall
    • Speaker
    • Matt Knepley, University at Buffalo
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    Droplet mechanics benefits from fully implicit timestepping for stability, but the resulting systems are strongly coupled and highly nonlinear. We present a nonlinear elimination preconditioner for these equations, implemented in PETSc, and a possible route to analysis of the preconditioned system.
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:15 pm EDT
    Anderson Acceleration in the KINSOL Nonlinear Solver Package
    11th Floor Lecture Hall
    • Speaker
    • Carol Woodward, Lawrence Livermore National Laboratory
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    Carol S. Woodward and David J. Gardner. Anderson acceleration (AA) has emerged as an alternative to Newton's method for solving systems of nonlinear algebraic equations. This talk will discuss the implementation of Anderson acceleration within the KINSOL nonlinear solver package contained in SUNDIALS. As part of SUNDIALS, this implementation is equipped with optional support for several high-performance computing platforms, including distributed-memory, threaded, GPU-based, and hybrid distributed memory-GPU systems. An overview of the implementation, how to use it, and examples of its use will be included.
  • 4:25 - 4:35 pm EDT
    Efficient spin-up of Earth System Models using Anderson Acceleration
    Lightning Talks - 11th Floor Lecture Hall
    • Virtual Speaker
    • Samar Khatiwala, University of Oxford
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    Earth System Models (ESMs) are the primary tool used for understanding the global climate system and predicting its future evolution under anthropogenic forcing. However, these models are computationally very expensive, a problem especially acute for the simulations that underpin IPCC assessments of climate change. Before such simulations can be performed, ESMs must be "spun up" to a stable, quasi-periodic pre-industrial state so that the impact of human forcing can be accurately determined. Such "spin-up" runs require several thousand years of simulation, owing to the slow adjustment time scale of the ocean and terrestrial carbon cycle. Even on some of the world's most powerful supercomputers, a single spin-up can take over 2 years of compute time. Besides the enormous cost in time and resources, this has important scientific and policy implications, as it is prohibitively expensive to perform more than one such spin-up, increase resolution, or propagate the large parametric uncertainty inherent in all ESMs into future projections. A robust and efficient solution to this so-called "spin-up problem" has long proved elusive. Here, I present a new approach based on Anderson Acceleration (AA) that is up to 10 times faster than conventional direct integration. A particular advantage of AA over previously proposed methods, such as matrix-free Newton-Krylov, is that it is entirely black-box, preserves conservation properties of the model, and is fully consistent with the model's numerical time-stepping scheme. I will also describe MATLAB and Python implementations with checkpointing and restart capabilities that are tailored to the batch HPC systems on which ESMs are typically run.
  • 4:35 - 4:45 pm EDT
    The Anderson of the Acceleration
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • David Keyes, King Abdullah University of Science and Technology
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    Eleven (65%) of the presentations at the July 2023 ICERM "Acceleration and Extrapolation Methods" workshop concern Anderson Acceleration (AA), a now 61-year-old method that lay fallow in the numerical literature (though taken up in the chemical and materials literature) for about four decades. Indeed, the name "Anderson Acceleration" was first attached to it in 2011. In quoted form it currently yields about 19,000 hits on Google and about 1,400 on Google Scholar. On August 31, 2015, partway through this renaissance of attention, Harvard Professor Emeritus Donald G. M. Anderson was invited to give the lead-off talk at an ICERM workshop on "Numerical Methods for Large-Scale Nonlinear Problems and Their Applications," which led to his publishing a hundred pages of comments on "Anderson Acceleration, Mixing and Extrapolation" (Numer. Algor. 80:135-234, 2019), just prior to his death in January 2020. None of the speakers at the current workshop appear to have cited this paper, which Anderson intended as "historical, habilitative and hortatory remarks on the aforementioned family of algorithms and related literature." I will bring out some interesting features of it, while commenting on Anderson and the trajectory of AA from his doctoral thesis to its early reception following publication (JACM 12:547-560, 1965). (Anderson was my doctoral thesis advisor.)
  • 4:45 - 4:55 pm EDT
    Anderson Accelerated Brinkman-Forchheimer Solver
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Lin Mu, University of Georgia
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    In this talk, we present a pressure-robust solver for the Brinkman-Forchheimer equations. To solve the equations efficiently, an Anderson-accelerated nonlinear solver is applied.
  • 4:55 - 5:05 pm EDT
    Is an accelerator enough to solve differential equations optimizations?
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Widodo Samyono, Jarvis Christian University
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    In solving the KKT conditions for differential equation optimizations, we need not only an accelerator but also a preconditioner, since the KKT matrix is very large, sparse, and ill-conditioned. In this presentation we discuss how to solve this problem.
  • 5:05 - 5:15 pm EDT
    BDDC algorithms for problems with HDG discretizations
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Xuemin Tu, University of Kansas
    • Session Chair
    • Hans De Sterck, University of Waterloo
    Abstract
    Balancing domain decomposition by constraints (BDDC) methods are among the most popular non-overlapping domain decomposition methods. In this talk, BDDC methods are applied to the linear systems arising from hybridizable discontinuous Galerkin (HDG) discretizations of problems such as advection-diffusion, Oseen, and Brinkman. The original system is first reduced to a subdomain interface problem. The convergence of the algorithm is analyzed, and numerical experiments confirm the theoretical results.
Thursday, July 27, 2023
  • 9:30 - 10:15 am EDT
    Acceleration via a Non-Linear Truncated Generalized Conjugate Residual (nlTGCR) approach
    11th Floor Lecture Hall
    • Speaker
    • Yousef Saad, University of Minnesota
    • Session Chair
    • Sara Pollock, University of Florida
    Abstract
    There has been a surge of interest in recent years in general-purpose "acceleration" methods that take a sequence of vectors converging to the limit of a fixed-point iteration and produce from it a faster-converging sequence. A prototype of these methods that has attracted much attention recently is the Anderson Acceleration (AA) procedure. The nonlinear Truncated Generalized Conjugate Residual (nlTGCR) algorithm is an alternative to AA, designed by carefully adapting the Conjugate Residual method for solving linear systems of equations to the nonlinear context. The various links between nlTGCR and inexact Newton, quasi-Newton, and multisecant methods are exploited to build a method that has strong global convergence properties and that can also exploit symmetry when applicable.
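For context, here is a sketch of the linear truncated GCR process that nlTGCR adapts to the nonlinear setting; the test matrix, truncation depth, and tolerances are illustrative assumptions.

```python
import numpy as np

def tgcr(A, b, m=5, iters=200, tol=1e-10):
    """Truncated Generalized Conjugate Residual for the linear system Ax = b.

    New directions are orthogonalized (through their A-images) against only
    the last m stored directions, the truncation that gives TGCR its name.
    """
    x, r = np.zeros(len(b)), b.copy()
    P, AP = [], []                               # truncated direction history
    for _ in range(iters):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        p, Ap = r.copy(), A @ r
        for pi, Api in zip(P, AP):               # orthogonalize against last m
            beta = Api @ Ap
            p, Ap = p - beta * pi, Ap - beta * Api
        nrm = np.linalg.norm(Ap)
        p, Ap = p / nrm, Ap / nrm
        alpha = Ap @ r
        x, r = x + alpha * p, r - alpha * Ap
        P.append(p); AP.append(Ap)
        if len(P) > m:
            P.pop(0); AP.pop(0)
    return x

rng = np.random.default_rng(4)
A = np.diag(np.linspace(1.0, 10.0, 100)) + 0.02 * rng.standard_normal((100, 100))
b = rng.standard_normal(100)
print(np.linalg.norm(A @ tgcr(A, b) - b))
```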
  • 10:30 - 11:00 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 11:00 - 11:45 am EDT
    Towards the use of Anderson Acceleration in Fusion and Combustion Simulations
    11th Floor Lecture Hall
    • Speaker
    • David Gardner, Lawrence Livermore National Laboratory
    • Session Chair
    • Sara Pollock, University of Florida
    Abstract
    Efficient solvers for nonlinear systems are a critical component of many simulations of complex applications. In this talk we present the use of Anderson-accelerated fixed-point solvers for problems in magnetically confined fusion plasmas and combustion. Predicting the long-time behavior of fusion plasmas requires bridging the gap between processes with vastly different time scales. One approach to this challenge is evolving transport and gyrokinetic turbulence models each at their own time scales while coupling the processes through a relaxed fixed-point iteration. We show that Anderson acceleration offers increased robustness to the choice of relaxation parameter and can enable faster convergence in nonlinear diffusion test problems. Similarly, combustion simulations often utilize an operator splitting approach to advance fluid flow and chemical kinetics at different time step sizes. The reaction mechanisms are often highly stiff and necessitate implicit time integration methods, which in turn require efficient nonlinear solvers. In this case, we explore the use of Anderson-accelerated fixed-point solvers as an alternative to a modified Newton iteration with batched direct solvers on GPU systems.
  • 12:00 - 2:00 pm EDT
    Open Problems / Short term needs and long term goals
    Working Lunch
  • 2:00 - 2:45 pm EDT
    The effect of Anderson acceleration on superlinear and sublinear convergence
    11th Floor Lecture Hall
    • Speaker
    • Leo Rebholz, Clemson University
    • Session Chair
    • Sara Pollock, University of Florida
    Abstract
    This talk considers the effect of Anderson acceleration (AA) on the convergence order of nonlinear solvers in fixed-point form x_{k+1} = g(x_k), i.e., solvers that seek a fixed point of g. While recent work has addressed the fundamental question of how AA affects the convergence rate of linearly converging fixed-point iterations (at a single step), we give the first analytical results on how AA affects the convergence order of solvers that do not converge linearly.
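A quick numerical illustration of the sublinear case: the scalar iteration g(x) = x - x^3 approaches its fixed point x* = 0 only sublinearly, and one can watch AA(1) improve on it. The example and the closed-form scalar AA(1) step are generic, not taken from the talk.

```python
import numpy as np

g = lambda x: x - x**3        # Picard iterates approach x* = 0 only sublinearly

def aa1_step(x_prev, x_curr):
    """One scalar AA(1) step: gamma minimizes |f_k - gamma (f_k - f_{k-1})|."""
    f_prev, f_curr = g(x_prev) - x_prev, g(x_curr) - x_curr
    gamma = f_curr / (f_curr - f_prev)
    return g(x_curr) - gamma * (g(x_curr) - g(x_prev))

x_picard = 0.5
xp, xc = 0.5, g(0.5)
for _ in range(30):
    x_picard = g(x_picard)
    xp, xc = xc, aa1_step(xp, xc)
print(abs(x_picard), abs(xc))   # AA(1) lands markedly closer to the fixed point
```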
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:15 pm EDT
    Anderson Acceleration Based on the H^s Sobolev Norm
    11th Floor Lecture Hall
    • Speaker
    • Yunan Yang, Cornell University
    • Session Chair
    • Sara Pollock, University of Florida
    Abstract
    Anderson acceleration (AA) is a technique for accelerating the convergence of fixed-point iterations. In this work, we apply AA to a sequence of functions and modify the norm in its internal optimization problem to the H^s norm, for some integer s, to bias it towards low-frequency spectral content in the residual. We analyze the convergence of AA by quantifying its improvement over Picard iteration. We find that AA based on the H^{-2} norm is well-suited to solving fixed-point operators derived from second-order elliptic differential operators, including the Helmholtz equation.
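A sketch of how the modified inner norm can enter an implementation: on a periodic 1-D grid, apply the Fourier multiplier (1 + |k|^2)^(s/2) of the H^s norm to the residuals before the least-squares solve. The grid, map, and defaults are illustrative assumptions.

```python
import numpy as np

def hs_weight(v, s):
    """Apply the Fourier multiplier (1 + |k|^2)^(s/2), i.e. measure v in H^s."""
    k = np.fft.fftfreq(v.size) * v.size              # integer wavenumbers
    return np.real(np.fft.ifft((1 + k**2) ** (s / 2) * np.fft.fft(v)))

def aa_hs(g, x0, s=-2, m=5, iters=50, tol=1e-10):
    """AA(m) whose inner least-squares problem is posed in the H^s norm.

    With s = -2 the fit is biased toward the low-frequency content of the
    residual, the regime the abstract identifies as favorable for fixed-point
    maps derived from second-order elliptic operators.
    """
    x = np.asarray(x0, dtype=float)
    X, F = [], []
    for _ in range(iters):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            break
        X.append(x.copy()); F.append(f.copy())
        if len(X) > m + 1:
            X.pop(0); F.pop(0)
        if len(X) == 1:
            x = x + f
        else:
            dF = (np.array(F)[1:] - np.array(F)[:-1]).T
            dX = (np.array(X)[1:] - np.array(X)[:-1]).T
            WdF = np.column_stack([hs_weight(c, s) for c in dF.T])
            gamma, *_ = np.linalg.lstsq(WdF, hs_weight(f, s), rcond=None)
            x = x + f - (dX + dF) @ gamma
    return x
```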
Friday, July 28, 2023
  • 9:00 - 9:45 am EDT
    A high order solver for the Grad-Shafranov free boundary problem
    11th Floor Lecture Hall
    • Speaker
    • Tonatiuh Sánchez-Vizuet, University of Arizona
    • Session Chair
    • Leo Rebholz, Clemson University
    Abstract
    In magnetic confinement fusion devices, the equilibrium configuration of a plasma is determined by the balance between the hydrostatic pressure in the fluid and the magnetic forces generated by an array of external coils and the plasma itself. The equilibrium configuration is given by the solution to a nonlinear elliptic partial differential equation. However, since the location of the plasma is not known a priori, the domain of definition of the PDE must be determined as part of the problem, leading to a free boundary problem. In this talk we will discuss some recent advances in an interior/exterior iterative solution strategy. Computationally, this involves the coupling of a hybridizable discontinuous Galerkin solver for the problem inside the assumed plasma domain, a boundary integral equation solver for the exterior problem, and a minimization step. This is joint work with Antoine Cerfon (NYU), Manuel Solano (University of Concepción, Chile), and Evan Toler (NYU).
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Randomized orthogonalization techniques and their usage for solving linear systems of equations
    11th Floor Lecture Hall
    • Virtual Speaker
    • Laura Grigori, INRIA Paris
    • Session Chair
    • Leo Rebholz, Clemson University
    Abstract
    In this talk we discuss recent progress in using randomization for solving linear systems of equations and eigenvalue problems. We discuss randomized versions of algorithms for orthogonalizing a set of vectors and their usage in the Arnoldi iteration. This leads to introducing new Krylov subspace methods for solving large scale linear systems of equations and eigenvalue problems. The new methods retain the numerical stability of classic Krylov methods while reducing communication and being more efficient on modern massively parallel computers.
  • 11:30 am - 12:15 pm EDT
    Panel Discussion
    11th Floor Lecture Hall
    • Session Chair
    • Leo Rebholz, Clemson University
  • 12:00 - 2:00 pm EDT
    Lunch/Free Time

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).

Request Reimbursement

This section is for general information only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.

ORCID iD
As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
Acceptable Costs
  • 1 roundtrip between your home institute and ICERM
  • Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
  • Ground Transportation to and from airports and ICERM.
Unacceptable Costs
  • Flights on non-U.S. or non-E.U. airlines
  • Flights on U.K. airlines
  • Seats in economy plus, business class, or first class
  • Change ticket fees of any kind
  • Multi-use bus passes
  • Meals or incidentals
Advance Approval Required
  • Personal car travel to ICERM from outside New England
  • Multiple-destination plane ticket (layovers en route to ICERM do not count as multiple destinations)
  • Arriving or departing from ICERM more than a day before or day after the program
  • Multiple trips to ICERM
  • Rental car to/from ICERM
  • Flights on Swiss, Japanese, or Australian airlines
  • Arriving or departing from airport other than PVD/BOS or home institution's local airport
  • 2 one-way plane tickets to create a roundtrip (often purchased from Expedia, Orbitz, etc.)
Travel Maximum Contributions
  • New England: $350
  • Other contiguous US: $850
  • Asia & Oceania: $2,000
  • All other locations: $1,500
  • Note: these rates were updated in Spring 2023 and supersede any prior invitation rates. Invitations issued without travel support will still not receive travel support.
Reimbursement Requests

Request Reimbursement with Cube

Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.

Reimbursement Tips
  • Scanned original receipts are required for all expenses
  • Airfare receipt must show full itinerary and payment
  • ICERM does not offer per diem or meal reimbursement
  • Allowable mileage is reimbursed at the prevailing IRS Business Rate, with the trip documented via a PDF of the Google Maps result
  • Keep all documentation until you receive your reimbursement!
Reimbursement Timing

6 - 8 weeks after all documentation is sent to ICERM. All reimbursement requests are reviewed by several central offices at Brown, which may request additional documentation.

Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes; submissions after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program ends.