The fundamental problem of approximation theory is to resolve a possibly complicated function, called the target function, by simpler, easier-to-compute functions, called approximants. Increasing the resolution of the target function can generally only be achieved by increasing the complexity of the approximants. Understanding this trade-off between resolution and complexity is the main goal of approximation theory, a classical subject that goes back to the early results on Taylor and Fourier expansions of a function.
Modern problems in approximation, driven by applications in biology, medicine, and engineering, are formulated in very high dimensions, which brings new phenomena to the fore. One aspect of the high-dimensional regime is a focus on sparse signals, motivated by the fact that many real-world signals are well approximated by sparse ones. The goal of compressed sensing is to reconstruct such signals from incomplete linear measurements. Another aspect of this regime is the "curse of dimensionality" for standard smoothness classes: the complexity of approximation depends exponentially on the dimension. An important step in solving high-dimensional multivariate problems has been made in the last twenty years: sparse representations are now used to model the corresponding function classes. This approach automatically entails nonlinear approximation, and greedy approximation in particular.
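As a concrete illustration of greedy approximation in the compressed sensing setting, the following is a minimal Python/NumPy sketch of orthogonal matching pursuit, one standard greedy recovery algorithm; the dimensions, sparsity level, and Gaussian measurement model are illustrative choices, not prescriptions from the program.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily recover a sparse x with y ~ A x."""
    n = A.shape[1]
    support = []                                  # atoms selected so far
    residual = y.copy()
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit on the selected support by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, s = 400, 80, 5                              # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                    # incomplete linear measurements
x_hat = omp(A, y, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With these proportions (80 random measurements of a 5-sparse vector of length 400), recovery succeeds with high probability; making such probabilistic guarantees precise is exactly the business of compressed sensing.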
This program addresses a broad spectrum of approximation problems, from the approximation of functions in norm, to numerical integration, to computing minima, with a focus on sharp error estimates. It will explore the rich connections to the theory of distributions of point sets, both in Euclidean settings and on manifolds, and to the computational complexity of continuous problems. It will also address the design of algorithms and of numerical experiments. The program will attract researchers in approximation theory, compressed sensing, optimization theory, discrepancy theory, and information-based complexity theory.
To participate in a research cluster, please apply through the semester program visitors application and indicate which research cluster you are applying to in the "other comments" section of the application.
Harmonic analysis provides the mathematical backbone for modern signal and image processing.
It also constitutes an important part of the foundation of several scientific and engineering areas,
including communication theory, control science, fluid dynamics, and electromagnetics, that underpin a
much broader set of current applications. Although computer implementation of concepts from harmonic
analysis is prevalent, relatively little attention is given to computational and numerical aspects of
the discipline in its own literature. Further, many of the most capable young mathematicians working
in this area have only modest exposure to the roles of such crucial computational considerations as
finite data effects; e.g., how much error is introduced by truncating an infinite-series representation
of a function in terms of a frame, and where will it be manifested?
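The truncation question can be made concrete in a few lines of code. The following sketch (the target function and truncation levels are illustrative; the orthonormal Fourier basis stands in for a general frame) measures the error left after keeping only the first N terms of a Fourier series.

```python
import numpy as np

# Target: the sawtooth f(x) = x on [-pi, pi), whose Fourier series is
#   x = sum_{n>=1} 2 * (-1)**(n+1) * sin(n x) / n.
x = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
f = x

for N in (10, 100, 1000):
    n = np.arange(1, N + 1)
    # partial sum with the first N terms of the series
    S = (2 * (-1) ** (n + 1) / n) @ np.sin(np.outer(n, x))
    err = np.abs(f - S)
    print(f"N={N:5d}  max error={err.max():.3f}  "
          f"located at x={x[np.argmax(err)]:+.3f}")
```

The maximum error does not decay with N and concentrates next to the jump at ±π: this is the Gibbs phenomenon, a basic finite-data effect that purely formal series manipulations conceal.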
On the other hand, new tools and ideas have entered the mainstream of harmonic analysis in recent years
that have not yet become established in areas of applied mathematics where numerical and computational
issues are routinely treated as integral aspects of problem formulation and methodological development.
Among these are tools for non-orthogonal and overcomplete representations in linear spaces and the
exploitation of sparsity and related (e.g., low rank) assumptions in inverse problems of various types.
This research cluster seeks to bridge this perceived gap by (i) fostering understanding and appreciation
of the computational perspective among harmonic analysts and (ii) increasing awareness of emerging
mathematical tools and techniques in applied harmonic analysis among computational mathematicians.
Information-based complexity (IBC) deals with the computational complexity of continuous
problems for which the available information is partial, priced, and noisy. IBC provides a
methodological background for proving the curse of dimensionality, as well as
various ways of vanquishing this curse.
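A back-of-the-envelope illustration of the curse (the numbers below are purely illustrative, not an IBC result): a full tensor-product grid with m nodes per coordinate costs m^d function values, whereas plain Monte Carlo converges at the dimension-independent rate n^(-1/2).

```python
# Cost of a full tensor-product grid with m nodes per axis in d dimensions,
# versus the dimension-independent Monte Carlo error rate n**(-1/2).
m = 10
for d in (2, 5, 10, 20):
    print(f"d={d:2d}: tensor grid needs {m**d:.1e} nodes")
n = 10**6
print(f"Monte Carlo with n={n:.0e} samples: error ~ {n**-0.5:.1e} in any dimension")
```

Moving from worst-case to randomized or average-case guarantees is one of the routes IBC studies for restoring tractability.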
Stochastic computation deals with computational problems that arise in
probabilistic models or can be efficiently solved by randomized algorithms.
Within the IBC framework, the complexity of stochastic ordinary (SDE) and
stochastic partial differential (SPDE) equations has been studied.
Topics covered in the workshop will include: adaptive and nonlinear
approximation for SPDEs, infinite-dimensional problems, inverse and ill-
posed problems, quasi-Monte Carlo methods (illustrated in the sketch below), PDEs with random coefficients,
sparse/Smolyak grids, stochastic multi-level algorithms, SDEs and SPDEs
with nonstandard coefficients, tractability of multivariate problems.
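Among these topics, quasi-Monte Carlo methods admit a particularly compact illustration. The sketch below (the hand-rolled Halton sequence, integrand, and sample size are illustrative choices) compares plain Monte Carlo with quasi-Monte Carlo integration of a smooth function over [0,1]^4.

```python
import numpy as np

def van_der_corput(n, base):
    """First n terms of the van der Corput sequence in the given base."""
    seq = np.empty(n)
    for i in range(n):
        k, x, denom = i + 1, 0.0, base
        while k > 0:
            k, digit = divmod(k, base)
            x += digit / denom
            denom *= base
        seq[i] = x
    return seq

def halton(n, dim):
    """First n points of the Halton sequence in [0,1]^dim (one prime base per axis)."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return np.column_stack([van_der_corput(n, p) for p in primes])

# Integrate f(x) = prod_j cos(x_j) over [0,1]^4; the exact value is sin(1)**4.
dim, n = 4, 4096
f = lambda pts: np.prod(np.cos(pts), axis=1)
exact = np.sin(1.0) ** dim

rng = np.random.default_rng(0)
mc = f(rng.random((n, dim))).mean()       # plain Monte Carlo
qmc = f(halton(n, dim)).mean()            # quasi-Monte Carlo (Halton points)
print(f"MC error:  {abs(mc - exact):.2e}")
print(f"QMC error: {abs(qmc - exact):.2e}")
```

On smooth integrands like this one, the low-discrepancy Halton points typically beat i.i.d. sampling by a wide margin at equal cost.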
This workshop will bring together researchers from these different fields. The goal is to explore connections,
learn and share techniques, and build bridges.
Albert Cohen (Université de Paris VI (Pierre et Marie Curie))
Ronald DeVore (Texas A&M University)
Robert Nowak (University of Wisconsin)
Vladimir Temlyakov (University of South Carolina)
Rachel Ward (University of Texas at Austin)
The workshop is devoted to the following problem of fundamental importance throughout science and engineering:
how to approximate, integrate, or optimize multivariate functions.
The breakthroughs demanded by high-dimensional problems may be at hand. Good methods of approximation arise as
solutions of optimization problems over certain function classes that are now well understood in small and moderately large dimensions.
In high dimensions, the appropriate models involve sparse representations, which give rise to issues in nonlinear
approximation methods such as greedy approximation. High dimensional optimization problems become intractable to solve exactly,
but substantial gains in efficiency can be made by allowing for a small probability of failure
(probabilistic recovery guarantees), and by seeking approximate solutions (up to a pre-specified threshold)
rather than exact solutions. The contemporary requirements of numerical analysis connect approximation, optimization,
and probabilistic analysis.
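One elementary instance of this trade-off (a hedged sketch, not a method endorsed by the workshop; the problem sizes and sampling rate are arbitrary) is randomized "sketch-and-solve" for least squares, which solves a row-subsampled problem and accepts a near-optimal answer that is inaccurate only with small probability.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20000, 50
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

# Exact solution of min_x ||Ax - b||_2 on the full data.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: solve the same problem on a random subset of rows.
rows = rng.choice(m, size=2000, replace=False)
x_sketch, *_ = np.linalg.lstsq(A[rows], b[rows], rcond=None)

res = lambda x: np.linalg.norm(A @ x - b)
print(f"exact residual:    {res(x_exact):.4f}")
print(f"sketched residual: {res(x_sketch):.4f}  (near-optimal with high probability)")
```

The subsampled solve touches a tenth of the data yet its residual is close to optimal; uniform row sampling is the crudest sketch, and more refined random projections come with explicit probabilistic guarantees.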
The workshop will bring together leading experts in approximation, compressed sensing, and optimization.
Discrepancy theory deals with the problem of distributing points uniformly over some geometric object
and evaluating the inevitably arising errors. The theory was ignited by such famous early results as
Hermann Weyl's equidistribution theorem and Klaus Roth's theorem on the irregularities of point distributions.
The subject has now grown into a broad field with deep connections to many areas such as number theory,
combinatorics, approximation theory, harmonic analysis, and probability theory, in particular empirical and Gaussian processes.
The computational aspects of the subject include searching for well-distributed sets and numerical integration rules.
Despite years of research, many fundamental questions, especially in high dimensions, remain wide open, although several
important advances have been achieved recently.
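To make the notion quantitative, the following sketch computes the star discrepancy of a one-dimensional point set, for which a classical closed-form expression of Niederreiter is available, and compares i.i.d. uniform points with the low-discrepancy van der Corput sequence; the point counts are illustrative.

```python
import numpy as np

def star_discrepancy_1d(points):
    """Exact star discrepancy of points in [0,1] (Niederreiter's formula):
    D* = 1/(2n) + max_i |x_(i) - (2i-1)/(2n)| over the sorted points."""
    x = np.sort(points)
    n = len(x)
    i = np.arange(1, n + 1)
    return 1.0 / (2 * n) + np.max(np.abs(x - (2 * i - 1) / (2 * n)))

def van_der_corput(n, base=2):
    """First n terms of the van der Corput sequence, a classical
    low-discrepancy sequence built by digit reversal in the given base."""
    seq = np.empty(n)
    for i in range(n):
        k, x, denom = i + 1, 0.0, base
        while k > 0:
            k, digit = divmod(k, base)
            x += digit / denom
            denom *= base
        seq[i] = x
    return seq

rng = np.random.default_rng(0)
for n in (64, 256, 1024):
    print(f"n={n:5d}  i.i.d. uniform: {star_discrepancy_1d(rng.random(n)):.4f}"
          f"   van der Corput: {star_discrepancy_1d(van_der_corput(n)):.4f}")
```

The random points show the familiar n^(-1/2)-type irregularity, while the van der Corput points achieve discrepancy on the order of log(n)/n; quantifying the best possible behavior in higher dimensions is where the hard open questions live.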
The participants of this workshop will share a wide range of views on topics related to discrepancy with
an eye towards the recent developments in the subject. The workshop will bring together different communities
working on various aspects of discrepancy theory. The exchange of ideas and approaches, the cross-fertilization
of viewpoints, and the sharing of visions for the near- and long-term goals of the field will be the highlights of the conference.
Ali Ahmed (Georgia Institute of Technology)
Christoph Aistleitner* (Technische Universität Graz)