Trends in Computational Discrete Optimization

Institute for Computational and Experimental Research in Mathematics (ICERM)

April 24, 2023 - April 28, 2023
Monday, April 24, 2023
  • 8:30 - 8:50 am EDT
    Check In
    11th Floor Collaborative Space
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    Solving Mixed-Integer Semidefinite Programs
    11th Floor Lecture Hall
    • Speaker
    • Marc Pfetsch, TU Darmstadt
    • Session Chair
    • Santanu Dey, Georgia Institute of Technology
    Abstract
    Mixed-integer semidefinite programs form an interesting generalization of mixed-integer linear programs with many applications. Here, some of the variables of a semidefinite program are required to be integer. This talk will introduce new methods as well as techniques that can be generalized from the linear to the semidefinite world. This includes presolving methods, e.g., fixing of variables and bound strengthening. Moreover, handling of symmetries and conflict analysis can be adapted and have a positive impact on performance. The talk will also try to highlight the differences between the linear and semidefinite setting. The impact of the methods will be illustrated computationally, using SCIP-SDP, an open-source solver for mixed-integer semidefinite programs based on SCIP.
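    For orientation, a generic mixed-integer semidefinite program of the kind handled by SCIP-SDP can be written in the following standard form (a textbook formulation, not specific to this talk):

    $$\min\ b^\top y \quad \text{s.t.}\quad \sum_{i=1}^{m} A_i\, y_i - C \succeq 0,\qquad \ell_i \le y_i \le u_i,\qquad y_i \in \mathbb{Z}\ \text{for } i \in I,$$

    where the $A_i$ and $C$ are symmetric matrices and $I$ indexes the integer variables; presolving, symmetry handling, and conflict analysis operate on this formulation much as they do on a mixed-integer linear program.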
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Measuring the Importance of Facets of Cyclic Group Polyhedra Using Solid Angles
    11th Floor Lecture Hall
    • Speaker
    • Yuan Zhou, University of Kentucky
    • Session Chair
    • Santanu Dey, Georgia Institute of Technology
    Abstract
    The solid angle of a polyhedral cone, which measures the proportion of space occupied by the cone, is of significant interest in discrete optimization. Gomory introduced the cyclic group relaxation of integer programming, and the facets of the cyclic group polyhedra are useful in generating cutting planes. To predict the importance or relative size of each facet, researchers have used the shooting experiment, which provides an estimate of the solid angle subtended by the facet at the origin. However, the sizes obtained this way are not always consistent because of randomness. We propose instead to use the exact solid angle measure, for which closed formulas are well established in two and three dimensions. For higher dimensions, Aomoto and Ribando showed that the solid angle of a simplicial cone can be computed using a multivariable hypergeometric series, provided that the cone satisfies a certain positive-definiteness condition. We provide decomposition methods to ensure that this criterion is met. We present our results on the solid angle measures and compare them with those obtained from the shooting experiments.
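    As a point of reference, the shooting experiment can be illustrated by a tiny Monte Carlo estimate of the normalized solid angle of a simplicial cone: sample directions uniformly on the sphere and count the fraction landing in the cone. The cone and sample size below are arbitrary illustrative choices, not data from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ray generators of a simplicial cone in R^3 (columns); an illustrative choice.
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
R_inv = np.linalg.inv(R)

# "Shoot" directions uniformly at random on the unit sphere.
n_shots = 200_000
dirs = rng.normal(size=(n_shots, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# A direction v lies in cone(R) iff the coefficients R^{-1} v are all nonnegative.
coeffs = dirs @ R_inv.T
hits = np.all(coeffs >= -1e-9, axis=1).sum()
print(f"estimated normalized solid angle: {hits / n_shots:.4f}")
```

    The closed formulas and the Aomoto-Ribando hypergeometric series give this quantity exactly, without the sampling noise that makes shooting estimates inconsistent.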
  • 11:30 am - 12:15 pm EDT
    Safe and verified cutting planes in an exact MIP framework
    11th Floor Lecture Hall
    • Speaker
    • Ambros Gleixner, HTW & Zuse Institute Berlin
    • Session Chair
    • Santanu Dey, Georgia Institute of Technology
    Abstract
    The presence of floating-point roundoff errors compromises the results of virtually all fast mixed-integer programming solvers available today. In this talk we present recent advances in our endeavour to craft a performant mixed-integer optimizer that is not only free from roundoff errors, but also produces certificates of optimality that can be verified independently of the solving process. We provide an overview of the entire framework, which is an extension of the open solver SCIP. We focus in particular on the safe generation and verification of Gomory mixed-integer cuts via mixed-integer rounding. This is joint work with Sander Borst, Leon Eifler, Fabian Frickenstein, and Jules Nicolas-Thouvenin.
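    For readers less familiar with the terminology, the basic mixed-integer rounding (MIR) set and inequality from which Gomory mixed-integer cuts can be derived are the standard objects

    $$X = \{(x, y) \in \mathbb{Z} \times \mathbb{R}_{\ge 0} : x + y \ge b\}, \qquad y \ \ge\ f\,(\lceil b \rceil - x), \quad f = b - \lfloor b \rfloor;$$

    the safe variants discussed in the talk additionally control floating-point rounding when cut coefficients are aggregated and scaled, so that validity can be certified exactly.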
  • 12:30 - 2:30 pm EDT
    Lunch/Free Time
  • 2:30 - 3:15 pm EDT
    Linear lexicographic optimization and preferential bidding system
    11th Floor Lecture Hall
    • Speaker
    • Frédéric Meunier, École Nationale des Ponts et Chaussées, CERMICS
    • Session Chair
    • Amitabh Basu, Johns Hopkins University
    Abstract
    Some airlines use the preferential bidding system to construct the schedules of their pilots. In this system, the pilots bid on the different activities, and the schedules that lexicographically maximize the scores of the pilots according to their seniority are selected. A sequential approach to solve this maximization problem is natural: the problem is first solved with the bids of the most senior pilot; then it is solved with those of the second most senior without decreasing the score of the most senior, and so on. The literature suggests that the structure of the problem essentially imposes such an approach. The problem can be modeled as an integer linear lexicographic program. We propose a new exact method, which relies on column generation for solving its continuous relaxation. To design this column generation, we prove that bounded linear lexicographic programs admit “primal-dual” feasible bases, and we show how to compute such bases efficiently. Another contribution on which our exact method relies is the extension of standard tools for resource-constrained longest path problems to their lexicographic versions. Numerical experiments show that this new method is already able to solve industrial instances provided by Air France, with up to 150 pilots. Joint work with Nour ElHouda Tellache and Axel Parmentier.
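    In symbols, the sequential approach reads as follows: with pilots ordered by seniority and score functions $c_1, \dots, c_p$, one solves, for $k = 1, \dots, p$,

    $$z_k^{*} = \max\bigl\{\, c_k^\top x \ :\ x \in X,\ \ c_\ell^\top x = z_\ell^{*} \ \text{for } \ell = 1, \dots, k-1 \,\bigr\},$$

    fixing each more senior pilot's score at its optimal value before optimizing the next one; the exact method of the talk instead works with the lexicographic program as a whole.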
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Mathematical Optimization for Traffic Management in Urban Air Mobility
    11th Floor Lecture Hall
    • Speaker
    • Claudia D'Ambrosio, LIX - Ecole Polytechnique
    • Session Chair
    • Amitabh Basu, Johns Hopkins University
    Abstract
    We focus on how mathematical optimization (MO) could help build the bricks to develop Urban Air Mobility (UAM). In particular, we focus on passengers’ transportation via eVTOLs, i.e., electric flying vehicles that will allow exploiting the sky to help smooth ground traffic in densely populated areas. Clearly, one of the biggest challenges in UAM is to ensure safety. From an operational viewpoint, flight planning can be highly affected by different kinds of disruptions, which have to be resolved at the tactical deconfliction level. Inspired by classical aircraft deconfliction, we propose a MO formulation based on a mathematical definition of vehicle separation, specialized to the UAM context. The deconfliction is based on speed changes or delayed takeoff, when possible. To the best of our knowledge, our MO model is the first that considers the whole set of conflicts at the same time. In the computational study, we thus compare our approach against a variant considering only pairwise conflicts, on three sets of realistic scenarios. More details can be found in Pelegrin et al. (2021).
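    As background, the generic pairwise separation requirement in deconfliction models asks that, for every pair of vehicles $i \ne j$ and every time $t$ in the planning horizon,

    $$\lVert p_i(t) - p_j(t) \rVert \ \ge\ D,$$

    where $p_i(t)$ is the position of vehicle $i$ and $D$ a minimum separation distance; speed changes and delayed takeoffs shift the trajectories $p_i(\cdot)$ so that these constraints are restored when disruptions create conflicts. This is the textbook form only; the talk specializes the separation definition to the UAM setting.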
  • 5:00 - 6:30 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, April 25, 2023
  • 9:00 - 9:45 am EDT
    A NASA Perspective on Quantum Computing, with Emphasis on Recent Results in Distributed Computing
    11th Floor Lecture Hall
    • Speaker
    • Eleanor Rieffel, NASA
    • Session Chair
    • Giacomo Nannicini, University of Southern California
    Abstract
    The NASA Quantum Artificial Intelligence Laboratory (QuAIL) performs research to assess the potential impact of quantum computers on challenging computational problems relevant to NASA missions of the future, with particular emphasis on discrete optimization. The talk will begin with a brief overview of the QuAIL team and some general remarks on the status of and prospects for quantum computing. The talk will then transition to a discussion of distributed quantum computing, with an emphasis on recent results in the quantum CONGEST-CLIQUE Model (qCCM). I’ll introduce both the classical and quantum CONGEST-CLIQUE Models of distributed computation, and then outline two quantum algorithms that succeed with high probability; one yields an approximately optimal Steiner Tree, and the other an exact directed minimum spanning tree, each using asymptotically fewer rounds of communication than any known algorithms in the classical CONGEST-CLIQUE model. We achieve these results by combining classical algorithms with fast quantum subroutines. Additionally, we characterize the constants and logarithmic factors involved in our algorithms, as well as related prior classical and quantum algorithms, revealing the importance of “small” factors and highlighting that advances are needed to render both practical. The talk will conclude with some open questions.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Compiled nonlocal games: sum-of-squares optimization meets cryptography?
    11th Floor Lecture Hall
    • Speaker
    • Anand Natarajan, MIT
    • Session Chair
    • Giacomo Nannicini, University of Southern California
    Abstract
    Nonlocal games are a fundamental tool for testing quantum systems, but their power depends on spatial separation: a verifier must be able to separately interact with two quantum systems that cannot communicate with each other. In 2022, Kalai, Lombardi, Vaikuntanathan, and Yang proposed a method to "compile" any nonlocal game into an interaction with a *single* quantum device, using cryptography to simulate spatial separation. However, they could not characterize the behavior of quantum devices in their protocols (only classical devices). In this work, we make progress on this question by showing that the compiled version of the CHSH game forces the device to use two anti-commuting observables, just as the nonlocal version does, and as a consequence obtain an efficient argument system for polynomial-time quantum computations. Our proof is based on a modification of the noncommutative sum-of-squares method, which is one of the few general techniques known to analyze nonlocal games. Joint work with Tina Zhang, MIT.
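    For context, in the CHSH game the verifier sends independent uniform bits $x, y$ to the two players, receives answers $a, b \in \{0,1\}$, and accepts iff $a \oplus b = x \wedge y$; classical strategies win with probability at most $3/4$, while quantum strategies built from anti-commuting observables achieve $\cos^2(\pi/8) \approx 0.854$. The result above shows that the compiled, single-device version of the game still forces this anti-commuting structure.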
  • 11:30 am - 12:15 pm EDT
    Approximation and Hardness Results for Quantum Max Cut
    11th Floor Lecture Hall
    • Speaker
    • Ojas Parekh, Sandia National Laboratories
    • Session Chair
    • Giacomo Nannicini, University of Southern California
    Abstract
    We will explore synergies between classical discrete optimization problems and the Local Hamiltonian problem, a natural quantum generalization of classical constraint satisfaction problems (CSPs). We will consider Quantum Max Cut, a QMA-hard 2-Local Hamiltonian problem. Quantum Max Cut is emerging as a testbed for approximating 2-Local Hamiltonian problems analogously to Max Cut’s role as a canonical CSP and discrete optimization problem that has driven development of classical heuristics, approximation algorithms, and hardness results.  We will discuss several recent results including an optimal product-state approximation for Quantum Max Cut based on a quantum version of the Lasserre hierarchy, as well as prospects for Unique-Games hardness. This talk is aimed at a broad audience and assumes no background in quantum physics.
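    For reference, on a graph $G = (V, E)$ with edge weights $w_{ij}$, the Quantum Max Cut instance is commonly written as the 2-local Hamiltonian

    $$H \;=\; \sum_{(i,j) \in E} \frac{w_{ij}}{4}\,\bigl(I - X_i X_j - Y_i Y_j - Z_i Z_j\bigr),$$

    i.e., a weighted sum of projectors onto the singlet state on each edge; restricting the maximization of $\langle \psi | H | \psi \rangle$ to computational-basis product states recovers (up to scaling) the classical Max Cut objective $\sum_{(i,j)\in E} w_{ij}\,(1 - z_i z_j)/2$.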
  • 12:30 - 2:30 pm EDT
    Networking Lunch
    Lunch/Free Time - 11th Floor Collaborative Space
  • 2:30 - 3:15 pm EDT
    Quantum Annealing and Combinatorial Optimization
    11th Floor Lecture Hall
    • Speaker
    • Carleton Coffrin, Los Alamos National Laboratory
    • Session Chair
    • Swati Gupta, Georgia Tech
    Abstract
    Over the past decade the usefulness of quantum computing hardware for combinatorial optimization has been the subject of much debate. One of the central challenges of this discussion is that theoretical analysis thus far only predicts a quadratic improvement in computational complexity for quantum-accelerated optimization algorithms. In this regime runtime performance benchmarking plays an important role in understanding and quantifying the potential benefits of quantum computing for optimization tasks. In this work, we will review the mathematical foundations of the Quantum Annealing algorithm for combinatorial optimization and conduct a performance assessment of D-Wave Systems' most recent “Advantage Performance Update” computer. We show that classes of contrived problems exist where this quantum annealer can provide run time benefits over a collection of established solution methods. Although this work does not present strong evidence of an irrefutable performance benefit for this emerging quantum optimization technology, it does exhibit encouraging progress, signaling potential impacts on practical optimization tasks in the future.
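    In a standard form (conventions vary by hardware vendor), the quantum annealing algorithm interpolates between a transverse-field driver and an Ising problem Hamiltonian,

    $$H(s) \;=\; -\,\frac{A(s)}{2} \sum_i \sigma^x_i \;+\; \frac{B(s)}{2} \Bigl( \sum_i h_i\, \sigma^z_i + \sum_{i<j} J_{ij}\, \sigma^z_i \sigma^z_j \Bigr), \qquad s = t/t_f \in [0,1],$$

    with $A$ decreasing to zero and $B$ increasing, so that a sufficiently slow anneal ideally ends in a low-energy state of the Ising objective encoding the combinatorial problem.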
  • 3:30 - 4:00 pm EDT
    Poster Session Blitz
    Lightning Talks - 11th Floor Collaborative Space
    • Speakers
    • Rachael Alfant, Rice University
    • Cristina Molero-Río, École Polytechnique
    • Renan Spencer Trindade, École Polytechnique, France
    • Nagisa Sugishita, University of Edinburgh
    • William Wesley, University of California Davis
    • Session Chair
    • Swati Gupta, Georgia Tech
  • 4:00 - 5:00 pm EDT
    Poster Session / Coffee Break
    Poster Session - 10th Floor Collaborative Space
Wednesday, April 26, 2023
  • 9:00 - 9:45 am EDT
    Milestones on the Quantum Utility Highway
    11th Floor Lecture Hall
    • Speaker
    • Catherine McGeoch, D-Wave Systems Inc.
    • Session Chair
    • Swati Gupta, Georgia Tech
    Abstract
    We introduce Quantum Utility, an approach to quantum performance evaluation that aims to capture the user experience by considering overhead costs associated with the quantum computation. We define Milestones, which may be thought of as demonstrations of quantum utility in restricted contexts associated with subsets of overheads. We evaluate performance of a D-Wave Advantage system with respect to three Milestones, and discuss these results in the context of the technological progress of annealing-based QPUs.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    First-order methods for LP
    11th Floor Lecture Hall
    • Speaker
    • Miles Lubin, Hudson River Trading
    • Session Chair
    • Juan Pablo Vielma, Google
    Abstract
    We present new approaches for solving linear programming problems based on first-order methods and discuss some of the implications for computational discrete optimization. Among other advantages over simplex and barrier methods, these approaches have demonstrated scalability to very large instances (billions of variables) and have the potential to run on new hardware platforms. The talk covers recent theoretical and computational developments, including the release of PDLP in the OR-Tools library, and reviews preliminary applications in integer programming. This talk describes work completed while the speaker was at Google Research.
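    To make the first-order approach concrete, the following is a minimal primal-dual hybrid gradient (PDHG) sketch for an LP in the form min c^T x s.t. Ax = b, x >= 0. It is the textbook baseline only; PDLP adds restarts, diagonal preconditioning, adaptive step sizes, and presolve, and the small instance below is purely illustrative.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=20_000):
    """Textbook PDHG for  min c^T x  s.t.  Ax = b, x >= 0  (no restarts or scaling)."""
    m, n = A.shape
    # Step sizes chosen so that tau * sigma * ||A||_2^2 < 1.
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # projected primal step
        y = y + sigma * (b - A @ (2 * x_new - x))          # dual step with extrapolation
        x = x_new
    return x, y

# Tiny illustrative LP:  min -x1 - 2*x2  s.t.  x1 + x2 + s = 4,  x, s >= 0.
c = np.array([-1.0, -2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
x, y = pdhg_lp(c, A, b)
print("primal solution:", np.round(x, 3), " objective:", round(float(c @ x), 3))
```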
  • 11:30 am - 12:15 pm EDT
    Neural Network Verification As Piecewise Linear Optimization
    11th Floor Lecture Hall
    • Speaker
    • Joey Huchette, Google
    • Session Chair
    • Juan Pablo Vielma, Google
    Abstract
    In this talk we discuss how neural network verification (and related tasks) can be fruitfully modeled and solved using piecewise linear optimization techniques from operations research.
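    The basic building block behind this connection is the mixed-integer encoding of a single ReLU neuron $y = \max(0, w^\top x + b)$ with known finite bounds $L \le w^\top x + b \le U$ (with $L < 0 < U$); a standard big-M formulation is

    $$y \ \ge\ w^\top x + b, \qquad y \ \ge\ 0, \qquad y \ \le\ w^\top x + b - L\,(1 - z), \qquad y \ \le\ U z, \qquad z \in \{0, 1\},$$

    and stacking this encoding layer by layer turns verification queries over the network into piecewise linear (mixed-integer or LP-relaxation) optimization problems.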
  • 12:25 - 12:30 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:30 - 2:30 pm EDT
    Lunch/Free Time
  • 2:30 - 3:15 pm EDT
    Machine Learning for discrete optimization: Graph Neural Networks, generalization under shifts, and loss functions
    11th Floor Lecture Hall
    • Speaker
    • Stefanie Jegelka, MIT
    • Session Chair
    • Andrea Lodi, Cornell Tech
    Abstract
    Graph Neural Networks (GNNs) have become a popular tool for learning algorithmic tasks, in particular related to combinatorial optimization. In this talk, we will focus on the “algorithmic reasoning” task of learning a full algorithm. While GNNs have shown promising empirical results, their generalization properties are less well understood. We will try to understand in particular out-of-distribution generalization in widely used message passing GNNs, with an eye on applications in learning for optimization: what may be an appropriate metric for measuring shift? Under what conditions will a GNN generalize to larger graphs? In the last part of the talk, we will take a brief look at objective (loss) functions for learning with discrete objects, beyond GNNs. In particular, neural networks work best with continuous, high-dimensional spaces. Can we integrate this into appropriate loss functions?
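    For concreteness, the message passing GNNs referred to here iterate updates of the generic form

    $$h_v^{(t+1)} \;=\; \phi\Bigl( h_v^{(t)},\ \bigoplus_{u \in N(v)} \psi\bigl( h_v^{(t)}, h_u^{(t)} \bigr) \Bigr),$$

    where $\bigoplus$ is a permutation-invariant aggregation (sum, mean, max) and $\phi, \psi$ are learned functions; the out-of-distribution question is how such an update, trained on small graphs, behaves when the aggregation statistics shift on larger ones.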
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Understanding Neural Network Expressivity via Polyhedral Geometry
    11th Floor Lecture Hall
    • Speaker
    • Christoph Hertrich, The London School of Economics and Political Science
    • Session Chair
    • Andrea Lodi, Cornell Tech
    Abstract
    Neural networks with rectified linear unit (ReLU) activations are one of the standard models in modern machine learning. Despite their practical importance, fundamental theoretical questions concerning ReLU networks remain open to this day. For instance, what is the precise set of (piecewise linear) functions representable by ReLU networks with a given depth? Even the special case asking for the number of layers needed to compute a function as simple as max{0, x1, x2, x3, x4} has not been solved yet. In this talk we will explore the relevant background to understand this question and report on recent progress using polyhedral geometry as well as a computer-aided approach based on mixed-integer programming.
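    To give a flavor of the question: the maximum of two numbers is exactly representable by a ReLU network with a single hidden layer via

    $$\max\{x_1, x_2\} \;=\; \tfrac{1}{2}\bigl(x_1 + x_2 + |x_1 - x_2|\bigr), \qquad |t| \;=\; \max\{0, t\} + \max\{0, -t\},$$

    and iterating this construction computes the maximum of $n$ numbers with $\lceil \log_2 n \rceil$ hidden layers; whether this depth is actually necessary, already for max{0, x1, x2, x3, x4}, is the open special case mentioned above.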
Thursday, April 27, 2023
  • 9:00 - 9:45 am EDT
    Using Trained Machine Learning Predictors in Gurobi
    11th Floor Lecture Hall
    • Speaker
    • Tobias Achterberg, Gurobi Optimization
    • Session Chair
    • Volker Kaibel, Otto-von-Guericke Universität Magdeburg
    Abstract
    In this talk, we are interested in using machine learning predictors to model relationships between variables of a MIP optimization model in Gurobi. The inputs of the trained machine learning model are MIP decision variables, and the predicted output can be used in the objective function or in other constraints. In November 2022 we released the first version of the open-source "gurobi-machinelearning" Python package on GitHub. This package aims at making it easy to insert regression models trained by popular frameworks (e.g., scikit-learn, Keras, PyTorch) into a Gurobi model. The regression model may be a linear or logistic regression, a neural network, or based on decision trees. While the resulting optimization models have interesting potential applications, unfortunately they are often intractable with current technology. We present algorithmic improvements to Gurobi that improve performance on MIP models that include regression models, in particular optimization models with embedded ReLU neural networks. Following Fischetti and Jo (2018), we apply optimization-based bound tightening on the aggregated input variable of each neuron, processing neurons in layers from the input to the output layer. But since the MIP solver is unaware of the neural network structure, we first need to identify the layers of the network from the MIP formulation. This is done by a clustering algorithm, which is also discussed in this talk.
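    A minimal usage sketch of the gurobi-machinelearning package might look as follows; the data, bounds, and objective are invented for illustration, and the package documentation should be consulted for the authoritative API and supported model types.

```python
import numpy as np
import gurobipy as gp
from sklearn.linear_model import LinearRegression
from gurobi_ml import add_predictor_constr  # entry point of gurobi-machinelearning

# Train a simple regression model on toy data (purely illustrative).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 10.0, size=(100, 2))
z_train = 3.0 * X_train[:, 0] - X_train[:, 1] + 1.0
reg = LinearRegression().fit(X_train, z_train)

# Build a MIP whose decision variables feed the trained predictor.
m = gp.Model("mip_with_predictor")
x = m.addMVar(2, lb=0.0, ub=10.0, vtype=gp.GRB.INTEGER, name="x")
y = m.addMVar(1, lb=-gp.GRB.INFINITY, name="y")  # predicted output

# Embed the trained regression model as constraints linking x and y.
add_predictor_constr(m, reg, x, y)

m.addConstr(x.sum() <= 12)                 # some other constraint of the model
m.setObjective(y.sum(), gp.GRB.MAXIMIZE)   # optimize over the predicted value
m.optimize()
print("x =", x.X, " predicted value =", y.X)
```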
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Global optimization with integers: model-based and data-driven approaches
    11th Floor Lecture Hall
    • Speaker
    • Nick Sahinidis, Georgia Institute of Technology
    • Session Chair
    • Volker Kaibel, Otto-von-Guericke Universität Magdeburg
    Abstract
    We present recent methodological developments in the context of three closely related computational optimization projects: 1. The BARON project for algebraic optimization through a spatial branch-and-bound algorithm that exploits various convexification techniques. 2. The ALAMO project, which relies on algebraic global optimization (BARON) to build simple models from data while enforcing shape constraints. 3. The BAM project for data-driven optimization through the integration of sampling, partitioning, and global optimization (BARON) of local algebraic surrogates constructed from data (ALAMO). We discuss connections between the three projects, report on the status of each project and their integration, and present computations on established benchmarks and applications.
  • 11:30 am - 12:15 pm EDT
    Convex hull of a monomial on a two-variable conic domain
    11th Floor Lecture Hall
    • Speaker
    • Pietro Belotti, Politecnico di Milano
    • Session Chair
    • Volker Kaibel, Otto-von-Guericke Universität Magdeburg
    Abstract
    We consider a monomial function with real exponents, which is of interest in optimization. Specifically, global mixed-integer nonlinear optimization solvers need tight convex relaxations of sets defined by nonconvex functions to find a valid lower bound. The convex hull of the monomial in two variables on a bounding box is known for some special cases. We discuss the convex hull of a generic monomial in two variables restricted to a cone rather than a bounding box. We then look at how to compute the volume of such a convex hull, which is also of interest in global optimization: branching operations of branch-and-bound solvers have a great impact on solver efficiency; in particular, some branching techniques aim at minimizing the total resulting volume of the two new subproblems.
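    The classic special case on a box is the bilinear monomial $w = x y$ with $x \in [x_L, x_U]$, $y \in [y_L, y_U]$, whose convex hull is described by the McCormick inequalities

    $$w \ge x_L y + y_L x - x_L y_L,\quad w \ge x_U y + y_U x - x_U y_U,\quad w \le x_U y + y_L x - x_U y_L,\quad w \le x_L y + y_U x - x_L y_U;$$

    the talk asks the analogous question for a general monomial $x^{a} y^{b}$ with real exponents when the box is replaced by a two-variable cone.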
  • 12:30 - 2:30 pm EDT
    Networking Lunch
    Lunch/Free Time - 11th Floor Collaborative Space
  • 2:30 - 3:15 pm EDT
    Modeling and Duality in Domain Specific Languages for Mathematical Optimization
    11th Floor Lecture Hall
    • Speaker
    • Juan Pablo Vielma, Google
    • Session Chair
    • Marc Pfetsch, TU Darmstadt
    Abstract
    Domain specific languages (DSL) for mathematical optimization allow users to write problems in a natural algebraic format. In this talk we explore the design and implementation of OR-Tools’ MathOpt DSL.
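    As a reminder of the duality such a language has to expose consistently, the standard linear programming primal-dual pair is

    $$\min_x \{\, c^\top x \ :\ A x \ge b,\ x \ge 0 \,\} \qquad\text{and}\qquad \max_y \{\, b^\top y \ :\ A^\top y \le c,\ y \ge 0 \,\},$$

    and a modeling layer needs a fixed sign and normalization convention when translating user-level constraints into such a standard form, so that the dual values returned by the solver can be mapped back to the user's constraints unambiguously.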
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Solving a family of mixed integer nonlinear programs with convex first order method roots
    11th Floor Lecture Hall
    • Speaker
    • Rahul Mazumder, Massachusetts Institute of Technology
    • Session Chair
    • Marc Pfetsch, TU Darmstadt
    Abstract
    We consider a family of convex integer programming problems arising in statistics/machine learning with sparsity structures. Unlike a line of work which relies on commercial MILP solvers, we design a specialized nonlinear branch-and-bound (BnB) framework by critically exploiting the problem structure. An important component in our framework lies in efficiently solving the node relaxations using specialized first-order methods (e.g., coordinate descent). Our first-order methods effectively leverage information across the BnB nodes through warm starts, active sets, and screening, some old and some newly developed in the context of large-scale convex optimization.
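    As an illustration of the kind of first-order routine used at the BnB nodes, here is a minimal cyclic coordinate descent solver for an l1-regularized least-squares (Lasso-type) relaxation; it is a textbook sketch under the stated assumptions, not the authors' implementation, and it omits the warm starts, active sets, and screening mentioned above.

```python
import numpy as np

def soft_threshold(t, lam):
    """Proximal operator of lam * |.| (soft-thresholding)."""
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for  min_b  0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta
    col_sq = (X ** 2).sum(axis=0)          # per-column squared norms
    for _ in range(iters):
        for j in range(p):
            # Closed-form coordinate update via soft-thresholding.
            rho = X[:, j] @ residual + col_sq[j] * beta[j]
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            residual += X[:, j] * (beta[j] - new_bj)
            beta[j] = new_bj
    return beta

# Tiny synthetic example (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=50)
print(np.round(lasso_cd(X, y, lam=5.0), 3))
```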
Friday, April 28, 2023
  • 9:00 - 9:45 am EDT
    Continuous cutting plane algorithms in integer programming
    11th Floor Lecture Hall
    • Speaker
    • Andrea Lodi, Cornell Tech
    • Session Chair
    • Tobias Achterberg, Gurobi Optimization
    Abstract
    Cutting planes for mixed-integer linear programs (MILPs) are typically computed in rounds by iteratively solving optimization problems, the so-called separation problems. Instead, we reframe the problem of finding good cutting planes as a continuous optimization problem over weights parametrizing families of valid inequalities. This problem can also be interpreted as optimizing a neural network to solve an optimization problem over subadditive functions, which we call the subadditive primal problem of the MILP. To do so, we propose a concrete two-step algorithm, and demonstrate empirical gains when optimizing generalized Gomory mixed-integer inequalities over various classes of MILPs. (Joint work with Didier Chételat.)
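    The subadditive primal problem builds on the classical fact that, in the pure-integer case $\min\{c^\top x : A x \ge b,\ x \in \mathbb{Z}^n_{\ge 0}\}$, every nondecreasing subadditive function $F$ with $F(0) = 0$ yields the valid inequality

    $$\sum_{j=1}^{n} F(A_j)\, x_j \ \ge\ F(b),$$

    where $A_j$ is the $j$-th column of $A$; Gomory-type cuts correspond to particular parametrized families of such functions, and the continuous reformulation above optimizes over the weights of one such family.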
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Large Neighborhood Search for ILPs with Relaxation-based and Contrastive-Learning-based Heuristics
    11th Floor Lecture Hall
    • Virtual Speaker
    • Bistra Dilkina, University of Southern California
    • Session Chair
    • Tobias Achterberg, Gurobi Optimization
    Abstract
    Large Neighborhood Search (LNS) is an effective heuristic algorithm for finding high-quality solutions of combinatorial optimization problems, including large-scale Integer Linear Programs (ILPs). It starts with an initial solution to the problem and iteratively improves it by searching a large neighborhood around the current best solution. LNS relies on destroy heuristics to select neighborhoods to search in, by un-assigning a subset of the variables and then re-optimizing their assignment. We focus on designing effective and efficient heuristics in LNS for ILPs. Local Branching (LB) is a heuristic that selects the neighborhood leading to the largest improvement over the current solution in each iteration of LNS (the optimal subset of variables to ‘destroy’ within a Hamming ball around the incumbent solution). LB is often slow since it needs to solve an ILP of the same size as the input. First, we propose LB-relaxation-based heuristics that compute neighborhoods as effective as LB's but run faster and achieve state-of-the-art anytime performance on several ILP benchmarks. Next, we propose a novel ML-based LNS for ILPs, namely CL, that uses contrastive learning to learn to imitate the LB heuristic. We not only use the optimal subsets provided by LB as the expert demonstration, but also leverage intermediate solutions and perturbations to collect sets of positive and negative samples. We use graph attention networks and a richer set of features to further improve its performance. Empirically, CL outperforms state-of-the-art ML and non-ML approaches at different runtime cutoffs in terms of multiple metrics, including the primal gap, the primal integral, the best-performing rate, and the survival rate, and achieves good generalization performance.
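    For reference, the Local Branching neighborhood around an incumbent $\bar{x}$ of a 0-1 ILP is the Hamming ball defined by adding the constraint

    $$\sum_{j : \bar{x}_j = 1} (1 - x_j) \;+\; \sum_{j : \bar{x}_j = 0} x_j \ \le\ k,$$

    i.e., at most $k$ binary variables may flip. LB re-solves the original ILP with this constraint added, which is why each iteration can be as hard as the input; the relaxation-based and contrastive-learning heuristics above aim to pick a comparably effective set of variables to un-assign at a fraction of that cost.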
  • 12:30 - 2:30 pm EDT
    Lunch/Free Time

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).
