Combinatorial Optimization and Learning

Workshop ● Aachen, Germany ● 6-7 November 2025

The Workshop

This workshop aims to foster scientific exchange at the intersection of combinatorial optimization and machine learning. Combinatorial optimization provides rigorous algorithmic frameworks with provable guarantees, while machine learning leverages empirical data to achieve strong performance in practice. The focus is on algorithmic and theoretical advances that combine both paradigms, including, but not limited to, learning-augmented algorithms, data-driven optimization, ML-accelerated exact methods, and optimization under uncertainty. The workshop serves as a forum for presenting novel models, algorithmic strategies, and analytical insights in this emerging research area.

The Speakers

The workshop will feature a diverse lineup of speakers, each bringing unique perspectives on the intersection of combinatorial optimization and machine learning. The confirmed speakers are:

Prof. Dr. Andrea Lodi
Prof. Dr. Nicole Megow
Prof. Dr. Guido Schäfer
Prof. Dr. Martin Skutella
Dr. Jan Tönshoff
Prof. Dr. Yingqian Zhang

Additionally, we will have a series of elevator pitches (each 5 minutes) from early-career researchers and PhD students, providing a platform for emerging voices in the field to share their latest findings and ideas.

The Schedule

The workshop will take place over two days, with a mix of keynote talks, elevator pitches, and time for discussions. This is the preliminary schedule:

Thursday
14:30  Coffee & Registration
15:00  Welcome
15:15  Martin Skutella
16:15  Elevator Pitches
16:45  Coffee Break
17:15  Yingqian Zhang
18:15  Guido Schäfer
19:15  Conference Dinner

Friday
08:30  Coffee
09:00  Jan Tönshoff
10:00  Coffee Break
10:30  Nicole Megow
11:30  Elevator Pitches
12:00  Lunch
13:30  Andrea Lodi
14:30  Closing Remarks

The Location

Burg Frankenberg - Venue & Spirit of the Conference

Burg Frankenberg, a 13th-century moated castle in the heart of Aachen's Frankenberger Viertel, now serves as a vibrant cultural and community center. With its rich history and inspiring atmosphere, it offers the perfect setting for our workshop. All sessions will take place in the main hall of the Burg, which accommodates up to 50 participants. The conference dinner will also be held on-site, inviting all attendees to enjoy an evening of exchange and local charm within the historic walls.

The Committee

Sponsors

The workshop is made possible through the support of several institutions. Their contributions help us cover the costs of the venue, catering, and other logistical aspects of the event.

Registration

To register for the workshop, please fill out the registration form linked below.
Early registration is encouraged, as space is limited to 50 seats and slots will be assigned on a first-come, first-served basis.


The Differentiable Feasibility Pump

Prof. Dr. Andrea Lodi

Although nearly 20 years have passed since its conception, the feasibility pump algorithm remains a widely used heuristic to find feasible primal solutions to mixed-integer linear problems. Many extensions of the initial algorithm have been proposed. Yet the core of the algorithm remains centered on two key steps: solving the linear relaxation of the original problem to obtain a solution that respects the constraints, and rounding it to obtain an integer solution. This paper shows that the traditional feasibility pump and many of its follow-ups can be seen as gradient-descent algorithms with specific parameters. A central aspect of this reinterpretation is observing that the traditional algorithm differentiates the solution of the linear relaxation with respect to its cost. This reinterpretation opens many opportunities for improving the performance of the original algorithm. We study how to modify the gradient-update step as well as how to extend the loss function. We perform extensive experiments on MIPLIB instances and show that these modifications can substantially reduce the number of iterations needed to find a solution. (Joint work with M. Cacciola, Y. Emine, A. Forel, A. Frangioni.)
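To make the two core steps named above concrete, here is a minimal sketch of the classical feasibility pump for a pure binary MILP, using scipy's LP solver. It is not the differentiable reinterpretation discussed in the talk; the cycle-prevention step is simplified, and the code assumes the LP relaxations are feasible.

```python
# Minimal sketch of the classical feasibility pump for min c^T x s.t. A_ub x <= b_ub,
# x in {0,1}^n. Alternates LP solves (constraint feasibility) with rounding (integrality).
import numpy as np
from scipy.optimize import linprog

def feasibility_pump(c, A_ub, b_ub, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(c)
    bounds = [(0.0, 1.0)] * n
    # Step 0: solve the LP relaxation of the original objective.
    x_lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    for _ in range(max_iters):
        x_int = np.round(x_lp)                    # rounding step
        if np.all(A_ub @ x_int <= b_ub + 1e-9):   # feasible integer point found
            return x_int
        # Distance objective: minimize the L1 distance to the rounded point,
        # which for binaries is linear: sum_{x_int=0} x_j + sum_{x_int=1} (1 - x_j).
        dist_c = np.where(x_int > 0.5, -1.0, 1.0)
        x_lp = linprog(dist_c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
        # Simplified cycle prevention: perturb if rounding would repeat itself.
        if np.allclose(np.round(x_lp), x_int):
            flip = rng.choice(n, size=max(1, n // 10), replace=False)
            x_lp[flip] = 1.0 - x_int[flip]
    return None  # no feasible point found within the iteration limit
```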


One-Shot SAT Solver Guidance with Graph Neural Networks

Dr. Jan Tönshoff

Boolean Satisfiability (SAT) solvers are foundational to computer science, yet their performance typically hinges on hand-crafted heuristics. We present Reinforcement Learning from Algorithm Feedback (RLAF) as a paradigm for learning to guide SAT solver branching heuristics with Graph Neural Networks (GNNs).

Central to this approach is a novel and generic mechanism for injecting inferred variable weights and polarities into the branching heuristics of existing SAT solvers. In a single forward pass, a GNN assigns these parameters to all variables. Casting this one-shot guidance as a reinforcement learning problem lets us train the GNN with off-the-shelf policy-gradient methods, such as GRPO, directly using the solver's computational cost as the sole reward signal. The learned policies consistently accelerate a range of base solvers and outperform expert-supervised approaches based on learning handcrafted weighting heuristics, offering a promising path towards data-driven heuristic design in combinatorial optimization.
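As a rough illustration of the training loop described above, the sketch below uses a plain REINFORCE update rather than GRPO, replaces the GNN with a small per-variable MLP for brevity, and assumes a hypothetical run_solver(weights, polarities) interface that returns the solver's computational cost; none of these stand in for the actual RLAF implementation.

```python
# Hypothetical sketch: one-shot guidance parameters sampled per variable and
# trained with a policy-gradient (REINFORCE) update on the solver's cost.
import torch
import torch.nn as nn

class GuidancePolicy(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.weight_head = nn.Linear(hidden, 1)    # branching weight per variable
        self.polarity_head = nn.Linear(hidden, 1)  # preferred polarity per variable

    def forward(self, var_feats):                  # var_feats: (n_vars, feat_dim)
        h = self.body(var_feats)
        return self.weight_head(h).squeeze(-1), self.polarity_head(h).squeeze(-1)

def reinforce_step(policy, optimizer, var_feats, run_solver, baseline=0.0):
    """One policy-gradient update using the solver's cost as the sole reward signal."""
    weight_mean, polarity_logit = policy(var_feats)
    weight_dist = torch.distributions.Normal(weight_mean, 1.0)
    polarity_dist = torch.distributions.Bernoulli(logits=polarity_logit)
    weights = weight_dist.sample()
    polarities = polarity_dist.sample()
    cost = run_solver(weights.detach(), polarities.detach())  # e.g. decisions or conflicts
    reward = -(cost - baseline)                               # lower cost = higher reward
    log_prob = weight_dist.log_prob(weights).sum() + polarity_dist.log_prob(polarities).sum()
    loss = -reward * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return cost
```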


From Random Heuristics to Smarter Policies: Exploring Offline RL in Combinatorial Optimization

Prof. Dr. Yingqian Zhang

The application of deep reinforcement learning (RL) to combinatorial optimization problems (COPs) has become an active research area at the intersection of AI and OR. Most existing approaches rely on online RL, where an agent learns by interacting with a surrogate environment. Such methods typically require an enormous number of interactions to learn effective policies, as they often start from random behavior. This seems inefficient, given that many well-established algorithms already exist for generating good-quality solutions to COPs.

In this line of work, we explore an offline RL approach to address the following question: Can we leverage existing algorithms to generate solutions for COPs that help train better heuristics?

Our initial findings suggest the answer is yes. In addition, the proposed offline RL method is highly sample efficient, requiring only 10-20 training instances to learn high-quality policies for (flexible) job shop scheduling problems. Interestingly, we also observe that the offline RL method performs better when trained on data produced by a random heuristic than when trained on higher-quality data generated by genetic algorithms or priority dispatching rules. These insights open new directions for understanding how data quality influences learning to solve COPs.
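To make the data-collection idea concrete, here is a toy sketch of building an offline dataset from an existing heuristic. It uses a random dispatching rule on a single-machine total-completion-time problem purely for illustration; the talk concerns (flexible) job shop scheduling and a specific offline RL method that is not reproduced here.

```python
# Toy illustration: log (state, action, reward, next_state) transitions produced
# by a random dispatching heuristic, to be consumed later by an offline RL method.
import random

def collect_offline_data(processing_times, num_episodes=10, seed=0):
    rng = random.Random(seed)
    dataset = []
    for _ in range(num_episodes):
        remaining = list(range(len(processing_times)))
        elapsed = 0.0
        while remaining:
            state = (tuple(sorted(remaining)), elapsed)
            job = rng.choice(remaining)        # the "existing heuristic": random dispatching
            elapsed += processing_times[job]
            remaining.remove(job)
            reward = -elapsed                  # negative completion time of the dispatched job
            next_state = (tuple(sorted(remaining)), elapsed)
            dataset.append((state, job, reward, next_state))
    return dataset

# The resulting transitions could then be fed to any offline RL algorithm
# (e.g. behaviour cloning or conservative Q-learning) to train a dispatching policy.
data = collect_offline_data([3.0, 1.0, 4.0, 2.0], num_episodes=5)
print(len(data), data[0])
```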


Generalized Assignment Mechanisms with Predictions

Prof. Dr. Guido Schäfer

The growing field of algorithms with predictions explores how algorithmic decisions can be guided by data-driven predictions while maintaining rigorous performance guarantees. Recently, this perspective has also emerged as a new line of research in mechanism design, where agents are strategic and may manipulate the inputs to the algorithm. In this talk, we discuss recent progress at the intersection of learning-augmented algorithms and incentive-compatible mechanism design.

We study how predictions can be leveraged to design mechanisms with enhanced performance guarantees, balancing the trade-off between consistency (performing optimally when predictions are accurate) and robustness (preserving guarantees when predictions are imperfect). As a main example, we focus on strategic variants of the generalized assignment problem, where predictions of optimal outcomes guide the design of truthful mechanisms that achieve an optimal consistency-robustness trade-off. Remarkably, our core mechanism builds on the celebrated Gale-Shapley algorithm for stable matchings, illustrating that predictions can be incorporated into strategyproof mechanisms in a surprising and elegant way.
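Since the mechanism builds on Gale-Shapley, a minimal sketch of the classical deferred-acceptance algorithm is included below for reference. This is the textbook proposer-optimal version for one-to-one matching, not the prediction-augmented mechanism from the talk.

```python
# Classical Gale-Shapley deferred acceptance: proposers propose in preference
# order; receivers tentatively hold their best proposal so far.
def gale_shapley(proposer_prefs, receiver_prefs):
    """proposer_prefs[p] is p's ranked list of receivers; likewise for receivers.
    Returns a dict mapping each proposer to their matched receiver."""
    # rank[r][p] = position of proposer p in receiver r's preference list
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # index of next receiver to propose to
    current = {}                                   # receiver -> tentatively matched proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in current:
            current[r] = p                         # r tentatively accepts p
        elif rank[r][p] < rank[r][current[r]]:
            free.append(current[r])                # r trades up; old partner is free again
            current[r] = p
        else:
            free.append(p)                         # r rejects p
    return {p: r for r, p in current.items()}

# Tiny usage example with three proposers and three receivers.
proposers = {"a": ["x", "y", "z"], "b": ["y", "x", "z"], "c": ["x", "z", "y"]}
receivers = {"x": ["b", "a", "c"], "y": ["a", "b", "c"], "z": ["a", "b", "c"]}
print(gale_shapley(proposers, receivers))   # stable, proposer-optimal matching
```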


On the Expressivity and Complexity of ReLU Neural Networks

Prof. Dr. Martin Skutella

Neural networks with ReLU activation play a key role in modern machine learning. We study basic properties of piecewise linear functions represented by such networks, revealing interesting connections to problems in combinatorial optimization and polyhedral theory.
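As a concrete instance of a piecewise linear function represented exactly by a small ReLU network, the standard textbook construction below realizes the maximum of two numbers via max(a, b) = (a + b)/2 + (ReLU(a - b) + ReLU(b - a))/2; it is included only to make the notion tangible and is not part of the talk's results.

```python
# Numerical check that max(a, b) is exactly representable with ReLU units.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_max(a, b):
    # Hidden layer: ReLU(a - b) and ReLU(b - a); output: affine combination.
    # The affine term (a + b)/2 can itself be written with ReLU units via
    # x = ReLU(x) - ReLU(-x), so the whole map is a depth-2 ReLU network.
    return 0.5 * (a + b) + 0.5 * (relu(a - b) + relu(b - a))

rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(size=1000)
assert np.allclose(relu_max(a, b), np.maximum(a, b))
```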

Classical problems in combinatorial optimization can be solved via ReLU networks of bounded size. One example is the Knapsack Problem, for which we discuss an explicit trade-off between network size and approximation ratio, thus illuminating how expressivity in neural networks maps to solution quality in a combinatorial setting.

Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, progress has been made on understanding the class of functions that can be represented by a neural network with ReLU activations and a given architecture. We focus on the required depth of neural networks for representing piecewise linear functions and, in particular, discuss recent lower bounds.

In addition, we study the computational complexity of deciding whether a given ReLU network is injective, surjective, or satisfies certain verification properties. Our results demonstrate that these problems are computationally intractable in general, thereby shedding light on fundamental limitations of analyzing and verifying neural networks from a complexity-theoretic perspective.

The lecture is based on joint work with Christoph Hertrich, Amitabh Basu, Marco Di Summa, Vincent Froese, and Moritz Grillo.


The Power of Two Oracles: Minimum Spanning Tree and Matroid Optimization

Prof. Dr. Nicole Megow

Learning-augmented algorithms benefit from high-quality predictions while remaining robust under unreliable ones. Querying large models such as neural networks can provide highly accurate predictions, but often at significant computational cost. At the same time, lightweight heuristics or smaller models can return approximate information much faster, albeit with reduced reliability. In this talk, we introduce a two-oracle model that formalizes this setting by giving algorithms access to both a fast but noisy oracle and a slow but accurate one, enabling them to balance efficiency and solution quality.

We discuss this model in the context of matroid optimization problems, which generalize many classical problems in combinatorial optimization. Our focus is on the maximum-weight basis problem and its special case, the minimum spanning tree. We analyze scenarios involving two different kinds of information: a structural oracle (the independence oracle) and a numerical oracle (element weights). We design algorithms that make provably few calls to the clean oracle while remaining robust against arbitrarily poor dirty oracles, thereby approaching the performance of classical algorithms.
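The sketch below only illustrates the two-oracle interface for the minimum spanning tree: it runs plain Kruskal against either a cheap "dirty" weight oracle or an expensive "clean" one and counts the queries issued. The toy graph and noise model are made up for illustration, and the query-efficient algorithms from the talk are not reproduced here.

```python
# Illustration of the two-oracle model: count weight queries made by Kruskal
# when driven by a noisy (dirty) or an accurate (clean) weight oracle.
import random
from typing import Callable, Iterable, Tuple

Edge = Tuple[int, int]

def kruskal(n: int, edges: Iterable[Edge], weight: Callable[[Edge], float]):
    """Return (tree_edges, number_of_weight_queries) for Kruskal's algorithm."""
    queries = 0
    def w(e):
        nonlocal queries
        queries += 1
        return weight(e)

    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    tree = []
    for e in sorted(edges, key=w):          # one weight query per edge
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree, queries

# Example: the dirty oracle returns a noisy perturbation of the clean weights.
random.seed(0)
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
clean = {e: w for e, w in zip(edges, [1.0, 2.0, 2.5, 1.5, 3.0])}
dirty = {e: w + random.uniform(-0.3, 0.3) for e, w in clean.items()}

tree_dirty, q_dirty = kruskal(4, edges, lambda e: dirty[e])
tree_clean, q_clean = kruskal(4, edges, lambda e: clean[e])
print(tree_dirty, q_dirty, tree_clean, q_clean)
```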