Combinatorial Optimization and Learning

Workshop ● Aachen, Germany ● 6-7 November 2025

The Workshop

This workshop aims to foster scientific exchange at the intersection of combinatorial optimization and machine learning. Combinatorial optimization provides rigorous algorithmic frameworks with provable guarantees, while machine learning leverages empirical data to achieve strong performance in practice. The focus is on algorithmic and theoretical advances that combine both paradigms, including, but not limited to, learning-augmented algorithms, data-driven optimization, ML-accelerated exact methods, and optimization under uncertainty. The workshop serves as a forum for presenting novel models, algorithmic strategies, and analytical insights in this emerging research area.

The Speakers

The workshop will feature a diverse lineup of speakers, each bringing a unique perspective on the intersection of combinatorial optimization and machine learning. The confirmed speakers include:

Additionally, we will have a series of elevator pitches (5 minutes each) from early-career researchers and PhD students, providing a platform for emerging voices in the field to share their latest findings and ideas.

The Schedule

The workshop will take place over two days, with a mix of keynote talks, elevator pitches, and time for discussions. This is the preliminary schedule:

Thursday
14:30  Coffee & Registration
15:00  Welcome
15:15  Invited Talk
16:15  Elevator Pitches
16:45  Coffee Break
17:15  Invited Talk
18:15  Invited Talk
19:15  Conference Dinner

Friday
08:30  Coffee
09:00  Invited Talk
10:00  Invited Talk
11:00  Coffee Break
11:30  Elevator Pitches
12:00  Lunch
13:30  Invited Talk
14:30  Closing Remarks

The Location

Burg Frankenberg - Venue & Spirit of the Conference

Burg Frankenberg, a 13th-century water castle in the heart of Aachen's Frankenberger Viertel, now serves as a vibrant cultural and community center. With its rich history and inspiring atmosphere, it offers the perfect setting for our workshop. All sessions will take place in the main hall of the Burg, which accommodates up to 50 participants. The conference dinner will also be held on-site, inviting all attendees to enjoy an evening of exchange and local charm within the historic walls.

The Committee

Sponsors

The workshop is made possible through the support of various institutions. Their contributions help us cover the costs of the venue, catering, and other logistical aspects of the event.

Registration

To register for the workshop, please fill out the registration form linked below.
Early registration is encouraged, as space is limited to 50 seats and slots will be assigned on a first-come, first-served basis.


The differentiable Feasibility Pump

Prof. Dr. Andrea Lodi

Although nearly 20 years have passed since its conception, the feasibility pump algorithm remains a widely used heuristic to find feasible primal solutions to mixed-integer linear problems. Many extensions of the initial algorithm have been proposed. Yet the core algorithm remains centered on two key steps: solving the linear relaxation of the original problem to obtain a solution that respects the constraints, and rounding it to obtain an integer solution. This paper shows that the traditional feasibility pump and many of its follow-ups can be seen as gradient-descent algorithms with specific parameters. A central aspect of this reinterpretation is observing that the traditional algorithm differentiates the solution of the linear relaxation with respect to its cost. This reinterpretation opens many opportunities for improving the performance of the original algorithm. We study how to modify the gradient-update step as well as how to extend its loss function. We perform extensive experiments on MIPLIB instances and show that these modifications can substantially reduce the number of iterations needed to find a solution. (Joint work with M. Cacciola, Y. Emine, A. Forel, A. Frangioni).
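To make the two steps mentioned in the abstract concrete, the sketch below implements the classical feasibility pump loop (LP relaxation, rounding, L1 projection, random restarts on cycling) for a pure binary feasibility problem. It is not the differentiable, gradient-descent variant presented in the talk; it only illustrates the baseline algorithm being reinterpreted. SciPy's linprog is assumed available, and the instance at the bottom is a made-up toy example.

import numpy as np
from scipy.optimize import linprog

def feasibility_pump(A_ub, b_ub, n, max_iters=50, seed=0):
    """Classical feasibility pump sketch for a binary feasibility problem
    A_ub @ x <= b_ub with x in {0, 1}^n. Returns a feasible 0/1 vector or None."""
    rng = np.random.default_rng(seed)
    bounds = [(0, 1)] * n

    # Step 1: solve an LP relaxation (zero objective here) and round it.
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:
        return None  # the LP relaxation itself is infeasible
    x_int = np.round(res.x)

    for _ in range(max_iters):
        if np.all(A_ub @ x_int <= b_ub + 1e-9):
            return x_int  # the rounded point already satisfies all constraints

        # Step 2 (projection): find the LP-feasible point closest in L1 distance
        # to the current integer point. For binary targets the distance is linear:
        #   sum_j |x_j - xt_j| = const + sum_j (1 - 2 * xt_j) * x_j
        c = 1.0 - 2.0 * x_int
        res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        x_new = np.round(res.x)

        if np.array_equal(x_new, x_int):
            # Cycling: flip a few random coordinates (standard restart trick).
            flip = rng.choice(n, size=max(1, n // 10), replace=False)
            x_new[flip] = 1.0 - x_new[flip]
        x_int = x_new

    return None

# Toy instance: x1 + x2 >= 1 and x1 + x3 >= 1, rewritten as <= constraints.
A = np.array([[-1.0, -1.0, 0.0], [-1.0, 0.0, -1.0]])
b = np.array([-1.0, -1.0])
print(feasibility_pump(A, b, n=3))

The gradient-descent reinterpretation described in the talk keeps this same outer structure but treats the rounding/projection updates as parameterized gradient steps, which is what allows the update rule and loss function to be modified and tuned.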


One-Shot SAT Solver Guidance with Graph Neural Networks

Dr. Jan Tönshoff

Boolean Satisfiability (SAT) solvers are foundational to computer science, yet their performance typically hinges on hand-crafted heuristics. We present Reinforcement Learning from Algorithm Feedback (RLAF) as a paradigm for learning to guide SAT solver branching heuristics with Graph Neural Networks (GNNs).

Central to this approach is a novel and generic mechanism for injecting inferred variable weights and polarities into the branching heuristics of existing SAT solvers. In a single forward pass, a GNN assigns these parameters to all variables. Casting this one-shot guidance as a reinforcement learning problem lets us train the GNN with off-the-shelf policy-gradient methods, such as GRPO, directly using the solver's computational cost as the sole reward signal. The learned policies consistently accelerate a range of base solvers and outperform expert-supervised approaches based on learning handcrafted weighting heuristics, offering a promising path towards data-driven heuristic design in combinatorial optimization.
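As a rough illustration of the "single forward pass" idea, the following PyTorch sketch builds a small message-passing network over the clause-variable bipartite graph of a CNF formula and outputs one positive weight and one polarity score per variable. The architecture, dimensions, and class name are invented for illustration; this is not the RLAF model, its solver interface, or its GRPO training loop.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OneShotSATGuide(nn.Module):
    """Toy bipartite message-passing network over a CNF formula.

    One forward pass assigns a positive branching weight and a polarity score
    to every variable. Illustrative only; not the architecture from the talk."""

    def __init__(self, dim=64, rounds=4):
        super().__init__()
        self.rounds = rounds
        self.var_init = nn.Parameter(torch.randn(dim))
        self.cls_init = nn.Parameter(torch.randn(dim))
        self.var_update = nn.GRUCell(dim, dim)
        self.cls_update = nn.GRUCell(dim, dim)
        self.msg_v2c = nn.Linear(dim + 1, dim)  # extra channel encodes literal sign
        self.msg_c2v = nn.Linear(dim + 1, dim)
        self.head = nn.Linear(dim, 2)  # -> (weight logit, polarity logit)

    def forward(self, clauses, num_vars):
        # clauses: list of clauses with DIMACS-style literals, e.g. [[1, -2], [2, 3]]
        edges = [(abs(l) - 1, ci, 1.0 if l > 0 else -1.0)
                 for ci, clause in enumerate(clauses) for l in clause]
        v_idx = torch.tensor([e[0] for e in edges])
        c_idx = torch.tensor([e[1] for e in edges])
        sign = torch.tensor([e[2] for e in edges]).unsqueeze(1)

        v = self.var_init.expand(num_vars, -1).contiguous()
        c = self.cls_init.expand(len(clauses), -1).contiguous()
        for _ in range(self.rounds):
            # variable -> clause messages, summed per clause
            m = self.msg_v2c(torch.cat([v[v_idx], sign], dim=1))
            c = self.cls_update(torch.zeros_like(c).index_add_(0, c_idx, m), c)
            # clause -> variable messages, summed per variable
            m = self.msg_c2v(torch.cat([c[c_idx], sign], dim=1))
            v = self.var_update(torch.zeros_like(v).index_add_(0, v_idx, m), v)

        out = self.head(v)
        weights = F.softplus(out[:, 0])        # positive branching weights
        polarities = torch.sigmoid(out[:, 1])  # preferred phase in [0, 1]
        return weights, polarities

# One forward pass over (x1 or not x2) and (x2 or x3) and (not x1 or not x3).
guide = OneShotSATGuide()
w, p = guide([[1, -2], [2, 3], [-1, -3]], num_vars=3)
print(w.shape, p.shape)  # torch.Size([3]) torch.Size([3])

In the approach described in the abstract, outputs of this kind would be handed to the branching heuristic of an existing SAT solver, and the GNN would be trained with a policy-gradient method using the solver's computational cost as the reward.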