09:20 – 09:50
Coffee and registration 
09:50 – 10:00
Opening Remarks 
10:00 – 11:00
Keynote Lecture 1: Couples can be tractable: New algorithms and hardness results for the Hospitals / Residents problem with Couples
David Manlove, University of Glasgow
Abstract
The Hospitals / Residents problem with Couples (HRC) models the problem of allocating junior doctors to hospital posts, which has applications in centralised clearinghouses around the world. One such example is the National Resident Matching Program (NRMP) in the US, which handles over 40,000 applicants annually. In HRC, a solution is a stable matching or a report that none exists. Informally, a stable matching ensures that no single doctor or couple, and no single hospital or pair of hospitals, would prefer to be matched to one another rather than remain with their current assignment(s).
A key challenge is that an HRC instance need not admit a stable matching. We present a novel polynomial-time algorithm that can always find a near-feasible stable matching (adjusting the hospitals' capacities by at most 1) in an HRC instance where the couples' preferences are sub-responsive (i.e., if one member switches to a better hospital, then the couple also improves) and sub-complete (i.e., each pair of hospitals that are individually acceptable to both members is jointly acceptable for the couple), by reducing it to an instance of the Stable Fixtures problem. We also present a polynomial-time algorithm for HRC in a sub-responsive, sub-complete instance that is a Dual Market, or where all couples are one of several possible types.
We complement our algorithms with several hardness results. We show that HRC with sub-responsive and sub-complete couples is NP-hard, even with other strong restrictions. We also show that HRC with a Dual Market is NP-hard under several simultaneous restrictions. Finally, we show that the problem of finding a matching with the minimum number of blocking pairs in HRC is not approximable within $m^{1-\varepsilon}$, for any $\varepsilon > 0$, where $m$ is the total length of the hospitals' preference lists, unless P = NP, even if each couple applies to only one pair of hospitals. Our polynomial-time solvability results greatly expand the class of known tractable instances of HRC and provide additional evidence as to why long-standing entry-level labour markets that allow couples, such as the NRMP, remain successful to this day.
This is joint work with Gergely Csáji, Iain McBride and James Trimble. An extended abstract appears in the Proceedings of IJCAI 2024. The full paper can be found at https://arxiv.org/abs/2311.00405.
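The informal stability condition in the abstract can be made concrete in the basic Hospitals / Residents setting (without couples) that HRC extends. The sketch below, using a hypothetical dictionary-based data model of our own rather than anything from the paper, checks whether any doctor-hospital pair blocks a given matching:

```python
def is_stable(matching, doctor_prefs, hospital_prefs, capacities):
    """Stability check for the basic Hospitals/Residents setting (no couples).

    matching: dict doctor -> hospital (or None if unassigned)
    doctor_prefs: dict doctor -> list of hospitals, best first
    hospital_prefs: dict hospital -> list of doctors, best first
    capacities: dict hospital -> number of posts
    """
    assigned = {h: [d for d, hh in matching.items() if hh == h]
                for h in hospital_prefs}
    for d, prefs in doctor_prefs.items():
        current = matching.get(d)
        cur_rank = prefs.index(current) if current in prefs else len(prefs)
        for h in prefs[:cur_rank]:          # hospitals d strictly prefers
            if d not in hospital_prefs[h]:  # d is unacceptable to h
                continue
            rank = hospital_prefs[h].index
            if len(assigned[h]) < capacities[h]:
                return False                # (d, h) blocks: h has a free post
            if assigned[h] and rank(d) < max(rank(x) for x in assigned[h]):
                return False                # (d, h) blocks: h prefers d to its worst assignee
    return True
```

In HRC the blocking condition additionally ranges over couples and pairs of hospitals, which is exactly what can destroy the existence of a stable matching.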

11:00 – 11:30
Coffee break 
11:30 – 12:50
Session 1:

Assignment Mechanisms with Predictions in the Private Graph Model
Artem Tsikiridis, CWI
Abstract
The realm of algorithms with predictions has led to the development of several new algorithms that leverage predictions to enhance their performance guarantees. The challenge is to devise algorithms that achieve optimal approximation guarantees as the prediction quality varies from perfect (consistency) to imperfect (robustness). This framework is particularly appealing in mechanism design contexts, where predictions might convey private information about the agents.
In this paper, we design strategyproof mechanisms that leverage predictions to achieve improved approximation guarantees for several variants of the Generalized Assignment Problem (GAP) in the private graph model. In this model, first introduced by Dughmi & Ghosh (2010), the set of resources that an agent is compatible with is private information.
For the Bipartite Matching Problem (BMP), we give a deterministic group-strategyproof (GSP) mechanism that is $(1+\frac{1}{\gamma})$-consistent and $(1+\gamma)$-robust, where $\gamma \ge 1$ is some confidence parameter. We also prove that this is best possible. Remarkably, our mechanism draws inspiration from the renowned Gale-Shapley algorithm, incorporating predictions as a crucial element. Additionally, we give a randomized mechanism that is universally GSP and improves on the guarantees in expectation.
The other GAP variants that we consider all make use of a unified greedy mechanism that adds edges to the assignment according to a specific order.
For a special case of Restricted Multiple Knapsack, this results in a deterministic strategyproof mechanism that is $(1+\frac{1}{\gamma})$-consistent and $(2+\gamma)$-robust.
We then focus on two variants: Agent Size GAP (where each agent has one size) and Value Consensus GAP (where all agents have the same preference order over resources). For both variants, our universally GSP mechanisms randomize over the greedy mechanism, our mechanism for BMP and the predicted assignment, leading to $(1+\frac{3}{\gamma})$-consistency and $(3+\gamma)$-robustness in expectation. All our mechanisms also provide more fine-grained approximation guarantees that interpolate between the consistency and robustness guarantees, depending on some natural error measure of the prediction.
Joint work with: Riccardo Colini-Baldeschi (Meta), Sophie Klumper (CWI & University of Amsterdam) and Guido Schaefer (CWI & University of Amsterdam)

Laminar Matroid Secretary via Parking Functions
Vasilis Livanos, University of Chile / EPFL

On The Pursuit of EFX for Chores: Non-Existence and Approximations
Vasilis Christoforidis, Aristotle University of Thessaloniki &
Archimedes/Athena RC
Abstract
We study the problem of fairly allocating a set of chores to a group of agents. The existence of envy-free up to any item (EFX) allocations is a long-standing open question for both goods and chores. We resolve this question by providing a negative answer for the latter, presenting a simple construction that admits no EFX solutions for allocating six items to three agents equipped with superadditive cost functions, thus proving a separation result between goods and bads. In fact, we uncover a deeper insight, showing that the instance has an unbounded approximation ratio. Moreover, we show that deciding whether an EFX allocation exists is NP-complete. On the positive side, we establish the existence of EFX allocations under general monotone cost functions when the number of items is at most n+2. We then shift our attention to additive cost functions. We employ a general framework in order to improve the approximation guarantees in the well-studied case of three additive agents, and provide several conditional approximation bounds that leverage ordinal information.
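As a concrete illustration of the EFX condition for chores (our own sketch, not the paper's construction), the check below verifies EFX for additive cost functions: removing any single chore from an agent's own bundle must eliminate its envy towards every other agent.

```python
def is_efx_chores(bundles, costs):
    """Check EFX for chores with additive costs.

    bundles: list of sets, bundles[i] = chores assigned to agent i
    costs: costs[i][e] = cost of chore e for agent i
    EFX (chores): for all agents i != j and every chore e in agent i's own
    bundle, cost_i(bundles[i] - {e}) <= cost_i(bundles[j]).
    """
    def cost(i, bundle):
        return sum(costs[i][e] for e in bundle)

    n = len(bundles)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for e in bundles[i]:
                if cost(i, bundles[i] - {e}) > cost(i, bundles[j]):
                    return False
    return True
```

The paper's non-existence result uses superadditive (not additive) cost functions; the additive case above is just the simplest setting in which the definition can be exercised.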

Mechanism design augmented with output advice
Ioannis Vlachos, Athens University of Economics and Business &
Archimedes/Athena RC
Abstract
Our work revisits the design of mechanisms via the learning-augmented framework. In this model, the algorithm is enhanced with imperfect (machine-learned) information concerning the input, usually referred to as a prediction. The goal is to design algorithms whose performance degrades gently as a function of the prediction error and, in particular, that perform well if the prediction is accurate, but also provide a worst-case guarantee under any possible error. This framework has recently been applied successfully to various mechanism design settings, where in most cases the mechanism is provided with a prediction about the types of the agents.
We adopt a perspective in which the mechanism is provided with an output recommendation. We make no assumptions about the quality of the suggested outcome, and the goal is to use the recommendation to design mechanisms with low approximation guarantees whenever the recommended outcome is reasonable, but at the same time to provide worst-case guarantees whenever the recommendation deviates significantly from the optimal one. We propose a generic, universal measure, which we call the quality of recommendation, to evaluate mechanisms across various information settings. We demonstrate how this new metric can provide refined analysis in existing results.
This model introduces new challenges, as the mechanism receives limited information compared to settings that use predictions about the agents' types. We study, through this lens, several well-studied mechanism design paradigms, devising new mechanisms but also providing refined analysis for existing ones, using the quality of recommendation as a metric. We complement our positive results by exploring the limitations of known classes of strategyproof mechanisms that can be devised using output recommendations.

13:00 – 14:30
Lunch break 
14:30 – 15:30
Keynote Lecture 2: The Long-Run Distribution of Stochastic Gradient Descent: A Large Deviations Analysis
Panagiotis Mertikopoulos, National and Kapodistrian University of Athens
& Archimedes/Athena RC
Abstract
This talk investigates the long-run state distribution of stochastic gradient descent (SGD) in general, non-convex problems; namely, which regions of the problem's state space are more likely to be visited by SGD, and by how much. Using an approach based on the theory of large deviations and randomly perturbed dynamical systems, we show that the long-run distribution of SGD resembles the Boltzmann-Gibbs distribution of equilibrium thermodynamics, with temperature equal to the method's step-size and energy levels determined by the problem's objective and the statistics of the noise. In particular, we show that, in the long run, (i) the problem's critical region is visited exponentially more often than any non-critical region; (ii) the iterates of SGD are exponentially concentrated around the problem's minimum energy state (which does not always coincide with the global minimum of the objective); (iii) all other components of critical points are visited with a frequency that is exponentially proportional to their energy level; and, finally, (iv) every non-minimizing component is "dominated" by a minimizing component that is visited exponentially more often.
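In symbols (our notation, not necessarily the talk's), the claimed long-run law has the Gibbs form

```latex
\pi_{\eta}(x) \;\propto\; \exp\!\left(-\frac{E(x)}{\eta}\right),
```

where $\eta$ is the method's step-size (playing the role of temperature) and $E$ is an energy function determined by the objective and the noise statistics; as $\eta \to 0$, $\pi_{\eta}$ concentrates exponentially on the minimum-energy states.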

15:30 – 16:00
Coffee break 
16:00 – 17:15
Session 2:

Large Scale Consensus Protocols
Aggelos Kiayias, University of Edinburgh & IOG

Deterministic Byzantine Agreement with Adaptive O(n · f) Communication
Georgios Tsimos, University of Maryland

Deterministic Leader Election for Stationary Programmable Matter with Common Direction
Maria Kokkou, AixMarseille University
Abstract
Leader Election is an important primitive for programmable matter, since it is often an intermediate step for the solution of more complex problems. Although the leader election problem itself is well studied even in the specific context of programmable matter systems, research on fault-tolerant approaches is more limited. We consider the problem in the previously studied Amoebot model on a triangular grid, when the configuration is connected but contains nodes the particles cannot move to (e.g., obstacles). We assume that particles agree on a common direction (i.e., the horizontal axis) but do not have chirality (i.e., they do not agree on the other two directions of the triangular grid). We begin by showing that an election algorithm with explicit termination is not possible in this case, but we provide an implicitly terminating algorithm that elects a unique leader without requiring any movement. These results are in contrast to those in the more common model with chirality but no agreement on directions, where explicit termination is always possible but the number of elected leaders depends on the symmetry of the initial configuration. Solving the problem under the assumption of one common direction allows for a unique leader to be elected in a stationary and deterministic way, which until now was only possible for simply connected configurations under a sequential scheduler.
This is joint work with Jérémie Chalopin and Shantanu Das, arXiv: 2402.10582

17:15 – 17:30
Short break 
17:30 – 18:10
Session 3:

Fast Algorithms for Hypergraph PageRank with Applications to Semi-Supervised Learning
Konstantinos Ameranis, University of Chicago
Abstract
A fundamental approach to semi-supervised learning is to leverage the structure of the sample space to diffuse label information from annotated examples to unlabeled points. Traditional methods model the input data points as a graph and rely on fast algorithms for solving Laplacian systems of equations, such as those defining PageRank. However, previous work has demonstrated that graph-based models fail to capture higher-order relations, such as group membership, which are better modeled by hypergraphs. Unfortunately, the scalable application of hypergraph models has been hampered by the nonlinearity of the hypergraph Laplacian. In this paper, we present highly scalable algorithms for hypergraph primitives, such as hypergraph PageRank vectors and hypergraph Laplacian systems, over general families of hypergraphs. In addition to giving strong theoretical guarantees, we empirically showcase the speed of our algorithms on benchmark instances of semi-supervised learning on categorical data. We exploit their generality to improve semi-supervised manifold clustering via hypergraph models. By providing significant speedups on fundamental hypergraph tasks, our algorithms enable the deployment of hypergraph models on a massive scale.
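For orientation, the classical graph primitive being generalized here is the personalized PageRank vector, computable by power iteration on a linear transition system; the hypergraph case replaces this linear operator with a non-linear one, which is the source of the scalability difficulty the paper addresses. A minimal graph-only sketch (our own, not the paper's algorithm):

```python
import numpy as np

def personalized_pagerank(A, s, alpha=0.15, iters=100):
    """Personalized PageRank on a graph by power iteration.

    A: (n, n) symmetric adjacency matrix with no isolated nodes
    s: length-n seed/teleport distribution (sums to 1)
    alpha: teleport probability
    """
    d = A.sum(axis=1)
    P = A / d[:, None]                  # row-stochastic random-walk matrix
    p = s.copy()
    for _ in range(iters):
        p = alpha * s + (1 - alpha) * p @ P   # teleport + one walk step
    return p
```

Seeding `s` at the labeled examples and reading off the resulting scores is the basic label-diffusion recipe that the hypergraph algorithms accelerate and generalize.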

Optimization can learn Johnson-Lindenstrauss embeddings
Nikos Tsikouras, National and Kapodistrian University of Athens &
Archimedes/Athena RC
Abstract
Embeddings play a pivotal role across various disciplines, offering compact representations of complex data structures. Randomized methods like Johnson-Lindenstrauss (JL) provide state-of-the-art and essentially unimprovable theoretical guarantees for achieving such representations. These guarantees are worst-case: neither the analysis nor the algorithm takes into account any potential structural information of the data. The natural question is: must we randomize? Could we instead use an optimization-based approach, working directly with the data? A first answer is no: as we show, the distance-preserving objective of JL has a non-convex landscape over the space of projection matrices, with many bad stationary points. But this is not the final answer.
We present a novel method motivated by diffusion models, that circumvents this fundamental challenge: rather than performing optimization directly over the space of projection matrices, we use optimization over the larger space of random solution samplers, gradually reducing the variance of the sampler. We show that by moving through this larger space, our objective converges to a deterministic (zero variance) solution, avoiding bad stationary points.
This method can also be seen as an optimization-based derandomization approach, an idea that we believe can be applied to many other problems.
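For contrast, the randomized baseline in question is a single Gaussian projection; the talk asks whether optimization over solution samplers can match its distance-preserving guarantees deterministically. A minimal sketch of that baseline (our code, not the talk's method):

```python
import numpy as np

def jl_embed(X, k, seed=0):
    """Random Gaussian JL projection of the rows of X into k dimensions.

    With k = O(log(n) / eps^2), all pairwise distances among the n rows
    are preserved up to a (1 +/- eps) factor with high probability.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)  # entries ~ N(0, 1/k)
    return X @ R
```

Note that the projection matrix `R` is drawn independently of the data, which is exactly the property the optimization-based approach tries to improve upon.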


