09:20 - 09:50 |
Coffee and registration |
09:50 - 10:00 |
Opening Remarks |
10:00 - 11:00 |
Keynote Lecture 1: Couples can be tractable: New algorithms and hardness results for the Hospitals / Residents problem with Couples
David Manlove, University of Glasgow
Abstract
The Hospitals / Residents problem with Couples (HRC) models the problem of allocating junior doctors to hospital posts, with applications in centralised clearinghouses around the world. One such example is the National Resident Matching Program (NRMP) in the US, which handles over 40,000 applicants annually. In HRC, a solution is a stable matching or a report that none exists. Informally, a stable matching ensures that no single doctor or couple, and no single hospital or pair of hospitals, would prefer to be matched to one another rather than remain with their current assignment(s).
A key challenge is that an HRC instance need not admit a stable matching. We present a novel polynomial-time algorithm that can always find a near-feasible stable matching (adjusting the hospitals' capacities by at most 1) in an HRC instance where the couples' preferences are sub-responsive (i.e., if one member switches to a better hospital, then the couple also improves) and sub-complete (i.e., any pair of hospitals that are individually acceptable to the two members is jointly acceptable to the couple), by reducing it to an instance of the Stable Fixtures problem. We also present a polynomial-time algorithm for HRC in a sub-responsive, sub-complete instance that is a Dual Market, or where all couples are one of several possible types.
We complement our algorithms with several hardness results. We show that HRC with sub-responsive and sub-complete couples is NP-hard, even with other strong restrictions. We also show that HRC with a Dual Market is NP-hard under several simultaneous restrictions. Finally, we show that the problem of finding a matching with the minimum number of blocking pairs in HRC is not approximable within $m^{1-\varepsilon}$, for any $\varepsilon > 0$, where $m$ is the total length of the hospitals' preference lists, unless P=NP, even if each couple applies to only one pair of hospitals. Our polynomial-time solvability results greatly expand the class of known tractable instances of HRC and provide additional evidence as to why long-standing entry-level labour markets that allow couples, such as the NRMP, remain successful to this day.
This is joint work with Gergely Csáji, Iain McBride and James Trimble. An extended abstract appears in the Proceedings of IJCAI 2024. The full paper can be found at https://arxiv.org/abs/2311.00405.
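As a quick illustration of the stability notion (in the basic Hospitals / Residents setting, without couples), the sketch below enumerates the blocking pairs of a given matching; the data structures and names are illustrative, and this is not the paper's algorithm.

```python
# Hedged sketch: find blocking (doctor, hospital) pairs in a basic
# Hospitals/Residents matching (no couples). Preference lists are ranked
# best-first; all names and structures are illustrative.

def prefers(pref_list, a, b):
    """True if a is ranked strictly better than b (b may be None)."""
    if b is None:
        return a in pref_list
    return a in pref_list and pref_list.index(a) < pref_list.index(b)

def blocking_pairs(doctor_prefs, hospital_prefs, capacity, matching):
    """matching: doctor -> hospital (doctors absent are unmatched)."""
    assigned = {h: [d for d, h2 in matching.items() if h2 == h]
                for h in hospital_prefs}
    blocking = []
    for d, prefs in doctor_prefs.items():
        for h in prefs:
            if matching.get(d) == h:
                break                      # d cannot improve past its match
            if d not in hospital_prefs[h]:
                continue                   # h finds d unacceptable
            worst = (max(assigned[h], key=hospital_prefs[h].index)
                     if assigned[h] else None)
            # h would take d if it has a free post or prefers d to its
            # worst currently assigned doctor.
            if len(assigned[h]) < capacity[h] or prefers(hospital_prefs[h], d, worst):
                blocking.append((d, h))
    return blocking
```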
|
11:00 - 11:30 |
Coffee break |
11:30 - 12:50 |
Session 1:
-
Assignment Mechanisms with Predictions in the Private Graph Model
Artem Tsikiridis, CWI
Abstract
The realm of algorithms with predictions has led to the development of several new algorithms that leverage predictions to enhance their performance guarantees. The challenge is to devise algorithms that achieve optimal approximation guarantees as the prediction quality varies from perfect (consistency) to imperfect (robustness). This framework is particularly appealing in mechanism design contexts, where predictions might convey private information about the agents.
In this paper, we design strategyproof mechanisms that leverage predictions to achieve improved approximation guarantees for several variants of the Generalized Assignment Problem (GAP) in the private graph model. In this model, first introduced by Dughmi & Ghosh (2010), the set of resources that an agent is compatible with is private information.
For the Bipartite Matching Problem (BMP), we give a deterministic group-strategyproof (GSP) mechanism that is $(1+\frac{1}{\gamma})$-consistent and $(1+\gamma)$-robust, where $\gamma \ge 1$ is some confidence parameter. We also prove that this is best possible. Remarkably, our mechanism draws inspiration from the renowned Gale-Shapley algorithm, incorporating predictions as a crucial element. Additionally, we give a randomized mechanism that is universally GSP and improves on the guarantees in expectation.
The other GAP variants that we consider all make use of a unified greedy mechanism that adds edges to the assignment according to a specific order.
For a special case of Restricted Multiple Knapsack, this results in a deterministic strategyproof mechanism that is $(1+\frac{1}{\gamma})$-consistent and $(2+\gamma)$-robust.
We then focus on two variants: Agent Size GAP (where each agent has one size) and Value Consensus GAP (where all agents have the same preference order over resources). For both variants, our universally GSP mechanisms randomize over the greedy mechanism, our mechanism for BMP and the predicted assignment, leading to $(1+\frac{3}{\gamma})$-consistency and $(3+\gamma)$-robustness in expectation. All our mechanisms also provide more fine-grained approximation guarantees that interpolate between the consistency and robustness guarantees, depending on some natural error measure of the prediction.
Joint work with: Riccardo Colini-Baldeschi (Meta), Sophie Klumper (CWI & University of Amsterdam) and Guido Schaefer (CWI & University of Amsterdam)
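For context, here is a minimal sketch of the classical deferred-acceptance (Gale-Shapley) algorithm for one-to-one matching that the mechanism above draws inspiration from; it is the textbook baseline, not the prediction-augmented mechanism itself, and all names are illustrative.

```python
# Hedged sketch: agent-proposing deferred acceptance (Gale-Shapley).
# agent_prefs / resource_prefs map each side to ranked lists (best first).

def deferred_acceptance(agent_prefs, resource_prefs):
    free = list(agent_prefs)                  # agents still proposing
    next_idx = {a: 0 for a in agent_prefs}    # next resource each agent tries
    matched = {}                              # resource -> agent
    while free:
        a = free.pop()
        if next_idx[a] >= len(agent_prefs[a]):
            continue                          # a exhausted its list; unmatched
        r = agent_prefs[a][next_idx[a]]
        next_idx[a] += 1
        if a not in resource_prefs[r]:
            free.append(a)                    # r finds a unacceptable
        elif r not in matched:
            matched[r] = a                    # r was free and tentatively accepts
        else:
            rank = resource_prefs[r].index
            if rank(a) < rank(matched[r]):    # r trades up to a
                free.append(matched[r])       # displaced agent re-enters
                matched[r] = a
            else:
                free.append(a)                # proposal rejected; a tries next
    return {a: r for r, a in matched.items()}
```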
-
Laminar Matroid Secretary via Parking Functions
Vasilis Livanos, University of Chile / EPFL
-
On The Pursuit of EFX for Chores: Non-Existence and Approximations
Vasilis Christoforidis, Aristotle University of Thessaloniki & Archimedes/Athena RC
Abstract
We study the problem of fairly allocating a set of chores to a group of agents. The existence of envy-free up to any item (EFX) allocations is a long-standing open question for both goods and chores. We resolve this question by providing a negative answer for the latter, presenting a simple construction that admits no EFX solutions for allocating six items to three agents equipped with superadditive cost functions, thus proving a separation result between goods and bads. In fact, we uncover a deeper insight, showing that the instance does not even admit approximately-EFX allocations for any bounded approximation factor. Moreover, we show that deciding whether an EFX allocation exists is NP-complete. On the positive side, we establish the existence of EFX allocations under general monotone cost functions when the number of items is at most n + 2, where n is the number of agents. We then shift our attention to additive cost functions. We employ a general framework in order to improve the approximation guarantees in the well-studied case of three additive agents, and provide several conditional approximation bounds that leverage ordinal information.
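As a hypothetical illustration of the EFX condition for chores with additive costs (the setting of the approximation results above), the sketch below checks whether an allocation is EFX; the instance data is made up for the example.

```python
# EFX for chores: for all agents i, j and every chore c in i's own bundle,
# i's cost for its bundle minus c must be at most i's cost for j's bundle.

def is_efx_for_chores(costs, allocation):
    """costs[i][c]: additive cost of chore c for agent i.
    allocation[i]: set of chores assigned to agent i."""
    def bundle_cost(i, bundle):
        return sum(costs[i][c] for c in bundle)
    agents = list(allocation)
    for i in agents:
        for j in agents:
            if i == j or not allocation[i]:
                continue
            for c in allocation[i]:
                if bundle_cost(i, allocation[i] - {c}) > bundle_cost(i, allocation[j]):
                    return False
    return True

# Tiny hypothetical instance: two agents, three chores.
costs = {"a1": {"c1": 3, "c2": 1, "c3": 1},
         "a2": {"c1": 1, "c2": 2, "c3": 2}}
print(is_efx_for_chores(costs, {"a1": {"c2", "c3"}, "a2": {"c1"}}))  # True
```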
-
Mechanism design augmented with output advice
Ioannis Vlachos, Athens University of Economics and Business & Archimedes/Athena RC
Abstract
Our work revisits the design of mechanisms via the learning-augmented framework. In this model, the algorithm is enhanced with imperfect (machine-learned) information concerning the input, usually referred to as a prediction. The goal is to design algorithms whose performance degrades gently as a function of the prediction error; in particular, they should perform well if the prediction is accurate, but also provide a worst-case guarantee under any possible error. This framework has been successfully applied recently to various mechanism design settings, where in most cases the mechanism is provided with a prediction about the types of the agents.
We adopt a perspective in which the mechanism is provided with an output recommendation. We make no assumptions about the quality of the suggested outcome, and the goal is to use the recommendation to design mechanisms with low approximation guarantees whenever the recommended outcome is reasonable, but at the same time to provide worst-case guarantees whenever the recommendation significantly deviates from the optimal one. We propose a generic, universal measure, which we call quality of recommendation, to evaluate mechanisms across various information settings. We demonstrate how this new metric can provide a refined analysis of existing results.
This model introduces new challenges, as the mechanism receives limited information compared to settings that use predictions about the types of the agents. We study, through this lens, several well-studied mechanism design paradigms, devising new mechanisms, but also providing a refined analysis for existing ones, using the quality of recommendation as a metric. We complement our positive results by exploring the limitations of known classes of strategyproof mechanisms that can be devised using output recommendations.
|
13:00 - 14:30 |
Lunch break |
14:30 - 15:30 |
Keynote Lecture 2: The Long-Run Distribution of Stochastic Gradient Descent: A Large Deviations Analysis
Panagiotis Mertikopoulos, National and Kapodistrian University of Athens & Archimedes/Athena RC
Abstract
This talk investigates the long-run state distribution of stochastic gradient descent (SGD) in general, non-convex problems - namely, which regions of the problem's state space are more likely to be visited by SGD, and by how much. Using an approach based on the theory of large deviations and randomly perturbed dynamical systems, we show that the long-run distribution of SGD resembles the Boltzmann-Gibbs distribution of equilibrium thermodynamics with temperature equal to the method's step-size and energy levels determined by the problem's objective and the statistics of the noise. In particular, we show that, in the long run (i) the problem's critical region is visited exponentially more often than any non-critical region; (ii) the iterates of SGD are exponentially concentrated around the problem's minimum energy state (which does not always coincide with the global minimum of the objective); (iii) all other components of critical points are visited with frequency that is exponentially proportional to their energy level; and, finally, (iv) every non-minimizing component is "dominated" by a minimizing component that is visited exponentially more often.
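In symbols, the claimed long-run behaviour has the schematic Gibbs form below, where $\eta$ is the step-size and $E$ is the induced energy function; the precise form of $E$ is problem-dependent, so the notation here is assumed for illustration only.

```latex
% Schematic Gibbs form of the long-run distribution (notation assumed):
% \eta = step-size, E = energy determined by the objective and the
% statistics of the noise.
\pi_{\eta}(x) \;\propto\; \exp\!\left( -\frac{E(x)}{\eta} \right)
```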
|
15:30 - 16:00 |
Coffee break |
16:00 - 17:15 |
Session 2:
-
Large Scale Consensus Protocols
Aggelos Kiayias, University of Edinburgh & IOG
-
Deterministic Byzantine Agreement with Adaptive O(n · f) Communication
Georgios Tsimos, University of Maryland
-
Deterministic Leader Election for Stationary Programmable Matter with Common Direction
Maria Kokkou, Aix-Marseille University
Abstract
Leader Election is an important primitive for programmable matter, since it is often an intermediate step for the solution of more complex problems. Although the leader election problem itself is well studied even in the specific context of programmable matter systems, research on fault-tolerant approaches is more limited. We consider the problem in the previously studied Amoebot model on a triangular grid, when the configuration is connected but contains nodes the particles cannot move to (e.g., obstacles). We assume that particles agree on a common direction (i.e., the horizontal axis) but do not have chirality (i.e., they do not agree on the other two directions of the triangular grid). We begin by showing that an election algorithm with explicit termination is not possible in this case, but we provide an implicitly terminating algorithm that elects a unique leader without requiring any movement. These results are in contrast to those in the more common model with chirality but no agreement on directions, where explicit termination is always possible but the number of elected leaders depends on the symmetry of the initial configuration. Solving the problem under the assumption of one common direction allows for a unique leader to be elected in a stationary and deterministic way, which until now was only possible for simply connected configurations under a sequential scheduler.
This is joint work with Jérémie Chalopin and Shantanu Das, arXiv: 2402.10582
|
17:15 - 17:30 |
Short break |
17:30 - 18:10 |
Session 3:
-
Fast Algorithms for Hypergraph PageRank with Applications to Semi-Supervised Learning
Konstantinos Ameranis, University of Chicago
Abstract
A fundamental approach to semi-supervised learning is to leverage the structure of the sample space to diffuse label information from annotated examples to unlabeled points. Traditional methods model the input data points as a graph and rely on fast algorithms for solving Laplacian systems of equations, such as those defining PageRank. However, previous work has demonstrated that graph-based models fail to capture higher-order relations, such as group membership, which are better modeled by hypergraphs. Unfortunately, the scalable application of hypergraph models has been hampered by the non-linearity of the hypergraph Laplacian. In this paper, we present highly scalable algorithms for hypergraph primitives, such as hypergraph PageRank vectors and hypergraph Laplacian systems, over general families of hypergraphs. In addition to giving strong theoretical guarantees, we empirically showcase the speed of our algorithms on benchmark instances of semi-supervised learning on categorical data. We exploit their generality to improve semi-supervised manifold clustering via hypergraph models. By providing significant speed-ups on fundamental hypergraph tasks, our algorithms enable the deployment of hypergraph models on a massive scale.
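For orientation, the sketch below computes ordinary (graph) personalized PageRank by power iteration; the hypergraph case treated in the paper is substantially harder because its Laplacian is non-linear, so this linear version only illustrates the primitive being generalized. The parameters and graph encoding are illustrative.

```python
# Hedged sketch: personalized PageRank on a plain graph via power
# iteration. Assumes every node has at least one outgoing edge.
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, tol=1e-10):
    """adj: dense adjacency matrix; seed: index of a labeled example."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.where(out_deg == 0, 1, out_deg)   # row-stochastic walk matrix
    s = np.zeros(n); s[seed] = 1.0                 # restart distribution
    pr = np.full(n, 1.0 / n)
    while True:
        nxt = alpha * s + (1 - alpha) * pr @ P     # restart + one walk step
        if np.abs(nxt - pr).sum() < tol:
            return nxt
        pr = nxt
```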
-
Optimization can learn Johnson-Lindenstrauss embeddings
Nikos Tsikouras, National and Kapodistrian University of Athens & Archimedes/Athena RC
Abstract
Embeddings play a pivotal role across various disciplines, offering compact representations of complex data structures. Randomized methods like Johnson-Lindenstrauss (JL) provide state-of-the-art and essentially unimprovable theoretical guarantees for achieving such representations. These guarantees are worst-case and, in particular, neither the analysis nor the algorithm takes into account any potential structural information of the data. The natural question is: must we randomize? Could we instead use an optimization-based approach, working directly with the data? A first answer is no: as we show, the distance-preserving objective of JL has a non-convex landscape over the space of projection matrices, with many bad stationary points. But this is not the final answer.
We present a novel method motivated by diffusion models, that circumvents this fundamental challenge: rather than performing optimization directly over the space of projection matrices, we use optimization over the larger space of random solution samplers, gradually reducing the variance of the sampler. We show that by moving through this larger space, our objective converges to a deterministic (zero variance) solution, avoiding bad stationary points.
This method can also be seen as an optimization-based derandomization approach, and it is an idea that we believe can be applied to many other problems.
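For contrast with the optimization-based approach, here is a minimal sketch of the classical randomized JL baseline discussed above, with an empirical check of the pairwise-distance distortion; the dimensions and data are illustrative.

```python
# Hedged sketch: Gaussian JL projection and an empirical distortion check.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 1000, 64
X = rng.normal(size=(n, d))                  # data points, one per row
A = rng.normal(size=(d, k)) / np.sqrt(k)     # random projection matrix
Y = X @ A                                    # embedded points

def pairwise(Z):
    """All pairwise Euclidean distances of the rows of Z."""
    sq = (Z ** 2).sum(axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * Z @ Z.T, 0))

D, Dp = pairwise(X), pairwise(Y)
mask = ~np.eye(n, dtype=bool)
ratio = Dp[mask] / D[mask]                   # distortion per pair
print(f"distortion range: [{ratio.min():.3f}, {ratio.max():.3f}]")
```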
|
09:40 - 10:00 |
Coffee and registration |
10:00 - 11:00 |
Keynote Lecture 3: Algorithmic contract theory and contractible contracts
Ron Lavi, University of Bath
Abstract
Markets are an ancient tool for allocating goods; but they are also a tool for the allocation of effort. For example, in online labor markets, a freelancer puts in effort to complete tasks delegated to her by the platform's customers. In online advertising, a social-media influencer works to promote several brands. The celebrated principal-agent model, a.k.a. the theory of contract design, studies such issues. When there are multiple principals that simultaneously interact with the agent(s), the model is called ‘common agency’. This is the case, e.g., when a marketing agency bids for ad space on behalf of multiple advertisers, or when platforms like Airbnb act as an agent representing property owners. Classically, contract theory restricts attention to simple objects that describe payments to agents conditioned on their actions or on the obtained outcomes. In this talk I will describe ‘VCG contracts’ which are a type of ‘contractible contracts’ – contracts that can also depend on other contracts. Our main results show that VCG contracts improve social efficiency relative to classic contracts. Since contractible contracts are reminiscent of smart contracts (and vice versa), a more formal exploration of the connections between the two suggests itself. This talk is based on joint work with Elisheva Shamash.
|
11:00 - 11:30 |
Coffee break |
11:30 - 12:45 |
Session 4:
-
Exact Algorithms and Lower Bounds for Multiagent Path Finding
Foivos Fioravantes, Czech Technical University in Prague
Abstract
In the Multiagent Path Finding (MAPF) problem, we focus on efficiently finding non-colliding paths for a set of k agents on a given graph G, where each agent seeks a path from its source vertex to a target.
An important measure of the quality of the solution is the length of the proposed schedule l, that is, the length of a longest path (including the waiting time).
In this work, we propose a systematic study under the parameterized complexity framework.
We show that MAPF is W[1]-hard with respect to k (even if k is combined with the maximum degree of the input graph). The problem remains NP-hard in planar graphs even if the maximum degree and the makespan l are fixed constants. On the positive side, we show an FPT algorithm for the combined parameter k + l.
As we continue, the structure of G comes into play. We give an FPT algorithm for the parameter k plus the diameter of the graph G. Finally, we show that the MAPF problem is W[1]-hard for the cliquewidth of G plus l, while it is FPT for the treewidth of G plus l.
-
Minimum Monotone Spanning Trees
Eleni Katsanou, National Technical University of Athens
Abstract
Let $S$ be a finite set of points in the plane, and let ${\cal D}$ be a finite set of directions. A geometric spanning tree $T$ of $S$ is ${\cal D}$-monotone if, for every pair $\{u,v\}$ of vertices of $T$, there exists a direction $d \in {\cal D}$ for which the unique path from $u$ to $v$ in $T$ is monotone with respect to $d$. We study the problem of computing, for a given set ${\cal D}$ of directions, a ${\cal D}$-monotone spanning tree of minimum length. We show that the problem belongs to the complexity class XP when parameterized by $|{\cal D}|$. Our XP algorithm is based on a characterization of ${\cal D}$-monotone spanning trees. For the important special case $|{\cal D}|=2$, we speed up the XP algorithm to quadratic time. The problem of computing a ${\cal D}$-monotone spanning tree of minimum length over all possible sets ${\cal D}$ of $k$ directions is also in XP when parameterized by $k$. For $k=1$ and $k=2$, we solve this problem in $O(n^2 \log n)$ and $O(n^6)$ time, respectively, where $n$ is the number of points. Furthermore, in contrast to the classical Euclidean minimum spanning tree, whose vertex degree is bounded by six, we show that for every even integer $k$, there exists a point set $S_k$ and a set ${\cal D}$ of $k$ directions such that any minimum-length ${\cal D}$-monotone spanning tree of $S_k$ has maximum vertex degree $2k$.
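As a hypothetical illustration of the monotonicity notion used above, the sketch below checks whether a single path is monotone with respect to a direction, and whether some direction in ${\cal D}$ works (a tree is ${\cal D}$-monotone when this holds for the unique path between every vertex pair); points and directions are made up for the example.

```python
# A path is monotone w.r.t. a direction d if the scalar projections of its
# vertices onto d appear in non-decreasing order along the path.

def is_monotone(path, d):
    """path: list of (x, y) points; d: direction vector (dx, dy)."""
    proj = [x * d[0] + y * d[1] for x, y in path]
    return all(a <= b for a, b in zip(proj, proj[1:]))

def path_is_D_monotone(path, directions):
    """True if the path is monotone w.r.t. at least one direction in D."""
    return any(is_monotone(path, d) for d in directions)

# A staircase path: monotone w.r.t. both axis directions here.
path = [(0, 0), (1, 0), (1, 1), (2, 1)]
print(path_is_D_monotone(path, [(1, 0), (0, 1)]))  # True
```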
-
Weighted Partial Positive Influence Connected Dominating Set
Ioannis Vaxevanakis, National and Kapodistrian University of Athens
Abstract
In the Weighted Partial Positive Influence Connected Dominating Set problem, we seek a minimum-size set of nodes $S \subseteq V$ such that $S$ is connected and every node in $V$ either belongs to $S$ or the total weight of its edges to nodes in $S$ is at least half the total weight of all its incident edges. We present the first logarithmic approximation algorithms for this problem; to prove this result, we develop a general approximation framework for non-submodular functions by extending the standard techniques used for submodular functions.
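A hypothetical sketch of the covering condition just defined (the connectivity requirement on $S$ would be checked separately and is omitted here for brevity); the graph encoding is illustrative.

```python
# Every node outside S must have at least half of its total incident edge
# weight on edges leading into S.

def is_ppi_dominated(weights, S):
    """weights: dict mapping frozenset({u, v}) -> positive edge weight."""
    S = set(S)
    nodes = {v for e in weights for v in e}
    for v in nodes - S:
        incident = [(e, w) for e, w in weights.items() if v in e]
        total = sum(w for _, w in incident)
        into_S = sum(w for e, w in incident if (e - {v}) <= S)
        if into_S < total / 2:
            return False
    return True

edges = {frozenset({"a", "b"}): 3,
         frozenset({"b", "c"}): 1,
         frozenset({"a", "c"}): 1}
print(is_ppi_dominated(edges, {"a"}))  # True: b has 3 of 4, c has 1 of 2
```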
|
12:45 - 14:30 |
Lunch break |
14:30 - 15:30 |
Keynote Lecture 4: The Power of Multi-versioning in Concurrent Data Structures
Panagiota Fatourou, University of Crete & ICS-FORTH
Abstract
Multi-versioning is widely used in databases, transactional memory, and other domains. This talk will examine the power of using multi-versioning to support multi-point queries on top of concurrent data structures.
Any system that maintains multiple versions of each object needs a way of efficiently reclaiming them. In this talk, we will also study theoretically and practically efficient schemes for multi-version garbage collection.
|
15:30 - 16:00 |
Coffee break |
16:00 - 16:55 |
Session 5:
-
Dynamic Blockchain Mechanisms
Stefanos Leonardos, King's College London
Abstract
In this talk, we will study dynamic mechanisms that emerge in blockchain-based economic systems. Starting with Ethereum's EIP-1559 dynamic transaction fee market, we will analyse its properties and dynamic evolution over time. Then, we will try to abstract EIP-1559's building blocks and design principles to build a novel mechanism that aims to achieve a fair sharing of the Miner Extractable Value (MEV). We will show that the ideas behind EIP-1559 can be helpful in the development of a similar mechanism with strong efficiency guarantees. This provides a promising perspective on the design space of dynamic blockchain mechanisms, which can be leveraged to tackle a wide range of cryptoeconomic problems.
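As background, the core base-fee update rule of EIP-1559 referenced above is simple to state: the base fee moves by at most 1/8 per block, in the direction of the gap between the gas used and the gas target. The sketch below is simplified and omits minor edge cases of the full specification.

```python
# Simplified EIP-1559 base-fee update (integer arithmetic as in the spec;
# edge cases such as the minimum increment when congested are omitted).

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(base_fee, gas_used, gas_target):
    delta = (base_fee * (gas_used - gas_target)
             // (gas_target * BASE_FEE_MAX_CHANGE_DENOMINATOR))
    return max(base_fee + delta, 0)

# A full block (gas_used == 2 * gas_target) raises the fee by 12.5%:
print(next_base_fee(1000, 30_000_000, 15_000_000))  # -> 1125
```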
-
Reward Sharing Mechanisms for Pool Formation: Shapley Value and Proportional Schemes
Panagiotis Tsamopoulos, Athens University of Economics and Business
Abstract
In this work we study a game-theoretic model for pool formation in Proof of Stake blockchain protocols. The question we are interested in is the design of mechanisms, “reward sharing schemes,” that suitably split rewards among pool members and achieve favorable properties. With this in mind, we initiate the study of the well-known Shapley value scheme from cooperative game theory in the context of blockchains. Additionally, we study three variations of proportional sharing. The first is the natural approach of splitting profits in proportion to the stake contributed by each member, and the other two versions are based on a superadditive and a subadditive function of the stake. We provide comparisons among these four schemes in terms of attained decentralization, via a Price of Stability analysis, and in terms of Sybil-proofness and their “skin in the game” characteristics.
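As a hypothetical illustration, the sketch below computes Shapley values directly (in exponential time) for a small pool whose reward is a superadditive function of total stake; the value function is made up for the example.

```python
# Shapley value = each player's average marginal contribution over all
# orders in which the coalition can form.
from itertools import permutations

def shapley_values(players, value):
    """value: maps a frozenset of players to the coalition's worth."""
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shapley[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: s / len(orders) for p, s in shapley.items()}

# Hypothetical pool: reward is superadditive in the total stake.
stake = {"p1": 10, "p2": 20, "p3": 30}
reward = lambda S: sum(stake[p] for p in S) ** 1.5
print(shapley_values(list(stake), reward))
```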
-
Exponential Lower Bounds for Fictitious Play in Potential Games
Nikolaos Patris, University of California, Irvine
Abstract
Fictitious Play (FP) is a simple and natural dynamic for repeated play, with many applications in game theory and multi-agent reinforcement learning. It was introduced by Brown (1949, 1951), and its convergence properties for two-player zero-sum games were established later by Robinson (1951). Potential games (Monderer and Shapley (1996b)) are another class of games that exhibits the FP property (Monderer and Shapley (1996a)), i.e., FP dynamics converge to a Nash equilibrium if all agents follow it. Nevertheless, except for two-player zero-sum games and for specific instances of payoff matrices (Abernethy et al. (2021)) or for adversarial tie-breaking rules (Daskalakis and Pan (2014)), the convergence rate of FP is unknown. In this work, we focus on the rate of convergence of FP when applied to potential games, and more specifically identical payoff games. We prove that FP can take exponential time (in the number of strategies) to reach a Nash equilibrium, even if the game is restricted to two agents and for arbitrary tie-breaking rules. To prove this, we recursively construct a two-player coordination game with a unique Nash equilibrium. Moreover, every approximate Nash equilibrium in the constructed game must be close to the pure Nash equilibrium in ℓ1-distance.
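A minimal sketch of FP dynamics in a two-player identical-payoff game, with each player best-responding to the empirical frequency of the opponent's past play; the payoff matrix is illustrative, and (as the abstract notes) the tie-breaking rule matters — here it is NumPy's default of taking the lowest index.

```python
# Fictitious play in a two-player common-payoff game: beliefs are the
# empirical frequencies of past actions, initialized uniformly.
import numpy as np

def fictitious_play(A, rounds=1000):
    """A[i, j]: common payoff when row plays i and column plays j."""
    n, m = A.shape
    counts_row, counts_col = np.ones(n), np.ones(m)
    for _ in range(rounds):
        i = int(np.argmax(A @ (counts_col / counts_col.sum())))   # row best response
        j = int(np.argmax((counts_row / counts_row.sum()) @ A))   # column best response
        counts_row[i] += 1
        counts_col[j] += 1
    return counts_row / counts_row.sum(), counts_col / counts_col.sum()

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # coordination game with two pure equilibria
print(fictitious_play(A))
```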
|
17:00 - 17:15 |
Short break |
17:15 - 18:15 |
Session 6: Rump Session
5-minute presentations on open problems, work in progress, etc.
|
18:15 - 18:30 |
Closing remarks (plus announcements of PhD/postdoc opportunities)
|