Heiner Ackermann
RWTH Aachen University
Publications
Featured research published by Heiner Ackermann.
Foundations of Computer Science (FOCS) | 2006
Heiner Ackermann; Heiko Röglin; Berthold Vöcking
We study the impact of combinatorial structure in congestion games on the complexity of computing pure Nash equilibria and the convergence time of best response sequences. In particular, we investigate which properties of the strategy spaces of individual players ensure a polynomial convergence time. We show that if the strategy space of each player consists of the bases of a matroid over the set of resources, then the lengths of all best response sequences are polynomially bounded in the number of players and resources. We also prove that this result is tight, that is, the matroid property is a necessary and sufficient condition on the players' strategy spaces for guaranteeing polynomial-time convergence to a Nash equilibrium. In addition, we present an approach that enables us to devise hardness proofs for various kinds of combinatorial games, including first results about the hardness of market sharing games and congestion games for overlay network design. Our approach also yields a short proof for the PLS-completeness of network congestion games. In particular, we show that network congestion games are PLS-complete for directed and undirected networks even in the case of linear latency functions.
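The matroid result can be illustrated in its simplest case: a singleton congestion game, where each strategy is a single resource, i.e. a basis of a rank-1 uniform matroid. A minimal sketch of best response dynamics under this assumption (the latency function is supplied by the caller; identical for all players here):

```python
import random

def best_response_dynamics(num_players, num_resources, latency, max_rounds=10_000):
    """Best-response dynamics in a singleton congestion game: every
    strategy is a single resource (a basis of a rank-1 uniform matroid).
    latency(r, load) is the delay of resource r under `load` users.
    Returns a pure Nash equilibrium profile (player -> resource).
    """
    profile = [random.randrange(num_resources) for _ in range(num_players)]
    load = [profile.count(r) for r in range(num_resources)]
    for _ in range(max_rounds):
        improved = False
        for p in range(num_players):
            cur = profile[p]
            # Cost of resource r for p: current latency if p stays,
            # latency with one extra user if p moves there.
            best = min(range(num_resources),
                       key=lambda r: latency(r, load[r] + (r != cur)))
            if best != cur and latency(best, load[best] + 1) < latency(cur, load[cur]):
                load[cur] -= 1
                load[best] += 1
                profile[p] = best
                improved = True
        if not improved:
            return profile
    raise RuntimeError("no equilibrium within max_rounds")
```

With identical linear latencies, the dynamics balance the loads, matching the polynomial convergence bound for matroid strategy spaces.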
SIAM Journal on Computing | 2011
Heiner Ackermann; Paul W. Goldberg; Vahab S. Mirrokni; Heiko Röglin; Berthold Vöcking
Various economic interactions can be modeled as two-sided markets. A central solution concept for these markets is the stable matching, introduced by Gale and Shapley. It is well known that stable matchings can be computed in polynomial time, but many real-life markets lack a central authority to match agents. In those markets, matchings are formed by the actions of self-interested agents. Knuth introduced uncoordinated two-sided markets and showed that the uncoordinated better response dynamics may cycle. However, Roth and Vande Vate showed that the random better response dynamics converges to a stable matching with probability one, although they did not address the question of convergence time. In this paper, we give an exponential lower bound for the convergence time of the random better response dynamics in two-sided markets. We also extend the results for the better response dynamics to the best response dynamics; i.e., we present a cycle of best responses and prove that the random best response dynamics converges to a stable matching with probability one, but its convergence time is exponential. Additionally, we identify the special class of correlated matroid two-sided markets with real-life applications for which we prove that the random best response dynamics converges in expected polynomial time.
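The random better response dynamics can be sketched directly: repeatedly pick a uniformly random blocking pair and match it, which is the process Roth and Vande Vate analyzed. A minimal sketch; the preference-dictionary representation is an illustrative assumption, not the paper's notation:

```python
import random

def random_better_response(men_pref, women_pref, max_steps=100_000):
    """Repeatedly satisfy a uniformly random blocking pair until the
    matching is stable. Preferences map each agent to a list of
    acceptable partners, most preferred first. Returns man -> woman.
    """
    rank_m = {m: {w: i for i, w in enumerate(p)} for m, p in men_pref.items()}
    rank_w = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    match_m, match_w = {}, {}  # current partial matching, both directions

    def blocking_pairs():
        pairs = []
        for m, prefs in men_pref.items():
            for w in prefs:
                if w == match_m.get(m):
                    continue
                m_gains = m not in match_m or rank_m[m][w] < rank_m[m][match_m[m]]
                w_gains = w not in match_w or rank_w[w][m] < rank_w[w][match_w[w]]
                if m_gains and w_gains:
                    pairs.append((m, w))
        return pairs

    for _ in range(max_steps):
        pairs = blocking_pairs()
        if not pairs:
            return match_m  # no blocking pair left: the matching is stable
        m, w = random.choice(pairs)
        # Divorce the old partners of m and w, then match m with w.
        old_w, old_m = match_m.pop(m, None), match_w.pop(w, None)
        if old_w is not None:
            match_w.pop(old_w, None)
        if old_m is not None:
            match_m.pop(old_m, None)
        match_m[m], match_w[w] = w, m
    raise RuntimeError("did not stabilize within max_steps")
```

The paper's lower bound says exactly that `max_steps` may need to be exponential in the worst case, even though termination happens with probability one.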
Theoretical Computer Science | 2007
Heiner Ackermann; Alantha Newman; Heiko Röglin; Berthold Vöcking
We consider bicriteria optimization problems and investigate the relationship between two standard approaches to solving them: (i) computing the Pareto curve and (ii) the so-called decision maker's approach in which both criteria are combined into a single (usually nonlinear) objective function. Previous work by Papadimitriou and Yannakakis showed how to efficiently approximate the Pareto curve for problems like Shortest Path, Spanning Tree, and Perfect Matching. We wish to determine for which classes of combined objective functions the approximate Pareto curve also yields an approximate solution to the decision maker's problem. We show that an FPTAS for the Pareto curve also gives an FPTAS for the decision maker's problem if the combined objective function is growth bounded like a quasi-polynomial function. If the objective function, however, shows exponential growth, then the decision maker's problem is NP-hard to approximate within any polynomial factor. In order to bypass these limitations of approximate decision making, we turn our attention to Pareto curves in the probabilistic framework of smoothed analysis. We show that in a smoothed model, we can efficiently generate the (complete and exact) Pareto curve with a small failure probability if there exists an algorithm for generating the Pareto curve whose worst-case running time is pseudopolynomial. This way, we can solve the decision maker's problem w.r.t. any non-decreasing objective function for randomly perturbed instances of, e.g., Shortest Path, Spanning Tree, and Perfect Matching.
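The link between the two approaches rests on one observation: any combined objective that is non-decreasing in both criteria attains its optimum on the Pareto curve, so the decision maker's problem reduces to a scan over that curve. A toy sketch for a finite, explicitly listed solution set (not one of the combinatorial problems above):

```python
def pareto_front(points):
    """Non-dominated points for bicriteria minimization: a point survives
    iff no other point is at most as large in both coordinates. Sort by
    the first criterion and keep strictly improving second criteria.
    """
    front = []
    for x, y in sorted(set(points)):
        if not front or y < front[-1][1]:
            front.append((x, y))
    return front

def decision_maker(points, combine):
    """Decision maker's approach on top of the Pareto curve: for a
    combined objective non-decreasing in both criteria, scanning the
    front suffices to find the optimum over all solutions."""
    return min(pareto_front(points), key=lambda p: combine(*p))
```

Different non-decreasing objectives pick out different points of the same front, which is why an (approximate) Pareto curve is such a useful intermediate object.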
Workshop on Internet and Network Economics (WINE) | 2007
Heiner Ackermann; Alexander Skopalik
Network congestion games with player-specific delay functions do not necessarily possess pure Nash equilibria. We therefore address the computational complexity of the corresponding decision problem, and show that it is NP-complete to decide whether such games possess pure Nash equilibria. This negative result still holds in the case of games with two players only. In contrast, we show that one can decide in polynomial time whether an equilibrium exists if the number of resources is constant. In addition, we introduce a family of player-specific network congestion games which are guaranteed to possess equilibria. In these games players have identical delay functions; however, each player may only use a certain subset of the edges. For this class of games we prove that finding a pure Nash equilibrium is PLS-complete even in the case of three players. Again, in the case of a constant number of edges an equilibrium can be computed in polynomial time. We conclude that the number of resources has a bigger impact on the computational complexity of certain problems related to network congestion games than the number of players does.
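The decision problem itself is easy to state as code. Below is a brute-force existence checker for congestion games with player-specific delays over arbitrary strategy sets; it is exponential in the number of players and is only an illustration of what must be decided, not the paper's polynomial constant-resource algorithm:

```python
from itertools import product
from collections import Counter

def has_pure_nash(strategies, delay):
    """Exhaustively decide whether a congestion game with player-specific
    delays admits a pure Nash equilibrium. strategies maps each player to
    a list of strategies (sets of resources); delay(p, r, load) is player
    p's delay on resource r under the given load. Brute force, for
    illustration only: the general decision problem is NP-complete.
    """
    players = list(strategies)

    def cost(p, strat, load):
        return sum(delay(p, r, load[r]) for r in strat)

    for profile in product(*(strategies[p] for p in players)):
        load = Counter(r for strat in profile for r in strat)
        stable = True
        for i, p in enumerate(players):
            cur = profile[i]
            for alt in strategies[p]:
                if alt == cur:
                    continue
                # Load as p would see it after unilaterally deviating.
                new_load = load.copy()
                for r in cur:
                    new_load[r] -= 1
                for r in alt:
                    new_load[r] += 1
                if cost(p, alt, new_load) < cost(p, cur, load):
                    stable = False
                    break
            if not stable:
                break
        if stable:
            return True
    return False
```

A hand-built two-player instance whose four profiles form a best-response cycle (so no profile is stable) shows the checker returning False, while a trivial two-resource game with identical linear delays has an equilibrium.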
International Conference on Computational Logistics (ICCL) | 2011
Heiner Ackermann; Hendrik Ewe; Herbert Kopfer; Karl-Heinz Küfer
Freight business is a huge market with strong competition. In many companies, planning and routing software has been introduced, and optimization potentials have been widely exploited. To further improve efficiency, small and medium-sized carriers in particular have to cooperate beyond enterprise boundaries. A promising approach to exchanging transportation requests between freight carriers is provided by combinatorial auctions and exchanges. They allow bundles of items to be traded, thereby allowing participants to express complex synergies. In this paper we discuss various goals for a combinatorial request exchange in freight logistics and provide the reasoning for our design decisions. All goals aim to improve usefulness in a practical environment of less-than-truckload (LTL) carriers. We provide experimental results for both generated and real-life data that show significant savings and are often close to a heuristic solution for the global optimization problem. We study how bundling and restricting the number of submitted bids affect the solution quality.
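At the core of any combinatorial exchange sits the winner-determination problem: select a conflict-free set of bundle bids of maximum total value. A brute-force toy sketch for a handful of bids (real request exchanges use integer-programming solvers; the bid representation here is an illustrative assumption):

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force winner determination for a tiny combinatorial auction.
    Each bid is a (bundle, value) pair, with bundle a frozenset of
    transportation requests. Returns the best total value and the chosen
    bids; feasibility means no request is awarded twice.
    """
    best_value, best_sel = 0, ()
    for k in range(1, len(bids) + 1):
        for sel in combinations(bids, k):
            bundles = [b for b, _ in sel]
            items = frozenset().union(*bundles)
            # conflict-free iff the bundles are pairwise disjoint
            if len(items) == sum(len(b) for b in bundles):
                value = sum(v for _, v in sel)
                if value > best_value:
                    best_value, best_sel = value, sel
    return best_value, best_sel
```

Bundle bids let a carrier express synergies (e.g. two requests on the same route are worth more together), which is exactly why the exchange must solve this combinatorial selection problem rather than a per-item auction.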
International Symposium on Algorithms and Computation (ISAAC) | 2005
Heiner Ackermann; Alantha Newman; Heiko Röglin; Berthold Vöcking
We consider bicriteria optimization problems and investigate the relationship between two standard approaches to solving them: (i) computing the Pareto curve and (ii) the so-called decision maker's approach in which both criteria are combined into a single (usually non-linear) objective function. Previous work by Papadimitriou and Yannakakis showed how to efficiently approximate the Pareto curve for problems like Shortest Path, Spanning Tree, and Perfect Matching. We wish to determine for which classes of combined objective functions the approximate Pareto curve also yields an approximate solution to the decision maker's problem. We show that an FPTAS for the Pareto curve also gives an FPTAS for the decision maker's problem if the combined objective function is growth bounded like a quasi-polynomial function. If these functions, however, show exponential growth, then the decision maker's problem is NP-hard to approximate within any factor. In order to bypass these limitations of approximate decision making, we turn our attention to Pareto curves in the probabilistic framework of smoothed analysis. We show that in a smoothed model, we can efficiently generate the (complete and exact) Pareto curve with a small failure probability if there exists an algorithm for generating the Pareto curve whose worst-case running time is pseudopolynomial. This way, we can solve the decision maker's problem w.r.t. any non-decreasing objective function for randomly perturbed instances of, e.g., Shortest Path, Spanning Tree, and Perfect Matching.
Distributed Computing | 2011
Heiner Ackermann; Simon Fischer; Martin Hoefer; Marcel Schöngens
We consider a dynamic load balancing scenario in which users allocate resources in a non-cooperative and selfish fashion. The perceived performance of a resource for a user decreases with the number of users that allocate the resource. In our dynamic, concurrent model, users may reallocate resources in a round-based fashion. As opposed to various settings analyzed in the literature, we assume that users have quality of service demands. A user has zero utility when falling short of a certain minimum performance threshold and positive utility otherwise. Whereas various load-balancing protocols have been proposed for the setting without quality of service requirements, we consider protocols that satisfy an additional locality constraint: the behavior of a user depends merely on the state of the resource it currently allocates. This property is particularly useful in scenarios where the state of other resources is not readily accessible. For instance, if resources represent channels in a mobile network, then accessing channel information may require time-intensive measurements. We consider several variants of the model, where the quality of service demands may depend on the user, the resource, or both. For all cases we present protocols for which the dynamics converge to a state in which all users are satisfied. More importantly, the time to reach such a state scales nicely: it is only logarithmic in the number of users, which makes our protocols applicable in large-scale systems.
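The locality constraint can be made concrete with a round-based migration sketch. This is a simplified variant with one uniform threshold for all users and resources, and a fixed migration probability of 1/2 (both assumptions of this sketch, not the paper's protocols); the key property is that each user inspects only the load of its own resource:

```python
import random

def local_qos_protocol(num_users, num_resources, threshold, max_rounds=10_000):
    """Round-based local protocol: a user is satisfied iff the load of its
    current resource is at most `threshold`; each unsatisfied user
    migrates to a uniformly random resource with probability 1/2.
    Decisions depend only on the currently allocated resource's state.
    Returns the allocation once every user is satisfied.
    """
    alloc = [random.randrange(num_resources) for _ in range(num_users)]
    for _ in range(max_rounds):
        load = [alloc.count(r) for r in range(num_resources)]
        if all(load[r] <= threshold for r in alloc):
            return alloc
        for u in range(num_users):
            # Only the load of u's own resource is inspected.
            if load[alloc[u]] > threshold and random.random() < 0.5:
                alloc[u] = random.randrange(num_resources)
    raise RuntimeError("not all users satisfied within max_rounds")
```

The randomized migration probability prevents all unsatisfied users from stampeding to the same resource in one round, which is what makes fast (in the paper, logarithmic) convergence possible.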
Workshop on Internet and Network Economics (WINE) | 2007
Heiner Ackermann; Paul W. Goldberg; Vahab S. Mirrokni; Heiko Röglin; Berthold Vöcking
Congestion games are a well-studied model for resource sharing among uncoordinated selfish agents. Usually, one assumes that the resources in a congestion game do not have any preferences over the players that can allocate them. In typical load balancing applications, however, different jobs can have different priorities, and jobs with higher priorities get, for example, larger shares of the processor time. We introduce a model in which each resource can assign priorities to the players, and players with higher priorities can displace players with lower priorities. Our model not only extends standard congestion games, but it can also be seen as a model of two-sided markets with ties. We prove that singleton congestion games with priorities are potential games, and we show that every player-specific singleton congestion game with priorities possesses a pure Nash equilibrium that can be found in polynomial time. Finally, we extend our results to matroid congestion games, in which the strategy space of each player consists of the bases of a matroid over the resources.
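The displacement mechanism can be sketched for the singleton case. In this simplified reading of the model (an assumption of this sketch, not the paper's exact definitions), only the present players of maximal priority on a resource are assigned; everyone else there is displaced and incurs infinite delay. Because such games are potential games, better response dynamics terminate:

```python
def priority_br_dynamics(priority, delay, num_players, num_resources,
                         max_rounds=10_000):
    """Better-response dynamics in a singleton congestion game with
    priorities. priority[r][p] is the priority resource r assigns to
    player p; delay(k) is the delay of an assigned player when k players
    are assigned. Returns an allocation (player -> resource) in which no
    player can improve, i.e. a pure Nash equilibrium.
    """
    INF = float("inf")
    alloc = [0] * num_players

    def cost(p, r, present):
        top = max(priority[r][q] for q in present)
        if priority[r][p] < top:
            return INF  # p is displaced by higher-priority players
        return delay(sum(1 for q in present if priority[r][q] == top))

    for _ in range(max_rounds):
        moved = False
        for p in range(num_players):
            on = [[q for q in range(num_players) if alloc[q] == r]
                  for r in range(num_resources)]
            cur = alloc[p]
            for r in range(num_resources):
                if r != cur and cost(p, r, on[r] + [p]) < cost(p, cur, on[cur]):
                    alloc[p] = r
                    moved = True
                    break
            if moved:
                break
        if not moved:
            return alloc
    raise RuntimeError("no equilibrium within max_rounds")
```

For example, with one resource that strictly prefers player 0 and a second resource with all-equal priorities, players 1 and 2 are displaced and settle on the second resource.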
Internet Mathematics | 2008
Heiner Ackermann; Alexander Skopalik
Network congestion games with player-specific delay functions do not possess pure Nash equilibria in general. We therefore address the computational complexity of the corresponding decision problem and prove that it is NP-complete to decide whether a pure Nash equilibrium exists. This result holds for networks with directed edges as well as for networks with undirected edges, and remains true for games with two players only. In contrast to games with networks of arbitrary size, we present a polynomial-time algorithm deciding whether there exists a Nash equilibrium in games with networks of constant size. Additionally, we introduce a family of player-specific network congestion games that are guaranteed to possess equilibria. In these games players have identical delay functions. However, each player may use only a certain subset of the edges. For this class of games we prove that finding a pure Nash equilibrium is PLS-complete. Again, this result holds for networks with directed edges as well as for networks with undirected edges, and remains true for games with three players only. In games with networks of constant size, however, we prove that pure Nash equilibria can be computed in polynomial time.
Internet Mathematics | 2008
Heiner Ackermann; Paul W. Goldberg; Vahab S. Mirrokni; Heiko Röglin; Berthold Vöcking
Congestion games are a well-studied model for resource sharing among uncoordinated selfish players. Usually, one assumes that the resources in a congestion game do not have any preferences regarding the players that can access them. In typical load-balancing applications, however, different jobs can have different priorities, and jobs with higher priorities get, for example, larger shares of processor time. We extend the classical notion of congestion game and introduce a model in which each resource can assign priorities to the players, and players with higher priorities can displace players with lower priorities. Not only does our model extend classical congestion games, it can also be seen as a model of two-sided markets with ties. Hence it unifies previous results for these two classical models. We prove that singleton congestion games with priorities are potential games. Furthermore, we show that every player-specific singleton congestion game with priorities possesses a pure Nash equilibrium that can be found in polynomial time. Finally, we extend our results to matroid congestion games, in which the strategy spaces of the players are matroids over the resources.