Tami Tamir
Interdisciplinary Center Herzliya
Publications
Featured research published by Tami Tamir.
Algorithmica | 2001
Hadas Shachnai; Tami Tamir
Abstract. We study two variants of the classic knapsack problem, in which we need to place items of different types in multiple knapsacks; each knapsack has a limited capacity and a bound on the number of different types of items it can hold. In the class-constrained multiple knapsack problem (CMKP) we wish to maximize the total number of packed items; in the fair placement problem (FPP) our goal is to place the same (large) portion from each set. We look for a perfect placement, in which both problems are solved optimally. We first show that the two problems are NP-hard; we then consider some special cases, where a perfect placement exists and can be found in polynomial time. For other cases, we give approximate solutions. Finally, we give a nearly optimal solution for the CMKP. Our results for the CMKP and the FPP are shown to provide efficient solutions for two fundamental problems arising in multimedia storage subsystems.
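To make the class constraint concrete, here is a minimal Python sketch (not the paper's algorithm) of a greedy placement for the CMKP: each knapsack has a capacity and a bound on the number of distinct item types it may hold. All names and the greedy order are illustrative assumptions.

```python
# Greedy placement for the class-constrained multiple knapsack problem (CMKP).
# Each knapsack has a capacity and a bound on the number of distinct item
# types it may hold. Illustrative only; not the paper's algorithm.

def greedy_cmkp(demands, capacity, type_bound, num_knapsacks):
    """demands: dict mapping item type -> number of unit-size items."""
    placement = [dict() for _ in range(num_knapsacks)]
    free = [capacity] * num_knapsacks
    # Place larger sets first (an arbitrary heuristic order).
    for t, count in sorted(demands.items(), key=lambda kv: -kv[1]):
        remaining = count
        for k in range(num_knapsacks):
            if remaining == 0:
                break
            # Respect the bound on distinct types per knapsack.
            if t not in placement[k] and len(placement[k]) >= type_bound:
                continue
            take = min(remaining, free[k])
            if take > 0:
                placement[k][t] = placement[k].get(t, 0) + take
                free[k] -= take
                remaining -= take
    packed = sum(sum(p.values()) for p in placement)
    return placement, packed

# Example: 3 types, 2 knapsacks of capacity 10, at most 2 types per knapsack.
placement, packed = greedy_cmkp({"a": 8, "b": 7, "c": 5}, 10, 2, 2)
print(packed, placement)
```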
ACM Transactions on Algorithms | 2007
Amotz Bar-Noy; Richard E. Ladner; Tami Tamir
Given is a sequence of n positive integers w_1, w_2, …, w_n that are associated with the items 1, 2, …, n, respectively. In the windows scheduling problem, the goal is to schedule all the items (equal-length information pages) on broadcasting channels such that the gap between two consecutive appearances of page i on any of the channels is at most w_i slots (a slot is the transmission time of one page). In the unit fractions bin packing problem, the goal is to pack all the items in bins of unit size, where the size (width) of item i is 1/w_i. The optimization objective is to minimize the number of channels or bins. In the off-line setting the sequence is known in advance, whereas in the on-line setting the items arrive in order and assignment decisions are irrevocable. Since a page requires at least 1/w_i of a channel's bandwidth, it follows that windows scheduling without migration (all broadcasts of a page must be from the same channel) is a restricted version of unit fractions bin packing. Let H = ⌈Σ_{i=1}^{n} 1/w_i⌉ be the obvious bandwidth lower bound on the required number of bins (channels). Previously, an H + O(ln H) off-line algorithm for the windows scheduling problem was known. This paper presents an H + 1 off-line algorithm for the unit fractions bin packing problem. In the on-line setting, this paper presents an H + O(√H) algorithm for both problems, where the one for the unit fractions bin packing problem is simpler. On the other hand, this paper shows that, already for the unit fractions bin packing problem, any on-line algorithm must use at least H + Ω(ln H) bins.
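The bandwidth bound H and the bin packing view are easy to state in code. The sketch below (ours, not the paper's H + 1 algorithm) computes H = ⌈Σ 1/w_i⌉ and runs a plain first-fit on items of size 1/w_i; function names are illustrative.

```python
import math

def bandwidth_lower_bound(widths):
    """H = ceil(sum 1/w_i): no packing (or schedule) can use fewer bins."""
    return math.ceil(sum(1.0 / w for w in widths))

def first_fit_unit_fractions(widths):
    """Plain first-fit for unit fractions bin packing: item i has size 1/w_i.
    Shown only to make the problem concrete; the paper's H + 1 off-line
    algorithm is more involved."""
    bins = []  # remaining capacity of each open bin
    for w in widths:
        size = 1.0 / w
        for b in range(len(bins)):
            if bins[b] >= size - 1e-12:   # tolerate float rounding
                bins[b] -= size
                break
        else:
            bins.append(1.0 - size)       # open a new bin
    return len(bins)

widths = [2, 3, 3, 4, 6, 6, 12]
print(bandwidth_lower_bound(widths), first_fit_unit_fractions(widths))
```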
Workshop on Algorithms and Data Structures | 2003
Nicholas J. A. Harvey; Richard E. Ladner; László Lovász; Tami Tamir
We consider the problem of fairly matching the left-hand vertices of a bipartite graph to the right-hand vertices. We refer to this problem as the semi-matching problem; it is a relaxation of the well-known bipartite matching problem. We present a way to evaluate the quality of a given semi-matching and show that, under this measure, an optimal semi-matching balances the load on the right-hand vertices with respect to any L_p norm. In particular, when modeling a job assignment system, an optimal semi-matching achieves the minimal makespan and the minimal flow time for the system.
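As a rough illustration of the load-balancing objective, the following sketch assigns each left vertex to its currently least-loaded right neighbor and evaluates the resulting L_p norm. This greedy is not the paper's optimal algorithm (which works via cost-reducing paths); it only makes the measure concrete.

```python
def greedy_semi_matching(adj):
    """adj: dict mapping each left vertex to its list of right neighbors.
    Assign every left vertex to its currently least-loaded neighbor.
    Illustrative greedy; not guaranteed to produce an optimal semi-matching."""
    load = {}
    assignment = {}
    for u, neighbors in adj.items():
        v = min(neighbors, key=lambda r: load.get(r, 0))
        assignment[u] = v
        load[v] = load.get(v, 0) + 1
    return assignment, load

def lp_norm(load, p):
    """L_p norm of the load vector on the right-hand vertices."""
    return sum(l ** p for l in load.values()) ** (1.0 / p)

adj = {"j1": ["m1"], "j2": ["m1", "m2"], "j3": ["m1", "m2"], "j4": ["m2"]}
assignment, load = greedy_semi_matching(adj)
print(assignment, load, lp_norm(load, 2))
```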
Theory of Computing Systems / Mathematical Systems Theory | 2008
Hadas Shachnai; Tami Tamir; Omer Yehezkely
Abstract. We consider two variants of the classical bin packing problem in which items may be fragmented. This can potentially reduce the total number of bins needed for packing the instance. However, since fragmentation incurs overhead, we attempt to avoid it as much as possible. In bin packing with size-increasing fragmentation (BP-SIF), fragmenting an item increases the input size (due to a header/footer of fixed size that is added to each fragment). In bin packing with size-preserving fragmentation (BP-SPF), there is a bound on the total number of fragmented items. These two variants of bin packing capture many practical scenarios, including message transmission in community TV networks, VLSI circuit design, and preemptive scheduling on parallel machines with setup times/setup costs. While neither BP-SPF nor BP-SIF belongs to the class of problems that admit a polynomial time approximation scheme (PTAS), we show in this paper that both problems admit a dual PTAS and an asymptotic PTAS. We also develop for each of the problems a dual asymptotic fully polynomial time approximation scheme (AFPTAS). Our AFPTASs are based on a non-standard transformation of the mixed packing and covering linear program formulations of our problems into pure covering programs, which enables us to solve these programs efficiently.
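A minimal sketch of the BP-SIF overhead model, assuming unit-capacity bins and a fixed per-fragment header: a naive first-fit that may split an item across bins, paying the header for every piece. This only illustrates the cost of fragmentation; it is not one of the paper's approximation schemes.

```python
def first_fit_sif(items, header):
    """Bin packing with size-increasing fragmentation (BP-SIF), illustrated
    with a naive first-fit: every piece (whole item or fragment) placed in a
    bin pays a fixed header overhead. Bin capacity is 1; assumes header < 1.
    The splitting rule here is our own simplification."""
    bins = []  # remaining capacity per bin
    for size in items:
        remaining = size
        while remaining > 0:
            # First bin with room for the header plus some payload.
            for b in range(len(bins)):
                if bins[b] > header:
                    payload = min(remaining, bins[b] - header)
                    bins[b] -= header + payload
                    remaining -= payload
                    break
            else:
                bins.append(1.0)  # open a new bin, then retry this piece
    return len(bins)

print(first_fit_sif([0.6, 0.6, 0.5, 0.3], header=0.05))
```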
Journal of Discrete Algorithms | 2012
Hadas Shachnai; Tami Tamir
Given is a set of items and a set of devices, each possessing two limited resources. Each item requires some amount of these resources. Further, each item is associated with a profit and a color, and items of the same color can share the use of one resource. The goal is to allocate the resources to the most profitable (feasible) subset of items. In an alternative formulation, the goal is to pack the most profitable subset of items in a set of two-dimensional bins (knapsacks), in which the capacity in one dimension is sharable. Indeed, the special case where there is a single item in each color is the well-known two-dimensional vector packing (2DVP) problem. Thus, unless P = NP, the problem that we study does not admit a fully polynomial time approximation scheme (FPTAS) for a single bin, and is MAX-SNP hard for multiple bins. Our problem has several important applications, including data placement on disks in media-on-demand systems. We present approximation algorithms as well as optimal solutions for some instances. In some cases, our results are similar to the best known results for 2DVP. Specifically, for a single bin, we show that the problem is solvable in pseudo-polynomial time and develop a polynomial time approximation scheme (PTAS) for general instances. For a natural subclass of instances we obtain a simpler scheme. This yields the first combinatorial PTAS for a non-trivial subclass of instances of 2DVP. For multiple bins, we develop a PTAS for a subclass of instances arising in the data placement problem. Finally, we show that when the number of distinct colors in the instance is fixed, our problem admits a PTAS, even if the items have arbitrary sizes and profits, and the bins are arbitrary.
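To illustrate the sharable dimension, here is a hypothetical feasibility test for a single two-dimensional bin, assuming items of the same color consume the sharable resource only once (charged at the maximum demand within the color; the exact sharing rule is our simplification, not necessarily the paper's).

```python
def fits_in_bin(items, cap_shared, cap_private):
    """Feasibility test for one two-dimensional bin in which the first
    resource is sharable by color: items of the same color consume that
    resource only once (we charge the maximum demand in the color, an
    assumption for illustration). items: list of (color, shared, private)."""
    by_color = {}
    private_total = 0.0
    for color, shared, private in items:
        by_color[color] = max(by_color.get(color, 0.0), shared)
        private_total += private
    return sum(by_color.values()) <= cap_shared and private_total <= cap_private

# Two "red" items share the first resource; only the larger demand counts.
print(fits_in_bin([("red", 3, 2), ("red", 3, 2), ("blue", 4, 1)], 8, 6))
```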
Theoretical Computer Science | 2010
Michele Flammini; Gianpiero Monaco; Luca Moscardelli; Hadas Shachnai; Mordechai Shalom; Tami Tamir; Shmuel Zaks
We consider a scheduling problem in which a bounded number of jobs can be processed simultaneously by a single machine. The input is a set of n jobs J = {J_1, …, J_n}. Each job, J_j, is associated with an interval [s_j, c_j] along which it should be processed. Also given is the parallelism parameter g ≥ 1, which is the maximal number of jobs that can be processed simultaneously by a single machine. Each machine operates along a contiguous time interval, called its busy interval, which contains all the intervals corresponding to the jobs it processes. The goal is to assign the jobs to machines such that the total busy time of the machines is minimized. The problem is known to be NP-hard already for g = 2. We present a 4-approximation algorithm for general instances, and approximation algorithms with improved ratios for instances with bounded lengths, for instances where any two intervals intersect, and for instances where no interval is properly contained in another. Our study has important applications in optimizing the switching costs of optical networks.
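The objective is easy to state in code: a machine's busy time is the span from its earliest job start to its latest job completion. The sketch below only evaluates a given assignment; choosing the assignment (subject to the parallelism bound g) is the hard part that the paper's 4-approximation addresses.

```python
def total_busy_time(assignment):
    """assignment: list of machines, each a list of job intervals (s, c).
    A machine's busy interval spans from its earliest start to its latest
    completion; the objective is the sum of these spans. Evaluation only;
    the assignment itself must respect the parallelism bound g."""
    total = 0.0
    for jobs in assignment:
        if jobs:
            total += max(c for _, c in jobs) - min(s for s, _ in jobs)
    return total

# Two machines, parallelism g = 2 (at most two overlapping jobs per machine).
machines = [[(0, 4), (1, 5)], [(2, 3), (6, 8)]]
print(total_busy_time(machines))  # (5 - 0) + (8 - 2) = 11
```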
Mathematics of Operations Research | 2013
Ariel Kulik; Hadas Shachnai; Tami Tamir
Submodular maximization generalizes many fundamental problems in discrete optimization, including Max-Cut in directed/undirected graphs, maximum coverage, maximum facility location, and marketing over social networks. In this paper we consider the problem of maximizing any submodular function subject to d knapsack constraints, where d is a fixed constant. We establish a strong relation between the discrete problem and its continuous relaxation, obtained through extension by expectation of the submodular function. Formally, we show that, for any nonnegative submodular function, an α-approximation algorithm for the continuous relaxation implies a randomized (α − ε)-approximation algorithm for the discrete problem. We use this relation to obtain a (1/e − ε)-approximation for the problem, and a nearly optimal (1 − 1/e − ε)-approximation ratio for the monotone case, for any ε > 0. We further show that the probabilistic domain defined by a continuous solution can be reduced to yield a polynomial-size domain, given an oracle for the extension by expectation. This leads to a deterministic version of our technique.
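The extension by expectation (also called the multilinear extension) F(x) = E[f(R(x))], where R(x) contains each element i independently with probability x_i, can be estimated by sampling. Below is a Monte Carlo sketch with an illustrative coverage function; the paper assumes oracle access to this extension rather than this particular estimator.

```python
import random

def multilinear_extension_estimate(f, x, samples=2000, seed=0):
    """Estimate F(x) = E[f(R(x))], the extension by expectation of a set
    function f, where R(x) includes element i independently with
    probability x[i]. Monte Carlo sketch; accuracy grows with `samples`."""
    rng = random.Random(seed)
    n = len(x)
    total = 0.0
    for _ in range(samples):
        subset = {i for i in range(n) if rng.random() < x[i]}
        total += f(subset)
    return total / samples

# Example submodular function: coverage of a fixed ground set.
sets = [{1, 2}, {2, 3}, {3, 4}]
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(multilinear_extension_estimate(f, [0.5, 0.5, 0.5]))
```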
European Symposium on Algorithms | 2002
Hadas Shachnai; Tami Tamir; Gerhard J. Woeginger
Abstract. It is well known that for preemptive scheduling on uniform machines there exist polynomial time exact algorithms, whereas for non-preemptive scheduling there are probably no such algorithms. However, it is not clear how many preemptions (in total, or per job) suffice in order to guarantee an optimal polynomial time algorithm. In this paper we investigate exactly this hardness gap, formalized as two variants of the classic preemptive scheduling problem. In generalized multiprocessor scheduling (GMS) we have a job-wise or total bound on the number of preemptions throughout a feasible schedule. We need to find a schedule that satisfies the preemption constraints, such that the maximum job completion time is minimized. In minimum preemptions scheduling (MPS) the only feasible schedules are preemptive schedules with the smallest possible makespan. The goal is to find a feasible schedule that minimizes the overall number of preemptions. Both problems are NP-hard, even for two machines and zero preemptions. For GMS, we develop polynomial time approximation schemes, distinguishing between the cases where the number of machines is fixed, or given as part of the input. Our scheme for a fixed number of machines has linear running time, and can also be applied to instances where jobs have release dates, and to instances with arbitrary preemption costs. For MPS, we derive matching lower and upper bounds on the number of preemptions required by any optimal schedule. Our results for MPS hold for any instance in which a job, J_j, can be processed simultaneously by ρ_j machines, for some ρ_j ≥ 1.
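For context on preemption counts: on identical machines, McNaughton's classical wrap-around rule achieves the optimal makespan T = max(max_j p_j, (Σ_j p_j)/m) with at most m − 1 preemptions. A sketch follows; note that the paper itself treats uniform machines and explicit preemption bounds, so this is background rather than the paper's algorithm.

```python
def mcnaughton(p, m):
    """McNaughton's wrap-around rule for preemptive scheduling on m
    identical machines: fill machines one by one up to the optimal
    makespan T, wrapping a job to the next machine when the current one
    is full. Uses at most m - 1 preemptions. Assumes exact arithmetic
    (float drift is tolerated only up to 1e-12)."""
    T = max(max(p), sum(p) / m)
    schedule = [[] for _ in range(m)]  # per machine: (job, start, end)
    machine, t = 0, 0.0
    for j, pj in enumerate(p):
        remaining = pj
        while remaining > 1e-12:
            run = min(remaining, T - t)
            schedule[machine].append((j, t, t + run))
            t += run
            remaining -= run
            if T - t <= 1e-12:       # machine full: wrap to the next one
                machine, t = machine + 1, 0.0
    return T, schedule

T, sched = mcnaughton([5, 4, 3, 3, 3], m=3)
print(T, sched)  # T = 6; job pieces wrap across machine boundaries
```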
Operations Research | 2012
Michal Feldman; Tami Tamir
We study strategic resource allocation settings, where jobs correspond to self-interested players who choose resources with the objective of minimizing their individual cost. Our framework departs from the existing game-theoretic models mainly in assuming conflicting congestion effects, but also in assuming an unlimited supply of resources. In our model, a job's cost is composed of both its resource's load (which increases with congestion) and its share in the resource's activation cost (which decreases with congestion). We provide results for a job-scheduling setting with heterogeneous jobs and identical machines. We show that if the resource's activation cost is shared equally among its users, a pure Nash equilibrium (NE) might not exist. In contrast, the proportional sharing rule induces a game that admits a pure NE, which can also be computed in polynomial time. As part of the algorithm's analysis, we establish a new, nontrivial property of schedules obtained by the longest processing time (LPT) algorithm. We also observe that, unlike in congestion games, best-response dynamics (BRD) are not guaranteed to converge to a Nash equilibrium. Finally, we measure the inefficiency of equilibria with respect to the minimax objective function, and prove that there is no universal bound for the worst-case inefficiency (as quantified by the "price of anarchy" measure). However, the best-case inefficiency (quantified by the "price of stability" measure) is bounded by 5/4, and this is tight. These results add another layer to the growing literature on the price of anarchy and stability, which studies the extent to which selfish behavior affects system efficiency.
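As a reference point, here is the standard LPT rule together with a simplified proportional cost-sharing computation: in this sketch, a job of length p on a machine with load L and activation cost B pays L + (p/L)·B. The cost model is our simplification for illustration, not necessarily the paper's exact definition.

```python
def lpt_schedule(jobs, m):
    """Longest Processing Time first on m identical machines: sort jobs in
    non-increasing order and put each on the currently least-loaded machine.
    The paper proves a new structural property of LPT schedules; this is
    just the standard rule."""
    machines = [[] for _ in range(m)]
    loads = [0.0] * m
    for p in sorted(jobs, reverse=True):
        k = loads.index(min(loads))
        machines[k].append(p)
        loads[k] += p
    return machines, loads

def proportional_costs(machine_jobs, activation_cost):
    """Simplified proportional sharing: a job of length p on a machine with
    load L pays L plus the share (p / L) * activation_cost.
    Assumes distinct job lengths (they serve as dict keys here)."""
    L = sum(machine_jobs)
    return {p: L + (p / L) * activation_cost for p in machine_jobs}

machines, loads = lpt_schedule([7, 5, 4, 3, 2], m=2)
print(loads, [proportional_costs(mj, activation_cost=6) for mj in machines])
```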
Journal of Artificial Intelligence Research | 2009
Michal Feldman; Tami Tamir
A Nash equilibrium (NE) is a strategy profile resilient to unilateral deviations, and is predominantly used in the analysis of multiagent systems. A downside of NE is that it is not necessarily stable against deviations by coalitions. Yet, as we show in this paper, in some cases NE does exhibit stability against coalitional deviations, in that the benefits from a joint deviation are bounded. In this sense, NE approximates strong equilibrium. Coalition formation is a key issue in multiagent systems. We provide a framework for quantifying the stability and the performance of various assignment policies and solution concepts in the face of coalitional deviations. Within this framework we evaluate a given configuration according to three measures: (i) IR_min: the maximal number α such that there exists a coalition in which the minimal improvement ratio among the coalition members is α; (ii) IR_max: the maximal number α such that there exists a coalition in which the maximal improvement ratio among the coalition members is α; and (iii) DR_max: the maximal possible damage ratio of an agent outside the coalition. We analyze these measures in job scheduling games on identical machines. In particular, we provide upper and lower bounds for the above three measures for both NE and the well-known assignment rule Longest Processing Time (LPT). Our results indicate that LPT performs better than a general NE. However, LPT is not the best possible approximation. In particular, we present a polynomial time approximation scheme (PTAS) for the makespan minimization problem which provides a schedule with IR_min of 1 + ε for any given ε > 0. With respect to computational complexity, we show that given an NE on m ≥ 3 identical machines or m ≥ 2 unrelated machines, it is NP-hard to determine whether a given coalition can deviate such that every member decreases its cost.
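A small helper, with hypothetical names, that evaluates one proposed coalitional deviation: a member's improvement ratio is its old cost divided by its new cost, and an outsider's damage ratio is its new cost divided by its old. IR_min, IR_max, and DR_max in the paper are worst-case quantities over all coalitions and deviations; this sketch only checks a single deviation.

```python
def improvement_ratios(cost_before, cost_after, coalition):
    """Improvement ratio of a coalition member = old cost / new cost.
    Returns the minimal and maximal ratios over the coalition members."""
    ratios = [cost_before[a] / cost_after[a] for a in coalition]
    return min(ratios), max(ratios)

def damage_ratio(cost_before, cost_after, outsiders):
    """DR_max-style quantity for agents outside the coalition: the largest
    factor by which a non-member's cost grows due to the deviation."""
    return max(cost_after[a] / cost_before[a] for a in outsiders)

before = {"a": 10.0, "b": 8.0, "c": 6.0}
after = {"a": 5.0, "b": 6.0, "c": 7.0}
print(improvement_ratios(before, after, ["a", "b"]),
      damage_ratio(before, after, ["c"]))
```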