Featured Research

Discrete Mathematics

Logical-Combinatorial Approaches in Dynamic Recognition Problems

The research objective is a pattern recognition scenario in which, instead of classifying objects into the classes given by the learning set, the algorithm aims to allocate all objects to a single class, the so-called normal class.

Discrete Mathematics

Longest paths in 2-edge-connected cubic graphs

We prove almost tight bounds on the length of paths in 2-edge-connected cubic graphs. Concretely, we show that (i) every 2-edge-connected cubic graph of size n has a path of length Ω(log²n / log log n), and (ii) there exists a 2-edge-connected cubic graph such that every path in the graph has length O(log²n).
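For intuition about the quantity being bounded, the sketch below brute-forces the length (number of edges) of a longest path in a small 2-edge-connected cubic graph; the 3-cube Q3 is an assumed toy example, not a graph from the paper.

```python
from itertools import product

# Assumed example graph: the 3-cube Q3, which is cubic (every vertex has
# degree 3) and 2-edge-connected.
vertices = list(product([0, 1], repeat=3))
edges = {v: [u for u in vertices if sum(a != b for a, b in zip(v, u)) == 1]
         for v in vertices}

def longest_path_length(adj):
    """Brute-force the maximum number of edges on a simple path."""
    best = 0
    def dfs(v, visited, length):
        nonlocal best
        best = max(best, length)
        for u in adj[v]:
            if u not in visited:
                dfs(u, visited | {u}, length + 1)
    for v in adj:
        dfs(v, {v}, 0)
    return best

print(longest_path_length(edges))  # Q3 has a Hamiltonian path, so this prints 7
```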

Discrete Mathematics

Low-Complexity Tilings of the Plane

A two-dimensional configuration is a coloring of the infinite grid Z^2 with finitely many colors. For a finite subset D of Z^2, the D-patterns of a configuration are the colored patterns of shape D that appear in the configuration. The number of distinct D-patterns of a configuration is a natural measure of its complexity. A configuration is considered to have low complexity with respect to a shape D if the number of distinct D-patterns is at most |D|, the size of the shape. This extended abstract is a short review of an algebraic method for studying the periodicity of such low-complexity configurations.
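To make the complexity measure concrete, here is a minimal Python sketch, with an assumed toy coloring and a finite window standing in for the infinite grid, that counts the distinct D-patterns for a given finite shape D.

```python
# Minimal sketch (assumed toy data): count distinct D-patterns of a
# two-dimensional configuration restricted to a finite window.
config = {(x, y): (x + 2 * y) % 3 for x in range(8) for y in range(8)}  # toy periodic coloring
D = [(0, 0), (1, 0), (0, 1)]  # a finite shape D in Z^2

patterns = set()
for x in range(8):
    for y in range(8):
        cells = [(x + dx, y + dy) for dx, dy in D]
        if all(c in config for c in cells):          # stay inside the window
            patterns.add(tuple(config[c] for c in cells))

# Low complexity with respect to D means len(patterns) <= len(D) = 3.
print(len(patterns), "distinct D-patterns; |D| =", len(D))
```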

Discrete Mathematics

Lower Bound for (Sum) Coloring Problem

The Minimum Sum Coloring Problem is a variant of the Graph Vertex Coloring Problem in which each color has a weight. This paper presents a new way to compute a lower bound for this problem, based on a relaxation into an integer partition problem with additional constraints. We improve the lower bound for 18 graphs from the standard DIMACS benchmark, and prove the optimal value for 4 graphs by reaching their known upper bound.
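As a small illustration of the objective (not of the paper's bounding technique), the sketch below evaluates the weighted sum-coloring cost of a proper coloring of a tiny assumed graph with assumed color weights.

```python
# Minimal sketch with assumed data: evaluate the (weighted) sum coloring
# objective for a proper vertex coloring of a small graph.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # a triangle plus a pendant vertex
coloring = {0: 1, 1: 2, 2: 3, 3: 1}               # color assigned to each vertex
weights = {1: 1, 2: 2, 3: 3}                      # assumed weight per color

assert all(coloring[u] != coloring[v] for u, v in edges), "coloring must be proper"
print("sum coloring cost:", sum(weights[coloring[v]] for v in coloring))  # 1+2+3+1 = 7
```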

Discrete Mathematics

Lower Bounds for Shoreline Searching with 2 or More Robots

Searching for a line on the plane with n unit-speed robots is a classic online problem that dates back to the 1950s, and for which competitive-ratio upper bounds are known for every n ≥ 1. In this work we improve the best lower bound known for n = 2 robots from 1.5993 to 3. Moreover, we prove that the competitive ratio is at least √3 for n = 3 robots, and at least 1/cos(π/n) for n ≥ 4 robots. Our lower bounds match the best upper bounds known for n ≥ 4, hence resolving these cases. To the best of our knowledge, these are the first lower bounds proven for the cases n ≥ 3 of this decades-old problem.
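The closed-form bound 1/cos(π/n) for n ≥ 4 is easy to tabulate; the short sketch below prints the values it gives for a few robot counts, together with the n = 3 value √3 quoted above (a quick numerical check, not part of the paper's proofs).

```python
import math

# Evaluate the lower bounds on the competitive ratio quoted in the abstract:
# sqrt(3) for n = 3 robots and 1/cos(pi/n) for n >= 4 robots.
print("n = 3: ", math.sqrt(3))                       # ~1.7321
for n in range(4, 9):
    print(f"n = {n}: ", 1 / math.cos(math.pi / n))   # ~1.4142 for n = 4, decreasing toward 1
```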

Discrete Mathematics

Lyndon Words, the Three Squares Lemma, and Primitive Squares

We revisit the so-called "Three Squares Lemma" by Crochemore and Rytter [Algorithmica 1995] and, using arguments based on Lyndon words, derive a more general variant which considers three overlapping squares that do not necessarily share a common prefix. We also give an improved upper bound of n log₂ n on the maximum number of (occurrences of) primitively rooted squares in a string of length n, also using arguments based on Lyndon words. To the best of our knowledge, the only previously known upper bound was n log_ϕ n ≈ 1.441 n log₂ n, where ϕ is the golden ratio, reported by Fraenkel and Simpson [TCS 1999] and obtained via the Three Squares Lemma.
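For intuition about the quantity being bounded, here is a hedged brute-force Python sketch that counts occurrences of primitively rooted squares in a short assumed string and compares the count with n log₂ n; it is purely illustrative and not the paper's method.

```python
import math

def is_primitive(w):
    """A word is primitive if it is not a proper power of a shorter word."""
    n = len(w)
    return not any(n % p == 0 and w == w[:p] * (n // p) for p in range(1, n))

def primitive_square_occurrences(s):
    """Count occurrences of squares uu in s whose root u is primitive."""
    n = len(s)
    count = 0
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            u, v = s[i:i + half], s[i + half:i + 2 * half]
            if u == v and is_primitive(u):
                count += 1
    return count

s = "abaababaabaab"   # assumed example string (a Fibonacci-word prefix)
n = len(s)
print(primitive_square_occurrences(s), "occurrences; n log2 n =", round(n * math.log2(n), 2))
```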

Discrete Mathematics

MAX CUT in Weighted Random Intersection Graphs and Discrepancy of Sparse Random Set Systems

Let V be a set of n vertices, M a set of m labels, and let R be an m×n matrix of independent Bernoulli random variables with success probability p. A random instance G(V, E, RᵀR) of the weighted random intersection graph model is constructed by drawing an edge with weight [RᵀR]_{v,u} between any two vertices u, v for which this weight is larger than 0. In this paper we study the average-case analysis of Weighted Max Cut, assuming the input is a weighted random intersection graph, i.e. given G(V, E, RᵀR) we wish to find a partition of V into two sets so that the total weight of the edges having one endpoint in each set is maximized. We first prove concentration of the weight of a maximum cut of G(V, E, RᵀR) around its expected value, and then show that, when the number of labels is much smaller than the number of vertices, a random partition of the vertices achieves asymptotically optimal cut weight with high probability (whp). Furthermore, in the case n = m and constant average degree, we show that whp a majority-type algorithm outputs a cut with weight larger than the weight of a random cut by a multiplicative constant strictly greater than 1. We then highlight a connection between the computational problem of finding a weighted maximum cut in G(V, E, RᵀR) and the problem of finding a 2-coloring with minimum discrepancy for a set system Σ with incidence matrix R. We exploit this connection by proposing a (weak) bipartization algorithm for the case m = n, p = Θ(1)/n whose output, when it terminates, can be used to find a 2-coloring with minimum discrepancy in Σ. Finally, we prove that whp this 2-coloring corresponds to a bipartition with maximum cut weight in G(V, E, RᵀR).
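A minimal Python sketch of the model, with assumed small parameters and NumPy for the matrix algebra: sample the Bernoulli label matrix R, form the edge weights RᵀR, and evaluate the weight of a uniformly random cut for comparison. This only illustrates the input model and the random-cut baseline, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 30, 30, 0.1                  # assumed small parameters with m = n
R = rng.random((m, n)) < p             # m x n matrix of Bernoulli(p) indicators
W = R.astype(int).T @ R.astype(int)    # edge weight between u, v is [R^T R]_{v,u}
np.fill_diagonal(W, 0)                 # ignore self-loops

def cut_weight(W, side):
    """Total weight of edges crossing the bipartition given by the 0/1 vector `side`."""
    cross = np.not_equal.outer(side, side)
    return int((W * cross).sum() // 2)

random_side = rng.integers(0, 2, size=n)
print("weight of a uniformly random cut:", cut_weight(W, random_side))
```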

Discrete Mathematics

MIP and Set Covering approaches for Sparse Approximation

The Sparse Approximation problem asks to find a solution x such that ||y − Hx|| < α, for a given norm ||⋅||, minimizing the size of the support ||x||₀ := #{j : x_j ≠ 0}. We present valid inequalities for Mixed Integer Programming (MIP) formulations of this problem and show that these families are sufficient to describe the set of feasible supports. This leads to a reformulation of the problem as an Integer Programming (IP) model which in turn is a Minimum Set Covering formulation, thus yielding many families of valid inequalities that may be used to strengthen the models. We propose algorithms to solve sparse approximation problems, including a branch & cut for the MIP, a two-stage algorithm to tackle the set covering IP, and a heuristic approach based on Local Branching type constraints. These methods are compared in computational experiments with the goal of testing their practical potential.
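To pin down the objective ||x||₀, here is a hedged Python sketch that enumerates supports of a tiny assumed instance and returns the sparsest x with ||y − Hx||₂ < α. It is brute force for illustration only, not the paper's MIP or set-covering machinery.

```python
from itertools import combinations
import numpy as np

# Tiny assumed instance: find x minimizing ||x||_0 subject to ||y - Hx||_2 < alpha.
rng = np.random.default_rng(1)
H = rng.standard_normal((5, 8))
x_true = np.zeros(8)
x_true[[1, 4]] = [2.0, -1.5]
y = H @ x_true
alpha = 1e-6

def sparsest_solution(H, y, alpha):
    n = H.shape[1]
    for k in range(n + 1):                       # try supports of increasing size
        for support in combinations(range(n), k):
            x = np.zeros(n)
            if k:
                sub, *_ = np.linalg.lstsq(H[:, list(support)], y, rcond=None)
                x[list(support)] = sub
            if np.linalg.norm(y - H @ x) < alpha:
                return x
    return None

x = sparsest_solution(H, y, alpha)
print("support:", np.flatnonzero(x), "size:", np.count_nonzero(x))
```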

Discrete Mathematics

Macroscopic network circulation for planar graphs

The analysis of networks, aimed at suitably defined functionality, often focuses on partitions into subnetworks that capture desired features. Chief among the relevant concepts is the 2-partition, which underlies the classical Cheeger inequality and highlights a constriction (bottleneck) that limits accessibility between the respective parts of the network. In a similar spirit, the purpose of the present work is to introduce a new concept of maximal global circulation and to explore 3-partitions that expose this type of macroscopic feature of networks. Herein, graph circulation is motivated by transportation networks and probabilistic flows (Markov chains) on graphs. Our goal is to quantify the large-scale imbalance of network flows and to delineate the key parts that mediate such global features. While we introduce these notions in a general setting, in this paper we work out only the case of planar graphs. We explain that a scalar potential can be identified that encapsulates the concept of circulation, much as in the case of the curl of planar vector fields. Beyond planar graphs, in the general case, determining the global circulation remains at present a combinatorial problem.

Discrete Mathematics

Makespan Minimization with OR-Precedence Constraints

We consider a variant of the NP-hard problem of assigning jobs to machines to minimize the completion time of the last job. Usually, precedence constraints are given by a partial order on the set of jobs, and each job requires all of its predecessors to be completed before it can start. In his seminal paper, Graham (1966) presented a simple 2-approximation algorithm, and, more than 40 years later, Svensson (2010) proved that 2 is essentially the best approximation ratio one can hope for in general. In this paper, we consider a different type of precedence relation that has not been discussed as extensively, called OR-precedence: for a job to start, we require that at least one of its predecessors is completed, in contrast to all of its predecessors. Additionally, we assume that each job has a release date before which it must not start. We prove that Graham's algorithm has an approximation guarantee of 2 in this setting as well, and present a polynomial-time algorithm that solves the problem to optimality if preemptions are allowed. The latter result is in contrast to classical precedence constraints, for which Ullman (1975) showed that the preemptive variant is already NP-hard. Our algorithm generalizes a result of Johannes (2005), who gave a polynomial-time algorithm for unit processing time jobs subject to OR-precedence constraints, but without release dates. The performance guarantees presented here match the best-known ones for special cases where classical precedence constraints and OR-precedence constraints coincide.
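The sketch below is a minimal, hedged Python rendering of Graham-style list scheduling adapted to OR-precedence and release dates: a job becomes available once it is released and at least one predecessor has finished (or it has no predecessors). The instance data are assumed, and the code is an illustration of the setting rather than the paper's exact algorithm or analysis.

```python
# Hedged sketch: greedy list scheduling on identical machines under
# OR-precedence constraints and release dates (assumed toy instance).
def list_schedule(num_machines, jobs):
    """jobs: {job: (processing_time, release_date, set_of_OR_predecessors)}."""
    machines = [0] * num_machines          # time each machine becomes free
    completion, schedule = {}, {}
    remaining = dict(jobs)
    while remaining:
        done = set(completion)
        ready = [j for j, (_, r, preds) in remaining.items()
                 if not preds or preds & done]
        assert ready, "no schedulable job (check the OR-precedence structure)"
        for j in sorted(ready):            # place each ready job on the machine freeing up first
            p, r, preds = remaining.pop(j)
            earliest = min(completion[q] for q in preds & done) if preds else 0
            i = min(range(num_machines), key=lambda k: machines[k])
            start = max(machines[i], r, earliest)
            machines[i] = start + p
            completion[j] = start + p
            schedule[j] = (i, start)
    return schedule, max(completion.values())

jobs = {            # assumed data: (processing time, release date, OR-predecessors)
    "a": (3, 0, set()), "b": (2, 0, set()),
    "c": (2, 1, {"a", "b"}),               # c may start once a OR b is done
    "d": (4, 0, {"c"}),
}
print(list_schedule(2, jobs))
```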

