Imen Chakroun
University of Lille
Publications
Featured research published by Imen Chakroun.
Concurrency and Computation: Practice and Experience | 2013
Imen Chakroun; Mohand-Said Mezmaz; Nouredine Melab; Ahcène Bendjoudi
In this paper, we address the design and implementation of graphical processing unit (GPU)-accelerated branch-and-bound (B&B) algorithms for solving flow-shop scheduling optimization problems (FSP). Such applications are CPU-time consuming and highly irregular. On the other hand, GPUs are massively multithreaded accelerators using the single instruction multiple data model at execution. A major issue that arises when executing a B&B applied to FSP on a GPU is thread or branch divergence. Such divergence is caused by the lower bound function of FSP, which contains many irregular loops and conditional instructions. Our challenge is therefore to revisit the design and implementation of B&B applied to FSP to deal with thread divergence. Extensive experiments of the proposed approach have been carried out on well-known FSP benchmarks using an Nvidia Tesla C2050 GPU card (http://www.nvidia.com/docs/IO/43395/NV_DS_Tesla_C2050_C2070_jul10_lores.pdf). Compared with a CPU-based execution, accelerations up to ×77.46 are achieved for large problem instances.
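To make the divergence issue concrete: flow-shop lower bounds repeatedly evaluate recurrences of the form C(m, j) = max(C(m-1, j), C(m, j-1)) + p(m, j), and data-dependent conditionals in such code force the threads of a warp down different paths. The following minimal CUDA sketch, with invented names and a simplified recurrence step, shows the branch-refactoring idea of replacing a conditional update with uniform arithmetic; it is an illustration, not the paper's actual kernel.

```cuda
#include <cuda_runtime.h>

// Divergent version: threads whose predicate differs within a warp
// serialize both branches of the conditional.
__global__ void lb_step_divergent(const int *c_prev, const int *p,
                                  int *c_cur, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (c_prev[i] > c_cur[i])
        c_cur[i] = c_prev[i] + p[i];
    else
        c_cur[i] = c_cur[i] + p[i];
}

// Refactored version: the comparison becomes an arithmetic max, so the
// whole warp executes one uniform instruction stream.
__global__ void lb_step_refactored(const int *c_prev, const int *p,
                                   int *c_cur, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    c_cur[i] = max(c_prev[i], c_cur[i]) + p[i];
}
```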
International Conference on Cluster Computing | 2012
Nouredine Melab; Imen Chakroun; Mohand-Said Mezmaz; Daniel Tuyttens
Branch-and-Bound (B&B) algorithms are time-intensive tree-based exploration methods for solving combinatorial optimization problems to optimality. In this paper, we investigate the use of GPU computing as a major complementary way to speed up these methods. The focus is put on the bounding mechanism of B&B algorithms, which is the most time-consuming part of their exploration process. We propose a parallel B&B algorithm based on a GPU-accelerated bounding model. The proposed approach concentrates on optimizing data access management to further improve the performance of the bounding mechanism, which uses large, intermediate data sets that do not completely fit in GPU memory. Extensive experiments of the contribution have been carried out on well-known FSP benchmarks using an Nvidia Tesla C2050 GPU card. We compared the obtained performance to single-threaded and multithreaded CPU-based executions. Accelerations up to ×100 are achieved for large problem instances.
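As a rough illustration of the pool-based bounding model, the sketch below maps one GPU thread to one off-loaded subproblem. The toy_lower_bound function is a deliberately trivial stand-in (sum of the processing times of unscheduled jobs) for the much more involved flow-shop bound; all identifiers are our assumptions, not the paper's code.

```cuda
#include <cuda_runtime.h>

// Toy stand-in for the flow-shop lower bound: the real bound in the
// paper is far more involved; this only illustrates the pool pattern.
__device__ int toy_lower_bound(const int *proc_times, const int *perm,
                               int depth, int n_jobs) {
    int lb = 0;
    for (int j = depth; j < n_jobs; ++j)   // unscheduled suffix of perm
        lb += proc_times[perm[j]];
    return lb;
}

// One thread = one subproblem of the off-loaded pool. Each subproblem
// is a permutation of n_jobs jobs fixed up to a given depth.
__global__ void bound_pool(const int *proc_times, const int *pool,
                           const int *depths, int *bounds,
                           int pool_size, int n_jobs) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < pool_size)
        bounds[t] = toy_lower_bound(proc_times, &pool[t * n_jobs],
                                    depths[t], n_jobs);
}
```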
Journal of Parallel and Distributed Computing | 2013
Imen Chakroun; Nordine Melab; Mohand-Said Mezmaz; Daniel Tuyttens
In this paper, we revisit the design and implementation of Branch-and-Bound (B&B) algorithms for solving large combinatorial optimization problems on GPU-enhanced multi-core machines. B&B is a tree-based optimization method that uses four operators (selection, branching, bounding and pruning) to build and explore a highly irregular tree representing the solution space. In our previous works, we have proposed a GPU-accelerated approach in which only a single CPU core is used and only the bounding operator is performed on the GPU device. Here, we extend the approach (LL-GB&B) in order to minimize the CPU-GPU communication latency and thread divergence. Such an objective is achieved through a GPU-based fine-grained parallelization of the branching and pruning operators in addition to the bounding one. The second contribution consists in investigating the combination of a GPU with multi-core processing. Two scenarios have been explored, leading to two approaches: a concurrent one (RLL-GB&B) and a cooperative one (PLL-GB&B). In the first one, the exploration process is performed concurrently by the GPU and the CPU cores. In the cooperative approach, the CPU cores prepare and off-load pools of tree nodes to the GPU using data streaming while the GPU performs the exploration. The different approaches have been extensively evaluated on the Flowshop scheduling problem. Compared to a single CPU-based execution, LL-GB&B allows accelerations up to ×160 for large problem instances. Moreover, when combining multi-core and GPU, we observe that RLL-GB&B is not beneficial, while PLL-GB&B enables an improvement of up to 36% compared to LL-GB&B.
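The cooperative scheme lends itself to a classic double-buffering pattern: CPU cores fill one pinned pool buffer while the GPU consumes the other through its own CUDA stream. The sketch below is a hedged reconstruction of that pattern only; the kernel body and fill_pool are placeholders we invented, not the PLL-GB&B code.

```cuda
#include <cuda_runtime.h>

__global__ void explore_pool(const int *pool, int *bounds, int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < n) bounds[t] = pool[t];       // placeholder for bound+prune
}

static void fill_pool(int *h_pool, int max_pool, int *n) {
    *n = max_pool;                        // placeholder: CPU cores would
    for (int i = 0; i < *n; ++i)          // branch nodes and pack them
        h_pool[i] = i;
}

void cooperative_loop(int max_pool, int iters) {
    cudaStream_t s[2];
    int *h_pool[2], *d_pool[2], *d_bounds[2];
    for (int b = 0; b < 2; ++b) {
        cudaStreamCreate(&s[b]);
        cudaMallocHost((void **)&h_pool[b], max_pool * sizeof(int));
        cudaMalloc((void **)&d_pool[b], max_pool * sizeof(int));
        cudaMalloc((void **)&d_bounds[b], max_pool * sizeof(int));
    }
    for (int i = 0; i < iters; ++i) {
        int b = i & 1, n = 0;
        cudaStreamSynchronize(s[b]);      // buffer b is free again
        fill_pool(h_pool[b], max_pool, &n);
        cudaMemcpyAsync(d_pool[b], h_pool[b], n * sizeof(int),
                        cudaMemcpyHostToDevice, s[b]);
        explore_pool<<<(n + 255) / 256, 256, 0, s[b]>>>(
            d_pool[b], d_bounds[b], n);   // overlaps with next fill
    }
    cudaDeviceSynchronize();
}
```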
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
Imen Chakroun; Nouredine Melab
Solving Combinatorial Optimization Problems (COPs) exactly using a Branch-and-Bound (B&B) algorithm requires a huge amount of computational resources. Therefore, we recently investigated designing B&B algorithms on top of graphics processing units (GPUs) using a parallel bounding model. The proposed model parallelizes the evaluation of the lower bounds on pools of sub-problems. The results demonstrated that the size of the evaluated pool has a significant impact on the performance of B&B and that it depends strongly on the problem instance being solved. In this paper, we design an adaptive parallel B&B algorithm for solving permutation-based combinatorial optimization problems such as FSP (Flow-shop Scheduling Problem) on GPU accelerators. To do so, we propose a dynamic heuristic for parameter auto-tuning at runtime. Another challenge of this pioneering work is to exploit larger degrees of parallelism by using the combined computational power of multiple GPU devices. The approach has been applied to the permutation flow-shop problem. Extensive experiments have been carried out on well-known FSP benchmarks using an Nvidia Tesla S1070 Computing System equipped with two Tesla T10 GPUs. Compared to a CPU-based execution, accelerations up to ×105 are achieved for large problem instances.
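One plausible shape for such a runtime auto-tuning heuristic is a simple hill climb over the pool size: keep doubling the off-loaded pool while the measured throughput improves, then stick with the best size found. The sketch below assumes a caller-supplied measure callback and may well differ from the heuristic actually used in the paper; a geometric search like this keeps the tuning overhead down to a handful of probe iterations at the start of the run.

```cuda
// Guessed, simplified pool-size auto-tuner. measure(size) is assumed to
// run one bounding iteration with the given pool size and return the
// achieved throughput in nodes per second.
int tune_pool_size(int min_size, int max_size,
                   float (*measure)(int size)) {
    int best = min_size;
    float best_rate = measure(min_size);
    for (int size = min_size * 2; size <= max_size; size *= 2) {
        float rate = measure(size);
        if (rate <= best_rate)        // throughput stopped improving
            break;
        best_rate = rate;
        best = size;
    }
    return best;
}
```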
Concurrency and Computation: Practice and Experience | 2014
Nouredine Melab; Imen Chakroun; Ahcène Bendjoudi
Branch-and-bound (B&B) algorithms are attractive methods for solving combinatorial optimization problems to optimality using an implicit enumeration of a dynamically built tree-based search space. Nevertheless, they are time-consuming when dealing with large problem instances. Therefore, pruning tree nodes (subproblems) is traditionally used as a powerful mechanism to reduce the size of the explored search space. Pruning requires performing the bounding operation, which consists of applying a lower-bound function to the subproblems generated during the exploration process. Preliminary experiments performed on the Flow-Shop scheduling problem (FSP) have shown that the bounding operation consumes over 98% of the execution time of the B&B algorithm. In this paper, we investigate the use of graphics processing unit (GPU) computing as a major complementary way to speed up the search. We revisit the design and implementation of the parallel bounding model on GPU accelerators. The proposed approach enables data access optimization. Extensive experiments have been carried out on well-known FSP benchmarks using an Nvidia Tesla C2050 GPU card. Compared to a CPU-based single-core execution using an Intel Core i7-970 processor without GPU, speedups higher than ×100 are achieved for large problem instances. At equivalent peak performance, GPU-accelerated B&B is twice as fast as its multi-core counterpart.
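One concrete example of the kind of data-access optimization at stake is keeping the read-only processing-time matrix of the instance in constant memory, which is cached and broadcast across a warp when all threads read the same entry. The sketch below uses that placement with a deliberately trivial bound; the sizes, names, and the bound itself are our assumptions.

```cuda
#include <cuda_runtime.h>

#define MAX_MACHINES 20
#define MAX_JOBS 50

// Read-only instance data: cached, broadcast to all threads of a warp.
__constant__ int d_ptimes[MAX_MACHINES * MAX_JOBS];

__global__ void bound_kernel(const int *pool, int *bounds,
                             int pool_size, int n_jobs, int n_machines) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= pool_size) return;
    const int *perm = &pool[t * n_jobs];
    int lb = 0;
    for (int m = 0; m < n_machines; ++m)  // toy bound: total remaining
        for (int j = 0; j < n_jobs; ++j)  // work over all machines
            lb += d_ptimes[m * n_jobs + perm[j]];
    bounds[t] = lb;
}

// Host side, once per instance before the exploration starts:
//   cudaMemcpyToSymbol(d_ptimes, h_ptimes,
//                      n_machines * n_jobs * sizeof(int));
```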
International Conference on Conceptual Structures | 2013
Imen Chakroun; Nordine Melab
Branch-and-Bound (B&B) algorithms are well-known tree-based exploratory methods for solving NP-hard discrete optimization problems to optimality. The construction of the B&B tree and its exploration are performed using four operators: branching, bounding, selection and pruning. Such algorithms are irregular, which makes their parallel design and implementation on GPU accelerators challenging. Among the few existing related works, we have recently revisited the bounding operator on GPU. The reported results show that speedups up to ×100 can be obtained on recent GPU cards. In this paper, we address the GPU-based design and implementation of B&B algorithms considering the branching and pruning operators as well as the bounding one. The proposed template transforms the unpredictable and irregular workload associated with the explored B&B tree into regular data-parallel kernels optimized for the SIMD-based execution model of GPUs. Thread divergence and uncoalesced memory accesses are considered in the optimization process. The proposed approach has been experimented on the Flow-Shop scheduling problem and compared to another GPU-based strategy and to a cluster-of-workstations (COW)-based approach. The reported results demonstrate the efficiency of the proposed approach over the two other ones. Speedups up to ×160 are obtained for large problem instances using an Nvidia Tesla C2050 hardware configuration.
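The coalescing part of the optimization can be pictured with a structure-of-arrays pool layout, as in the hedged sketch below: consecutive threads then read consecutive bound values, and the pruning operator becomes a regular data-parallel kernel. All type and function names here are illustrative, not the paper's template.

```cuda
#include <cuda_runtime.h>

struct NodeAoS { int depth; int bound; };  // array-of-structs: strided,
                                           // uncoalesced reads on GPU

struct PoolSoA {                           // struct-of-arrays: one array
    int *depth;                            // per field, coalesced reads
    int *bound;
};

__global__ void prune(PoolSoA pool, int best_cost, int *keep, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        // Threads i and i+1 read bound[i] and bound[i+1]: one memory
        // transaction per warp instead of one per thread.
        keep[i] = (pool.bound[i] < best_cost) ? 1 : 0;
}
```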
Parallel Processing and Applied Mathematics | 2011
Imen Chakroun; Ahcène Bendjoudi; Nouredine Melab
In this paper, we propose a pioneering work on designing and programming B&B algorithms on GPU. To the best of our knowledge, no contribution had yet been proposed to address this challenge. We focus on the parallel evaluation of the bounds for the Flow-shop scheduling problem. To deal with the thread divergence caused by the bounding operation, we investigate two software-based approaches called thread data reordering and branch refactoring. Experiments show that the parallel evaluation of bounds speeds up execution by up to 54.5 times compared to a CPU version.
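Thread data reordering can be approximated by a partition pass over the pool before the bounding kernel runs, grouping subproblems likely to take the same branch so each warp is more homogeneous. The Thrust-based sketch below, with an assumed depth-based predicate, reorders only a vector of keys; real code would move whole node records along with them.

```cuda
#include <thrust/device_vector.h>
#include <thrust/partition.h>

// Assumed predicate: the branch in the bound is driven by node depth.
struct DeeperThan {
    int cutoff;
    __host__ __device__ bool operator()(int depth) const {
        return depth >= cutoff;
    }
};

void reorder_pool(thrust::device_vector<int> &node_depths) {
    // Deep nodes are grouped in front, shallow ones behind; the bounding
    // kernel can then be launched over two warp-homogeneous segments.
    thrust::partition(node_depths.begin(), node_depths.end(),
                      DeeperThan{10});
}
```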
Journal of Computer and System Sciences | 2015
Imen Chakroun; Nouredine Melab
In this work, we revisit the design and implementation of the Branch-and-Bound (B&B) algorithm for heterogeneous environments combining multi-core CPUs and GPUs. Among the reported findings: combining multi-core and GPU allows an improvement of up to 36% over a single CPU-GPU execution, and the more GPU devices are used, the better the speedups, whatever the considered problem instance. Highlights: addressing the design and implementation of B&B algorithms for heterogeneous environments; auto-mapping of computations onto the target platform; proposing new patterns for combining multi-core and GPU computing for B&B.
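The multi-device finding corresponds to a straightforward decomposition: split the pool into one slice per GPU and bound each slice on its own device. The sketch below shows that skeleton with a placeholder bound and no error handling; it is our reconstruction, not the paper's implementation.

```cuda
#include <cuda_runtime.h>

__global__ void bound_slice(const int *nodes, int *bounds, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) bounds[i] = nodes[i];      // placeholder bound
}

void bound_on_all_gpus(const int *h_nodes, int *h_bounds, int total) {
    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);
    int per_gpu = (total + n_gpus - 1) / n_gpus;
    for (int g = 0; g < n_gpus; ++g) {
        int off = g * per_gpu;
        int n = (total - off < per_gpu) ? total - off : per_gpu;
        if (n <= 0) break;
        cudaSetDevice(g);                 // subsequent calls target GPU g
        int *d_nodes, *d_bounds;
        cudaMalloc((void **)&d_nodes, n * sizeof(int));
        cudaMalloc((void **)&d_bounds, n * sizeof(int));
        cudaMemcpy(d_nodes, h_nodes + off, n * sizeof(int),
                   cudaMemcpyHostToDevice);
        bound_slice<<<(n + 255) / 256, 256>>>(d_nodes, d_bounds, n);
        cudaMemcpy(h_bounds + off, d_bounds, n * sizeof(int),
                   cudaMemcpyDeviceToHost);
        cudaFree(d_nodes);
        cudaFree(d_bounds);
    }
}
```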
International Conference on Cluster Computing | 2016
Tom Vander Aa; Imen Chakroun; Tom Haber
Matrix factorization is a common machine learning technique for recommender systems. Despite its high prediction accuracy, the Bayesian Probabilistic Matrix Factorization (BPMF) algorithm has not been widely used on large-scale data because of its high computational cost. In this paper we propose a distributed, high-performance parallel implementation of BPMF for shared-memory and distributed architectures. We show that, by using efficient load balancing based on work stealing on a single node and asynchronous communication in the distributed version, we outperform state-of-the-art implementations.
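The single-node load-balancing idea can be sketched, in plain C++ host code, as workers claiming matrix rows from a shared atomic cursor: a worker that finishes early automatically absorbs the work a slower one would otherwise have kept. This simplifies the per-thread work-stealing deques the paper refers to; process_row is an invented stand-in for the per-row BPMF latent-vector update.

```cuda
#include <atomic>
#include <thread>
#include <vector>

static void process_row(int /*row*/) {
    // Placeholder for sampling one row's latent vector in BPMF.
}

void factorize_rows(int n_rows, int n_workers) {
    std::atomic<int> next{0};
    std::vector<std::thread> workers;
    for (int w = 0; w < n_workers; ++w)
        workers.emplace_back([&] {
            // Each worker repeatedly claims the next unprocessed row;
            // fast workers naturally take over rows left by slow ones.
            for (int r = next.fetch_add(1); r < n_rows;
                 r = next.fetch_add(1))
                process_row(r);
        });
    for (auto &t : workers)
        t.join();
}
```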
High Performance Computing and Communications | 2012
Imen Chakroun; Nouredine Melab