Vincent Vidal
Centre national de la recherche scientifique
Publications
Featured research published by Vincent Vidal.
Artificial Intelligence | 2006
Vincent Vidal; Hector Geffner
A key feature of modern optimal planners such as Graphplan and Blackbox is their ability to prune large parts of the search space. Previous Partial Order Causal Link (POCL) planners provide an alternative branching scheme but, lacking comparable pruning mechanisms, do not perform as well. In this paper, a domain-independent formulation of temporal planning based on Constraint Programming is introduced that successfully combines a POCL branching scheme with powerful and sound pruning rules. The key novelty in the formulation is the ability to reason about supports, precedences, and causal links involving actions that are not in the plan. Experiments over a wide range of benchmarks show that the resulting optimal temporal planner is much faster than current ones and is competitive with the best parallel planners in the special case in which all actions have the same duration.
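One ingredient of such pruning, reasoning over precedence constraints between actions, can be given a minimal flavor in code. The sketch below (a hypothetical helper, assuming strictly positive durations, and far simpler than CPT's actual rules) propagates earliest start times over precedences; if propagation never stabilizes, the precedence constraints are cyclic and the partial plan can be pruned:

```python
def earliest_starts(durations, precedences):
    """Propagate earliest start times over precedence constraints,
    where (a, b) means action a must finish before b starts.
    Returns a dict of earliest start times, or None if the
    constraints are cyclic (so the partial plan can be pruned).
    Assumes strictly positive durations."""
    est = {a: 0 for a in durations}
    for _ in range(len(durations)):
        changed = False
        for a, b in precedences:
            bound = est[a] + durations[a]
            if bound > est[b]:
                est[b] = bound
                changed = True
        if not changed:
            return est
    # Still changing after |actions| passes: a cycle exists.
    return None
```

An acyclic chain converges to a schedule, while closing the chain into a cycle is detected as inconsistent.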
Artificial Intelligence | 2009
Christophe Lecoutre; Lakhdar Sais; Sébastien Tabary; Vincent Vidal
Constraint programming is a popular paradigm for combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e., repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, usually classified into look-ahead (constraint propagation and search heuristics) and look-back (intelligent backtracking and learning) approaches. In this paper, we present an original look-ahead approach that guides backtrack search toward the sources of conflicts and, as a side effect, obtains behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected in priority, as long as the constraint network cannot be made consistent. This allows us to find, following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variations of backtracking algorithms and represents an original mechanism for reducing thrashing. Moreover, we show that this approach can be generalized so as to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
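The last-conflict principle lends itself to a compact illustration. The toy binary-CSP backtracker below is a sketch under simplifying assumptions (extensional constraints, static fallback ordering), not the authors' solver; it reselects the variable of the most recent dead end until that variable can again be assigned consistently:

```python
def solve(variables, domains, constraints):
    """Backtracking search with a last-conflict variable-ordering
    heuristic: after a dead end, the variable that just failed is
    selected first until it can be assigned consistently again.
    constraints maps ordered pairs (x, y) to sets of allowed
    value pairs. Toy illustration only."""
    last_conflict = [None]  # variable that caused the latest dead end

    def consistent(assignment, var, val):
        for (x, y), allowed in constraints.items():
            if x == var and y in assignment and (val, assignment[y]) not in allowed:
                return False
            if y == var and x in assignment and (assignment[x], val) not in allowed:
                return False
        return True

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)
        unassigned = [v for v in variables if v not in assignment]
        # Last-conflict reasoning: prefer the most recent culprit variable.
        var = last_conflict[0] if last_conflict[0] in unassigned else unassigned[0]
        for val in domains[var]:
            if consistent(assignment, var, val):
                assignment[var] = val
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]
        last_conflict[0] = var  # record the culprit for the next selection
        return None

    return backtrack({})
```

On an ordering chain x < y < z over {1, 2, 3}, the solver returns the unique solution x=1, y=2, z=3.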
Artificial Intelligence | 2001
Michel Cayrol; Pierre Régnier; Vincent Vidal
Planners of the Graphplan family (Graphplan, IPP, STAN, …) are currently considered to be the most efficient ones on numerous planning domains. Their partially ordered plans can be represented as sequences of sets of actions. The sets of actions generated by Graphplan satisfy a strong independence property which allows one to manipulate each set as a whole. We present a detailed formal analysis demonstrating that the independence criterion can be partially relaxed while still producing valid plans in the sense of Graphplan. Indeed, two actions at the same level of the planning graph need not be marked as mutually exclusive if there exists a possible ordering between them that respects a criterion of “authorization”, less constrained than the criterion of independence. The ordering between the actions can be set up after the plan has been generated, and the extraction of the solution plan needs an extra checking process that guarantees that an ordering can be found for actions considered simultaneously at each level of the planning graph. This study led us to implement a modified Graphplan, LCGP (for “Least Committed GraphPlan”), which is still sound and complete and generally produces plans with fewer levels than those of Graphplan (the same number in the worst case). We present an experimental study demonstrating that, in classical planning domains, LCGP solves more problems than planners of the Graphplan family (Graphplan, IPP, STAN, …). In most cases, LCGP also outperforms the other planners.
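The contrast between independence and a weaker ordering criterion can be sketched in STRIPS terms. The definitions below are illustrative simplifications; in particular, `authorized` here only checks that the first action deletes no precondition of the second, which is not necessarily LCGP's exact authorization criterion:

```python
# Actions are dicts with "pre", "add", "del" sets of propositions.

def independent(a, b):
    """Graphplan independence: neither action deletes the
    preconditions or add effects of the other (order-insensitive)."""
    return (a["del"].isdisjoint(b["pre"] | b["add"])
            and b["del"].isdisjoint(a["pre"] | a["add"]))

def authorized(a, b):
    """Simplified authorization: a may precede b if a deletes
    nothing that b requires (weaker than independence)."""
    return a["del"].isdisjoint(b["pre"])

def compatible(a, b):
    """Two actions can share a level if some ordering between
    them is authorized, even when they are not independent."""
    return authorized(a, b) or authorized(b, a)
```

For example, if `a` consumes proposition p that `b` also needs, the pair is not independent, yet ordering `b` before `a` is authorized, so the two could still share a level under the relaxed criterion.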
Principles and Practice of Constraint Programming | 2005
Vincent Vidal; Hector Geffner
Many benchmark domains in AI planning including Blocks, Logistics, Gripper, Satellite, and others lack the interactions that characterize puzzles and can be solved non-optimally in low polynomial time. They are indeed easy problems for people, although as with many other problems in AI, not always easy for machines. In this paper, we address the question of whether simple problems such as these can be solved in a simple way, i.e., without search, by means of a domain-independent planner. We address this question empirically by extending the constraint-based planner CPT with additional domain-independent inference mechanisms. We show then for the first time that these and several other benchmark domains can be solved with no backtracks while performing only polynomial node operations. This is a remarkable finding in our view that suggests that the classes of problems that are solvable without search may be actually much broader than the classes that have been identified so far by work in Tractable Planning.
Journal of Graphics Tools | 2008
Vincent Vidal; Xing Mei; Philippe Decaudin
Interactive volume rendering methods such as texture-based slicing techniques and ray casting have been well developed in recent years. The rendering performance is generally restricted by the volume size, the fill-rate, and the texture fetch speed of the graphics hardware. For most 3D data sets, a fraction of the volume is empty, which will reduce the rendering performance without specific optimization. In this paper, we present a simple kd-tree-based space partitioning scheme to efficiently remove the empty spaces from the volume data sets at the preprocessing stage. The splitting rule of the scheme is based on a simple yet effective cost function evaluated through a fast approximation of the bounding volume of the nonempty regions. The scheme culls a large number of empty voxels and encloses the remaining data with a small number of axis-aligned bounding boxes, which are then used for interactive rendering. The number of boxes is controlled by halting criteria. In addition to its simplicity, our scheme requires little preprocessing time and improves the rendering performance significantly.
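A much-simplified version of the partitioning scheme can be sketched over a sparse voxel set. Everything below is illustrative: the paper's cost function is reduced here to "does splitting shrink the total bounding volume?", and the halting criterion to a simple box budget:

```python
def tight_bbox(occ, box):
    """Shrink box to the axis-aligned bounding box of nonempty voxels
    inside it. occ is a set of (x, y, z) nonempty-voxel coordinates;
    box is ((x0, y0, z0), (x1, y1, z1)) with exclusive upper bounds.
    Returns None if the region is empty."""
    (x0, y0, z0), (x1, y1, z1) = box
    pts = [p for p in occ if x0 <= p[0] < x1 and y0 <= p[1] < y1 and z0 <= p[2] < z1]
    if not pts:
        return None
    lo = tuple(min(p[i] for p in pts) for i in range(3))
    hi = tuple(max(p[i] for p in pts) + 1 for i in range(3))
    return (lo, hi)

def volume(box):
    lo, hi = box
    v = 1
    for i in range(3):
        v *= hi[i] - lo[i]
    return v

def kd_partition(occ, box, max_boxes=8, min_gain=1):
    """Recursively split at the midpoint of the longest axis, keeping
    only tight bounding boxes of nonempty regions; stop when a split
    no longer shrinks the total volume or the box budget is reached."""
    tight = tight_bbox(occ, box)
    if tight is None:
        return []
    if max_boxes <= 1:
        return [tight]
    lo, hi = tight
    axis = max(range(3), key=lambda i: hi[i] - lo[i])
    if hi[axis] - lo[axis] < 2:
        return [tight]
    mid = (lo[axis] + hi[axis]) // 2
    left_hi = list(hi); left_hi[axis] = mid
    right_lo = list(lo); right_lo[axis] = mid
    left = kd_partition(occ, (lo, tuple(left_hi)), max_boxes // 2, min_gain)
    right = kd_partition(occ, (tuple(right_lo), hi), max_boxes - max_boxes // 2, min_gain)
    children = left + right
    if sum(volume(b) for b in children) + min_gain <= volume(tight):
        return children  # the split culled enough empty space to pay off
    return [tight]
```

Two occupied voxels at opposite ends of a mostly empty row, for instance, end up in two unit boxes instead of one box spanning the whole row.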
European Conference on Evolutionary Computation in Combinatorial Optimization | 2010
Jacques Bibai; Pierre Savéant; Marc Schoenauer; Vincent Vidal
Divide-and-Evolve (DaE) is an original “memeticization” of Evolutionary Computation and Artificial Intelligence Planning. DaE optimizes either the number of actions, the total cost of actions, or the total makespan, by generating ordered sequences of intermediate goals via artificial evolution and calling an external planner to solve each subproblem in turn. DaE can theoretically use any embedded planner; however, since the introduction of the approach, only one had been used: the optimal temporal planner CPT. In this paper, we propose a new version of DaE that uses time-based Atom Choice and embeds the sub-optimal planner YAHSP, in order to test the robustness of the approach and to evaluate the impact of using a sub-optimal planner rather than an optimal one, depending on the type of planning problem.
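The divide-and-evolve loop can be caricatured on a one-dimensional toy problem, with a stub standing in for the embedded planner. Everything below is illustrative: the real DaE evolves sequences of partial states over planning problems and calls CPT or YAHSP, not a cost function on integers:

```python
import random

def stub_planner(start, goal):
    """Stand-in for the embedded planner: a subproblem cost that is
    quadratic in the distance, so splitting the problem into
    intermediate goals genuinely pays off."""
    return (goal - start) ** 2

def fitness(sequence, start, goal):
    """Total cost of solving each consecutive subproblem in turn."""
    total, current = 0, start
    for g in list(sequence) + [goal]:
        total += stub_planner(current, g)
        current = g
    return total

def divide_and_evolve(start, goal, n_goals=3, generations=200, seed=0):
    """Minimal (1+1)-style evolution over ordered sequences of
    intermediate goals; a drastic simplification of DaE."""
    rng = random.Random(seed)
    best = sorted(rng.randint(start, goal) for _ in range(n_goals))
    for _ in range(generations):
        # Mutate each intermediate goal by -1/0/+1, clamped and re-sorted.
        child = sorted(max(start, min(goal, g + rng.choice([-1, 0, 1])))
                       for g in best)
        if fitness(child, start, goal) <= fitness(best, start, goal):
            best = child
    return best, fitness(best, start, goal)
```

With start 0 and goal 8, solving in one shot costs 64, while any sorted triple of intermediate goals costs between 16 (even splits) and 64, and the loop only ever improves on its starting point.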
The Visual Computer | 2012
Vincent Vidal; Christian Wolf; Florent Dupont
A new mesh optimization framework for 3D triangular surface meshes is presented, which formulates the task as an energy minimization problem in the same spirit as Hoppe et al. (SIGGRAPH ’93: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, 1993). The desired mesh properties are controlled through a global energy function including data-attached terms measuring fidelity to the original mesh, shape potentials favoring high-quality triangles, and connectivity as well as budget terms controlling the sampling density. The optimization algorithm modifies mesh connectivity as well as vertex positions. Solutions for the vertex repositioning step are obtained by a discrete graph-cut algorithm examining global combinations of local candidates. Results on various 3D meshes compare favorably to recent state-of-the-art algorithms. Applications include optimizing and simplifying triangular meshes while maintaining high mesh quality. Targeted areas are improving the accuracy of numerical simulations, the convergence of numerical schemes, mesh rendering (normal-field smoothness), and geometric prediction in mesh compression techniques.
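As an illustration of the kinds of terms such an energy can contain, the sketch below combines a fidelity term with one common triangle-quality measure (4·√3·area divided by the sum of squared edge lengths, which is 1 for an equilateral triangle); this is not necessarily the paper's exact shape potential, and the weights are hypothetical:

```python
import math

def triangle_quality(p0, p1, p2):
    """Common triangle-quality measure: 4*sqrt(3)*area / sum of squared
    edge lengths; 1.0 for an equilateral triangle, 0.0 when degenerate."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Area from the cross product of two edge vectors.
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    area = 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
    denom = d2(p0, p1) + d2(p1, p2) + d2(p2, p0)
    return 0.0 if denom == 0 else 4 * math.sqrt(3) * area / denom

def vertex_energy(pos, orig, triangles, alpha=1.0, beta=1.0):
    """Toy per-vertex energy: squared distance to the original position
    (fidelity) plus a penalty for low-quality incident triangles.
    triangles lists the other two vertices of each incident triangle;
    alpha and beta are hypothetical weights."""
    fidelity = sum((p - o) ** 2 for p, o in zip(pos, orig))
    shape = sum(1.0 - triangle_quality(pos, a, b) for a, b in triangles)
    return alpha * fidelity + beta * shape
```

A graph-cut repositioning step would then pick, per vertex, the candidate position minimizing such an energy jointly with its neighbors.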
Tests and Proofs | 2017
Jinjiang Guo; Vincent Vidal; Irene Cheng; Anup Basu; Atilla Baskurt; Guillaume Lavoué
Objective visual quality assessment of 3D models is a fundamental issue in computer graphics. Quality assessment metrics allow a wide range of processes to be guided and evaluated, such as level-of-detail creation, compression, filtering, and so on. Most computer graphics assets are composed of geometric surfaces on which several texture images can be mapped to make the rendering more realistic. While some quality assessment metrics exist for geometric surfaces, almost no research has been conducted on the evaluation of texture-mapped 3D models. In this context, we present a new subjective study to evaluate the perceptual quality of textured meshes, based on a paired comparison protocol. We introduce both texture and geometry distortions on a set of 5 reference models to produce a database of 136 distorted models, evaluated using two rendering protocols. Based on an analysis of the results, we propose two new metrics for visual quality assessment of textured meshes, as optimized linear combinations of accurate geometry and texture quality measurements. These proposed perceptual metrics outperform their counterparts in terms of correlation with human opinion. The database, along with the associated subjective scores, will be made publicly available online.
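Fitting a linear combination of geometry and texture measurements to subjective scores takes only a few lines. The sketch below uses ordinary least squares on the 2×2 normal equations as a stand-in; the paper optimizes the combination for correlation with human opinion, which is a different objective:

```python
def fit_combined_metric(geom, tex, mos):
    """Fit weights (w_g, w_t) of a linear metric w_g*geom + w_t*tex to
    subjective scores mos by ordinary least squares, solving the 2x2
    normal equations directly. Illustrative stand-in only."""
    a11 = sum(g * g for g in geom)
    a12 = sum(g * t for g, t in zip(geom, tex))
    a22 = sum(t * t for t in tex)
    b1 = sum(g * s for g, s in zip(geom, mos))
    b2 = sum(t * s for t, s in zip(tex, mos))
    det = a11 * a22 - a12 * a12  # assumes geom and tex are not collinear
    wg = (b1 * a22 - b2 * a12) / det
    wt = (a11 * b2 - a12 * b1) / det
    return wg, wt
```

On scores generated exactly as 2·geom + 3·tex, the fit recovers the weights (2, 3).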
International Conference on Evolutionary Multi-Criterion Optimization | 2013
Mostepha Redouane Khouadjia; Marc Schoenauer; Vincent Vidal; Johann Dréo; Pierre Savéant
All standard Artificial Intelligence (AI) planners to date can only handle a single objective, and the only way for them to take multiple objectives into account is by aggregating the objectives. Furthermore, and in deep contrast with the single-objective case, there exist no benchmark problems on which to test algorithms for multi-objective planning.
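The aggregation route, and its well-known blind spot, are easy to demonstrate: a Pareto-optimal plan lying in a non-convex region of the front can never win under any weighted sum. The code below is illustrative, not from the paper:

```python
def aggregate(objectives, weights):
    """Scalarize multiple objectives (e.g. makespan and cost) into a
    single value via a weighted sum, the only route available to a
    single-objective planner."""
    return sum(w * o for w, o in zip(weights, objectives))

def best_plan(plans, weights):
    """The plan a single-objective planner would prefer under the
    given weighting."""
    return min(plans, key=lambda p: aggregate(p, weights))
```

With candidate plans (1, 9), (9, 1), and (5, 5), the third is Pareto-optimal, yet no weighting (w, 1-w) ever selects it: beating (1, 9) requires w below 0.5 and beating (9, 1) requires w above 0.5.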
International Conference on Computer Graphics and Interactive Techniques | 2015
Jinjiang Guo; Vincent Vidal; Atilla Baskurt; Guillaume Lavoué
Several perceptually based quality metrics have been introduced to predict the global impact of geometric artifacts on the visual appearance of a 3D model. They usually produce a single score that reflects the global level of annoyance caused by the distortions. However, besides this global information, it is also important in many applications to obtain information about the local visibility of the artifacts (i.e., a localized distortion measure). In this work, we present a psychophysical experiment in which observers are asked to mark areas of 3D meshes that contain noticeable distortions. The collected per-vertex distortion maps are first used to illustrate several perceptual mechanisms of the human visual system. They then serve as ground truth to evaluate the performance of well-known geometric attributes and metrics for predicting the visibility of artifacts. Results show that curvature-based attributes demonstrate excellent performance. As expected, the Hausdorff distance is a poor predictor of perceived local distortion, while recent perceptually based metrics provide the best results.
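Scoring how well an attribute predicts such ground-truth maps typically relies on rank correlation. A self-contained Spearman implementation with tie handling is sketched below; the study's exact evaluation protocol may differ:

```python
def spearman(x, y):
    """Spearman rank correlation between two equal-length sequences,
    with ties assigned their average 1-based rank. A standard way to
    score a predictor against ground-truth distortion visibility."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average rank over the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A perfectly monotone predictor scores 1, a perfectly inverted one scores -1, regardless of the magnitudes involved.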