Sébastien Jodogne
University of Liège
Publications
Featured research published by Sébastien Jodogne.
ACM Transactions on Computational Logic | 2005
Sébastien Jodogne; Pierre Wolper
This article considers finite-automata-based algorithms for handling linear arithmetic with both real and integer variables. Previous work has shown that this theory can be dealt with by using finite automata on infinite words, but this involves algorithms that are difficult and delicate to implement. The contribution of this article is to show, using topological arguments, that only a restricted class of automata on infinite words is necessary for handling real and integer linear arithmetic. This allows the use of substantially simpler algorithms, which have been successfully implemented.
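The automata-theoretic idea behind this line of work can be illustrated on a toy case: over LSB-first binary encodings, the solutions of an integer linear constraint form a regular language. The sketch below (illustration only, not the paper's construction, which handles full linear arithmetic over reals and integers on infinite words) runs the classic two-state carry automaton that recognizes bit triples satisfying x + y = z.

```python
# Toy illustration: solutions of x + y = z, read as bit triples
# (least significant bit first), are recognized by a carry automaton
# whose only state is the pending carry bit.

def accepts_sum(x: int, y: int, z: int, width: int = 16) -> bool:
    """Run the carry DFA over the LSB-first bit triples of (x, y, z)."""
    carry = 0  # automaton state: pending carry
    for i in range(width):
        bx, by, bz = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
        total = bx + by + carry
        if total & 1 != bz:
            return False  # digit mismatch: word rejected
        carry = total >> 1
    return carry == 0  # accept only if no carry remains

print(accepts_sum(13, 29, 42))  # True: 13 + 29 == 42
print(accepts_sum(13, 29, 41))  # False
```

The same digit-by-digit view generalizes to arbitrary linear (in)equalities, and to real variables via automata on infinite words, which is the setting the paper simplifies.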
International Joint Conference on Automated Reasoning | 2001
Sébastien Jodogne; Pierre Wolper
This paper considers finite-automata-based algorithms for handling linear arithmetic with both real and integer variables. Previous work has shown that this theory can be dealt with by using finite automata on infinite words, but this involves algorithms that are difficult and delicate to implement. The contribution of this paper is to show, using topological arguments, that only a restricted class of automata on infinite words is necessary for handling real and integer linear arithmetic. This allows the use of substantially simpler algorithms and opens the path to the implementation of a usable system for handling this combined theory.
The International Journal of Robotics Research | 2011
Justus H. Piater; Sébastien Jodogne; Renaud Detry; Dirk Kraft; Norbert Krüger; Oliver Kroemer; Jan Peters
We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.
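The interleaving of learning and state splitting in RLVC can be sketched in a few lines. The class below is a hypothetical simplification, not the published algorithm: perceptual classes are defined by the outcomes of binary feature tests, and when the reward statistics inside a class show high variance (a crude proxy for perceptual aliasing), the class is refined with an additional feature test.

```python
# Toy sketch of RLVC-style adaptive discretization (illustrative
# names and logic): a value learner over perceptual classes that
# splits a class when its reward statistics suggest aliasing.
from collections import defaultdict

class RLVCSketch:
    def __init__(self, n_features, actions, split_threshold=1.0):
        self.actions = actions
        self.tests = []              # feature indices used to discretize
        self.q = defaultdict(float)  # (class_key, action) -> value
        self.returns = defaultdict(list)
        self.n_features = n_features
        self.split_threshold = split_threshold

    def classify(self, percept):
        # percept: tuple of binary features; a class is the tuple of
        # outcomes of the currently selected feature tests.
        return tuple(percept[i] for i in self.tests)

    def update(self, percept, action, reward):
        key = self.classify(percept)
        self.returns[(key, action)].append(reward)
        rs = self.returns[(key, action)]
        mean = sum(rs) / len(rs)
        self.q[(key, action)] += 0.1 * (reward - self.q[(key, action)])
        # Aliasing check: high reward variance suggests the class merges
        # perceptually distinct states -> refine with a new feature test.
        var = sum((r - mean) ** 2 for r in rs) / len(rs)
        if var > self.split_threshold and len(self.tests) < self.n_features:
            self.tests.append(len(self.tests))
            self.returns.clear()  # restart statistics on refined classes
```

In the actual RLVC work, the split decision is made by a supervised classifier over local visual descriptors rather than this variance heuristic.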
IEEE Transactions on Medical Imaging | 2015
Ching-Wei Wang; Cheng-Ta Huang; Meng-Che Hsieh; Chung-Hsing Li; Sheng-Wei Chang; Wei-Cheng Li; Rémy Vandaele; Sébastien Jodogne; Pierre Geurts; Cheng Chen; Guoyan Zheng; Chengwen Chu; Hengameh Mirzaalian; Ghassan Hamarneh; Tomaž Vrtovec; Bulat Ibragimov
Cephalometric analysis is an essential clinical and research tool in orthodontics for analysis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set to explore and compare automatic landmark detection methods in application to cephalometric X-ray images. Methods were evaluated on a common database including cephalograms of 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground-truth data. Quantitative evaluation was performed to compare the results of a representative selection of current methods submitted to the challenge. Experimental results show that three methods are able to achieve detection rates greater than 80% using the 4 mm precision range, but only one method achieves a detection rate greater than 70% using the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
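The detection-rate metric quoted above is simple to state: a landmark counts as detected when its predicted position falls within the chosen precision range of the ground truth. A minimal sketch (assumed form of the metric, with made-up coordinates) follows.

```python
# Detection rate at a given precision range: the fraction of landmarks
# whose predicted position lies within `precision_mm` of ground truth.
import math

def detection_rate(predicted, ground_truth, precision_mm):
    """predicted, ground_truth: lists of (x, y) positions in mm."""
    hits = sum(
        1 for (px, py), (gx, gy) in zip(predicted, ground_truth)
        if math.hypot(px - gx, py - gy) <= precision_mm
    )
    return hits / len(ground_truth)

pred = [(10.0, 10.0), (25.0, 30.0), (41.0, 18.0)]
truth = [(10.5, 10.5), (24.0, 30.0), (48.0, 18.0)]
print(detection_rate(pred, truth, 2.0))  # 2 of 3 landmarks within 2 mm
```

Evaluating the same predictions at the 4 mm range would typically give a higher rate, which is why the challenge reports both thresholds.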
Advanced Concepts for Intelligent Vision Systems | 2006
Olivier Barnich; Sébastien Jodogne; Marc Van Droogenbroeck
We address the topic of real-time analysis and recognition of silhouettes. The method that we propose first produces object features obtained by a new type of morphological operator, which can be seen as an extension of existing granulometric filters, and then inserts them into a tailored classification scheme. Intuitively, given a binary segmented image, our operator produces the set of all the largest rectangles that can be wedged inside any connected component of the image. These segmented images are obtained by a standard background subtraction technique and morphological filtering. To classify connected components into one of the known object categories, the rectangles of a connected component are submitted to a machine learning algorithm called EXtremely RAndomized trees (Extra-trees). The machine learning algorithm is fed with a static database of silhouettes that contains both positive and negative instances. The whole process, including image processing and rectangle classification, is carried out in real-time. Finally, we evaluate our approach on one of today's hot topics: the detection of human silhouettes. We discuss experimental results and show that our method is stable and computationally efficient. We therefore argue that algorithms like ours open new ways to detect humans in video sequences.
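The rectangle operator has a direct, if naive, formulation: enumerate every all-ones axis-aligned rectangle in the binary mask and keep only the maximal ones, i.e. those not contained in any other. The brute-force sketch below is for illustration on tiny masks; the paper's morphological operator computes this far more efficiently.

```python
# Brute-force sketch: all maximal all-ones rectangles in a binary mask.
# A rectangle is (top, left, bottom, right), bounds inclusive.
from itertools import product

def maximal_rectangles(mask):
    h, w = len(mask), len(mask[0])
    rects = []
    for top, left in product(range(h), range(w)):
        for bottom in range(top, h):
            for right in range(left, w):
                if all(mask[r][c]
                       for r in range(top, bottom + 1)
                       for c in range(left, right + 1)):
                    rects.append((top, left, bottom, right))

    def contains(a, b):  # does rectangle a contain rectangle b?
        return (a[0] <= b[0] and a[1] <= b[1]
                and a[2] >= b[2] and a[3] >= b[3])

    # Keep only rectangles not strictly contained in another one.
    return [r for r in rects
            if not any(contains(s, r) for s in rects if s != r)]

mask = [[1, 1, 0],
        [1, 1, 1],
        [0, 1, 1]]
print(maximal_rectangles(mask))
```

For silhouette classification, each connected component's set of maximal rectangles is then turned into a feature vector for the Extra-trees classifier.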
Computer Aided Verification | 2003
Frédéric Herbreteau; Sébastien Jodogne
This paper addresses the problem of computing an exact and effective representation of the set of reachable configurations of a linear hybrid automaton. Our solution is based on accelerating the state-space exploration by computing symbolically the repeated effect of control cycles. The computed sets of configurations are represented by Real Vector Automata (RVA), the expressive power of which is beyond that of the first-order additive theory of reals and integers. This approach makes it possible to compute in finite time sets of configurations that cannot be expressed as finite unions of convex sets. The main technical contributions of the paper consist in a powerful sufficient criterion for checking whether a hybrid transformation (i.e., with both discrete and continuous features) can be accelerated, as well as an algorithm for applying such an accelerated transformation on RVA. Our results have been implemented and successfully applied to several case studies, including the well-known leaking gas burner, and a simple communication protocol with timers.
Use of P2P, Grid and Agents for the Development of Content Networks | 2007
Cyril Briquet; Xavier Dalem; Sébastien Jodogne; Pierre-Arnoul de Marneffe
Scheduling Data-Intensive Bags of Tasks in P2P Grids leads to transfers of large input data files, which cause delays in completion times. We propose to combine several existing technologies and patterns to perform efficient data-aware scheduling: (1) use of the BitTorrent P2P file sharing protocol to transfer data, (2) data caching on computational Resources, (3) use of a data-aware Resource selection scheduling algorithm similar to Storage Affinity, (4) a new Task selection scheduling algorithm (Temporal Tasks Grouping), based on the temporally grouped scheduling of Tasks sharing input data files. Data replication is also discussed. The proposed approach does not need an overlay network or Predictive Communications Ordering, making our operational implementation of a P2P Grid middleware easily deployable in unstructured P2P networks. Experiments show that performance gains are achieved by combining BitTorrent, caching, Storage Affinity and Temporal Tasks Grouping. This work can be summarized as combining P2P Grid computing and P2P data transfer technologies.
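The data-aware resource selection step (item 3 above) can be sketched in the spirit of Storage Affinity: prefer the resource whose cache already holds the most bytes of the task's input files, so less data must be transferred. Names and data structures below are illustrative, not the middleware's API.

```python
# Sketch of Storage Affinity-style resource selection: pick the
# resource that maximizes the number of input bytes already cached.

def select_resource(task_inputs, resource_caches, file_sizes):
    """task_inputs: set of input file names for one Task;
    resource_caches: resource name -> set of cached file names;
    file_sizes: file name -> size in bytes."""
    def affinity(cache):
        return sum(file_sizes[f] for f in task_inputs & cache)
    return max(resource_caches, key=lambda r: affinity(resource_caches[r]))

sizes = {"a.dat": 700, "b.dat": 300, "c.dat": 50}
caches = {"node1": {"a.dat"}, "node2": {"b.dat", "c.dat"}, "node3": set()}
print(select_resource({"a.dat", "b.dat"}, caches, sizes))  # node1 (700 B cached)
```

Temporal Tasks Grouping then complements this by scheduling Tasks that share input files close together in time, so cached data is reused before eviction.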
International Conference on Machine Learning | 2005
Sébastien Jodogne; Justus H. Piater
We introduce flexible algorithms that can automatically learn mappings from images to actions by interacting with their environment. They work by introducing an image classifier in front of a Reinforcement Learning algorithm. The classifier partitions the visual space according to the presence or absence of highly informative local descriptors. The image classifier is incrementally refined by selecting new local descriptors when perceptual aliasing is detected. Thus, we reduce the visual input domain down to a size manageable by Reinforcement Learning, permitting us to learn direct percept-to-action mappings. Experimental results on a continuous visual navigation task illustrate the applicability of the framework.
International Symposium on Robotics | 2011
Justus H. Piater; Sébastien Jodogne; Renaud Detry; Dirk Kraft; Norbert Krüger; Oliver Kroemer; Jan Peters
We describe two quite different methods for associating action parameters to visual percepts. Our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension RLJC also handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a nonparametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.
Journal of Applied Clinical Medical Physics | 2014
Nadia Withofs; Claire Bernard; Catherine Van der Rest; Philippe Martinive; Mathieu Hatt; Sébastien Jodogne; Dimitris Visvikis; John Aldo Lee; Philippe Coucke; Roland Hustinx
PET/CT imaging could improve delineation of rectal carcinoma gross tumor volume (GTV) and reduce interobserver variability. The objective of this work was to compare various functional volume delineation algorithms. We enrolled 31 consecutive patients with locally advanced rectal carcinoma. The FDG PET/CT and the high-dose CT (CTRT) were performed in the radiation treatment position. For each patient, the anatomical GTVRT was delineated based on the CTRT and compared to six different functional/metabolic GTVPET derived from two automatic segmentation approaches (FLAB and a gradient-based method), a relative threshold (45% of the SUVmax) and an absolute threshold (SUV > 2.5), using two different commercially available software packages (Philips EBW4 and Segami OASIS). The spatial sizes and shapes of all volumes were compared using the conformity index (CI). All the delineated metabolic tumor volumes (MTVs) were significantly different. The MTVs were as follows (mean ± SD): GTVRT (40.6 ± 31.28 ml); FLAB (21.36 ± 16.34 ml); the gradient-based method (18.97 ± 16.83 ml); OASIS45% (15.89 ± 12.68 ml); Philips45% (14.52 ± 10.91 ml); OASIS2.5 (41.62 ± 33.26 ml); Philips2.5 (40 ± 31.27 ml). CI between these various volumes ranged from 0.40 to 0.90. The mean CI between the different MTVs and the GTVCT was <0.4. Finally, the DICOM transfer of MTVs led to additional volume variations. In conclusion, we observed large and statistically significant variations in tumor volume delineation according to the segmentation algorithms and the software products. The manipulation of PET/CT images and MTVs, such as the DICOM transfer to the Radiation Oncology Department, induced additional volume variations.
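The conformity index used to compare delineated volumes can be sketched on voxel sets; the Jaccard-style overlap |A ∩ B| / |A ∪ B| is assumed here as the definition, with a synthetic pair of overlapping box-shaped volumes as input.

```python
# Sketch of a conformity index between two delineated volumes,
# represented as sets of voxel coordinates: intersection over union.

def conformity_index(vol_a, vol_b):
    """vol_a, vol_b: sets of (x, y, z) voxel coordinates."""
    inter = len(vol_a & vol_b)
    union = len(vol_a | vol_b)
    return inter / union if union else 1.0

# Two synthetic 4x4x4 box volumes overlapping in a 2x4x4 slab.
a = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
b = {(x, y, z) for x in range(2, 6) for y in range(4) for z in range(4)}
print(conformity_index(a, b))  # 32 shared voxels / 96 total
```

A CI of 1 means the two delineations coincide exactly; the study's mean CI below 0.4 between MTVs and the CT-based GTV indicates substantial disagreement.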