Jacques Verly
University of Liège
Publications
Featured research published by Jacques Verly.
Advanced Video and Signal Based Surveillance | 2005
Pierre F. Gabriel; Jean-Bernard Hayet; Justus H. Piater; Jacques Verly
This paper presents a new approach for tracking objects in complex situations such as people in a crowd or players on a soccer field. Each object in the image is represented by several interest points (IPs). These IPs are obtained using a color version of the Harris IP detector. Each IP is characterized by the local appearance (chromatic first-order local jet) of the object and by geometric parameters. We track objects by matching IPs from image to image based on the Mahalanobis distance. The approach is robust to occlusion. Performance is illustrated by some examples.
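The frame-to-frame matching step can be sketched as follows. This is a minimal illustration under our own assumptions (NumPy feature vectors and a known descriptor covariance); the function names are hypothetical and not from the paper:

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between two interest-point feature vectors."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

def match_points(prev_feats, curr_feats, cov, max_dist=3.0):
    """Greedily match interest points between consecutive frames:
    each previous-frame IP takes the nearest unclaimed current-frame IP,
    provided the Mahalanobis distance is below a gating threshold."""
    cov_inv = np.linalg.inv(cov)
    matches, used = [], set()
    for i, f in enumerate(prev_feats):
        cands = [(mahalanobis(f, g, cov_inv), j)
                 for j, g in enumerate(curr_feats) if j not in used]
        if not cands:
            break
        d, j = min(cands)
        if d <= max_dist:
            matches.append((i, j))
            used.add(j)
    return matches
```

A full tracker would combine appearance (local-jet) and geometric components in the feature vector; the gating threshold is what lets the matcher tolerate partial occlusion.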
IEEE Transactions on Image Processing | 1993
Jacques Verly; Richard L. Delanoy
The application of adaptive (i.e., data-dependent) mathematical morphology techniques to range imagery, i.e., the use of structuring elements (SEs) that automatically adjust to the gray-scale values in a range image in order to deal with features of known physical sizes, is discussed. The technique is applicable to any type of image for which the distance to a scene element is available for each pixel.
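The core idea — a structuring element that shrinks with range so it always spans the same physical extent — can be sketched as follows (a simplified, brute-force illustration with an assumed pinhole-style scaling; the parameter names are ours, not the paper's):

```python
import numpy as np

def adaptive_erode(range_img, physical_size_m, px_per_m_at_1m):
    """Grey-scale erosion with a square SE whose half-width (in pixels)
    scales inversely with the range at each pixel, so the SE always
    covers roughly the same physical size in the scene."""
    h, w = range_img.shape
    out = np.empty_like(range_img)
    for y in range(h):
        for x in range(w):
            r = range_img[y, x]
            # apparent half-width in pixels of a physical_size_m object at range r
            k = max(1, int(round(physical_size_m * px_per_m_at_1m / r)))
            y0, y1 = max(0, y - k), min(h, y + k + 1)
            x0, x1 = max(0, x - k), min(w, x + k + 1)
            out[y, x] = range_img[y0:y1, x0:x1].min()
    return out
```

The same per-pixel SE sizing applies to dilation (take the window maximum) and hence to openings and closings built from the pair.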
IEEE Radar Conference | 2003
Fabian D. Lapierre; Jacques Verly; M. Van Droogenbroeck
We address the problem of detecting slow-moving targets using a space-time adaptive processing (STAP) radar. The construction of the optimum weights at each range requires the estimation of the clutter covariance matrix, which is typically done by straight averaging of neighboring data snapshots. However, in bistatic configurations, these snapshots are range-dependent, so straight averaging results in poor performance. After reviewing existing compensation methods, we present an approach that exploits the precise shape of the bistatic direction-Doppler curves.
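Straight averaging of K snapshots amounts to the sample covariance estimate below (a schematic NumPy sketch under our own notation; in a bistatic geometry the snapshots are drawn from range-dependent distributions, which is exactly why this estimate degrades without compensation):

```python
import numpy as np

def sample_covariance(snapshots):
    """Straight-averaged clutter covariance estimate
    R_hat = (1/K) * sum_k x_k x_k^H, with the snapshots x_k as columns."""
    X = np.asarray(snapshots)   # shape (N, K): N space-time channels, K range gates
    return (X @ X.conj().T) / X.shape[1]
```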
Proceedings of the IEEE | 1996
Jacques Verly; Richard L. Delanoy
We describe an experimental, model-based automatic target recognition (ATR) system, called XTRS, for recognizing tactical vehicles in real or synthetic range and intensity images produced by a forward-looking CO2 laser radar (LADAR) carried either on a ground vehicle or on an airborne platform. Various aspects of the system's operation are illustrated through a variety of examples. Generic techniques are highlighted whenever possible. A first such technique is the use of feature-indicating interest images to focus attention on specific areas of the input imagery. A second is the use of an application-independent matching engine for matching features extracted from the imagery against an application-dependent appearance-model hierarchy that represents the objects to be recognized. A third is the system's architecture and its control mechanism. Following the description of XTRS, we discuss its recognition performance on real data collected with the ground-based version of the LADAR sensor. We then provide a detailed account of its performance on synthetic datasets created to test the limits of system performance. Finally, we briefly discuss the use of XTRS in conjunction with the airborne version of the sensor. Overall, more than 1500 range and intensity image pairs were used throughout the development of XTRS.
Medical Engineering & Physics | 2015
Mohamed Boutaayamou; Cédric Schwartz; Julien Stamatakis; Vincent Denoël; Didier Maquet; Bénédicte Forthomme; Jean-Louis Croisier; Benoît Macq; Jacques Verly; Gaëtan Garraux; Olivier Bruls
An original signal processing algorithm is presented to automatically extract, on a stride-by-stride basis, four consecutive fundamental events of walking, heel strike (HS), toe strike (TS), heel-off (HO), and toe-off (TO), from wireless accelerometers applied to the right and left foot. First, the signals recorded from heel and toe three-axis accelerometers are segmented, providing heel and toe flat phases. Then, the four gait events are defined from these flat phases. The accelerometer-based event identification was validated in seven healthy volunteers, over a total of 247 trials, against reference data provided by a force plate, a kinematic 3D analysis system, and a video camera. HS, TS, HO, and TO were detected with a temporal accuracy ± precision of 1.3 ms ± 7.2 ms, -4.2 ms ± 10.9 ms, -3.7 ms ± 14.5 ms, and -1.8 ms ± 11.8 ms, respectively, with the associated 95% confidence intervals ranging from -6.3 ms to 2.2 ms. It is concluded that the developed accelerometer-based method can accurately and precisely detect HS, TS, HO, and TO, and could thus be used for the ambulatory monitoring of gait features computed from these events when measured concurrently in both feet.
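The flat-phase segmentation at the core of the algorithm can be illustrated schematically. This is our own simplified sketch, not the authors' implementation (a real detector works on calibrated three-axis signals with tuned thresholds):

```python
import numpy as np

def flat_phases(acc_mag, g=9.81, tol=0.5, min_len=5):
    """Return (start, end) index pairs of segments where the accelerometer
    magnitude stays near gravity, i.e. the foot segment is roughly motionless
    (a 'flat phase'). Segments shorter than min_len samples are discarded."""
    still = np.abs(np.asarray(acc_mag) - g) < tol
    phases, start = [], None
    for i, s in enumerate(still):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= min_len:
                phases.append((start, i))
            start = None
    if start is not None and len(still) - start >= min_len:
        phases.append((start, len(still)))
    return phases
```

Heel strike and heel-off would then be read from the boundaries of the heel flat phases, and toe strike and toe-off from those of the toe flat phases.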
SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing | 1994
Dan E. Dudgeon; Richard T. Lacoss; Carol H. Lazott; Jacques Verly
The signature of a target imaged by a millimeter-wave SAR is highly variable. Various viewing angles will cause different scattering centers to be illuminated, the returns from which can vary greatly with minor changes in viewing angle, and the coherence of the radiation induces speckle noise. Using fully polarimetric turntable (inverse SAR) data, we have undertaken some basic investigations of the persistence of scatterers as a function of azimuth for a number of depression angles from 15 degrees to 32 degrees. Although many scatterers persist for only a few degrees of azimuth, enough persist for 10 to 20 degrees to make model-based recognition feasible. Based on these results, we have developed an experimental system for target recognition. The system uses the functional template approach for detection, pose estimation, and initial hypothesis ranking. The best-matching template defines an area where so-called bright-points are extracted, resulting in a binary feature map that shows the location of strong scatterers. Back-end recognition consists of matching these feature maps to target appearance models that capture the location of scatterers that produce strong returns and are sufficiently persistent with changes in viewing angle. The performance of the hypothesis generation via functional templates is briefly reviewed, both for ISAR data and for SAR data. Recognition results obtained with the new back-end recognition system are also presented for the case of ISAR data.
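Bright-point extraction can be sketched as a simple intensity-percentile threshold (an illustrative simplification of the idea; in the paper the extraction region is defined by the best-matching functional template):

```python
import numpy as np

def bright_points(chip, percentile=95.0):
    """Binary feature map of strong scatterers: pixels at or above the
    given intensity percentile of the image chip."""
    return chip >= np.percentile(chip, percentile)
```

The resulting binary maps would then be matched against target appearance models that record the positions of scatterers persistent over the relevant azimuth span.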
Advanced Video and Signal Based Surveillance | 2005
Jean-Bernard Hayet; Tom Mathes; Jacek Czyz; Justus H. Piater; Jacques Verly; Benoît Macq
This article presents a modular architecture for multi-camera tracking in the context of sports broadcasting. For each video stream, a geometrical module continuously performs the image-to-model homography estimation. A local-feature-based tracking module tracks the players in each view. A supervisor module collects, associates, and fuses the data provided by the tracking modules. The originality of the proposed system is threefold. First, it localizes the targets on the ground with rotating and zooming cameras; second, it does not use background-modeling techniques; and third, the local tracking can cope with severe occlusions. We present experimental results on raw TV-camera footage of a soccer game.
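The image-to-model homography maintained by the geometrical module can be estimated from point correspondences with the standard Direct Linear Transform (a generic textbook sketch, not the paper's continuous estimation scheme):

```python
import numpy as np

def estimate_homography(src, dst):
    """Fit a 3x3 homography H mapping src points to dst points (>= 4 pairs,
    no 3 collinear) via the Direct Linear Transform: stack two linear
    constraints per correspondence and take the SVD null-space vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply homography H to a 2D point (homogeneous normalization)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With rotating and zooming cameras, such an estimate must be updated every frame, e.g. by tracking field lines or other model features.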
International Conference on Information Fusion | 2000
William Ross; Allen M. Waxman; William W. Streilein; M. Aguilar; Jacques Verly; Fang Liu; Michael Braun; Paul Harmon; Steve J. Rak
We describe a system under development for the 3D fusion of multi-sensor surface surveillance imagery, including electro-optical (EO), IR, SAR, multispectral, and hyperspectral sources. Our approach is founded on biologically inspired image-processing algorithms. We have developed an image-processing architecture enabling the unified interactive visualization of fused multi-sensor site data, which uses a color image fusion algorithm based on retinal and cortical processing of color. We have also developed interactive Web-based tools for training neural-network search agents that are capable of automatically scanning site data for the fused multi-sensor signatures of targets and/or surface features of interest. Each search agent is an interactively trained instance of fuzzy ARTMAP, a neural-network model of cortical pattern recognition. The use of 3D site models is central to our approach because it enables the accurate multi-platform image registration necessary for color image fusion and for the designation, learning, and searching of multi-sensor fused pixel signatures. Interactive stereo 3D viewing and fly-through tools enable efficient and intuitive site exploration and analysis. Web-based remote visualization and search-agent training tools facilitate rapid, distributed, and collaborative site exploitation and dissemination of results.
Journal of Computer Assisted Tomography | 1979
Jacques Verly; R. N. Bracewell
Tomographic reconstruction has ordinarily assumed that the measurement data can be regarded as line integrals, but the finite width of the X-ray beam invalidates this assumption. The data can, however, be expressed in the form of integrals over a strip rather than a line. The strip integral kernel is calculated allowing for extended source and detector, as well as for nonuniform photon emission and detector sensitivity. Strip eccentricity, which occurs in practice, is also taken into account. Even if the measurement data were to cover all scanning angles, there would be imperfect reconstruction expressible as a space-variant point spread function deducible from the strip integral kernel. To deal with this it is convenient to introduce the concepts of generalized projection and generalized Radon transform. Point-spread functions are given for cases involving piecewise-uniform symmetrical source distributions and uniform detectors.
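In this generalized-projection view, each measurement integrates the object against a strip kernel rather than along a line (the notation below is ours, chosen for illustration, not the paper's):

```latex
p(s,\theta) = \int_{\mathbb{R}^2} f(\mathbf{x})\,
  k\!\left(s - \mathbf{x}\cdot\hat{\boldsymbol\theta},\;\theta\right)\,d\mathbf{x},
\qquad \hat{\boldsymbol\theta} = (\cos\theta,\ \sin\theta),
```

where f is the object, s the detector coordinate, and k the strip integral kernel; taking k(u, θ) = δ(u) recovers the ordinary (line-integral) Radon transform.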
IEEE Transactions on Aerospace and Electronic Systems | 2007
Sébastien De Grève; Philippe Ries; Fabian D. Lapierre; Jacques Verly
The goal of radar space-time adaptive processing (STAP) is to detect slow-moving targets from a moving platform, typically airborne or spaceborne. STAP generally requires the estimation and the inversion of an interference-plus-noise (I+N) covariance matrix. To reduce both the number of samples involved in the estimation and the computational cost inherent to the matrix inversion, many suboptimum STAP methods have been proposed. We propose a new canonical framework that encompasses all suboptimum STAP methods we are aware of. The framework allows for both covariance-matrix (CM) estimation and range-dependence compensation (RDC); it also applies to monostatic and bistatic configurations. Finally, we discuss a taxonomy for classifying the methods described by the framework.
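The optimum-weight construction that these suboptimum methods approximate is the usual MVDR-style solution (a generic sketch under standard STAP notation, not code from the paper):

```python
import numpy as np

def stap_weights(R, s):
    """Optimum space-time weight vector w = R^{-1} s / (s^H R^{-1} s),
    where R is the I+N covariance matrix and s the target space-time
    steering vector. Solving the linear system avoids an explicit inverse."""
    Ri_s = np.linalg.solve(R, s)
    return Ri_s / (s.conj() @ Ri_s)
```

Within the canonical framework, the methods differ mainly in how the estimate of R is formed (CM estimation) and corrected across range (RDC) before this step.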