Daniel Crevier
Université du Québec
Publications
Featured research published by Daniel Crevier.
Computer Vision and Image Understanding | 1997
Daniel Crevier; Richard Lepage
The development of software that would be to image understanding systems what expert system shells are to expert systems has been the subject of considerable enquiry over the last ten years: this paper reviews pertinent publications and tries to present a coherent view of the field. After a survey of the advantages of explicit knowledge representation in image understanding, we tackle the subject under two main headings. We first describe the nature of the knowledge that the various authors have represented for image understanding. To this end, we have developed a knowledge taxonomy consisting of seven modules, ranging in specificity from task domain knowledge to generic knowledge about the use of software systems. We then examine how researchers have represented these various kinds of knowledge. Most of the representations known to artificial intelligence were pressed into service, and a discussion of their relative merits is presented.
Computer Vision and Image Understanding | 2008
Daniel Crevier
A methodology is presented for making use of ground truth, human-segmented image data sets to compare, develop and optimize image segmentation algorithms. Central to this task is the problem of quantifying the accuracy of the match between machine and reference segmentations. In this regard, the paper introduces a natural extension to the concept of precision-recall curves, which are a standard evaluation technique in pattern recognition. Computationally efficient match measures, defined so as to benefit from the availability of multiple alternative human segmentations, are also proposed. The Berkeley image segmentation data set is used to select among the proposed measures, which results in a validation of the local best fit heuristic as a way to best exploit reference segmentations. I then show how the resulting match criterion can be used to improve the recent SRM segmentation algorithm by gradual modifications and additions. In particular, I demonstrate and quantify performance increases resulting from changing color coordinates, optimizing the segment merging rule, introducing texture, and forcing segments to stop at edges. As modifications to the algorithm require the optimization of parameters, a mixed deterministic and Monte-Carlo method well adapted to the problem is introduced. A demonstration of how the method can be used to compare the performance of two algorithms is made, and its broad applicability to other segmentation methods is discussed.
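As an illustration of the kind of boundary-based precision-recall evaluation described above, the following Python sketch scores a machine boundary map against several human reference boundary maps. It is a minimal simplification written for this summary, not the paper's exact match measure or its local best fit heuristic; the function name, the pixel tolerance, and the averaging of recall over references are assumptions made for illustration.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def boundary_precision_recall(machine_bnd, reference_bnds, tol=2):
        # Illustrative boundary-based precision/recall against several human
        # reference segmentations, supplied as binary boundary maps.  A machine
        # boundary pixel counts as correct if it lies within `tol` pixels of a
        # boundary pixel in at least one reference; a reference boundary pixel
        # counts as recalled if a machine boundary pixel lies nearby.
        machine_bnd = np.asarray(machine_bnd, dtype=bool)
        reference_bnds = [np.asarray(r, dtype=bool) for r in reference_bnds]

        # Precision side: distance from every pixel to the nearest boundary
        # pixel in the union of all references.
        union = np.any(reference_bnds, axis=0)
        dist_to_ref = distance_transform_edt(~union)
        matched = np.logical_and(machine_bnd, dist_to_ref <= tol).sum()
        precision = matched / max(machine_bnd.sum(), 1)

        # Recall side: averaged over the individual human segmentations.
        dist_to_machine = distance_transform_edt(~machine_bnd)
        recalls = [np.logical_and(r, dist_to_machine <= tol).sum() / max(r.sum(), 1)
                   for r in reference_bnds]
        recall = float(np.mean(recalls))

        f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
        return precision, recall, f_measure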
Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision | 1994
Daniel Crevier; Hoi J. Yoo
We present a new method for linking edge points in a digital image, and segmenting the resulting edges into simple geometric elements. The initial linking procedure operates on the raw output of conventional edge detection algorithms, and links the pixels into sequences in a manner that guarantees the absence of branches. This linking requires no computationally expensive directional calculations to minimize branching. The resulting contours can, however, be of arbitrary length and complexity. In order to facilitate their later manipulation by higher-level algorithms, these contours are then segmented into straight line segments and circular arcs. The segmentation procedure relies on the overall symmetry of the detected segments, and avoids problems associated with the detection of corners or high curvature points. A contour segment is said to possess the considered overall symmetry property if, within certain tolerances, for any point on the segment, travelling an equal distance along the segment on each side of the point leads to contour points separated by equal straight-line chords from the central point. It appears that within the framework of digitized images, this property can only be satisfied by straight line segments and circular arcs. We describe an algorithm to extract, from arbitrary non-branching contours, segments verifying this symmetry property. After extraction, segments are classified as lines or arcs, and the radii and centers are estimated for the latter.
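The chord-symmetry test can be sketched roughly in Python as follows; the step size k, the tolerance, and the function name are illustrative assumptions, and the paper's actual procedure extracts maximal symmetric portions from a contour rather than accepting or rejecting it as a whole.

    import numpy as np

    def satisfies_chord_symmetry(points, k=5, tol=1.5):
        # Rough check of the chord-symmetry property: for every interior point,
        # stepping k contour points to either side should give two chords of
        # (nearly) equal length.  `points` is an (N, 2) array of ordered
        # contour pixel coordinates; k and tol are illustrative choices.
        pts = np.asarray(points, dtype=float)
        for i in range(k, len(pts) - k):
            left_chord = np.linalg.norm(pts[i] - pts[i - k])
            right_chord = np.linalg.norm(pts[i] - pts[i + k])
            if abs(left_chord - right_chord) > tol:
                return False
        return True

A contour portion passing such a test can then be labelled a straight line or a circular arc, for instance by fitting a circle and inspecting the fitted radius and residual.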
IEEE Transactions on Education | 1996
Daniel Crevier
We present a set of four educational experiments in machine vision. These were designed to run on low-cost hardware that is nonetheless powerful enough to serve in genuine industrial applications of machine vision. The experiments introduce students to thresholding, connected component analysis, Hough transforms, stereo vision, and color coordinate systems. The programming involved is close enough to the hardware to expose students to real-time processing techniques and prepares them to tackle the type of problems they will face in field applications of machine vision. The experiments are: locate coins in an image, identify their denominations, and count the amount of money present; extract the straight edges of a cube by the Hough transform technique; extract three-dimensional (3-D) information from left and right images of the same cube; and transform color images from RGB to HSI coordinates and visually assess the results.
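For the last experiment, one common geometric formulation of the RGB-to-HSI transform can be sketched as follows; this is a standard textbook version, not necessarily the exact coordinate definitions used in the course.

    import numpy as np

    def rgb_to_hsi(rgb):
        # Convert an RGB image (float array in [0, 1], shape H x W x 3) to HSI.
        # Hue is returned in degrees, saturation and intensity in [0, 1].
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        eps = 1e-10

        intensity = (r + g + b) / 3.0
        saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)

        # Geometric hue formula: angle on the color circle measured from red.
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        hue = np.where(b <= g, theta, 360.0 - theta)

        return np.stack([hue, saturation, intensity], axis=-1)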
Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision | 1994
Hoi J. Yoo; Daniel Crevier; Richard Lepage; Harley R. Myler
We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight-line segments and circular arcs: this operation serves both to filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist of linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulus so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
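The following Python sketch illustrates just one of the preattentive cues mentioned above, gap closure between nearly collinear segments with nearby endpoints; the thresholds, data layout, and function name are assumptions made for illustration and do not reproduce the grouping machinery of VITREO.

    import numpy as np

    def candidate_gap_closures(segments, max_gap=10.0, max_angle_deg=20.0):
        # Toy illustration of one preattentive cue, gap closure: propose a join
        # between two straight segments whose endpoints are close and whose
        # directions are roughly collinear.  Each segment is ((x0, y0), (x1, y1)).
        def direction(seg):
            p, q = np.asarray(seg[0], float), np.asarray(seg[1], float)
            v = q - p
            return v / (np.linalg.norm(v) + 1e-12)

        closures = []
        for i, a in enumerate(segments):
            for j, b in enumerate(segments):
                if j <= i:
                    continue
                ends_a = np.asarray(a, dtype=float)
                ends_b = np.asarray(b, dtype=float)
                # Smallest distance between an endpoint of a and an endpoint of b.
                gap = min(np.linalg.norm(pa - pb) for pa in ends_a for pb in ends_b)
                # Angle between the two segment directions (0 = parallel/collinear).
                cos_ang = abs(float(np.dot(direction(a), direction(b))))
                angle = np.degrees(np.arccos(np.clip(cos_ang, 0.0, 1.0)))
                if gap <= max_gap and angle <= max_angle_deg:
                    closures.append((i, j, gap, angle))
        return closures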
Canadian Conference on Electrical and Computer Engineering | 1993
Daniel Crevier
In order to decide whether to lump a group of neighboring pixels into a uniform region, image segmentation techniques often rely on the mean and variance of their brightnesses or colors. The paper addresses the problem of using the mean and variance of the hue of a population of pixels for this purpose. The conventional definitions of these statistics can be misleading because of the angular character of the hue coordinate system, which often splits peaks in hue histograms. The authors present a way of processing hue distributions that avoids this effect. It is based on recursive formulas for calculating the means and variances of hue distributions in an object-dependent coordinate system. At the cost of a slight numerical overhead, these computations generate results in agreement with the authors' intuitive understanding of colors in split peak situations, and reduce to the standard definitions in well-behaved histograms. An example of this procedure, and of its integration into an image segmentation algorithm, is given.
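The underlying idea can be illustrated with standard circular statistics, which treat hue as an angle so that a peak split across the 0/360 degree boundary is handled gracefully; this resultant-vector sketch is only an analogue of the paper's recursive, object-dependent formulas, and the sample hues are made up for the example.

    import numpy as np

    def circular_hue_stats(hues_deg):
        # Mean and a variance-like spread for hue treated as an angle, using the
        # resultant-vector method of circular statistics.
        angles = np.radians(np.asarray(hues_deg, dtype=float))
        c, s = np.cos(angles).mean(), np.sin(angles).mean()
        mean_hue = np.degrees(np.arctan2(s, c)) % 360.0
        resultant = np.hypot(c, s)          # 1.0 for identical hues, near 0 when spread out
        circ_variance = 1.0 - resultant     # in [0, 1]
        return mean_hue, circ_variance

    # A peak split around 0/360 degrees: naive statistics give a mean near
    # 180 degrees (blue-green) and a huge variance; the circular versions do not.
    hues = [355.0, 358.0, 2.0, 5.0]
    print(circular_hue_stats(hues))     # mean close to 0 degrees, small spread
    print(np.mean(hues), np.var(hues))  # misleading: mean 180, large variance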
Intelligent Robots and Computer Vision XII: Algorithms and Techniques | 1993
Daniel Crevier
Color images can be analyzed using two kinds of coordinate systems: rectangular systems based on primary colors (RGB), and cylindrical systems based on hue, saturation, and intensity (HSI). HSI systems match our intuitive understanding of colors and make it possible to name colors in knowledge bases, a significant advantage given the mushrooming use of declarative knowledge for image analysis. On the other hand, HSI systems give rise to singularities which result in undesirable instabilities, notably with respect to the statistical properties of hue distributions: because hue wraps around at the 0/360 degree boundary, a cluster of similar reddish hues is split between the two extremities of the histogram. Computing the mean and variance of such a split distribution in the conventional manner would yield an unrealistically large variance and a mean hue in the blue-green region. The paper presents alternative ways of computing means and variances that avoid these effects. At the cost of a relatively slight numerical overhead, these computations generate results in agreement with our intuitive understanding of colors in split peak situations, and reduce to the standard definitions in well-behaved histograms. Recursive formulas are given for the calculation of these statistics, and an efficient algorithm is presented. Equivalence conditions between the results of the introduced procedures and conventional calculations are stated. Examples are given using actual color images.
Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision | 1994
Richard Lepage; Daniel Crevier
A goal of computer vision is the construction of scene descriptions based on information extracted from one or more 2D images. A reconstruction strategy based on a four-level representational framework is presented. We are interested in the second representational level, the Primal Sketch. It makes explicit important information about the two-dimensional image, primarily the intensity changes and their geometrical distribution and organization. The intensity changes corresponding to physical features of the observed scene appear at several spatial scales, in contrast to spurious edges, and image analysis performed at multiple resolutions is therefore more robust. We propose a compact pyramidal neural network implementation of the multiresolution representation of the input images. Features of the scene are detected at each resolution level and feedback interaction is built between pyramid levels in order to reinforce edges which correspond to physical features of the observed scene. A vigilance neuron determines the importance granted to each spatial resolution in the feature extraction process.
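A minimal sketch of the multiresolution representation such a network operates on is a smoothed-and-subsampled image pyramid; the sketch below only builds the pyramid for a 2-D gray-level image and does not model the feedback between levels or the vigilance neuron described above.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_pyramid(image, levels=4, sigma=1.0):
        # Build a simple multiresolution pyramid: smooth with a Gaussian and
        # subsample by a factor of 2 at each level.
        pyramid = [np.asarray(image, dtype=float)]
        for _ in range(levels - 1):
            blurred = gaussian_filter(pyramid[-1], sigma)
            pyramid.append(blurred[::2, ::2])   # subsample rows and columns
        return pyramid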
Proceedings of SPIE | 1996
Daniel Crevier
We address the problem of segmenting single images into parts corresponding to those intuitively provided by human perception. To this effect a resistive network analogue of the edge image is used, in which electric resistances correspond to edge segments. Compact contours including given segments can then be found by introducing current sources in these segments, and following the path of largest current. In order to overcome the artifacts of edge finders and to handle partially occluded contours, the method requires the detection of gaps in L-junctions and collinearities, and the introduction of virtual resistances at these locations. Since contours must be found serially, the segmentation can be guided by a knowledge-based attentional mechanism, as seems to happen in human perception. The method also offers a natural framework for fusing information from various image understanding mechanisms, whether the contour sought contains a given seed segment or merely enters into perceptually significant relationships with the seed segment, such as symmetry, skew symmetry or parallelism. The electric circuit part of the method can be implemented as a very simple neural network, which raises intriguing questions about the existence of such a structure in the human visual system.
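The electrical analogy can be sketched as follows, assuming each detected edge segment is modelled as a resistor between two junction nodes; the graph layout, the function name, and the unit-current injection are assumptions made for illustration, and the gap detection and virtual resistances described above are omitted.

    import numpy as np

    def node_potentials(num_nodes, edges, source, sink):
        # Minimal resistive-network sketch: each edge segment is a resistor
        # between two junction nodes, a unit current is injected at one end of
        # the seed segment (`source`) and withdrawn at the other (`sink`), and
        # node potentials follow from Kirchhoff's laws (graph Laplacian).
        # `edges` is a list of (node_i, node_j, resistance).
        L = np.zeros((num_nodes, num_nodes))
        for i, j, r in edges:
            g = 1.0 / r                     # conductance
            L[i, i] += g
            L[j, j] += g
            L[i, j] -= g
            L[j, i] -= g
        b = np.zeros(num_nodes)
        b[source], b[sink] = 1.0, -1.0      # unit current in and out

        # Ground the sink node to remove the Laplacian's null space, then solve.
        keep = [n for n in range(num_nodes) if n != sink]
        v = np.zeros(num_nodes)
        v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])

        # Current through each segment; a contour can then be traced by
        # repeatedly following the segment carrying the largest current.
        currents = {(i, j): (v[i] - v[j]) / r for i, j, r in edges}
        return v, currents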
Archive | 1993
Daniel Crevier