Publication


Featured research published by Amar Mitiche.


Image and Vision Computing | 1986

Curvature-based representation of objects from range data

Baba C. Vemuri; Amar Mitiche; Jake K. Aggarwal

A representation technique for visible three-dimensional object surfaces is presented which uses regions that are homogeneous in certain intrinsic surface properties. First, smooth patches are fitted to the object surfaces; principal curvatures are then computed and surface points classified accordingly. Such a representation scheme has applications in various image processing tasks such as graphics display and recognition of objects. An algorithm is presented for computing object descriptions. The algorithm divides the range data array into windows and fits approximating surfaces to those windows that do not contain discontinuities in range. The algorithm is not restricted to polyhedral objects nor is it committed to a particular type of approximating surface. It uses tension splines which make the fitting patches locally adaptable to the shape of object surfaces. Maximal regions are then formed by coalescing patches with similar intrinsic curvature-based properties. Regions on the surface of the object can be subsequently organized into a labelled graph, where each node represents a region and is assigned a label depicting the type of region and containing the set of feature values computed for that region.
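The classification step lends itself to a short illustration. Below is a minimal sketch, in Python with NumPy, of classifying a single range-data window by the signs of its Gaussian and mean curvature. It substitutes a least-squares quadratic patch for the tension splines used in the paper, and the window size, tolerance, and class labels are illustrative assumptions rather than the authors' choices.

```python
# Minimal sketch: classify one range window by curvature sign,
# assuming a least-squares quadratic patch (not the paper's tension splines).
import numpy as np

def classify_window(z, spacing=1.0):
    """Fit z = ax^2 + bxy + cy^2 + dx + ey + f to a range window and
    classify its centre point by the signs of Gaussian (K) and mean (H)
    curvature."""
    h, w = z.shape
    ys, xs = np.mgrid[0:h, 0:w] * spacing
    A = np.column_stack([xs.ravel()**2, xs.ravel()*ys.ravel(), ys.ravel()**2,
                         xs.ravel(), ys.ravel(), np.ones(z.size)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z.ravel(), rcond=None)[0]

    # First and second partial derivatives of the fitted patch at the centre.
    xc, yc = (w - 1) * spacing / 2, (h - 1) * spacing / 2
    zx, zy = 2*a*xc + b*yc + d, 2*c*yc + b*xc + e
    zxx, zyy, zxy = 2*a, 2*c, b

    g = 1 + zx**2 + zy**2
    K = (zxx*zyy - zxy**2) / g**2                                  # Gaussian curvature
    H = ((1 + zx**2)*zyy - 2*zx*zy*zxy + (1 + zy**2)*zxx) / (2 * g**1.5)  # mean curvature

    eps = 1e-6                       # illustrative tolerance
    if abs(K) < eps and abs(H) < eps:
        return "planar"
    if K > eps:
        return "elliptic (peak or pit)"
    if K < -eps:
        return "hyperbolic (saddle)"
    return "parabolic (ridge or valley)"

# Example: a synthetic spherical cap should classify as elliptic.
yy, xx = np.mgrid[-3:4, -3:4].astype(float)
print(classify_window(np.sqrt(25.0 - xx**2 - yy**2)))
```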


Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 1982

Experiments in combining intensity and range edge maps

Baldemar Gil; Amar Mitiche; Jake K. Aggarwal

With a laser sensor it is possible to obtain registered range and intensity data for a scene. The problem of combining range and intensity data for segmentation is addressed. Although both sources of information describe the same scene, they are very dissimilar. To place both sources of information in the same form, the edge maps of the range image and of the intensity image are derived. Then, the problem of combining the two images is reduced to combining their edge maps. An initial step in examining the edge maps is to observe which edges they have in common. Two procedures for extracting the edges common to the range image and the intensity image are presented.
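As a rough illustration of combining the two edge maps, the sketch below (an assumption-laden stand-in, not the procedures from the paper) derives Sobel edge maps from registered intensity and range images and keeps the edges the two maps have in common, allowing a one-pixel dilation tolerance for residual misregistration; the thresholds are arbitrary.

```python
# Minimal sketch: extract edges common to registered intensity and range images,
# assuming Sobel-magnitude edge maps and a one-pixel tolerance (illustrative).
import numpy as np
from scipy import ndimage

def edge_map(img, thresh):
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy) > thresh

def common_edges(intensity, range_img, t_int=50.0, t_rng=1.0):
    e_int = edge_map(intensity, t_int)
    e_rng = edge_map(range_img, t_rng)
    # Keep an intensity edge if a range edge lies within one pixel of it,
    # and vice versa, to tolerate small registration error.
    keep_int = e_int & ndimage.binary_dilation(e_rng)
    keep_rng = e_rng & ndimage.binary_dilation(e_int)
    return keep_int | keep_rng

# Example with a synthetic step edge present in both images.
img = np.zeros((32, 32)); img[:, 16:] = 200.0   # intensity step
dep = np.zeros((32, 32)); dep[:, 16:] = 5.0     # range (depth) step
print(common_edges(img, dep).any())             # True: the shared edge survives
```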


Optical Engineering | 1986

Multiple Sensor Integration/Fusion Through Image Processing: A Review

Amar Mitiche; Jake K. Aggarwal

Multiple sensing is the ability to sense the environment with the concurrent use of several sensors. There are currently a number of different sensors routinely used in image processing applications, and the trend is toward the development of more sophisticated and less expensive sensors. This trend is complemented by the development of parallel and multiprocessor architectures for processing the large amounts of data collected by these sensors. The capabilities of many image processing systems can be greatly enhanced by the organized use of several types of sensors and by the development of methods capable of integrating the collected data in a way that can yield information otherwise unavailable, or hard to obtain, from any single type of sensor. The advantage in using several similar sensors has already been demonstrated in the context of dynamic scene analysis, where information contained in a sequence of visual intensity images (including stereoscopic images) has been integrated with the twofold objective of obtaining the three-dimensional description and the motion of objects in space. The advantage in using several different sensors is clear from the quite obvious observation that different sensors are sensitive to different signals, each one of which can reveal a particular set of properties of the sensed environment. This paper discusses the advantages of multiple sensor integration/fusion with different sensors through image processing and identifies a number of associated problems. It reviews preliminary work on the solution of these problems and indicates the direction of future research.


Pattern Recognition | 1987

Experiments in computing optical flow with the gradient-based, multiconstraint method

Amar Mitiche; Yuan-Fang Wang; Jake K. Aggarwal

In this paper we present results on experiments in computing optical flow with the multiconstraint approach. This approach relies on the gradient equation, which relates spatial and temporal changes in the image to optical flow, and the use of various image functions. The experiments reported here are performed on camera-acquired pictures and use feature operators to derive image functions from the original luminance function. Results indicate that this approach is promising.
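The gradient equation underlying the method is f_x u + f_y v + f_t = 0 for an image function f, so several image functions yield an overdetermined linear system in the flow (u, v). The sketch below solves such a system at a single pixel by least squares; the synthetic constraints stand in for the feature-operator-derived image functions of the paper.

```python
# Minimal sketch of the multiconstraint idea at one pixel: each image function
# contributes one gradient constraint fx*u + fy*v + ft = 0, and the flow (u, v)
# is the least-squares solution of the stacked system. Constraints are synthetic.
import numpy as np

def flow_at_pixel(constraints):
    """constraints: list of (fx, fy, ft) triples, one per image function."""
    A = np.array([[fx, fy] for fx, fy, _ in constraints], dtype=float)
    b = np.array([-ft for _, _, ft in constraints], dtype=float)
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: constraints generated from a known flow (1.5, -0.5)
# plus small noise should be recovered approximately.
rng = np.random.default_rng(0)
true_u, true_v = 1.5, -0.5
cons = []
for _ in range(4):                        # four image functions -> four constraints
    fx, fy = rng.normal(size=2)
    ft = -(fx * true_u + fy * true_v) + rng.normal(scale=1e-3)
    cons.append((fx, fy, ft))
print(flow_at_pixel(cons))                # approximately (1.5, -0.5)
```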


Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 1982

Contour registration by shape-specific points for shape matching

Amar Mitiche; Jake K. Aggarwal

The paper shows how registration of planar figures can be achieved using shape-specific points computed from the given figures. Registration makes the subsequent task of shape matching much easier. Examples are given, using the centroid and radius weighted mean, which yield reasonable results even for noisy images.
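A minimal sketch of the registration idea follows: both contours are translated so that a shape-specific point (centroid or radius-weighted mean) lies at the origin, after which matching can ignore translation. The definition of the radius-weighted mean used here, the mean of contour points weighted by their distance from the centroid, is an assumption, as are the example contours.

```python
# Minimal sketch: align planar contours by a shape-specific point so that
# subsequent shape matching can ignore translation. Definitions are assumptions.
import numpy as np

def centroid(pts):
    return pts.mean(axis=0)

def radius_weighted_mean(pts):
    # Assumed definition: mean of points weighted by distance from the centroid.
    r = np.linalg.norm(pts - centroid(pts), axis=1)
    return (pts * r[:, None]).sum(axis=0) / r.sum()

def register(pts_a, pts_b, point_fn=centroid):
    """Translate both contours so the chosen shape-specific point is at the origin."""
    return pts_a - point_fn(pts_a), pts_b - point_fn(pts_b)

# Example: a square and a translated copy coincide after registration.
square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
shifted = square + np.array([5.0, -3.0])
a, b = register(square, shifted)
print(np.allclose(a, b))   # True
```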


Computer Graphics and Image Processing | 1980

Edge detection in textures

Larry S. Davis; Amar Mitiche

This paper discusses the problem of detecting edges in cellular textures. Detecting edges is an important first step in the solution of many image analysis tasks. Edges are used primarily to aid in the segmentation of an image into meaningful regions, but they are also extensively used to compute relatively local measures of textural variation. Once edges are detected in textured regions, they can be used to define texture descriptors in a variety of ways. A general edge detection procedure may involve applying an edge-sensitive operator to the texture, thresholding the results of the edge operator, and finally computing peaks from the above-threshold points. Subsequently, one can compute first-order statistics of edge properties, such as orientation, contrast, and fuzziness, or higher-order statistics that can measure the spatial arrangement of edges in the texture. Such statistics can be computed from generalized co-occurrence matrices, which count the number of times that specific pairs of edges occur in specific relative spatial positions. The utility of such tools depends on the reliability with which edges can be detected in textures.
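The general procedure described above can be sketched briefly: apply an edge-sensitive operator, threshold its response, and compute a first-order statistic over the surviving edge points. The version below uses a Sobel operator and an orientation histogram as illustrative choices; it is not the paper's specific detector or descriptor.

```python
# Minimal sketch: edge-sensitive operator -> threshold -> first-order statistic
# (orientation histogram) over surviving edge points. Parameters are illustrative.
import numpy as np
from scipy import ndimage

def texture_edge_statistics(img, thresh=60.0, n_bins=8):
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    edges = mag > thresh                               # thresholding step
    theta = np.arctan2(gy[edges], gx[edges])           # edge orientations
    hist, _ = np.histogram(theta, bins=n_bins, range=(-np.pi, np.pi))
    return edges, hist / max(hist.sum(), 1)            # normalized orientation histogram

# Example: a vertical-stripe texture concentrates orientations near 0 and pi.
stripes = np.tile(np.array([0.0, 0.0, 255.0, 255.0]), (32, 8))
_, hist = texture_edge_statistics(stripes)
print(hist)
```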


Image and Vision Computing | 1985

Image segmentation by conventional and information-integrating techniques: a synopsis

Amar Mitiche; Jake K. Aggarwal

A synoptic discussion of image segmentation is presented. The aim is to provide an overall understanding of the general problems and issues associated with various segmentation techniques. The discussion has been organized in two parts: the first part is on those conventional techniques which use mainly information from one type of data; the second part is on those which try to combine data from several sources to obtain interpretations that cannot be obtained or would be hard to obtain from any single source of data. The paper stresses the importance, for segmentation, of data integration from several sensors and data integration over time, particularly the use of motion. It further emphasizes the importance of stating clearly the assumptions before developing or using a particular image segmentation algorithm.


Archive | 1987

3-D Object Representation from Range Data Using Intrinsic Surface Properties

Baba C. Vemuri; Amar Mitiche; Jake K. Aggarwal

A representation of three-dimensional object surfaces by regions homogeneous in intrinsic surface properties is presented. This representation has applications in both recognition and display of objects, and is not restricted to a particular type of approximating surface.


Computer Graphics and Image Processing | 1982

MITES (mit-æs): A model-driven, iterative texture segmentation algorithm

Larry S. Davis; Amar Mitiche

A new algorithm for segmentation of images containing textured regions is presented. The algorithm is named MITES, which is an acronym for Model-driven, Iterative, Texture Segmentation. MITES represents an alternative to the traditional pixel classification approach to texture image segmentation because it makes explicit use of the spatial coherence of uniformly textured regions.


Computer Graphics and Image Processing | 1981

Edge detection in textures—maxima selection

Larry S. Davis; Amar Mitiche

This paper continues the analysis contained in [L. Davis and A. Mitiche, Computer Graphics and Image Processing 12, 1980, 25–39] concerning the design of edge detection procedures for textures. Edges are detected in textures by a combination of thresholding edge operator responses and then choosing local maxima of the surviving points. The thresholding step was analyzed in the paper mentioned above. Here, we analyze the effects of neighborhood size on the computation of local maxima. The analysis indicates that only small neighborhoods are required to attain reliable local maxima selection. This result is consistent with experience with real images.
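The maxima-selection step being analyzed can be sketched as follows: among above-threshold responses, keep only those points whose response equals the maximum over an n x n neighborhood, so the neighborhood size n is the parameter under study. The response image and parameter values below are synthetic and illustrative.

```python
# Minimal sketch: keep above-threshold edge responses that are local maxima
# within an n x n neighborhood; n is the parameter studied in the paper.
import numpy as np
from scipy import ndimage

def select_maxima(response, thresh, nbhd=3):
    """Keep above-threshold points equal to the maximum of `response`
    over an nbhd x nbhd window centred on them."""
    local_max = ndimage.maximum_filter(response, size=nbhd)
    return (response > thresh) & (response == local_max)

# Example: counting surviving maxima for a few neighborhood sizes on a
# smoothed random response image; larger windows keep fewer points.
rng = np.random.default_rng(1)
resp = ndimage.gaussian_filter(rng.random((64, 64)), sigma=1.0)
for n in (3, 5, 9):
    print(n, int(select_maxima(resp, resp.mean(), nbhd=n).sum()))
```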

Collaboration


Dive into Amar Mitiche's collaboration.

Top Co-Authors

Jake K. Aggarwal (University of Texas at Austin)
Baldemar Gil (University of Texas at Austin)
Steven Seida (University of Texas at Austin)
Yuan-Fang Wang (University of Texas at Austin)
Steve Seida (University of Texas at Austin)