Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mircea Nicolescu is active.

Publications


Featured research published by Mircea Nicolescu.


Computer Vision and Image Understanding | 2007

Vision-based hand pose estimation: A review

Ali Erol; George Bebis; Mircea Nicolescu; Richard Boyle; Xander Twombly

Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two types of research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review on the latter research direction, which is a very challenging problem in the context of HCI.


Human-Robot Interaction | 2008

Understanding human intentions via hidden Markov models in autonomous mobile robots

Richard Kelley; Alireza Tavakkoli; Christopher King; Monica N. Nicolescu; Mircea Nicolescu; George Bebis

Understanding intent is an important aspect of communication among people and is an essential component of the human cognitive system. This capability is particularly relevant in situations that involve collaboration among agents or the detection of situations that can pose a threat. In this paper, we propose an approach that allows a robot to detect the intentions of others based on experience acquired through its own sensory-motor capabilities, and then to use this experience while taking the perspective of the agent whose intent is to be recognized. Our method uses a novel formulation of Hidden Markov Models designed to model a robot's experience and interaction with the world. The robot's capability to observe and analyze the current scene employs a novel vision-based technique for target detection and tracking, using a non-parametric recursive modeling approach. We validate this architecture with a physically embedded robot, detecting the intent of several people performing various activities.
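The paper's exact HMM formulation is not spelled out in this abstract; as a rough, self-contained illustration of intent recognition with per-intent hidden Markov models, the Python sketch below scores a discrete observation sequence against two invented two-state models using the standard forward algorithm and picks the intent with the highest likelihood. All model parameters and observation symbols are hypothetical.

    import numpy as np

    def forward_log_likelihood(obs, start_p, trans_p, emit_p):
        # Scaled forward algorithm for a discrete-observation HMM.
        # obs: symbol indices; start_p: (N,); trans_p: (N, N); emit_p: (N, M)
        alpha = start_p * emit_p[:, obs[0]]
        log_lik = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ trans_p) * emit_p[:, o]
            c = alpha.sum()
            log_lik += np.log(c)
            alpha = alpha / c
        return log_lik

    # Hypothetical two-state models for two intents, observing a coarsely
    # quantized relative-motion symbol in {0, 1, 2}.
    models = {
        "approach": (np.array([0.7, 0.3]),
                     np.array([[0.9, 0.1], [0.2, 0.8]]),
                     np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])),
        "pass-by":  (np.array([0.5, 0.5]),
                     np.array([[0.8, 0.2], [0.3, 0.7]]),
                     np.array([[0.2, 0.6, 0.2], [0.4, 0.4, 0.2]])),
    }

    observed = [0, 0, 1, 2, 2, 2]  # symbols from the tracker (illustrative)
    scores = {name: forward_log_likelihood(observed, *params)
              for name, params in models.items()}
    print(max(scores, key=scores.get), scores)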


Computer Vision and Pattern Recognition | 2005

A Review on Vision-Based Full DOF Hand Motion Estimation

Ali Erol; George Bebis; Mircea Nicolescu; Richard Boyle; Xander Twombly

Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision has the potential to provide much more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. A very challenging problem in this context, which is the focus of this review, is recovering the 3D pose of the hand and the fingers as glove-based devices do. This paper presents a brief literature review on full degree-of-freedom (DOF) hand motion estimation methods.


Computer Vision and Pattern Recognition | 2006

Peg-Free Hand Shape Verification Using High Order Zernike Moments

Gholamreza Amayeh; George Bebis; Ali Erol; Mircea Nicolescu

Hand-based verification is a key biometric technology with a wide range of potential applications in both industry and government. The focus of this work is on improving the efficiency, accuracy, and robustness of hand-based verification. In particular, we propose using high-order Zernike moments to represent hand geometry, avoiding the more difficult and error-prone process of hand-landmark extraction (e.g., finding finger joints). The proposed system operates on 2D hand silhouette images acquired by placing the hand on a planar lighting table without any guidance pegs, increasing the ease of use compared to conventional systems. Zernike moments are powerful translation, rotation, and scale invariant shape descriptors. To deal with several practical issues related to the computation of high-order Zernike moments, including computational cost and lack of accuracy due to numerical errors, we employ an efficient algorithm that uses arbitrary precision arithmetic and a look-up table, and avoids recomputing the same terms multiple times [2]. The proposed hand-based authentication system has been tested on a database of 40 subjects with promising results. Qualitative comparisons with state-of-the-art systems show comparable or better performance.
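As a loose illustration of verification with rotation-invariant Zernike magnitudes (not the authors' high-order implementation [2]), the Python sketch below computes a silhouette descriptor with the zernike_moments routine from the mahotas library (an assumed dependency) and accepts or rejects a probe by its Euclidean distance to the enrolled descriptor; the moment degree and threshold are illustrative.

    import numpy as np
    from mahotas.features import zernike_moments  # assumed dependency

    def hand_descriptor(silhouette, degree=12):
        # Magnitudes of the Zernike moments of a binary hand silhouette,
        # computed inside a disk centered at the shape's center of mass.
        # The magnitudes are invariant to rotation of the hand on the table.
        radius = max(silhouette.shape) // 2
        return zernike_moments(silhouette.astype(np.uint8), radius, degree=degree)

    def verify(probe_silhouette, enrolled_descriptor, threshold=0.05):
        # Accept the claimed identity if the probe descriptor is close enough
        # (Euclidean distance) to the one stored at enrollment.
        distance = np.linalg.norm(hand_descriptor(probe_silhouette) - enrolled_descriptor)
        return distance < threshold, distance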


International Symposium on Visual Computing | 2005

Accurate and efficient computation of high order zernike moments

Gholamreza Amayeh; Ali Erol; George Bebis; Mircea Nicolescu

Zernike moments are useful tools in pattern recognition and image analysis due to their orthogonality and rotation invariance. However, direct computation of these moments is very expensive, limiting their use especially at high orders. There have been some efforts to reduce the computational cost by employing quantized polar coordinate systems, which also reduce the accuracy of the moments. In this paper, we propose an efficient algorithm to accurately calculate Zernike moments at high orders. To preserve accuracy, we do not use any form of coordinate transformation and employ arbitrary precision arithmetic. The computational complexity is reduced by detecting the common terms in Zernike moments of different order and repetition. Experimental results show that our method is more accurate than other methods and has comparable computational complexity, especially for large images and high-order moments.
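The paper's algorithm is not reproduced here; the Python sketch below only illustrates two of the ingredients the abstract mentions, exact (arbitrary-precision) arithmetic and reuse of shared terms, by computing Zernike radial-polynomial coefficients with Python's fractions module and caching them so each (order, repetition) pair is evaluated only once.

    from fractions import Fraction
    from functools import lru_cache
    from math import factorial

    @lru_cache(maxsize=None)
    def radial_coefficients(n, m):
        # Exact coefficients of the Zernike radial polynomial R_{n,m}(rho),
        # cached so each (order, repetition) pair is computed only once.
        # Requires n >= |m| and n - |m| even.
        m = abs(m)
        coeffs = {}
        for s in range((n - m) // 2 + 1):
            num = (-1) ** s * factorial(n - s)
            den = factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)
            coeffs[n - 2 * s] = Fraction(num, den)   # coefficient of rho**(n - 2s)
        return coeffs

    def radial_value(n, m, rho):
        # Evaluate R_{n,m} at a rational radius with no floating-point round-off.
        rho = Fraction(rho)
        return sum(c * rho ** k for k, c in radial_coefficients(n, m).items())

    # Example: R_{20,0}(1/2), computed exactly and only then converted to float.
    print(float(radial_value(20, 0, Fraction(1, 2))))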


Machine Vision and Applications | 2009

Non-parametric statistical background modeling for efficient foreground region detection

Alireza Tavakkoli; Mircea Nicolescu; George Bebis; Monica N. Nicolescu

Most methods for foreground region detection in videos are challenged by the presence of quasi-stationary backgrounds—flickering monitors, waving tree branches, moving water surfaces or rain. Additional difficulties are caused by camera shake or by the presence of moving objects in every image. The contribution of this paper is to propose a scene-independent and non-parametric modeling technique which covers most of the above scenarios. First, an adaptive statistical method, called adaptive kernel density estimation (AKDE), is proposed as a base-line system that addresses the scene dependence issue. After investigating its performance we introduce a novel general statistical technique, called recursive modeling (RM). The RM overcomes the weaknesses of the AKDE in modeling slow changes in the background. The performance of the RM is evaluated asymptotically and compared with the base-line system (AKDE). A wide range of quantitative and qualitative experiments is performed to compare the proposed RM with the base-line system and existing algorithms. Finally, a comparison of various background modeling systems is presented as well as a discussion on the suitability of each technique for different scenarios.
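As a rough per-pixel illustration of non-parametric background modeling (not the AKDE or RM algorithms themselves), the Python sketch below keeps a stack of recent frames, estimates the probability of each current pixel value with a Gaussian kernel density estimate, and flags low-probability pixels as foreground; buffer size, bandwidth, and threshold are illustrative.

    import numpy as np

    def kde_foreground_mask(frame, history, bandwidth=10.0, threshold=0.02):
        # frame:   (H, W) grayscale image
        # history: (K, H, W) stack of the K most recent frames (background samples)
        # Per-pixel Gaussian KDE: p(x) = mean_k N(x; history[k], bandwidth^2);
        # pixels whose current value is unlikely under the model are foreground.
        diff = frame[None, :, :] - history
        kernels = np.exp(-0.5 * (diff / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
        prob = kernels.mean(axis=0)
        return prob < threshold

    # Illustrative usage with random frames standing in for a video stream.
    rng = np.random.default_rng(0)
    history = rng.normal(120.0, 5.0, size=(20, 240, 320))
    frame = history[-1].copy()
    frame[100:140, 150:200] += 80.0   # synthetic moving object
    mask = kde_foreground_mask(frame, history)
    print(mask.sum(), "foreground pixels")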


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

A voting-based computational framework for visual motion analysis and interpretation

Mircea Nicolescu; Gérard G. Medioni

Most approaches for motion analysis and interpretation rely on restrictive parametric models and involve iterative methods which depend heavily on initial conditions and are subject to instability. Further difficulties are encountered in image regions where motion is not smooth, typically around motion boundaries. This work addresses the problem of visual motion analysis and interpretation by formulating it as an inference of motion layers from a noisy and possibly sparse point set in a 4D space. The core of the method is based on a layered 4D representation of data and a voting scheme for affinity propagation. The inherent ambiguity of 2D-to-3D interpretation is usually handled by imposing additional constraints, such as rigidity. However, enforcing such a global constraint has been problematic in the combined presence of noise and multiple independent motions. By decoupling the processes of matching, outlier rejection, segmentation, and interpretation, we extract accurate motion layers based on the smoothness of image motion, and then locally enforce rigidity for each layer in order to infer its 3D structure and motion. The proposed framework is noniterative and consistently handles both smooth moving regions and motion discontinuities without using any prior knowledge of the motion model.
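Tensor voting itself is not reproduced here; as a toy stand-in for the layered 4D idea, the Python sketch below encodes every candidate correspondence as a point (x, y, vx, vy) and, for each pixel, keeps the candidate velocity with the most support from nearby candidates under a Gaussian affinity, so isolated wrong matches receive little support. The affinity scale and the sample tokens are invented.

    import numpy as np

    def select_velocities(tokens, sigma=3.0):
        # tokens: (N, 4) array of candidate correspondences (x, y, vx, vy);
        # several tokens may share the same (x, y) when matching is ambiguous.
        # A token's support is its summed Gaussian affinity to all other tokens
        # in the 4D space, so candidates lying on a smooth motion layer
        # reinforce each other while isolated wrong matches stay weak.
        diffs = tokens[:, None, :] - tokens[None, :, :]
        affinity = np.exp(-np.sum(diffs ** 2, axis=2) / (2.0 * sigma ** 2))
        support = affinity.sum(axis=1) - 1.0          # exclude the self-vote
        best = {}
        for token, s in zip(tokens, support):
            key = (token[0], token[1])
            if key not in best or s > best[key][1]:
                best[key] = (tuple(token[2:]), s)     # keep the strongest (vx, vy)
        return {key: vel for key, (vel, _) in best.items()}

    # Two candidates at pixel (10, 10): the one consistent with its neighbors
    # (vx=1, vy=0) gathers far more support than the spurious (vx=5, vy=-4).
    tokens = np.array([[10, 10, 1, 0], [10, 10, 5, -4],
                       [11, 10, 1, 0], [12, 10, 1, 0], [12, 11, 1, 0]], dtype=float)
    print(select_velocities(tokens))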


Computer Vision and Pattern Recognition | 2008

Gender classification from hand shape

Gholamreza Amayeh; George Bebis; Mircea Nicolescu

Many social interactions and services today depend on gender. In this paper, we investigate the problem of gender classification from hand shape. Our work is motivated by studies in anthropometry and psychology suggesting that it is possible to distinguish between male and female hands by considering certain geometric features. Our system segments the hand silhouette into six different parts corresponding to the palm and fingers. To represent the geometry of each part, we use region and boundary features based on Zernike moments and Fourier descriptors. For classification, we compute the distance of a given part from two different eigenspaces, one corresponding to the male class and the other to the female class. We have experimented with each part of the hand separately as well as with fusing information from different parts of the hand. Using a small database containing 20 males and 20 females, we report classification results close to 98% using score-level fusion and LDA.
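Neither the hand segmentation nor the Zernike/Fourier features are reproduced here; the Python sketch below only illustrates the distance-from-eigenspace idea with scikit-learn (an assumed dependency): one PCA subspace is fitted per class, a sample is scored by its reconstruction error in each subspace, and it is assigned to the class that reconstructs it better. The random vectors stand in for per-part descriptors, and score-level fusion would simply combine such distances across hand parts.

    import numpy as np
    from sklearn.decomposition import PCA   # assumed dependency

    class EigenspaceClassifier:
        # One PCA subspace ("eigenspace") per class; a sample is assigned to the
        # class whose subspace reconstructs it with the smallest error.
        def __init__(self, n_components=5):
            self.n_components = n_components
            self.models = {}

        def fit(self, features, labels):
            for label in np.unique(labels):
                self.models[label] = PCA(self.n_components).fit(features[labels == label])
            return self

        def distances(self, x):
            # Reconstruction error of feature vector x in each class eigenspace.
            x = np.asarray(x).reshape(1, -1)
            return {label: float(np.linalg.norm(x - pca.inverse_transform(pca.transform(x))))
                    for label, pca in self.models.items()}

        def predict(self, x):
            d = self.distances(x)
            return min(d, key=d.get)

    # Random vectors stand in for the per-part Zernike/Fourier descriptors.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (20, 30)), rng.normal(0.5, 1.0, (20, 30))])
    y = np.array(["male"] * 20 + ["female"] * 20)
    clf = EigenspaceClassifier(n_components=5).fit(X, y)
    print(clf.predict(X[25]), clf.distances(X[25]))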


Computer Vision and Pattern Recognition | 2003

Motion segmentation with accurate boundaries - a tensor voting approach

Mircea Nicolescu; Gérard G. Medioni

Producing an accurate motion flow field is very difficult at motion boundaries. We present a noniterative approach for segmentation from image motion, based on two voting processes, in different dimensional spaces. By expressing the motion layers as surfaces in a 4D (four-dimensional) space, a voting process is first used to enforce the smoothness of motion and determine an estimation of pixel velocities, motion regions and boundaries. The boundary estimation is then combined with intensity information from the original images in order to locally define a boundary tensor field. The correct boundary is inferred by a 2D (two-dimensional) voting process within this field that enforces the smoothness of boundaries. Finally, correct velocities are computed for the pixels near boundaries, as they are reassigned to different regions. We demonstrate our contribution by analyzing several image sequences, containing multiple types of motion.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Layered 4D representation and voting for grouping from motion

Mircea Nicolescu; Gérard G. Medioni

We address the problem of perceptual grouping from motion cues by formulating it as an inference of motion layers from a sparse and noisy point set in a 4D space. Our approach is based on a layered 4D representation of data and a voting scheme for token communication, within a tensor voting computational framework. Given two sparse sets of point tokens, the image position and potential velocity of each token are encoded into a 4D tensor. By enforcing the smoothness of motion through a voting process, the correct velocity is selected for each input point as the most salient token. An additional dense voting step allows for the inference of a dense representation in terms of pixel velocities, motion regions, and boundaries. Using a 4D space for this tensor voting approach is essential, since it allows for a spatial separation of the points according to both their velocities and image coordinates. Unlike most other methods that optimize certain objective functions, our approach is noniterative and, therefore, does not suffer from local optima or poor convergence problems. We demonstrate our method with synthetic and real images, by analyzing several difficult cases: opaque and transparent motion, rigid and nonrigid motion, curves and surfaces in motion.

Collaboration


Dive into Mircea Nicolescu's collaborations.

Top Co-Authors

Gérard G. Medioni
University of Southern California

Ali Erol
University of Nevada

Leandro A. Loss
Lawrence Berkeley National Laboratory