Publication


Featured research published by Stelian Persa.


Computer Communications | 2003

Philosophies and technologies for ambient aware devices in wearable computing grids

Pieter P. Jonker; Stelian Persa; Jurjen Caarls; Frank de Jong; Inald Lagendijk

In this paper we treat design philosophies and enabling technologies for ambient awareness within grids of future mobile computing/communication devices. We extensively describe the possible context sensors, their required accuracies, and their use in mobile services (possibly leading to background interactions of user devices), as well as a draft of their integration into an ambient aware device. We elaborate on position sensing as one of the main aspects of context aware systems. We first describe a maximum accuracy setup for a mobile user that has the ability of Augmented Reality for indoor and outdoor applications. We then focus on a set-up for pose sensing of a mobile user, based on the fusion of several inertia sensors and DGPS. We describe the anchoring of the position of the user by using visual tracking, using a camera and image processing. We describe our experimental set-up with a background process that, once initiated by the DGPS system, continuously looks in the image for visual clues and, when found, tries to track them, to continuously adjust the inertial sensor system. We present some results of our combined inertia tracking and visual tracking system; we are able to track device rotation and position with an update rate of 10 ms, with an accuracy for the rotation of about two degrees, whereas head position accuracy is in the order of a few cm at a visual clue distance of less than 3 m.


Ambient Intelligence | 2003

Sensor Fusion for Augmented Reality

Jurjen Caarls; Pieter P. Jonker; Stelian Persa

In this paper we describe in detail our sensor fusion framework for augmented reality applications. We combine inertia sensors with a compass, DGPS and a camera to determine the position of the user’s head. We use two separate extended complementary Kalman filters for orientation and position. The orientation filter uses quaternions for stable representation of the orientation.
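The orientation half of such a filter can be sketched in a few lines. The sketch below is a minimal complementary blend in NumPy, not the authors' extended Kalman filter: it integrates the gyro rate into a unit quaternion and nudges the result toward an absolute reference orientation (e.g. from compass and tilt). The names `q_ref` and the gain `alpha` are illustrative assumptions.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    # q_dot = 0.5 * q (x) (0, omega); Euler-integrate one step.
    dq = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q = q + dq * dt
    return q / np.linalg.norm(q)  # renormalize to keep a unit quaternion

def complementary_update(q, q_ref, alpha=0.02):
    # Blend the drifting integrated orientation toward the absolute
    # reference with a small gain; a stand-in for the paper's
    # extended complementary Kalman filter update.
    q_new = (1 - alpha) * q + alpha * q_ref
    return q_new / np.linalg.norm(q_new)
```

The quaternion representation avoids gimbal lock, and the renormalization after each step keeps the state on the unit sphere, which is one reason quaternions give a stable orientation representation.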


Ubiquitous Computing | 1999

On Positioning for Augmented Reality Systems

Stelian Persa; Pieter P. Jonker

In Augmented Reality (AR), see-through Head Mounted Displays (HMDs) superimpose virtual 3D objects over the real world. They have the potential to enhance a user's perception of, and interaction with, the real world. However, many AR applications will not be accepted until we can accurately align virtual objects in the real world. One step to achieve better registration is to improve the tracking system. This paper surveys the requirements for and feasibility of a combination of an inertial tracking system and a vision based positioning system implemented on a parallel SIMD linear processor array.


Intelligent Robots and Computer Vision XX: Algorithms, Techniques, and Active Vision | 2001

Real-time computer vision system for mobile robot

Stelian Persa; Pieter P. Jonker

The purpose of this paper is to present a real-time vision system for position determination and vision guidance to navigate an autonomous mobile robot in a known environment. We use a digital camera with a FireWire interface, which provides ten times the video capture bandwidth of USB. In order to achieve real-time image processing we use MMX technology to accelerate vision tasks. Camera calibration is a necessary step in 3D computer vision in order to extract metric information from 2D images. Calibration is used to determine the camera parameters by minimizing the mean square error between model and calibration points. The camera calibration we use here is based on several views of a planar calibration pattern, which is an easy-to-use and accurate algorithm. For the pose of the robot we use corner points and lines as features, extracted from the image and matched with the model of the environment. The algorithm is as follows: first we compute an initial pose using Ganapathy's four-point algorithm, and we use this initial estimation as the starting point for the iterative algorithm proposed by Araujo in order to refine our pose.
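The initialize-then-refine pattern can be illustrated with a small NumPy sketch. For brevity it refines only the translation by gradient descent on the reprojection error, whereas the paper refines the full pose with Araujo's iterative algorithm starting from a four-point estimate; the learning rate and step sizes here are illustrative assumptions.

```python
import numpy as np

def project(points_3d, R, t, K):
    # Pinhole projection: model frame -> camera frame -> pixel coordinates.
    cam = points_3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def reproj_mse(points_3d, points_2d, R, t, K):
    # Mean squared reprojection error in pixels^2.
    d = project(points_3d, R, t, K) - points_2d
    return float(np.mean(d ** 2))

def refine_translation(points_3d, points_2d, R, t0, K,
                       iters=300, lr=1e-6, eps=1e-6):
    # Finite-difference gradient descent on the reprojection error,
    # starting from the (coarse) initial estimate t0.
    t = np.asarray(t0, dtype=float).copy()
    for _ in range(iters):
        base = reproj_mse(points_3d, points_2d, R, t, K)
        grad = np.zeros(3)
        for i in range(3):
            tp = t.copy()
            tp[i] += eps
            grad[i] = (reproj_mse(points_3d, points_2d, R, tp, K) - base) / eps
        t -= lr * grad
    return t
```

The key property exploited here is that a good initial estimate (from the four-point algorithm) puts the iterative refinement inside the basin of attraction of the true pose, so a simple local minimization of reprojection error suffices.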


Intelligent Systems and Advanced Manufacturing | 2002

Multisensor robot navigation system

Stelian Persa; Pieter P. Jonker

Almost all robot navigation systems work indoors. Outdoor robot navigation systems offer the potential for new application areas. The biggest single obstacle to building effective robot navigation systems is the lack of accurate wide-area sensors for trackers that report the locations and orientations of objects in an environment. Active (sensor-emitter) tracking technologies require powered-device installation, limiting their use to prepared areas that are relatively free of natural or man-made interference sources. The hybrid tracker combines rate gyros and accelerometers with a compass, a tilt orientation sensor, and a DGPS system. Sensor distortions, delays and drift required compensation to achieve good results. The measurements from the sensors are fused together to compensate for each other's limitations. Analysis and experimental results demonstrate the system's effectiveness. The paper presents a field experiment for a low-cost strapdown-IMU (Inertial Measurement Unit)/DGPS combination, with data processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost ISA (Inertial Sensor Assembly) and because of the relatively small area of the trajectory. The scope of this experiment was to test the feasibility of an integrated DGPS/IMU system of this type and to develop a field evaluation procedure for such a combination.
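The 2-D IMU/DGPS combination described above can be sketched as dead reckoning periodically pulled toward the GPS fix. This is a minimal illustration under the same simplifications the paper states (earth rotation and gravity variation neglected); the blend gain is an illustrative stand-in for a proper Kalman update.

```python
import numpy as np

def dead_reckon_step(pos, vel, heading, accel_body, gyro_z, dt):
    # Integrate body-frame forward acceleration and yaw rate into
    # 2-D position, velocity and heading (strapdown, planar case).
    heading = heading + gyro_z * dt
    a_world = np.array([np.cos(heading), np.sin(heading)]) * accel_body
    vel = vel + a_world * dt
    pos = pos + vel * dt
    return pos, vel, heading

def gps_correct(pos, gps_pos, gain=0.2):
    # Blend the drifting dead-reckoned position toward the DGPS fix;
    # the inertial path bridges the gaps between (slower) GPS updates.
    return pos + gain * (np.asarray(gps_pos) - pos)
```

This shows why the two sensors compensate for each other's limitations: the IMU gives high-rate but drifting estimates, while DGPS gives low-rate but drift-free absolute positions.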


Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision | 2000

Real-time image processing architecture for robot vision

Stelian Persa; Pieter P. Jonker

This paper presents a study of the impact of MMX technology and PIII Streaming SIMD (Single Instruction stream, Multiple Data stream) Extensions on image processing and machine vision applications, which, because of their hard real-time constraints, is an undoubtedly challenging task. A comparison with traditional scalar code and with another parallel SIMD architecture (the IMAP-VISION board) is discussed, with emphasis on the particular programming strategies for speed optimization. More precisely, we discuss the low-level and intermediate-level image processing algorithms which are best suited for parallel SIMD implementation. High-level image processing algorithms are more suitable for parallel implementation on MIMD architectures. While the IMAP-VISION system performs better because of its large number of processing elements, the MMX processor and the PIII (with Streaming SIMD Extensions) remain good candidates for low-level image processing.
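The scalar-versus-SIMD contrast for a low-level operation can be illustrated with a simple threshold. The sketch below uses NumPy's whole-array operations as a stand-in for the data-parallel pattern that MMX/SSE exploit (processing 8 or 16 packed pixels per instruction); it is not the paper's benchmark code.

```python
import numpy as np

def threshold_scalar(img, t):
    # Scalar reference: one pixel per loop iteration.
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 255 if img[y, x] > t else 0
    return out

def threshold_simd_style(img, t):
    # Whole-array comparison and select: the same branch-free,
    # data-parallel formulation a packed-compare SIMD kernel uses.
    return np.where(img > t, 255, 0).astype(img.dtype)
```

Low-level operations like this map well to SIMD because every pixel undergoes the identical, branch-free computation; high-level algorithms with data-dependent control flow do not, which is why the paper points them toward MIMD architectures instead.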


Archive | 2000

Hybrid Tracking System for Outdoor Augmented Reality

Stelian Persa; Pieter P. Jonker


Archive | 2001

Evaluation of Two Real Time Image Processing Architectures

Stelian Persa; Pieter P. Jonker


Journal of Machine Vision and Applications | 2000

Evaluation of Two Real Time Low Level Image Processing Architectures

Stelian Persa; Cristina Nicolescu; Pieter P. Jonker


Collaboration

Top co-authors:

Pieter P. Jonker, Delft University of Technology
Cristina Nicolescu, Delft University of Technology
Jurjen Caarls, Delft University of Technology
Frank de Jong, Delft University of Technology
Inald Lagendijk, Delft University of Technology