Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yakup Genc is active.

Publication


Featured research published by Yakup Genc.


International Symposium on Mixed and Augmented Reality | 2002

Marker-less tracking for AR: a learning-based approach

Yakup Genc; Sebastian Riedel; Fabrice Souvannavong; Cuneyt Akinlar; Nassir Navab

Estimating the pose of a camera (virtual or real) in which some augmentation takes place is one of the most important parts of an augmented reality (AR) system. The availability of powerful processors and fast frame grabbers has made vision-based trackers commonplace due to their accuracy as well as their flexibility and ease of use. Current vision-based trackers rely on tracking markers. Markers increase robustness and reduce computational requirements, but their use can be cumbersome, as they require maintenance. Direct use of scene features for tracking is therefore desirable. To this end, we describe a general system that tracks the position and orientation of a camera observing a scene without visual markers. Our method is a two-stage process. In the first stage, a set of features is learned with the help of an external tracking system during use. In the second stage, these learned features are used for camera tracking whenever the first stage decides that it is possible to do so. The system is general enough to employ any available feature-tracking and pose-estimation method for learning and tracking. We experimentally demonstrate the viability of the method on real-life examples.
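As a concrete illustration of the second stage: once 3D positions have been learned for scene features, tracking reduces to pose estimation from 2D-3D correspondences. The sketch below uses OpenCV's generic PnP solver with made-up points and intrinsics; it is an illustrative stand-in, not the paper's system.

```python
import numpy as np
import cv2

# Learned 3D feature positions (stage one) and their 2D detections in the
# current frame (stage two). All values here are illustrative placeholders.
pts3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.5], [0.5, 0.5, 1.0], [0.2, 0.8, 0.3]])
pts2d = np.array([[320.0, 240.0], [420.0, 238.0], [322.0, 140.0],
                  [430.0, 130.0], [368.0, 180.0], [340.0, 160.0]])
K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix from rotation vector
    print("camera rotation:\n", R)
    print("camera translation:", tvec.ravel())
```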


International Symposium on Mixed and Augmented Reality | 2002

Practical solutions for calibration of optical see-through devices

Yakup Genc; Mihran Tuceryan; Nassir Navab

Registration is a crucial task in a see-through augmented reality (AR) system. Its importance stems not only from the fact that registration requires careful calibration, but also from the necessity that any calibration procedure take users into account. Tuceryan et al. (2002) proposed a general method for calibrating a see-through device based on dynamic alignment of virtual and real points. Although the method is a powerful tool, our experiments showed that users find aligning many points overwhelming. We introduce improvements that simplify the calibration process and increase the success rate. We first identified why calibration parameters differ from user to user and how this can be prevented by adopting particular configurations for the tracker sensor and display, which allowed us to re-use existing calibrations. Furthermore, we introduce a simpler calibration model that requires fewer user inputs, typically four, to calibrate the system.


Presence: Teleoperators & Virtual Environments | 2002

Single-point active alignment method (SPAAM) for optical see-through HMD calibration for augmented reality

Mihran Tuceryan; Yakup Genc; Nassir Navab

Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. Optical see-through systems present an additional challenge because, unlike video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through head-mounted displays. We first introduce a method for calibrating monocular optical see-through displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays, in which the displays for both eyes are calibrated in a single procedure. The method integrates measurements for the camera and a six-degrees-of-freedom tracker attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker and a vision-based infrared tracker we built ourselves. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction required to perform the calibration is extremely easy compared to prior methods, and there is no requirement to keep the head immobile while calibrating. In the stereo case, the user aligns a stereoscopically fused 2D marker, perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.
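The core computation behind this kind of single-point calibration is a direct linear transform (DLT): each alignment pairs the fixed world point, expressed in head/tracker coordinates at that instant, with the 2D screen location the user aligned, and the 3x4 projection relating them is recovered by SVD. A minimal numpy sketch with synthetic correspondences (point counts and values are illustrative, not from the paper):

```python
import numpy as np

def dlt_projection(pts3d, pts2d):
    """Recover a 3x4 projection P (up to scale) with pts2d ~ P @ [pts3d; 1]."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)          # right singular vector = least-squares fit

# Each alignment pairs the fixed world point, expressed in head/tracker
# coordinates at that instant, with the 2D screen position the user aligned.
# Synthetic data stands in for real alignments here.
rng = np.random.default_rng(0)
pts3d = rng.random((8, 3)) * 2.0
P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
homog = np.hstack([pts3d, np.ones((8, 1))]) @ P_true.T
pts2d = homog[:, :2] / homog[:, 2:3]

P = dlt_projection(pts3d, pts2d)
print(P / P[-1, -1])                     # agrees with P_true up to scale
```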


International Symposium on Mixed and Augmented Reality | 2000

Optical see-through HMD calibration: a stereo method validated with a video see-through system

Yakup Genc; Frank Sauer; Fabian Wenzel; Mihran Tuceryan; Nassir Navab

We present initial results from ongoing work on calibrating optical see-through head-mounted displays (HMDs). We have developed a method to calibrate stereoscopic optical see-through HMDs based on the 3D alignment of a target in the physical world with a virtual object in the user's view. This extends the single point active alignment method (SPAAM) (Tuceryan and Navab, 2000) developed for monocular HMDs. Going from monocular to stereoscopic optical HMDs for calibration purposes is not straightforward, partly because the perceptual complexity of the stereo fusion process raises completely new challenges, including the choice of shape for the virtual object and the physical target, and how to display the virtual object without any knowledge of the combined characteristics of the HMD and the eye, i.e., the projection model. We have addressed these issues and propose a solution to the calibration problem, which we validated through experiments on the see-through HMD system described in (Sauer et al., 2000). Through these experiments, we found appropriate types of virtual objects and physical features for the 3D alignment and measured how accurate the alignment and calibration actually are.


International Conference on Pattern Recognition | 2010

Edge Drawing: A Heuristic Approach to Robust Real-Time Edge Detection

Cihan Topal; Cuneyt Akinlar; Yakup Genc

We propose a new edge detection algorithm that works by computing a set of anchor edge points in an image and then linking these anchor points by drawing edges between them. The resulting edge map consists of perfectly contiguous, one-pixel-wide edges. Performance tests show that our algorithm is up to 16% faster than the fastest known edge detection algorithm, the OpenCV implementation of the Canny edge detector. We believe that our edge detector is a novel step in edge detection and is well suited to next-generation real-time image processing and computer vision applications.
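To make the anchor idea concrete, the toy sketch below implements a simplified version of the first stage: anchors are pixels whose gradient magnitude clearly peaks across the local edge direction. The thresholds are illustrative and the linking stage (walking out from each anchor along the strongest-gradient neighbours) is omitted; this is not the published algorithm.

```python
import numpy as np
from scipy import ndimage

def edge_anchors(img, grad_thresh=36.0, anchor_margin=8.0):
    """Find anchor pixels: gradient-magnitude peaks across the local edge
    direction. Toy sketch with illustrative thresholds, not the published
    algorithm."""
    g = img.astype(float)
    gx = ndimage.sobel(g, axis=1)             # horizontal gradient
    gy = ndimage.sobel(g, axis=0)             # vertical gradient
    mag = np.hypot(gx, gy)
    vertical_edge = np.abs(gx) >= np.abs(gy)  # gradient points horizontally

    anchors = []
    for y in range(1, g.shape[0] - 1):
        for x in range(1, g.shape[1] - 1):
            if mag[y, x] < grad_thresh:
                continue
            if vertical_edge[y, x]:           # peak across the edge: left/right
                neighbours = max(mag[y, x - 1], mag[y, x + 1])
            else:                             # horizontal edge: up/down
                neighbours = max(mag[y - 1, x], mag[y + 1, x])
            if mag[y, x] - neighbours >= anchor_margin:
                anchors.append((y, x))
    return anchors
```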


International Symposium on Mixed and Augmented Reality | 2001

A new system for online quantitative evaluation of optical see-through augmentation

Erin McGarrity; Yakup Genc; Mihran Tuceryan; Charles B. Owen; Nassir Navab

A crucial aspect of implementing an augmented reality (AR) system is determining its accuracy, which in turn determines the applications it can be used for. The aim of our research is to measure the overall accuracy of an arbitrary AR system. Once measurements of a system are made, they can be analyzed to determine the structure and sources of error. From this analysis it may also be possible to improve the methods used to calibrate the system and register the virtual to the real. This paper describes an online system for measuring the registration accuracy of optical see-through augmentation. By online, we mean that users can measure the registration error they are experiencing while using the system. We overcome the difficulty of not having retinal access by having the user indicate the projection of a perceived object on a planar measurement device. Our method provides information that can be used to analyze the structure of the system error in two or three dimensions. We show results of applying our method to two monocular optical see-through AR systems.
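The quantity being measured reduces to a 2D error: the distance between where the calibrated system predicts a world point should appear and where the user indicates perceiving it on the measurement plane. A minimal numpy sketch, assuming a 3x4 projection model for the calibrated display (the names and model are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def registration_errors(P, world_pts, indicated_pts):
    """Distance between the system's predicted 2D projections of known world
    points (via its 3x4 calibration P) and the user-indicated positions."""
    X = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    proj = X @ P.T                           # homogeneous image points
    predicted = proj[:, :2] / proj[:, 2:3]
    errors = np.linalg.norm(predicted - indicated_pts, axis=1)
    return errors.mean(), errors.std(), errors.max()
```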


International Symposium on Mixed and Augmented Reality | 2001

Taking AR into large scale industrial environments: navigation and information access with mobile computers

Xiang Zhang; Yakup Genc; Nassir Navab

This paper presents a framework of applications based on AR technologies. The framework is designed for spatial data access, on-site navigation, and real-time video augmentation, and is applicable to different scenarios in large industrial settings. At the core of our framework lies a mobile computer equipped with a camera. The camera scans the environment for visually coded markers, which are registered to a global coordinate system through available drawings or floor plans. The tracker software processes images from the camera at moderate frame rates (typically 12 fps) and estimates the location of the user. The system can then guide the user through the environment, provide location-relevant data from a spatial database, and augment the user's view through the camera. Data exchange is done via a wireless network, and the user interface lets the user access various types of data without considerable effort.
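Once a marker is detected and its pose relative to the camera is estimated (e.g., from its corner points with a PnP solver), locating the user globally is a matter of chaining rigid transforms with the marker's registered pose from the floor plan. A minimal sketch under assumed naming conventions:

```python
import numpy as np

def camera_pose_in_world(T_world_marker, R_cm, t_cm):
    """R_cm, t_cm map marker coordinates into camera coordinates (typical
    PnP output). Inverting gives the camera's pose in the marker frame;
    chaining with the marker's surveyed global pose locates the user."""
    T_cam_marker = np.eye(4)                 # marker frame -> camera frame
    T_cam_marker[:3, :3] = R_cm
    T_cam_marker[:3, 3] = np.asarray(t_cm).ravel()
    return T_world_marker @ np.linalg.inv(T_cam_marker)
```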


International Conference on Computer Vision | 1999

New algorithms for two-frame structure from motion

John Oliensis; Yakup Genc

We describe two new algorithms for two-frame structure from motion from tracked point features. One is the first fast algorithm for computing an exact least-squares estimate. It exploits our observation that the rotationally invariant least-squares error can be written in a simple form that depends just on the motion. The other is essentially as accurate as the least-squares estimate and is more efficient, probably faster, and potentially more robust than previous algorithms of comparable accuracy. We also analyze theoretically the accuracy of the optical-flow approximation to the least-squares error.
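For context, the classical baseline for calibrated two-frame structure from motion is the eight-point estimate of the essential matrix, from which the rotation and translation are then extracted by a standard SVD decomposition. The numpy sketch below shows that baseline; it is not the authors' least-squares algorithm, which instead minimizes an error written directly in terms of the motion.

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Eight-point estimate of the essential matrix E from calibrated,
    normalized image points x1, x2 (N x 2 arrays, N >= 8), satisfying
    [x2; 1]^T E [x1; 1] = 0. A classical baseline, not the paper's method."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                         u1, v1, np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: singular values (s, s, 0).
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt
```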


International Conference on Computer Vision | 1999

Fast algorithms for projective multi-frame structure from motion

John Oliensis; Yakup Genc

We describe new algorithms for multi-frame structure from motion from tracked point features. The algorithms are essentially linear and give accuracies similar to those of a maximum-likelihood estimate. For the common situations where the calibration is fixed and approximately known, we experimentally compare the fully projective versions of our algorithms to mixed projective/Euclidean strategies. Our theoretical results clarify the nature of dominant-plane compensation and the effect of calibration error on translation recovery.


Asian Conference on Computer Vision | 2006

Fusion of 3D and appearance models for fast object detection and pose estimation

Hesam Najafi; Yakup Genc; Nassir Navab

Real-time estimation of a camera's pose relative to an object is still an open problem. The difficulty stems from the need for fast and robust detection of known objects in the scene given their 3D models, a set of 2D images, or both. This paper proposes a method that conducts a statistical analysis of the appearance of model patches from all possible viewpoints in the scene and incorporates the 3D geometry during both the matching and pose estimation processes. Appearance information from the 3D model and real images is combined with synthesized images to learn the variations in the multiple-view feature descriptors using PCA. Furthermore, by analyzing the computed visibility distribution of each patch from different viewpoints, a reliability measure for each patch is estimated. This reliability measure is used to further constrain the classification problem, resulting in a more scalable representation that reduces the effect of 3D model complexity on run-time matching performance. Moreover, as required in many real-time applications, the approach can yield a reliability measure for the estimated pose. Experimental results show how the pose of complex objects can be estimated efficiently from a single test image.
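The PCA step described above can be sketched in a few lines: descriptors of the same model patch, synthesized from many viewpoints, are stacked and reduced to a low-dimensional basis that captures their variation across views. A hedged numpy sketch with made-up dimensions and data:

```python
import numpy as np

def pca_subspace(descriptors, k=16):
    """Fit a k-dimensional PCA subspace to descriptors of one model patch
    rendered from many viewpoints; returns a projection function."""
    mean = descriptors.mean(axis=0)
    _, _, Vt = np.linalg.svd(descriptors - mean, full_matrices=False)
    basis = Vt[:k]                           # top-k principal directions

    def project(d):
        return (d - mean) @ basis.T          # compact multi-view code
    return project

# e.g. 200 synthesized views of one patch, each a 128-D feature descriptor
views = np.random.default_rng(0).random((200, 128))
project = pca_subspace(views)
code = project(views[0])                     # 16-D code used for matching
```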

Collaboration


Dive into Yakup Genc's collaborations.

Top Co-Authors

Jean Ponce
École Normale Supérieure

Yan Li
Carnegie Mellon University