Siamak Khatibi
Blekinge Institute of Technology
Publications
Featured research published by Siamak Khatibi.
digital television conference | 2007
Jiandan Chen; Siamak Khatibi; Wlodek Kulesza
This paper presents a method for planning the positions of multiple stereo sensors in an indoor environment, a component of an intelligent vision agent system. We propose a new approach to dynamically adjust the position, pose and baseline length of multiple stereo pairs in 3D space in order to obtain sufficient visibility and accuracy for surveillance, tracking and 3D reconstruction. The paper proposes visibility constraints to plan the camera poses, and a depth accuracy constraint to control the baseline length. The minimum number of stereo pairs necessary to cover the target space is found by integer linear programming. 3D simulations of the reconstruction accuracy and of the coverage of the human activity space were performed in Matlab.
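As a rough illustration of the coverage-optimization step, the minimum-pair selection can be cast as a set-cover integer linear program. The sketch below is a hypothetical Python/SciPy rendering (the paper's experiments were done in Matlab), and the coverage matrix is a toy assumption.

```python
# Hypothetical sketch: choose the minimum number of candidate stereo pairs
# so that every target point is covered, as a set-cover ILP.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def min_stereo_pairs(coverage):
    """coverage[i, j] = 1 if candidate stereo pair j sees target point i."""
    n_targets, n_pairs = coverage.shape
    c = np.ones(n_pairs)                       # minimize number of selected pairs
    # Each target must be covered by at least one selected pair: coverage @ x >= 1.
    cons = LinearConstraint(coverage, lb=np.ones(n_targets), ub=np.inf)
    res = milp(c, constraints=cons, integrality=np.ones(n_pairs),
               bounds=Bounds(0, 1))            # binary selection variables
    return res.x.round().astype(int) if res.success else None

# Toy example: 4 target points, 3 candidate pairs.
cov = np.array([[1, 0, 1],
                [1, 1, 0],
                [0, 1, 0],
                [0, 1, 1]])
print(min_stereo_pairs(cov))  # pair 1 is required; one of pairs 0 or 2 joins it
```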
international conference on signal processing and communication systems | 2011
Homayoun Jamshidi; Thomas Lukaszewicz; Amin Kashi; Ansel Berghuvud; Hans-Jürgen Zepernick; Siamak Khatibi
In this paper, we present several fusion approaches for merging speed limits reported in digital maps with speed limit signs detected by an onboard camera. Digital maps holding speed limit signs must be updated to cover speed limit changes and cannot support variable speed limits. On the other hand, a camera system placed onboard a vehicle can detect variable speed limits as well as temporary speed limits at construction sites. However, an onboard camera cannot detect implicit speed limits. As such, a combination of digital map and camera system can provide more accurate speed limit information for driver assistance and vehicle safety features. In this paper, the digital map and camera system are fused to obtain this more accurate speed limit information. The fused speed limits, as well as those from the individual sources, are compared with ground truth data obtained from an extensive measurement campaign spanning over 15000 km of driving in five European countries. Specifically, five fusion approaches are defined, modeled and evaluated. Four of them are based on prioritizing information, while the remaining approach is based on the classical Dempster-Shafer data fusion technique. The performance of the approaches is reported as the percentage of correctly detected speed limits with respect to the driving distance. The results clearly show that fusion techniques can significantly increase the amount of correctly detected speed limits. A MATLAB-based graphical user interface was designed to load test data and to evaluate and present the results as quickly and efficiently as possible.
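To make the prioritization idea concrete, here is a minimal sketch of one possible priority rule, assuming the camera reading outranks the map whenever a sign is detected; the exact priority orderings of the paper's four approaches are not reproduced here.

```python
# Illustrative priority-based fusion rule (an assumption, not the authors' exact rule).
from typing import Optional

def fuse_speed_limit(map_limit: Optional[int],
                     camera_limit: Optional[int]) -> Optional[int]:
    """Return the fused speed limit in km/h, or None if neither source reports."""
    if camera_limit is not None:
        # Camera sees explicit, variable, or temporary signs: prioritize it.
        return camera_limit
    # Camera cannot detect implicit limits; fall back to the digital map.
    return map_limit

assert fuse_speed_limit(50, 30) == 30    # temporary roadwork sign wins
assert fuse_speed_limit(50, None) == 50  # implicit limit taken from the map
```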
Acta Polytechnica | 2016
Wei Wen; Siamak Khatibi
Cameras have made huge progress in sensor resolution and low-luminance performance. However, we are still far from having a camera as powerful as the human eye. The evolution of our visual system draws attention to two major issues: the form and the density of the sensor. The high contrast and optimal sampling properties of our visual spatial arrangement are directly related to its dense hexagonal form. In this paper, we propose a novel software-based method to create images on a compact dense hexagonal grid, derived from a simulated square sensor array by a virtual increase of the fill factor and a half-pixel shift. Orbit functions are then proposed for hexagonal image processing. The results show that image processing in the orbit domain is feasible and that the generated hexagonal images are superior to square images in detecting curved edges. We believe that orbit-domain image processing has great potential to become the standard processing for hexagonal images.
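The grid construction can be sketched as follows, assuming a standard hexagonal lattice obtained from a square array by a half-pixel shift of alternate rows and a sqrt(3)/2 row pitch; the bilinear interpolation stands in for the paper's fill-factor resampling step.

```python
# Minimal sketch: resample a square-grid image at hexagonal pixel centres.
import numpy as np
from scipy.ndimage import map_coordinates

def square_to_hexagonal(img):
    """Sample a square-grid image at the sites of a hexagonal lattice."""
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    cols[1::2] += 0.5            # half-pixel shift on odd rows
    rows *= np.sqrt(3) / 2.0     # hexagonal vertical pitch
    # Bilinear interpolation of the square image at the hexagonal sites.
    return map_coordinates(img, [rows, cols], order=1, mode='nearest')

hex_img = square_to_hexagonal(np.random.rand(64, 64))
print(hex_img.shape)  # (64, 64), values sampled at hexagonal sites
```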
Archive | 2008
Wlodek Kulesza; Jiandan Chen; Siamak Khatibi
The first part of this chapter introduces a mathematical geometry model used to analyze the iso-disparity surface. This model can be used to dynamically adjust the positions, poses and baseline lengths of multiple stereo camera pairs in 3D space in order to obtain sufficient visibility and accuracy for surveillance, tracking and 3D reconstruction. The depth reconstruction accuracy is quantitatively analyzed with the proposed model. The proposed iso-disparity model shows that the shapes and intervals of the iso-disparity curves can be reliably controlled through the system's configuration and the target properties. In the second part of this chapter, the key factors affecting the accuracy of 3D reconstruction are analyzed. It is shown that the convergence angle and the target distance influence the depth reconstruction accuracy most significantly. Depth accuracy constraints are implemented in the model to control the stereo pair's baseline length, position and pose, guaranteeing a specified accuracy of the 3D reconstruction. The reconstruction accuracy is verified by a cubic reconstruction method. The optimization is implemented by incorporating the camera, object and stereo pair constraints into integer linear programming.
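For intuition on the depth accuracy constraint, the standard parallel-stereo relations give Z = fB/d, so a one-step disparity quantization error grows quadratically with depth; the minimal sketch below uses this first-order relation (the chapter's full model also handles convergent geometries).

```python
# First-order depth quantization error for parallel stereo: dZ ~ Z^2 * dd / (f * B).
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    return f_px * baseline_m / disparity_px

def depth_quantization_error(f_px: float, baseline_m: float,
                             z_m: float, dd_px: float = 1.0) -> float:
    """First-order depth error for a disparity step of dd_px pixels at depth z_m."""
    return z_m ** 2 * dd_px / (f_px * baseline_m)

# A longer baseline tightens the iso-disparity spacing at a given depth:
print(depth_quantization_error(800, 0.10, 3.0))  # ~0.11 m with a 10 cm baseline
print(depth_quantization_error(800, 0.30, 3.0))  # ~0.04 m with a 30 cm baseline
```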
international conference on image and graphics | 2015
Wei Wen; Siamak Khatibi
Over the past twenty years, CCD sensors have made huge hardware progress in resolution and low-light performance. However, due to physical limits of sensor design and fabrication, the fill factor has become the bottleneck for improving the quantum efficiency of CCD sensors and thereby widening the dynamic range of images. In this paper we propose a novel software-based method to widen the dynamic range by virtually increasing the fill factor through a resampling process. The CCD images are rearranged onto a new grid of virtual pixels composed of subpixels. A statistical framework consisting of a local learning model and Bayesian inference is used to estimate the new subpixel intensities. CCD images with known, differing fill factors were obtained; resampled images were then computed and compared to the respective CCD and optical images. The results show that the proposed method can significantly widen the recordable dynamic range of CCD images and virtually increase the fill factor to 100%.
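As a loose illustration only: the sketch below estimates each virtual subpixel as the posterior mean of a Gaussian whose prior comes from its local neighbourhood, a drastic simplification of the paper's statistical framework; all parameter values are assumptions.

```python
# Toy subpixel resampling: each pixel is split into k x k virtual subpixels and
# each subpixel intensity is a Gaussian posterior mean (simplified illustration).
import numpy as np

def resample_to_subpixels(img, k=2, prior_var=1.0, noise_var=0.25):
    """Split each pixel into k x k virtual subpixels and estimate intensities."""
    h, w = img.shape
    up = np.repeat(np.repeat(img, k, axis=0), k, axis=1)  # observed values
    out = np.empty_like(up, dtype=float)
    pad = np.pad(up, 1, mode='edge')
    for i in range(h * k):
        for j in range(w * k):
            mu0 = pad[i:i + 3, j:j + 3].mean()  # local prior mean
            # Gaussian posterior mean combining observation and local prior:
            out[i, j] = (prior_var * up[i, j] + noise_var * mu0) / (prior_var + noise_var)
    return out

print(resample_to_subpixels(np.random.rand(8, 8)).shape)  # (16, 16)
```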
machine vision applications | 2013
J. Rafid Siddiqui; Mohammad Havaei; Siamak Khatibi; Craig A. Lindley
This paper presents a novel approach for the classification of planar surfaces in unorganized point clouds. A feature-based planar surface detection method is proposed which classifies point cloud data into planar and non-planar points by learning a classification model from an example set of planes. The algorithm segments the scene by applying a graph partitioning approach with an improved representation of the association among graph nodes. The planarity of the points in a scene segment is then estimated by classifying input points as planar if they satisfy the planarity constraint imposed by the learned model. The resulting planes have potential application in solving the simultaneous localization and mapping problem for navigation of an unmanned air vehicle. The proposed method is validated on real and synthetic scenes. The real data consist of five datasets recorded by capturing three-dimensional (3D) point clouds with an RGBD camera moved through five different indoor scenes. A set of synthetic 3D scenes containing planar and non-planar structures was constructed and contaminated with Gaussian and random structural noise. The results of the empirical evaluation on both the real and the simulated data suggest that the method provides a generalized solution for plane detection even in the presence of noise and non-planar objects in the scene. Furthermore, a comparative study has been performed against multiple plane extraction methods.
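A hypothetical sketch of a planarity test: fit a least-squares plane to a segment and keep points with small residuals. The paper learns its planarity model from example planes, so the fixed threshold here is purely illustrative.

```python
# Least-squares plane fit via SVD; points close to the plane are labeled planar.
import numpy as np

def planar_points(points, tol=0.01):
    """points: (N, 3) array. Returns a boolean mask of near-planar points."""
    centroid = points.mean(axis=0)
    # Plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = np.abs((points - centroid) @ normal)
    return residuals < tol

pts = np.random.rand(100, 3)
pts[:, 2] = 0.5 + 0.001 * np.random.randn(100)  # near-planar cloud
print(planar_points(pts).mean())                # fraction classified as planar
```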
international conference on machine vision | 2013
J. R. Siddiqui; Siamak Khatibi
This paper addresses the problem of robust and invariant representation of places. A place recognition technique is proposed, followed by an application to semantic topological mapping. The proposed technique is evaluated on a robot localization database consisting of a large set of images taken under various weather conditions. The results show that the proposed method can robustly recognize places and is invariant to geometric transformations, brightness changes and noise. A comparative analysis with state-of-the-art semantic place description methods shows that the method outperforms the competing methods and exhibits better average recognition rates.
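Purely as a toy illustration of brightness-invariant place matching (the paper's descriptor is more elaborate and is not reproduced here), one can compare histograms of locally normalized intensities:

```python
# Toy place matching: normalized-intensity histograms compared by L1 distance.
import numpy as np

def place_descriptor(img, bins=32):
    """Brightness-robust descriptor: histogram of mean/std-normalized intensities."""
    norm = (img - img.mean()) / (img.std() + 1e-8)
    hist, _ = np.histogram(norm, bins=bins, range=(-3, 3), density=True)
    return hist / (hist.sum() + 1e-8)

def match_place(query, database):
    """Return the index of the database descriptor closest to the query."""
    dists = [np.abs(query - d).sum() for d in database]  # L1 distance
    return int(np.argmin(dists))

db = [place_descriptor(np.random.rand(60, 80)) for _ in range(5)]
print(match_place(db[2], db))  # recovers index 2
```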
image and vision computing new zealand | 2013
J. R. Siddiqui; Siamak Khatibi
The cumbersome process of constructing and incrementally updating large indoor maps can be simplified by semantic maps. A novel semantic mapping method for indoor environments is proposed which employs a flash-n-extend strategy for constructing and updating the map. At every flash event, a 3D snapshot of the environment is taken and extended until the next flash event occurs. A flash event occurs at a motion state transition of the mobile robot, detected by decomposing the motion estimates. The proposed method is evaluated on a set of image sequences and is found to be robust in building indoor maps suitable for autonomous navigation. The constructed maps provide a simple representation of the environment, which makes them well suited for high-level reasoning tasks.
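A minimal sketch of the flash-trigger logic, assuming motion estimates are decomposed into per-frame translation and rotation magnitudes and a flash event fires on a state change; the thresholds and state labels are assumptions.

```python
# Detect motion state transitions (flash events) from decomposed motion estimates.
def motion_state(delta_t, delta_r, t_thresh=0.05, r_thresh=0.05):
    """Classify a frame-to-frame motion estimate by its translation/rotation size."""
    if delta_t < t_thresh and delta_r < r_thresh:
        return "still"
    return "turning" if delta_r >= r_thresh else "moving"

def flash_events(translations, rotations):
    """Yield indices where the motion state transitions (snapshot triggers)."""
    prev = None
    for i, (dt, dr) in enumerate(zip(translations, rotations)):
        state = motion_state(dt, dr)
        if prev is not None and state != prev:
            yield i
        prev = state

t = [0.0, 0.0, 0.2, 0.2, 0.2, 0.01]
r = [0.0, 0.0, 0.0, 0.0, 0.1, 0.0]
print(list(flash_events(t, r)))  # transitions at indices 2, 4, 5
```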
Image and Vision Computing | 2010
Jiandan Chen; Siamak Khatibi; Wlodek Kulesza
Depth spatial quantization uncertainty, caused by discrete sensors, is one of the factors that influence depth reconstruction accuracy. This paper discusses the quantization uncertainty distribution, introduces a mathematical model of the uncertainty interval range, and analyzes the movements of the sensors in an Intelligent Vision Agent System. Such a system makes use of multiple sensors and controls their deployment and autonomous servoing. This paper proposes a dithering algorithm which reduces the depth reconstruction uncertainty. The algorithm achieves high accuracy from a few images taken by low-resolution sensors. The dither signal is estimated, and then generated, through an analysis of the iso-disparity planes; it is used to control the camera movement. The proposed approach is validated and compared with a direct triangulation method. The simulation results are reported in terms of depth reconstruction error statistics. A physical experiment shows that the dithering method reduces the depth reconstruction error.
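The benefit of dithering can be sketched with a one-dimensional toy model: sub-pixel shifts move the true disparity relative to the pixel grid, so averaging the quantized disparities over several dithered shots recovers sub-pixel accuracy. The uniform shift schedule below is an assumption; the paper derives the dither signal from the iso-disparity planes.

```python
# Toy model: averaging quantized disparities over dithered sub-pixel shifts.
import numpy as np

def dithered_depth(f_px, baseline_m, true_z, n_shots=4):
    true_d = f_px * baseline_m / true_z    # true (continuous) disparity
    shifts = np.arange(n_shots) / n_shots  # assumed uniform sub-pixel offsets
    # Each shot quantizes disparity to whole pixels on a shifted grid:
    d_quant = np.round(true_d + shifts) - shifts
    return f_px * baseline_m / d_quant.mean()

f, b, z = 800.0, 0.10, 3.0
single = f * b / np.round(f * b / z)  # one quantized measurement
print(abs(single - z))                # ~0.037 m error without dithering
print(abs(dithered_depth(f, b, z) - z))  # ~0.005 m error with 4 dithered shots
```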
Electro-optical remote sensing, detection, and photonic technologies and their applications | 2007
Jiandan Chen; Siamak Khatibi; Jenny Wirandi; Wlodek Kulesza
The Intelligent Vision Agent System, IVAS, is a system for automatic target detection, identification and information processing for use in the surveillance of human activities. The system consists of multiple sensors and controls their deployment and autonomous servoing. Finding the optimal configuration for these sensors, so that the target objects and their environment are captured to a required specification, is a crucial problem. With a stereo pair of sensors, 3D space can be discretized by iso-disparity surfaces, and the depth reconstruction accuracy is closely related to the positions of the iso-disparity curves. This paper presents a method for planning the positions of multiple stereo sensors in indoor environments. The proposed method is a mathematical geometry model used to analyze the iso-disparity surface. We show that the distribution of the iso-disparity surfaces and the depth reconstruction accuracy are controllable by the parameters of the model. This model can be used to dynamically adjust the positions, poses and baseline lengths of multiple stereo camera pairs in 3D space in order to obtain sufficient visibility and accuracy for surveillance, tracking and 3D reconstruction. We implement the model and present uncertainty maps of depth reconstruction computed while varying the baseline length, focal length, stereo convergence angle and sensor pixel size. The results show how the depth reconstruction uncertainty depends on the stereo pair's baseline length, zoom and the sensor's physical properties.
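A hedged sketch of generating such an uncertainty map: sweep baseline and depth and evaluate the first-order quantization error dZ ≈ Z²/(fB) per disparity step (the convergence-angle terms of the full model are omitted here).

```python
# Parameter-sweep sketch of a depth-uncertainty map over baseline and depth.
import numpy as np

f_px = 800.0                             # focal length in pixels (assumed)
baselines = np.linspace(0.05, 0.50, 10)  # baseline lengths [m]
depths = np.linspace(1.0, 10.0, 10)      # target depths [m]
B, Z = np.meshgrid(baselines, depths)
uncertainty = Z ** 2 / (f_px * B)        # depth error per disparity step [m]
print(uncertainty.min(), uncertainty.max())  # smallest and largest depth error
```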