Publication


Featured research published by Kalin Mitkov Atanassov.


Proceedings of SPIE | 2011

Content-based depth estimation in focused plenoptic camera

Kalin Mitkov Atanassov; Sergio Goma; Vikas Ramachandra; Todor G. Georgiev

Depth estimation in a focused plenoptic camera is a critical step for most applications of this technology, and it poses interesting challenges because the estimation is content based. We present an iterative, content-adaptive algorithm that exploits the redundancy found in images captured by a focused plenoptic camera. Our algorithm determines the depth of each point along with a measure of reliability, allowing subsequent enhancement of the spatial resolution of the depth map. We note that the spatial resolution of the recovered depth corresponds to discrete depth values in the captured scene, which we refer to as slices. Each slice has a different depth and allows extraction of a different spatial resolution of depth, depending on the scene content present in that slice along with occluding areas. Interestingly, as the focused plenoptic camera is not theoretically limited in spatial resolution, we show that the recovered spatial resolution is depth related and, as such, rendering of a focused plenoptic image is content dependent.
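
As a rough illustration of the idea (not the paper's algorithm): in a focused plenoptic camera the same scene point appears in neighboring microimages, shifted by an amount that depends on its depth slice, so a per-point depth and reliability can be sketched by matching small patches across adjacent microimages. The names and parameters below (mi_size, max_shift, the SSD cost) are illustrative assumptions.

```python
import numpy as np

def microimage_depth(lightfield, mi_size, max_shift, patch=7):
    """Toy per-point depth from microimage matching: the depth slice is
    the integer shift that best aligns a patch with the same point in
    the next microimage; reliability is how distinct the best match is.
    A sketch of the principle only, not the paper's iterative scheme."""
    lf = lightfield.astype(float)
    h, w = lf.shape
    half = patch // 2
    depth = np.zeros((h, w), dtype=np.int32)
    reliability = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half - mi_size - max_shift):
            ref = lf[y-half:y+half+1, x-half:x+half+1]
            costs = np.empty(max_shift)
            for s in range(max_shift):
                xx = x + mi_size + s  # candidate position in next microimage
                cand = lf[y-half:y+half+1, xx-half:xx+half+1]
                costs[s] = np.sum((ref - cand) ** 2)  # SSD matching cost
            best = int(np.argmin(costs))
            depth[y, x] = best
            # margin between best and second-best cost -> match reliability
            reliability[y, x] = np.partition(costs, 1)[1] - costs[best]
    return depth, reliability
```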


Proceedings of SPIE | 2011

RAW camera DPCM compression performance analysis

Katherine L. Bouman; Vikas Ramachandra; Kalin Mitkov Atanassov; Mickey Aleksic; Sergio Goma

The MIPI standard has adopted DPCM compression for RAW image data streamed from mobile cameras. This DPCM is line based and uses a simple 1- or 2-pixel predictor. In this paper, we analyze DPCM compression performance in terms of MTF degradation. To test the scheme's performance, we generated Siemens star images and binarized them to 2-level images. The two intensity values were chosen such that their difference corresponds to the pixel differences that produce the largest relative errors in the DPCM compressor (for example, a pixel transition from 0 to 4095 corresponds to an error of 6 between the DPCM-compressed value and the original pixel value). The DPCM scheme introduces different amounts of error depending on the pixel difference. We passed these modified Siemens star chart images through the compressor and compared the compressed images with the originals using IT3 MTF response plots for slanted edges. Further, we discuss the influence of the PSF on the DPCM error and its propagation through the image processing pipeline.
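
To make the error mechanism concrete, here is a minimal line-based DPCM with a 1-pixel predictor and a uniform residual quantizer; the actual MIPI codebook uses variable-length codes with different error bounds, so the numbers below are illustrative only.

```python
import numpy as np

def dpcm_line(line, shift=4):
    """Toy line-based DPCM: 1-pixel predictor plus uniform quantization
    of the prediction residual (NOT the actual MIPI codebook). Large
    transitions, e.g. 0 -> 4095 in 12-bit data, incur the largest
    absolute reconstruction error."""
    recon = np.empty_like(line)
    recon[0] = line[0]          # first pixel transmitted verbatim
    pred = int(line[0])
    for i in range(1, len(line)):
        residual = int(line[i]) - pred
        q = residual >> shift if residual >= 0 else -((-residual) >> shift)
        recon[i] = np.clip(pred + (q << shift), 0, 4095)
        pred = int(recon[i])    # predictor tracks the *decoded* value
    return recon

# worst-case transitions: error grows with the quantization step
line = np.array([0, 4095, 4095, 0], dtype=np.int64)
print(dpcm_line(line) - line)   # per-pixel reconstruction error
```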


Proceedings of SPIE | 2011

3D image processing architecture for camera phones

Kalin Mitkov Atanassov; Vikas Ramachandra; Sergio Goma; Milivoje Aleksic

Putting high-quality, easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires 1) identifying critical 3D technology issues such as camera positioning, the disparity control rationale, and screen geometry dependency, and 2) designing a methodology to control them automatically. Implementing 3D capture on phone cameras requires designing algorithms that fit within the processing capabilities of the device. Various constraints, such as sensor position tolerances, sensor 3A tolerances, post-processing, and 3D video resolution and frame rate, should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) be carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account when designing the calibration and processing methodology.
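
One of the listed issues, screen-geometry-dependent disparity control, can be made concrete with a back-of-the-envelope budget: capping on-screen parallax at a fixed visual angle turns screen width, resolution, and viewing distance into a pixel disparity limit. The 1-degree figure below is a commonly cited comfort heuristic assumed for illustration, not a value from the paper.

```python
import math

def max_comfortable_disparity_px(screen_width_m, screen_width_px,
                                 viewing_distance_m, max_angle_deg=1.0):
    """Rough disparity budget: cap on-screen parallax at a visual angle
    (1 degree is an assumed comfort heuristic, not the paper's value)."""
    parallax_m = 2 * viewing_distance_m * math.tan(math.radians(max_angle_deg) / 2)
    return parallax_m * screen_width_px / screen_width_m

# e.g. a ~0.10 m wide phone screen, 1280 px across, viewed from 0.40 m
print(round(max_comfortable_disparity_px(0.10, 1280, 0.40)))  # ~89 px
```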


Electronic Imaging | 2015

Depth enhanced and content aware video stabilization

Albrecht Johannes Lindner; Kalin Mitkov Atanassov; Sergiu Radu Goma

We propose a system that uses depth information for video stabilization. The system uses 2D homographies as frame-pair transforms, estimated from keypoints at the depth of interest. This makes the estimation more robust, as the points lie on a plane. The depth of interest can be determined automatically from the depth histogram, inferred from user input such as tap-to-focus, or selected directly by the user, i.e., tap-to-stabilize. The proposed system can stabilize videos on the fly in a single pass and is especially suited for mobile phones with multiple cameras that can compute depth maps automatically during image acquisition.
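
A hedged sketch of the per-frame-pair step using OpenCV: pick the depth of interest as the mode of the depth histogram (it could equally come from tap-to-focus or tap-to-stabilize input), keep only keypoint matches near that depth, and fit the homography on those points. The thresholds and the ORB/RANSAC choices are assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def stabilizing_homography(prev_gray, curr_gray, prev_depth, tol=0.1):
    """Frame-pair homography from keypoints near the depth of interest
    (a sketch of the idea, not the paper's exact pipeline)."""
    # depth of interest = mode of the depth histogram; per the paper it
    # can also come from tap-to-focus or tap-to-stabilize user input
    hist, edges = np.histogram(prev_depth[prev_depth > 0], bins=64)
    doi = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.eye(3)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    src, dst = [], []
    for m in matches:
        x, y = map(int, kp1[m.queryIdx].pt)
        # keep matches whose depth lies within tol of the depth of interest,
        # so the fitted homography describes (approximately) one plane
        if abs(prev_depth[y, x] - doi) <= tol * doi:
            src.append(kp1[m.queryIdx].pt)
            dst.append(kp2[m.trainIdx].pt)
    if len(src) < 4:
        return np.eye(3)  # too few coplanar points: fall back to identity
    H, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
    return H
```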


Proceedings of SPIE | 2012

Unassisted 3D camera calibration

Kalin Mitkov Atanassov; Vikas Ramachandra; James Wilson Nash; Sergio Goma

With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as by on-device processing. An automatic 3D system usually assumes known camera poses, accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, the vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration, i.e., calibration on arbitrary scenes. In this paper, we propose an algorithm that relies on detection and matching of keypoints between the left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between the left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity, compared against the maximum tolerable vertical disparity.
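
The evaluation quantity can be sketched as follows: match keypoints across the stereo pair, discard frames with too few or inconsistent matches, and report the residual vertical disparity. The detector, matcher, and thresholds below are assumptions; the paper's roll/pitch/yaw/scale estimator is not reproduced.

```python
import cv2
import numpy as np

def vertical_disparity_stats(left, right, min_matches=30, max_med=8.0):
    """Match keypoints between left/right frames and return the residual
    vertical disparity in pixels; frames with too few matches (keypoint-
    poor scenes) or grossly inconsistent ones are rejected. A sketch of
    the evaluation step, with assumed thresholds."""
    orb = cv2.ORB_create(2000)
    kpl, dl = orb.detectAndCompute(left, None)
    kpr, dr = orb.detectAndCompute(right, None)
    if dl is None or dr is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)
    if len(matches) < min_matches:
        return None  # keypoint constellation not rich enough: discard frame
    dy = np.array([kpl[m.queryIdx].pt[1] - kpr[m.trainIdx].pt[1]
                   for m in matches])
    dy = dy[np.abs(dy - np.median(dy)) < 3 * (np.std(dy) + 1e-6)]  # outliers
    if np.abs(np.median(dy)) > max_med:
        return None  # likely erroneous matches: discard frame
    return float(np.median(np.abs(dy)))  # residual vertical disparity, px
```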


Proceedings of SPIE | 2010

Evaluation methodology for Bayer demosaic algorithms in camera phones

Sergio Goma; Kalin Mitkov Atanassov

The current approach to demosaic algorithm evaluation is mostly empirical and does not offer a meaningful quantitative metric; this disconnects theoretical results from the results seen in practice. In camera phones the difference is even bigger, due to low signal-to-noise ratios and the overlap of the color filters. This implies that a demosaic algorithm has to be designed to degrade gracefully in the presence of noise. The demosaic algorithm also has to be tolerant of high color correlations. In this paper we propose a special class of images and a methodology that can be used to produce a metric indicative of a demosaic algorithm's real-world performance. The test image we propose is formed using a dual chirp signal that is a function of the distance from the image center.
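
The dual-chirp target can be sketched as two superimposed radial chirps whose local frequency grows with distance from the center (a zone-plate-like pattern); the chirp rates k1 and k2 below are illustrative values, not the paper's.

```python
import numpy as np

def dual_chirp_target(size=512, k1=0.00004, k2=0.00010):
    """Radial dual-chirp test image: two superimposed chirps whose local
    frequency increases with distance from the center (rates k1, k2 are
    illustrative). Sampled onto a Bayer mosaic, such a target sweeps
    spatial frequency in every direction at once."""
    y, x = np.mgrid[0:size, 0:size] - size / 2
    r2 = x * x + y * y
    img = 0.5 + 0.25 * np.cos(2 * np.pi * k1 * r2) \
              + 0.25 * np.cos(2 * np.pi * k2 * r2)
    return img  # intensity values in [0, 1]
```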


Proceedings of SPIE | 2010

Evaluating the quality of EDOF in camera phones

Kalin Mitkov Atanassov; Sergio Goma

Extended depth of focus (EDOF) technologies are well known in the literature, and in recent years they have made their way into camera phones. While the fundamental approach may have significant advantages over conventional technologies, in practice the results are often accompanied by undesired artifacts that are hard to quantify. To conduct an objective comparison with conventional focus technology, new methods need to be devised that quantify not only the quality of focus but also the artifacts introduced by EDOF methods. In this paper we propose a test image and a methodology to quantify focus quality and its dependence on distance. Our test image is built from a test image element that contains different shapes for measuring frequency response.
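
A minimal focus-quality metric in this spirit: measure the modulation of a captured test element at a target spatial frequency via a narrow DFT band, and repeat per chart distance to get a focus-versus-distance curve. The metric below is an illustrative stand-in, not the paper's definition.

```python
import numpy as np

def band_contrast(profile, cycles_px):
    """Michelson-style contrast of a 1-D intensity profile at a target
    spatial frequency (cycles/pixel), read from the DFT bin closest to
    that frequency. Repeating this per test-chart distance yields a
    focus-quality-vs-distance curve (illustrative metric only)."""
    n = len(profile)
    spectrum = np.abs(np.fft.rfft(profile - profile.mean())) / (n / 2)
    bin_target = max(int(round(cycles_px * n)), 1)
    return spectrum[bin_target] / profile.mean()  # modulation / mean level
```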


International Conference on Image Processing | 2016

Hardware-friendly universal demosaick using non-iterative MAP reconstruction

Hasib Ahmed Siddiqui; Kalin Mitkov Atanassov; Sergio Goma

Non-Bayer color filter array (CFA) sensors have recently drawn attention due to their superior compression of spectral energy, improved signal-to-noise ratio, or ability to provide high-dynamic-range (HDR) imaging. Demosaicking methods that perform color interpolation of Bayer CFA data have been widely investigated. However, a bottleneck to the adoption of emerging non-Bayer CFA sensors is the unavailability of efficient color-interpolation algorithms that can demosaick the new patterns, and designing a new demosaick algorithm for every proposed CFA pattern is a challenge. In this paper, we propose a hardware-friendly universal demosaick algorithm based on maximum a posteriori (MAP) estimation that can be configured to demosaick raw images captured using a variety of CFA sensors. The forward process of mosaicking is modeled as a linear operation. We then use quadratic data-fitting and image-prior terms in a MAP framework and pre-compute the inverse matrix that recovers the full RGB image from CFA observations for a given pattern. The pre-computed inverse is later used in real time to demosaick the given CFA pattern. The inverse matrix is observed to have a Toeplitz-like structure, allowing a hardware-efficient implementation of the algorithm. We use a set of 24 Kodak color images to evaluate the quality of our demosaick algorithm on three different CFA patterns, and we report the PSNR values of the full-channel RGB images reconstructed from the CFA samples.
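
The pre-computation step can be sketched on one CFA period: model mosaicking as a linear selection operator A, add a quadratic smoothness prior L, and pre-compute M = (A^T A + lam L^T L)^-1 A^T once, so run-time demosaicking reduces to a matrix multiply. The lam value, the patch handling, and the toy 1-D Laplacian prior are assumptions; the paper's prior and its Toeplitz-structure exploitation are not reproduced.

```python
import numpy as np

def precompute_map_inverse(cfa_mask, lam=0.05):
    """Pre-compute the MAP reconstruction matrix for one CFA period.
    cfa_mask: (h, w, 3) binary array, 1 where that channel is sampled.
    Model: y = A x, with x the full RGB patch flattened to length n and
    y the mosaicked patch embedded at sampled positions (zeros elsewhere).
    x_hat = (A^T A + lam L^T L)^-1 A^T y, with an assumed per-channel
    1-D difference prior L standing in for the paper's image prior."""
    h, w, _ = cfa_mask.shape
    n = h * w * 3
    A = np.diag(cfa_mask.reshape(-1).astype(float))  # CFA selection operator
    L = np.zeros((n, n))
    for i in range(n - 3):
        # difference between a pixel's channel value and the next pixel's
        # same-channel value in flattened (h, w, 3) order: stride of 3
        L[i, i], L[i, i + 3] = -1.0, 1.0
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T)  # dense inverse

# 2x2 Bayer period (RGGB): pre-compute once, then x_hat = M @ y at run time
mask = np.zeros((2, 2, 3))
mask[0, 0, 0] = mask[0, 1, 1] = mask[1, 0, 1] = mask[1, 1, 2] = 1
M = precompute_map_inverse(mask)
```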


Proceedings of SPIE | 2014

Structured light 3D depth map enhancement and gesture recognition using image content adaptive filtering

Vikas Ramachandra; James Wilson Nash; Kalin Mitkov Atanassov; Sergio Goma

A structured-light system for depth estimation is a type of 3D active sensor consisting of a structured-light projector, which projects an illumination pattern onto the scene (e.g., a mask with vertical stripes), and a camera, which captures the illuminated scene. Based on the received patterns, the depths of different regions in the scene can be inferred. In this paper, we use side information in the form of image structure to enhance the depth map. This side information is obtained from the received light-pattern image reflected by the scene itself, and the processing steps run in real time. This depth map enhancement post-processing stage can be used for better hand-gesture recognition, as illustrated in this paper.
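
A standard stand-in for the content-adaptive filtering described here is a cross (joint) bilateral filter: the depth map is smoothed with weights driven by the captured image, so depth edges stay aligned with image edges. The parameters below are assumptions, not the paper's.

```python
import numpy as np

def cross_bilateral_depth(depth, guide, radius=4, sigma_s=3.0, sigma_r=0.1):
    """Cross/joint bilateral filter: smooth the depth map with weights
    driven by the *guide* image (the captured scene/pattern image), so
    depth edges stay aligned with image edges. A standard stand-in for
    the paper's content-adaptive filter, not its exact formulation."""
    h, w = depth.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = pad_d[y:y + 2*radius + 1, x:x + 2*radius + 1]
            gwin = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
            # range kernel from the guide image, not from the depth itself
            rng = np.exp(-((gwin - pad_g[y + radius, x + radius])**2)
                         / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```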


Proceedings of SPIE | 2013

Digital ruler: real-time object tracking and dimension measurement using stereo cameras

James Wilson Nash; Kalin Mitkov Atanassov; Sergio Goma; Vikas Ramachandra; Hasib Ahmed Siddiqui

Stereo metrology involves obtaining spatial estimates of an object's length or perimeter using the disparity between boundary points. True 3D scene information is required to extract length measurements of an object's projection onto the 2D image plane. In stereo vision, the disparity measurement is highly sensitive to object distance, baseline distance, calibration errors, and relative movement of the left and right demarcation points between successive frames. A tracking filter is therefore necessary to reduce position error and improve the accuracy of the length measurement to a useful level. A Cartesian-coordinate extended Kalman filter (EKF) is designed based on the canonical equations of stereo vision. This filter represents a simple reference design that has not seen much exposure in the literature. A second filter, formulated in a modified sensor-disparity (DS) coordinate system, is also presented and shown to exhibit lower errors in a simulated experiment.
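
The Cartesian reference design follows directly from the canonical stereo equations: a point [X, Y, Z] in the left-camera frame projects to (uL, uR, v) with disparity d = uL - uR = fB/Z, giving the EKF measurement model and Jacobian below. The focal length f, baseline B, and the constant-position motion model are assumptions made for this sketch.

```python
import numpy as np

def stereo_measurement(state, f, B):
    """Canonical stereo projection: state = [X, Y, Z] in the left-camera
    frame; measurement = (uL, uR, v), with disparity uL - uR = f*B/Z."""
    X, Y, Z = state
    return np.array([f * X / Z, f * (X - B) / Z, f * Y / Z])

def stereo_jacobian(state, f, B):
    """Jacobian of the stereo measurement w.r.t. [X, Y, Z] for the EKF."""
    X, Y, Z = state
    return np.array([[f / Z, 0.0,   -f * X / Z**2],
                     [f / Z, 0.0,   -f * (X - B) / Z**2],
                     [0.0,   f / Z, -f * Y / Z**2]])

def ekf_update(x, P, z, R, f, B):
    """One EKF measurement update (constant-position motion model assumed;
    a sketch of the Cartesian reference design, not the paper's filter)."""
    H = stereo_jacobian(x, f, B)
    y = z - stereo_measurement(x, f, B)   # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```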
