Victor Bucha
Samsung
Publication
Featured research published by Victor Bucha.
computer vision and pattern recognition | 2016
Oleg Muratov; Yury Vyacheslavovich Slynko; Vitaly Vladimirovich Chernov; Maria Mikhailovna Lyubimtseva; Artem Shamsuarov; Victor Bucha
We propose a method for reconstructing a 3D representation (a textured mesh) of an object on a smartphone with a monocular camera. The reconstruction consists of two parts: real-time scanning around the object and post-processing. At the scanning stage, IMU sensor data are acquired along with tracks of features in the video. Special care is taken to comply with the 360° scan requirement. After scanning is completed, all these data are used to build a camera trajectory using bundle adjustment techniques. This trajectory is used to calculate depth maps, which are then used to construct a polygonal mesh with overlaid textures. The proposed method ensures tracking at 30 fps on a modern smartphone, while the post-processing part completes within one minute on an OpenCL-compatible mobile GPU. In addition, we show that with a few modifications this algorithm can be adapted for human face reconstruction.
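As an illustration of one geometric building block of such a pipeline, the sketch below performs linear (DLT) two-view triangulation of a feature point from known camera poses. The projection matrices and pixel coordinates are hypothetical, and the paper's actual bundle-adjustment and depth-map machinery is far more involved; this is a minimal sketch only.

```python
def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices (lists of lists);
    x1, x2: (u, v) normalized pixel coordinates in each view."""
    # Each view contributes two rows of the homogeneous system A @ X_h = 0.
    rows = []
    for P, (u, v) in ((P1, x1), (P2, x2)):
        rows.append([u * P[2][j] - P[0][j] for j in range(4)])
        rows.append([v * P[2][j] - P[1][j] for j in range(4)])
    # Fix the homogeneous coordinate to 1 and solve the 4x3 least-squares
    # problem B @ X = c via the normal equations M X = b, M = B^T B.
    B = [r[:3] for r in rows]
    c = [-r[3] for r in rows]
    M = [[sum(B[k][i] * B[k][j] for k in range(4)) for j in range(3)]
         for i in range(3)]
    b = [sum(B[k][i] * c[k] for k in range(4)) for i in range(3)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Solve the 3x3 normal equations by Cramer's rule.
    d = det3(M)
    X = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for k in range(3):
            Mi[k][i] = b[k]
        X.append(det3(Mi) / d)
    return X
```

Given exact (noise-free) observations the normal equations recover the point exactly; with real feature tracks the same least-squares system yields the minimum-algebraic-error estimate that bundle adjustment then refines.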
computer vision and pattern recognition | 2016
Vladimir Paramonov; Ivan Panchenko; Victor Bucha; Andrey Drogolyub; Sergey Zagoruyko
In this paper we present a single-lens, single-frame passive depth sensor based on a conventional imaging system with minor hardware modifications. It uses a color-coded aperture approach and has high light efficiency, which allows capturing images even with the small cameras of handheld devices. The sensor measures depth in millimeters across the whole frame, in contrast to prior-art approaches. The contributions of this paper are: (1) novel light-efficient coded aperture designs and the corresponding algorithm modifications, (2) a depth sensor calibration procedure and a disparity-to-depth conversion method, (3) a number of color-coded-aperture depth sensor implementations, including DSLR-based, smartphone-based, and compact-camera-based prototypes, and (4) applications including real-time 3D scene reconstruction and depth-based image effects.
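Disparity-to-depth conversion is typically calibrated from known reference distances. The sketch below fits an inverse-linear model depth = 1/(a·disparity + b) by least squares on inverse depth; the model form and the calibration pairs are our assumptions for illustration, not the paper's actual procedure.

```python
def fit_disparity_to_depth(samples):
    """Fit depth(d) = 1 / (a*d + b) by linear least squares on 1/depth.
    samples: list of (disparity_px, depth_mm) calibration pairs."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(1.0 / z for _, z in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d / z for d, z in samples)
    # Ordinary least-squares slope/intercept on (disparity, 1/depth).
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def disparity_to_depth_mm(d, a, b):
    """Convert a measured disparity to metric depth in millimeters."""
    return 1.0 / (a * d + b)
```

Because depth varies inversely with disparity, fitting in inverse-depth space keeps the regression linear and well-conditioned across the working range.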
Seventh International Conference on Graphic and Image Processing (ICGIP 2015) | 2015
Ivan Panchenko; Victor Bucha
In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many image processing applications, e.g. depth-dependent image blur, image enhancement, and disparity estimation. We adapted the RTL implementation of this filter to maximize throughput within the constraints of the available memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
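The filter itself can be sketched in a few lines: a standard forward/backward recursive approximation of convolution with a symmetric exponential kernel, applied separably along rows and then columns. The names and normalization below are ours for illustration; the paper's RTL design is a hardware realization of this class of filter, not this code.

```python
import math

def exp_filter_1d(x, alpha):
    """O(N) recursive approximation of normalized convolution with the
    exponential kernel k[m] = exp(-alpha * |m|), independent of kernel width."""
    a = math.exp(-alpha)
    n = len(x)
    f = list(x)
    for i in range(1, n):            # causal (forward) pass
        f[i] += a * f[i - 1]
    g = list(x)
    for i in range(n - 2, -1, -1):   # anti-causal (backward) pass
        g[i] += a * g[i + 1]
    norm = (1 + a) / (1 - a)         # sum over m of a^|m|
    # f + g double-counts the center sample, so subtract x once.
    return [(f[i] + g[i] - x[i]) / norm for i in range(n)]

def exp_filter_2d(img, alpha):
    """Separable 2D filtering: rows first, then columns."""
    rows = [exp_filter_1d(r, alpha) for r in img]
    cols = [exp_filter_1d(list(c), alpha) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

The two-pass recursion is what makes a hardware implementation attractive: cost per pixel is constant regardless of the effective blur radius, and memory traffic is a single streaming read/write per pass.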
Proceedings of SPIE | 2014
Petr Pohl; Michael Sirotenko; Ekaterina V. Tolstaya; Victor Bucha
In this article we propose high-quality motion estimation based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time consuming. To achieve acceptable computation times we adapted the algorithm to optimize the convex function in a coarse-to-fine pyramid strategy suitable for implementation on modern GPU hardware. We also introduce two simplifications of the cost function that significantly decrease computation time with an acceptable decrease in quality. For motion-clustering-based inpainting in occlusion areas, we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted with the motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where it ranked around 20th while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D-to-3D conversion tool for spatio-temporal background inpainting, automatic adaptive key-frame detection, and key-point tracking.
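The clustering step relies on RANSAC model fitting to point correspondences. As a hedged illustration only, the sketch below fits a purely translational motion model; the paper's clusters would use richer (e.g. affine) models and three frames, but the sample/score/refine structure is the same.

```python
import random

def ransac_translation(matches, thresh, iters=200, seed=0):
    """RANSAC fit of a translational motion model to point matches.
    matches: list of ((x1, y1), (x2, y2)) correspondences.
    Returns ((dx, dy), inlier_indices)."""
    rng = random.Random(seed)
    best = ((0.0, 0.0), [])
    for _ in range(iters):
        # Minimal sample for a translation is a single match.
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [i for i, ((a, b), (c, d)) in enumerate(matches)
                   if (c - a - dx) ** 2 + (d - b - dy) ** 2 <= thresh ** 2]
        if len(inliers) > len(best[1]):
            best = ((dx, dy), inliers)
    # Refine the model as the mean displacement over the inlier set.
    (dx, dy), inliers = best
    if inliers:
        dx = sum(matches[i][1][0] - matches[i][0][0] for i in inliers) / len(inliers)
        dy = sum(matches[i][1][1] - matches[i][0][1] for i in inliers) / len(inliers)
    return (dx, dy), inliers
```

Each cluster found this way carries a motion model plus an inlier support set, which is what allows occluded pixels to borrow motion from a cluster that remains consistent in the opposite flow direction.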
Proceedings of SPIE | 2011
Ekaterina V. Tolstaya; Victor Bucha; Michael N. Rychagov
Modern consumer 3D TV sets can show video content in two modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player, satellite receiver, etc. The stereo pair is split into left and right images that are shown one after another; the viewer sees a different image with each eye using shutter glasses properly synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display extra information by imposing an overlay picture on the video content: an On-Screen Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize whether the OSD is 3D compatible and visualize it correctly by either switching off stereo mode or continuing to show the stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms; OSD menus can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is to distinguish whether a color difference is due to OSD presence or due to stereo parallax. We apply special techniques to find a reliable image difference and additionally use the cue that an OSD usually has characteristic geometric features: straight parallel lines. The developed algorithm was tested on our video sequence database, with several types of OSD of different colors and transparency levels overlaid on the video content. Detection quality exceeded 99% true answers.
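As a toy illustration of the difference-based cue (not the authors' actual detector), the sketch below thresholds the left/right difference and tests whether the changed pixels densely fill an axis-aligned rectangle, which is typical of an opaque OSD present in only one view, whereas parallax differences concentrate along scattered object edges. The fill-ratio criterion and threshold are assumptions.

```python
def osd_candidate_box(left, right, tol, min_fill=0.9):
    """Hypothetical first stage of 3D-incompatible OSD detection.
    left, right: grayscale images as lists of lists of equal size.
    Returns (y0, x0, y1, x1) of a solid changed rectangle, or None."""
    h, w = len(left), len(left[0])
    pts = [(y, x) for y in range(h) for x in range(w)
           if abs(left[y][x] - right[y][x]) > tol]
    if not pts:
        return None
    y0 = min(p[0] for p in pts); y1 = max(p[0] for p in pts)
    x0 = min(p[1] for p in pts); x1 = max(p[1] for p in pts)
    # An opaque rectangular OSD fills its bounding box; sparse
    # parallax differences do not.
    fill = len(pts) / ((y1 - y0 + 1) * (x1 - x0 + 1))
    return (y0, x0, y1, x1) if fill >= min_fill else None
```

A production detector would add the straight-parallel-line cue and handle semi-transparent OSDs, where the difference is weaker and text leaves holes inside the box.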
Proceedings of SPIE | 2009
Victor Bucha; Ilia V. Safonov; Michael N. Rychagov; Ji-Suk Hong; Sang Ho Kim
This paper relates to content-aware image resizing and the inscribing of images into predetermined areas. The problem consists in transforming an image to a new size, with or without modification of the aspect ratio, in a manner that preserves the recognizability and proportions of the important features of the image. The closest prior-art solutions cover, along with standard linear image scaling (including down-sampling and up-sampling), image cropping, image retargeting, seam carving, and some special image manipulations similar to image retouching. The present approach provides a method for digital image retargeting by erasing or adding less significant image pixels. This retargeting approach can easily be used for image shrinking; for image enlargement, however, there are limitations such as stretching artifacts. A history map with relaxation is introduced to avoid this drawback and overcome some known limits of retargeting. The proposed approach also incorporates means for preserving important objects, which significantly improves the resulting quality of retargeting. Retargeting applications for devices such as displays, copiers, facsimile machines, and photo printers are described as well.
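Erasing less significant pixels is closely related to seam carving, which the abstract cites as prior art. As our own illustration (not the proposed method), a minimal dynamic-programming search for the least-significant vertical seam might look like this, given a per-pixel significance (energy) map:

```python
def min_vertical_seam(energy):
    """Dynamic-programming search for the minimal-energy vertical seam:
    one pixel per row, each step moving at most one column sideways."""
    h, w = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for y in range(1, h):
        prev = cost[y - 1]
        cost.append([energy[y][x] + min(prev[max(0, x - 1):min(w, x + 2)])
                     for x in range(w)])
    # Backtrack from the cheapest bottom-row pixel.
    x = min(range(w), key=lambda i: cost[-1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        x = min(range(max(0, x - 1), min(w, x + 2)),
                key=lambda i: cost[y - 1][i])
        seam.append(x)
    seam.reverse()
    return seam

def remove_seam(img, seam):
    """Shrink the image by one column by deleting the seam pixel per row."""
    return [row[:x] + row[x + 1:] for row, x in zip(img, seam)]
```

Removing one seam narrows the image by a single column while leaving high-significance regions untouched; repeating the process (or running it on history-weighted energy, as the paper's history map suggests) reaches the target width.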
Archive | 2008
Victor Bucha; Ilia V. Safonov; Michael N. Rychagov
Archive | 2011
Ekaterina V. Tolstaya; Victor Bucha
Archive | 2009
Victor Bucha
Archive | 2017
Vitaly Vladimirovich Chernov; Artem Shamsuarov; Oleg Muratov; Yury Vyacheslavovich Slynko; Maria Mikhailovna Lyubimtseva; Victor Bucha