Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Olli Suominen is active.

Publication


Featured research published by Olli Suominen.


electronic imaging | 2015

Preserving natural scene lighting by strobe-lit video

Olli Suominen; Atanas P. Gotchev

Capturing images at low light intensity and preserving the ambient light in such conditions pose significant problems in terms of achievable image quality. Either the sensitivity of the sensor must be increased, filling the resulting image with noise, or the scene must be lit with artificial light, destroying the aesthetic quality of the image. While the issue has previously been tackled for still imagery using cross-bilateral filtering, the same problem exists in capturing video. We propose a method of illuminating the scene with a strobe light synchronized to every other frame captured by the camera, and merging the information from consecutive frames alternating between high gain and high-intensity lighting. The motion between the frames is compensated using motion estimation based on block matching between strobe-illuminated frames. The uniform lighting conditions between every other frame make it possible to utilize conventional motion estimation methods, circumventing the image registration challenges faced in fusing flash/non-flash pairs from non-stationary cameras. The results of the proposed method are shown to closely resemble those computed using the same filter on reference images captured at perfect camera alignment. The method can be applied from a simple set of three frames up to video streams of arbitrary length, with the only requirements being sufficiently accurate syncing between the imaging device and the lighting unit, and the capability to switch states (sensor gain high/low, illumination on/off) fast enough.
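The alternating-frame fusion described above can be pictured with a short sketch. The Python snippet below is a minimal illustration, not the paper's implementation: dense Farneback optical flow stands in for the block matching between strobe-lit frames, linear motion is assumed so the ambient frame sits halfway between its strobe neighbours, and OpenCV's joint bilateral filter (from opencv-contrib) serves as the cross-bilateral fusion step; all parameter values are assumptions.

import cv2
import numpy as np

def fuse_strobe_sequence(prev_strobe, ambient, next_strobe):
    # Estimate motion between the two uniformly lit strobe frames;
    # Farneback flow is a stand-in for the paper's block matching.
    g0 = cv2.cvtColor(prev_strobe, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(next_strobe, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Coarse approximation: sample the strobe frame halfway along the
    # flow so it roughly aligns with the ambient (high-gain) frame.
    h, w = g0.shape
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    map_xy = grid + 0.5 * flow
    strobe_at_t = cv2.remap(prev_strobe, map_xy[..., 0], map_xy[..., 1],
                            cv2.INTER_LINEAR)
    # Cross/joint bilateral filtering: denoise the noisy ambient frame
    # while borrowing edges from the motion-compensated strobe frame.
    return cv2.ximgproc.jointBilateralFilter(strobe_at_t, ambient, 9, 25, 7)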


visual communications and image processing | 2014

Depth estimation by combining stereo matching and coded aperture

Chun Wang; Erdem Sahin; Olli Suominen; Atanas P. Gotchev

We investigate possible improvements that can be achieved in depth estimation by merging coded apertures and stereo cameras. We analyze several stereo camera setups equipped with different sets of coded apertures to explore such possibilities. The results of this analysis are encouraging in the sense that coded apertures can provide valuable complementary information to stereo vision based depth estimation in some cases. In addition, we take advantage of the stereo camera arrangement to obtain a single-shot, multiple coded aperture system. We show that with this system it is possible to extract depth information robustly, by utilizing the inherent relation between the disparity and defocus cues, even for scene regions that are problematic for stereo matching.
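The cue-fusion idea can be sketched as follows: given two per-pixel cost volumes over the same depth hypotheses, one from stereo matching and one from coded-aperture defocus, a weighted sum followed by a winner-take-all pick yields the combined estimate. The normalization and the weight alpha below are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def fuse_depth_costs(stereo_cost, defocus_cost, alpha=0.5):
    # Both inputs are H x W x D cost volumes over identical depth
    # hypotheses; lower cost means a more plausible depth.
    def normalise(c):
        lo = c.min(axis=2, keepdims=True)
        hi = c.max(axis=2, keepdims=True)
        return (c - lo) / (hi - lo + 1e-9)
    total = (alpha * normalise(stereo_cost)
             + (1.0 - alpha) * normalise(defocus_cost))
    return total.argmin(axis=2)  # per-pixel index of the best hypothesis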


Archive | 2017

Auto-regression-driven, reallocative particle filtering approaches in PPG-based respiration rate estimation

Mikko Pirhonen; Olli Suominen; Antti Vehkaoja

Interest in respiratory state assessment with non-obtrusive instrumentation has led to the design of novel algorithmic solutions. Notably, respiratory behavior has been observed to cause modulative changes in two unobtrusively measurable physiological signals, PPG and ECG. The potential to integrate respiratory rate measurement into widely used instrumentation at no additional cost has made research into suitable signal processing methods attractive. We have studied and compared auto-regressive (AR) model order optimization and coefficient extraction methods combined with a reallocative particle filtering approach for respiration rate estimation from the finger PPG signal. The evaluated coefficient extraction methods were Yule-Walker, Burg, and least-squares. The considered model order optimization methods were Akaike's information criterion (AIC) and minimum description length (MDL). The methods were evaluated with a publicly available dataset comprising approximately 10-minute measurements from 39 healthy subjects at rest. Of the evaluated AR model parameter extraction methods, Burg's method combined with AIC performed best, yielding a mean absolute error of 2.7 and a bias of -0.4 respirations per minute.
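The AR portion of this pipeline (Burg coefficient extraction, AIC order selection, spectral peak pickup) can be illustrated with a short sketch; the reallocative particle filtering stage is omitted. The Burg recursion below is a textbook implementation, and the 0.1-0.5 Hz breathing band and maximum order are illustrative assumptions.

import numpy as np

def burg_ar(x, order):
    # Burg lattice recursion; returns [1, a_1..a_p] and residual variance.
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    f, b = x.copy(), x.copy()
    a = np.array([1.0])
    e = np.dot(x, x) / n
    for m in range(order):
        fm, bm = f[m + 1:], b[m:-1]
        k = -2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f[m + 1:] = fm + k * bm
        b[m + 1:] = bm + k * fm
        e *= 1.0 - k * k
    return a, e

def respiration_rate(ppg, fs, max_order=20):
    # Select the AR order by AIC, then take the peak of the AR
    # spectrum inside an assumed 0.1-0.5 Hz respiration band.
    n = len(ppg)
    best_a, best_aic = None, np.inf
    for p in range(2, max_order + 1):
        a, e = burg_ar(ppg, p)
        aic = n * np.log(e) + 2 * p
        if aic < best_aic:
            best_aic, best_a = aic, a
    freqs = np.linspace(0.1, 0.5, 400)
    z = np.exp(-2j * np.pi * freqs / fs)
    psd = 1.0 / np.abs(np.polyval(best_a[::-1], z)) ** 2
    return 60.0 * freqs[np.argmax(psd)]  # respirations per minute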


electronic imaging | 2016

Non-uniform resampling in perspective compensated large scale 3D visualization

Maria Shcherban; Olli Suominen; Atanas P. Gotchev

The presented work addresses the problem of non-uniform resampling that arises when an image shown on a spatially immersive projection display, such as the walls of a room, is intended to look undistorted to a viewer at different viewing angles. A possible application of the proposed concept is in commercial motion capture studios, where it can provide real-time visualization of virtual scenes for the performing actor. We model the viewer as a virtual pinhole camera tracked by the motion capture system. The visualization surfaces, i.e. displays or projector screens, are assumed to be planar with known dimensions, and are utilized along with the tracked position and orientation of the viewer. As the viewer moves, the image to be shown is geometry corrected, so that the viewer receives the intended image regardless of the relative pose of the visualization surface. The location and orientation of the viewer require constant recalculation of the projected sampling grid, which causes a non-uniform sampling pattern and drastic changes in sampling rate. We examine and compare ways to overcome the resulting problems in regular-to-irregular resampling and aliasing, and propose a method to objectively evaluate the quality of the geometry compensation.
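For a planar visualization surface the geometric correction reduces to a homography warp from the image the viewer should perceive to the projector framebuffer. The sketch below is a minimal, assumed formulation using OpenCV: the quad correspondences would come from the tracked viewer pose and the known screen geometry, and supersampling followed by area downsampling is one simple way to tame the aliasing caused by the non-uniform resampling the paper studies.

import cv2
import numpy as np

def predistort(image, src_quad, dst_quad, out_size, ss=2):
    # src_quad: image corners as the viewer should perceive them;
    # dst_quad: the same corners in projector framebuffer pixels.
    w, h = out_size
    H = cv2.getPerspectiveTransform(np.float32(src_quad),
                                    np.float32(dst_quad) * ss)
    # Render at ss-times resolution, then area-downsample to reduce
    # aliasing from the drastic local changes in sampling rate.
    warped = cv2.warpPerspective(image, H, (w * ss, h * ss),
                                 flags=cv2.INTER_LINEAR)
    return cv2.resize(warped, (w, h), interpolation=cv2.INTER_AREA)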


electronic imaging | 2016

Depth Assisted Composition of Synthetic and Real 3D Scenes

Santiago Cortes; Olli Suominen; Atanas P. Gotchev

Master of Science thesis by Santiago Cortes Reina, Tampere University of Technology, 66 pages, October 2015. Master's Degree Programme in Information Technology, major in Signal Processing. Examiner: Atanas Gotchev.


Computer-aided chemical engineering | 2016

Framework for optimization and scheduling of a copper production plant

Olli Suominen; Ville Mörsky; Risto Ritala; Matti Vilkko

This work presents a nonlinear optimization and scheduling approach applied to a copper production plant. The solution maximizes smelting furnace production and provides valid converting schedules by simulating the evolution of the process over the optimization horizon. The production process is briefly described, and the main models used to predict and calculate furnace and converter parameters are detailed. Though the solution concentrates on the main elements, copper and iron, the optimization framework enables easy future augmentation with more complex models. A schedule optimization case is presented.
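As a toy illustration of horizon-based scheduling in this spirit, the linear program below chooses hourly furnace feed rates that maximize throughput while the downstream converters can absorb the produced matte. The linear model, yield, and capacities are placeholder assumptions, not the plant models from the paper.

import numpy as np
from scipy.optimize import linprog

T = 12                 # optimization horizon in hours (assumed)
yield_matte = 0.45     # tonnes of matte per tonne of feed (assumed)
feed_max = 120.0       # furnace feed limit, t/h (assumed)
conv_rate = 50.0       # converter matte throughput, t/h (assumed)

# Maximize total feed <=> minimize its negation.
c = -np.ones(T)
# Cumulative matte produced by hour k may not exceed what the
# converters can have processed by then (no unbounded buffer).
A = np.tril(np.ones((T, T))) * yield_matte
b = conv_rate * np.arange(1, T + 1)
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, feed_max)] * T)
feed_schedule = res.x  # feasible hourly feed rates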


international conference on image processing | 2015

Efficient cost volume sampling for plane sweeping based multiview depth estimation

Olli Suominen; Atanas P. Gotchev

Plane sweeping is an increasingly popular algorithm for generating depth estimates from multiview images. It avoids image rectification and can align the matching process with slanted surfaces, improving accuracy and robustness. However, the size of the search space increases significantly when different surface orientations are considered. We present an efficient way to perform plane sweeping without individually computing reprojection and similarity metrics on image pixels for all cameras, all orientations and all distances. The procedure truly excels when the number of views is increased, and it scales efficiently with the number of different plane orientations. It relies on approximation to generate the costs, but the differences are shown to be small. In practice, it provides results equivalent to conventional matching but faster, making it suitable for use in many existing implementations.
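For reference, the straightforward fronto-parallel plane sweep that the paper accelerates looks roughly like the sketch below: each candidate depth induces a homography per camera, the warped views are compared against the reference with a SAD cost, and a winner-take-all pick gives the depth. Grayscale images and shared intrinsics K are simplifying assumptions.

import cv2
import numpy as np

def plane_sweep_depth(ref, others, K, poses, depths):
    # poses: list of (R, t) mapping reference-camera coordinates into
    # each other camera; ref/others: grayscale images.
    h, w = ref.shape
    reff = ref.astype(np.float32)
    Kinv = np.linalg.inv(K)
    normal = np.array([0.0, 0.0, 1.0])  # fronto-parallel sweep planes
    cost = np.zeros((h, w, len(depths)), np.float32)
    for i, d in enumerate(depths):
        for (R, t), img in zip(poses, others):
            # Plane-induced homography for the plane n.X = d in the
            # reference frame, with X' = R X + t:  H = K (R + t n^T / d) K^-1
            H = K @ (R + np.outer(t, normal) / d) @ Kinv
            # H maps reference pixels into the other view, so warp with
            # the inverse-map flag to sample that view per ref pixel.
            warped = cv2.warpPerspective(
                img.astype(np.float32), H, (w, h),
                flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            cost[:, :, i] += np.abs(warped - reff)  # SAD photo-consistency
    return np.asarray(depths)[cost.argmin(axis=2)]  # winner-take-all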


asian conference on intelligent information and database systems | 2015

Accuracy Evaluation of a Linear Positioning System for Light Field Capture

Suren Vagharshakyan; Ahmed Durmush; Olli Suominen; Robert Bregovic; Atanas P. Gotchev

In this paper, a method is proposed for estimating the positions of a moving camera attached to a linear positioning system (LPS). By comparing the estimated camera positions with the expected positions calculated from the LPS specifications, the manufacturer-specified accuracy of the system can be verified. With this data, one can model the light field sampling process more accurately. The overall approach is illustrated on an in-house assembled LPS.
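The verification step can be pictured as a small rigid-alignment computation: align the estimated camera positions to the nominal LPS positions, removing the arbitrary difference in coordinate frames, and inspect the residuals. This is a sketch under the assumption that a Kabsch-style alignment and per-position residuals are an adequate error summary; the paper may report accuracy differently.

import numpy as np

def position_error(estimated, nominal):
    # Both inputs: N x 3 arrays of camera positions.
    E = estimated - estimated.mean(axis=0)
    N = nominal - nominal.mean(axis=0)
    U, _, Vt = np.linalg.svd(E.T @ N)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                           # best-fit rotation (Kabsch)
    residuals = np.linalg.norm(E @ R - N, axis=1)
    return residuals.mean(), residuals.max()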


3dtv-conference: the true vision - capture, transmission and display of 3d video | 2015

Speed-optimized free-viewpoint rendering based on depth layering

Aleksandra Chuchvara; Olli Suominen; Mihail Georgiev; Atanas P. Gotchev

In this paper, free-viewpoint rendering is addressed and a new fast approach for virtual view synthesis from view-plus-depth 3D representation is proposed. Depth layering in the disparity domain is employed in order to optimally approximate the scene geometry by a set of constant depth layers. This approximation facilitates the use of connectivity information for segment-based forward warping of the reference layer map, producing a complete virtual view layer map containing no cracks or holes. The warped layer map is used to guide the inpainting of disocclusions in the synthesized texture map. For this purpose, a speed-optimized patch-based inpainting approach is proposed. In contrast to existing methods, the patch similarity function is based on local binary pattern descriptors. Such a binary representation allows for efficient processing and comparison of patches, as well as compact storage and reuse of previously calculated binary descriptors. The experimental results demonstrate the real-time capability of the proposed method even for a CPU-based implementation, while the quality is comparable with other view synthesis approaches.
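The layering step can be sketched in a few lines: quantize the disparity map into a handful of constant-disparity layers, after which all pixels of a layer can be forward warped together by the same disparity. Uniform layer placement is an assumption made here for brevity; the paper places layers to optimally approximate the scene geometry.

import numpy as np

def layer_depth(disparity, n_layers):
    # Quantize disparities into n_layers bins; pixels in one bin are
    # later forward warped together by the bin's constant disparity.
    lo, hi = float(disparity.min()), float(disparity.max())
    edges = np.linspace(lo, hi, n_layers + 1)
    labels = np.digitize(disparity, edges[1:-1])   # 0 .. n_layers-1
    centers = 0.5 * (edges[:-1] + edges[1:])       # layer disparities
    return labels, centers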


digital television conference | 2013

Circular trajectory correspondences for iterative closest point registration

Olli Suominen; Atanas P. Gotchev

Iterative closest point (ICP) is a popular algorithm for finding rigid transformations between 3D point clouds. It aims to find the rotation and translation differences between the point clouds. A key component of the algorithm is finding correspondences between the two data sets, which are then used to determine the differences. A method for finding these pairings is described, utilizing the circular trajectory of points when the cloud is rotated. The proposed method reveals more information per iteration cycle than techniques previously used for 3D data. It enables an efficient implementation using a simple data structure, which has the same computational complexity to build and access as the k-d tree commonly used with nearest neighbor correspondence searches. The experimental results show that the convergence rate is superior to the original ICP based on point-to-point minimization and compares favorably to more refined and complex approaches, e.g. normal shooting with point-to-plane minimization. This, together with the efficient implementation strategy and the low amount of computation per iteration, makes circular trajectory correspondences (CTC) a valid choice for registration tasks, especially in applications where processing power is limited.
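For context, the point-to-point ICP baseline the paper improves on is sketched below with a conventional k-d tree nearest-neighbour correspondence step; the circular trajectory correspondence search itself is the paper's contribution and is not reproduced here.

import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, iters=30):
    # src, dst: N x 3 and M x 3 point clouds; returns R, t such that
    # dst ~ (R @ src.T).T + t.
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)                      # correspondence structure
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)             # nearest-neighbour pairing
        pairs = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), pairs.mean(axis=0)
        H = (cur - mu_s).T @ (pairs - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                  # Kabsch rotation update
        dt = mu_d - dR @ mu_s
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt           # accumulate the transform
    return R, t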

Collaboration


Dive into Olli Suominen's collaborations.

Top Co-Authors

Atanas P. Gotchev, Tampere University of Technology
Matti Vilkko, Tampere University of Technology
Ahmed Durmush, Tampere University of Technology
Aleksandra Chuchvara, Tampere University of Technology
Antti Vehkaoja, Tampere University of Technology
Chun Wang, Tampere University of Technology
Erdem Sahin, Tampere University of Technology
Jouni Mattila, Tampere University of Technology
Mihail Georgiev, Tampere University of Technology
Mikko Pirhonen, Tampere University of Technology