
Publication


Featured research published by Ianir A. Ideses.


Journal of Real-time Image Processing | 2007

Real-time 2D to 3D video conversion

Ianir A. Ideses; Leonid P. Yaroslavsky; Barak Fishbain

We present a real-time implementation of 2D to 3D video conversion using compressed video. In our method, compressed 2D video is analyzed by extracting motion vectors. Using the motion vector maps, depth maps are built for each frame and the frames are segmented to provide object-wise depth ordering. These data are then used to synthesize stereo pairs. 3D video synthesized in this fashion can be viewed using any stereoscopic display. In our implementation, anaglyph projection was selected as the 3D visualization method because it is best suited to standard displays.
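The final step of this pipeline, composing an anaglyph from a synthesized stereo pair, can be sketched in a few lines. A minimal NumPy sketch, assuming 8-bit RGB frames; `make_anaglyph` is an illustrative name, not the authors' implementation:

```python
import numpy as np

def make_anaglyph(left, right):
    """Compose a red/cyan anaglyph from a stereo pair of (H, W, 3) RGB
    frames: the red channel comes from the left view, the green and
    blue channels from the right view."""
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]  # red channel taken from the left eye
    return anaglyph
```

Viewed through red/cyan glasses, each eye then receives approximately its own view, which is why this method works on any standard display.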


Journal of Optics | 2005

Three methods that improve the visual quality of colour anaglyphs

Ianir A. Ideses; Leonid P. Yaroslavsky

Anaglyphs are one of the most economical methods for three-dimensional visualization. This method, however, suffers from severe drawbacks such as loss of colour and extreme discomfort during prolonged viewing. We propose several methods for anaglyph enhancement that rely on stereo image registration, defocusing and nonlinear operations on synthesized depth maps. These enhancements substantially reduce unwanted ghosting artefacts, improve the visual quality of the images, and make the same sequence comfortable to view in both the three-dimensional and the two-dimensional mode.
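Two of these enhancements can be sketched compactly, assuming a depth map normalized to [0, 1]. The gamma transform and box-filter defocus below are illustrative stand-ins for the paper's nonlinear depth-map operations and defocusing, not the exact operators used:

```python
import numpy as np

def compress_depth(depth, gamma=0.6):
    """Monotone nonlinear transform of a normalized depth map: compresses
    large disparities (a common source of ghosting) while preserving the
    depth ordering of objects."""
    return depth ** gamma  # depth assumed in [0, 1]

def defocus(channel, radius=1):
    """Horizontal box-filter defocus of one colour channel, an
    illustrative stand-in for a proper low-pass filter."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'),
        1, channel.astype(float))
```

Slightly blurring one anaglyph colour component makes residual ghost edges far less conspicuous at a small cost in sharpness.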


Journal of Real-time Image Processing | 2007

Real time turbulent video perfecting by image stabilization and super-resolution

Barak Fishbain; Leonid P. Yaroslavsky; Ianir A. Ideses

The paper presents a real-time algorithm that compensates for image distortions due to atmospheric turbulence in video sequences while keeping the real moving objects in the video unharmed. The algorithm involves (1) generation of a "reference" frame; (2) estimation, for each incoming video frame, of a local image displacement map with respect to the reference frame; (3) segmentation of the displacement map into two classes, stationary objects and moving objects; and (4) turbulence compensation of stationary objects. Experiments with both simulated and real-life sequences have shown that the restored videos, generated in real time on standard computer hardware, exhibit excellent stability for stationary objects while retaining real motion.
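Steps (3) and (4) can be sketched as a threshold on the displacement magnitude followed by replacement of stationary pixels from the reference frame. A minimal sketch; the threshold value and function names are illustrative, not taken from the paper:

```python
import numpy as np

def compensate(frame, reference, dx, dy, thresh=2.0):
    """Classify each pixel as 'stationary' (small displacement, assumed
    to be turbulence) or 'moving' (real motion), then replace stationary
    pixels with the reference frame while leaving real motion untouched."""
    magnitude = np.hypot(dx, dy)          # per-pixel displacement length
    stationary = magnitude < thresh       # turbulence vs. real motion
    out = frame.copy()
    out[stationary] = reference[stationary]
    return out, stationary
```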


international conference on image analysis and recognition | 2004

New Methods to Produce High Quality Color Anaglyphs for 3-D Visualization

Ianir A. Ideses; Leonid P. Yaroslavsky

3D visualization techniques have received growing interest in recent years, and several methods for the synthesis and projection of stereoscopic images and video have been developed. These include autostereoscopic displays, LCD shutter glasses, polarization-based separation and anaglyphs. Among these methods, anaglyph-based synthesis of 3D images provides a low-cost solution for stereoscopic projection and allows stereo video content to be viewed wherever standard video equipment exists. Standard anaglyph-based projection of stereoscopic images, however, usually yields low-quality images characterized by ghosting effects and loss of color perception. In this paper, methods for improving the quality of anaglyph images while preserving their color perception are proposed. These methods include image alignment and operations on synthesized depth maps.
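The image-alignment step can be sketched as global translation estimation by phase correlation, one common registration technique (the abstract does not specify the exact method used, so this is an assumption):

```python
import numpy as np

def global_shift(ref, moved):
    """Estimate the integer translation of `moved` relative to `ref` by
    phase correlation: the normalized cross-power spectrum has a sharp
    peak at the displacement."""
    f = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, ref.shape))
```

Aligning the two views before anaglyph composition removes the global vertical and horizontal misregistration that contributes to ghosting.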


electronic imaging | 2007

3D from Compressed 2D Video

Ianir A. Ideses; Leonid P. Yaroslavsky; Barak Fishbain; Roni Vistuch

In this paper, we present an efficient method to synthesize 3D video from compressed 2D video. The 2D video is analyzed by computing frame-by-frame motion maps. For this computation, MPEG motion vector extraction was performed. Using the extracted motion vector maps, the video undergoes analysis and the frames are segmented to provide object-wise depth ordering. The frames are then used to synthesize stereo pairs. This is performed by resampling the video frames on a grid that is governed by a corresponding depth map. In order to improve the quality of the synthetic video, as well as to enable 2D viewing where 3D visualization is not possible, several techniques for image enhancement are used. In our test case, anaglyph projection was selected as the 3D visualization method, as it is best suited to standard displays. The drawback of this method is ghosting artifacts. In our implementation we minimize these unwanted artifacts by modifying the computed depth maps using non-linear transformations. Defocusing of one anaglyph color component was also used to counter such artifacts. Our results show that the suggested methods enable synthesis of high-quality 3D video in real time.
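The resampling step, synthesizing the second view by displacing each row's sampling grid in proportion to the depth map, can be sketched as follows. Names are illustrative, and linear interpolation stands in for whatever resampling kernel the authors used:

```python
import numpy as np

def synthesize_view(frame, depth, max_disparity=4):
    """Resample each row of an (H, W) frame on a grid displaced
    horizontally in proportion to the normalized depth map, producing
    the second view of a stereo pair."""
    h, w = frame.shape
    xs = np.arange(w)
    out = np.empty_like(frame, dtype=float)
    for y in range(h):
        shift = depth[y] * max_disparity      # per-pixel disparity
        out[y] = np.interp(xs + shift, xs, frame[y])
    return out
```

Nearer objects (larger depth values) receive larger horizontal disparity, which is what produces the stereoscopic effect when the original and synthesized views are fused.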


Optics Letters | 2007

Superresolution in turbulent videos: making profit from damage

Leonid P. Yaroslavsky; Barak Fishbain; Gil Shabat; Ianir A. Ideses

It is shown that one can make use of local instabilities in turbulent video frames to enhance image resolution beyond the limit defined by the image sampling rate. We outline the processing algorithm, present its experimental verification on simulated and real-life videos, and discuss its potentials and limitations.
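A toy sketch of the idea: turbulence-induced subpixel displacements scatter samples of the scene across positions between the nominal pixel grid, so accumulating displaced frames onto a finer grid recovers detail beyond the sampling limit. The sketch assumes the per-frame shifts are known; the real algorithm estimates them from the turbulent video itself:

```python
import numpy as np

def fuse_superres(frames, shifts, factor=2):
    """Place samples from displaced low-resolution frames onto a grid
    `factor` times finer, averaging samples that land in the same
    fine-grid cell."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    return acc / np.maximum(cnt, 1)   # unfilled cells stay zero
```

In practice the unfilled cells would be interpolated from their neighbours; the sketch leaves them at zero for clarity.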


Optics Express | 2005

Redundancy of stereoscopic images: Experimental evaluation

Leonid P. Yaroslavsky; Juan Campos; Manuel Espínola; Ianir A. Ideses

With recent advances in visualization devices, the market for stereoscopic content is growing. In order to convey 3D content by means of stereoscopic displays, one needs to transmit and display at least two views of the video content. This has profound implications for the resources required to transmit the content, as well as for the complexity of the visualization system. It is known that stereoscopic images are redundant, which may prove useful for compression and may have a positive effect on the construction of the visualization device. In this paper we describe an experimental evaluation of data redundancy in color stereoscopic images. In experiments with computer-generated and real-life test stereo images, several observers visually tested the stereopsis threshold and the accuracy of parallax measurement in anaglyphs and stereograms as functions of the degree of blur of one of the two stereo images. In addition, we tested the color saturation threshold in one of the two stereo images at which full-color 3D perception with no visible color degradation was maintained. The experiments support a theoretical estimate that, to the data required to reproduce one of the two stereoscopic images, one has to add only several percent of that amount of data in order to achieve stereoscopic perception.


digital television conference | 2007

Depth Map Quantization - How Much is Sufficient?

Ianir A. Ideses; Leonid P. Yaroslavsky; Itai Amit; Barak Fishbain

With recent advances in visualization devices, the market for stereoscopic content is growing. In order to synthesize 3D content, one needs either a stereo pair or an image and a depth map. Computing depth maps for images is a highly computationally intensive and time-consuming process. In this paper, we describe the results of an experimental evaluation of depth-map data redundancy in stereoscopic images. In experiments with computer-generated images, several observers visually determined the number of quantization levels required for comfortable stereoscopic vision unaffected by quantization. The experiments show that the number of depth quantization levels can be as low as a few tens. This may have profound implications for the process of depth map estimation and 3D synthesis.
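The operation being evaluated, uniform quantization of a normalized depth map to N levels, is a one-liner. A sketch of the quantizer, not the paper's experimental code:

```python
import numpy as np

def quantize_depth(depth, levels=16):
    """Uniformly quantize a depth map normalized to [0, 1] into a given
    number of levels; maximum error is half a quantization step."""
    return np.round(depth * (levels - 1)) / (levels - 1)
```

The paper's finding implies that `levels` of a few tens already gives stereoscopic perception indistinguishable from the full-precision depth map, so the expensive depth-estimation stage only needs coarse output.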


Advances in Optical Technologies | 2008

Spatial, Temporal, and Interchannel Image Data Fusion for Long-Distance Terrestrial Observation Systems

Barak Fishbain; Leonid P. Yaroslavsky; Ianir A. Ideses

This paper presents methods for intrachannel and interchannel fusion of thermal and visual sensors used in long-distance terrestrial observation systems. Intrachannel spatial and temporal fusion mechanisms used for image stabilization, super-resolution, denoising, and deblurring are supplemented by interchannel data fusion of visual- and thermal-range channels for generating fused videos intended for visual analysis by a human operator. Tests on synthetic, as well as on real-life, video sequences have confirmed the potential of the suggested methods.


Proceedings of SPIE | 2009

Real-time vision-based traffic flow measurements and incident detection

Barak Fishbain; Ianir A. Ideses; David Mahalel; Leonid P. Yaroslavsky

Visual surveillance for traffic systems requires short processing time, low processing cost and high reliability. Under those requirements, image processing technologies offer a variety of systems and methods for Intelligent Transportation Systems (ITS) as a platform for traffic Automatic Incident Detection (AID). Two classes of AID methods have mainly been studied: one based on inductive loops, radars, infrared sonar and microwave detectors, and the other based on video images. The first class suffers from the drawbacks that such sensors are expensive to install and maintain and are unable to detect slow or stationary vehicles. Video sensors, on the other hand, offer a relatively low installation cost with little traffic disruption during maintenance. Furthermore, they provide wide-area monitoring, allowing analysis of traffic flows and turning movements, speed measurement, multiple-point vehicle counts, vehicle classification and highway state assessment based on precise scene motion analysis. This paper suggests the utilization of traffic models for real-time vision-based traffic analysis and automatic incident detection. First, the traffic flow variables are introduced. Then it is described how those variables can be measured from traffic video streams in real time. With the traffic variables measured, a robust automatic incident detection scheme is suggested. The results presented here show great potential for the integration of traffic flow models into video-based intelligent transportation systems. Real-time performance is achieved by utilizing multi-core technology with standard parallelization algorithms and libraries (OpenMP, IPP).
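The traffic flow variables obey the fundamental relation of traffic flow theory, q = k · v: flow equals density times space-mean speed, so measuring any two of them from the video stream yields the third. A minimal sketch with illustrative function names (the abstract does not give the authors' exact measurement formulas):

```python
def density_from_frame(vehicle_count, stretch_km):
    """Density k (vehicles/km): vehicles visible in a monitored road
    stretch of a single video frame, divided by the stretch length."""
    return vehicle_count / stretch_km

def traffic_flow(density, speed):
    """Fundamental relation: flow q (vehicles/h) = density k
    (vehicles/km) * space-mean speed v (km/h)."""
    return density * speed
```

For example, 10 vehicles visible on a 0.5 km stretch moving at a space-mean speed of 90 km/h correspond to a density of 20 veh/km and a flow of 1800 veh/h; sustained deviations from the values predicted by the flow model are what an incident detector flags.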

Collaboration


Dive into Ianir A. Ideses's collaborations.

Top Co-Authors

Barak Fishbain

Technion – Israel Institute of Technology


David Mahalel

Technion – Israel Institute of Technology
