Publication


Featured research published by Yael Pritch.


International Conference on Computer Vision | 2009

Shift-map image editing

Yael Pritch; Eitam Kav-Venaki; Shmuel Peleg

Geometric rearrangement of images includes operations such as image retargeting, inpainting, and object rearrangement. Each such operation can be characterized by a shift-map: the relative shift of every pixel in the output image from its source in the input image.
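The shift-map itself is just a per-pixel offset field. A minimal sketch of the resampling step it defines, assuming NumPy (the graph-cut optimization that actually finds a good shift-map is the paper's contribution and is not shown; function and parameter names here are illustrative):

    import numpy as np

    def apply_shift_map(src, shift):
        """Resample an output image from src using a per-pixel shift-map.

        src   : H_in x W_in x C input image.
        shift : H_out x W_out x 2 integer map; shift[y, x] = (dy, dx) is
                the relative offset of output pixel (y, x) from its
                source pixel in the input image.
        """
        h, w = shift.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        src_y = np.clip(ys + shift[..., 0], 0, src.shape[0] - 1)
        src_x = np.clip(xs + shift[..., 1], 0, src.shape[1] - 1)
        return src[src_y, src_x]

A zero shift-map reproduces the input; retargeting, inpainting, and object rearrangement differ only in which shift-map the optimization selects.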


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Omnistereo: panoramic stereo imaging

Shmuel Peleg; Moshe Ben-Ezra; Yael Pritch

An omnistereo panorama consists of a pair of panoramic images, where one panorama is for the left eye and another panorama is for the right eye. The panoramic stereo pair provides a stereo sensation up to a full 360 degrees. Omnistereo panoramas can be constructed by mosaicing images from a single rotating camera. This approach also enables the control of stereo disparity, giving a larger baseline for faraway scenes and a smaller baseline for closer scenes. Capturing panoramic omnistereo images with a rotating camera makes it impossible to capture dynamic scenes at video rates and limits omnistereo imaging to stationary scenes. We present two possibilities for capturing omnistereo panoramas using optics without any moving parts. A special mirror is introduced such that viewing the scene through this mirror creates the same rays as those used with the rotating cameras. A lens for omnistereo panoramas is also introduced, together with the design of the mirror. Omnistereo panoramas can also be rendered by computer graphics methods to represent virtual environments.
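A minimal sketch of the rotating-camera construction described above, assuming NumPy: each eye's panorama is mosaiced from vertical strips taken at a fixed offset from the image center, with the offset playing the role of the stereo baseline. Strip width and offset values are illustrative, not from the paper:

    import numpy as np

    def omnistereo_from_rotation(frames, offset=40, strip=4):
        """Mosaic left/right-eye panoramas from one full camera rotation.

        frames : list of H x W x C images from a rotating camera.
        offset : distance in pixels of the sampled strip from the image
                 center; a larger offset acts like a larger baseline.
        strip  : width of the vertical strip taken from each frame.
        """
        cx = frames[0].shape[1] // 2
        left = np.concatenate(
            [f[:, cx + offset : cx + offset + strip] for f in frames], axis=1)
        right = np.concatenate(
            [f[:, cx - offset - strip : cx - offset] for f in frames], axis=1)
        return left, right

Which strip feeds which eye depends on the rotation direction.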


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Nonchronological Video Synopsis and Indexing

Yael Pritch; Alex Rav-Acha; Shmuel Peleg

The amount of captured video is growing with the increasing number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to endless video streams, as generated by webcams and by surveillance cameras, and can address queries like "Show in one minute the synopsis of this camera's broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.
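A toy sketch of the scheduling step in the response phase, assuming the object database already exists: each activity keeps a pointer to its original time but is assigned a new start time inside the short synopsis, so several activities play simultaneously. The paper formulates this as an energy minimization; the round-robin packing below only illustrates the idea and is not the published method:

    from dataclasses import dataclass

    @dataclass
    class Activity:
        obj_id: int
        start: float   # original start time in the source video (seconds)
        length: float  # duration (seconds)

    def pack_synopsis(activities, lanes=3):
        """Assign each activity a start time in the synopsis; 'lanes'
        bounds how many activities are shown at once. Returns
        obj_id -> (original start, synopsis start), preserving the
        index back into the original video."""
        lane_end = [0.0] * lanes
        mapping = {}
        for a in sorted(activities, key=lambda a: a.start):
            lane = min(range(lanes), key=lane_end.__getitem__)
            mapping[a.obj_id] = (a.start, lane_end[lane])
            lane_end[lane] += a.length
        return mapping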


Computer Vision and Pattern Recognition | 2006

Making a Long Video Short: Dynamic Video Synopsis

Alex Rav-Acha; Yael Pritch; Shmuel Peleg

The power of video over still images is the ability to represent dynamic activities. But video browsing and retrieval are inconvenient due to inherent spatio-temporal redundancies, where some time intervals may have no activity, or have activities that occur in a small image region. Video synopsis aims to provide a compact video representation, while preserving the essential activities of the original video. We present dynamic video synopsis, where most of the activity in the video is condensed by simultaneously showing several actions, even when they originally occurred at different times. For example, we can create a stroboscopic movie, where multiple dynamic instances of a moving object are played simultaneously. This is an extension of the still stroboscopic picture. Previous approaches for video abstraction addressed mostly the temporal redundancy by selecting representative key-frames or time intervals. In dynamic video synopsis the activity is shifted into a significantly shorter period, in which the activity is much denser. Video examples can be found online at http://www.vision.huji.ac.il/synopsis
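A minimal sketch of the stroboscopic composite mentioned above, assuming NumPy, pre-aligned frames, and given foreground masks (object segmentation is part of the pipeline but not shown here):

    import numpy as np

    def stroboscopic_frame(background, frames, masks, step=10):
        """Paste the moving object from every step-th frame onto one
        background, so multiple dynamic instances of the object
        appear simultaneously in a single image.

        frames : list of H x W x C aligned video frames.
        masks  : matching list of H x W boolean foreground masks.
        """
        out = background.copy()
        for frame, mask in zip(frames[::step], masks[::step]):
            out[mask] = frame[mask]
        return out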


International Conference on Computer Vision | 2007

Webcam Synopsis: Peeking Around the World

Yael Pritch; Alex Rav-Acha; Avital Gutman; Shmuel Peleg

The world is covered with millions of webcams, many of which transmit everything in their field of view over the Internet 24 hours a day. A Web search finds public webcams in airports, intersections, classrooms, parks, shops, ski resorts, and more. Even more private surveillance cameras cover many private and public facilities. Webcams are an endless resource, but most of the video broadcast will be of little interest due to lack of activity. We propose to generate a short video that is a synopsis of an endless video stream, generated by webcams or surveillance cameras. We would like to address queries like "I would like to watch in one minute the highlights of this camera's broadcast during the past day." The process includes two major phases: (i) an online conversion of the video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query. To include maximum information in a short synopsis, we simultaneously show activities that may have happened at different times. The synopsis video can also be used as an index into the original video stream.


Advanced Video and Signal Based Surveillance | 2009

Clustered Synopsis of Surveillance Video

Yael Pritch; Sarit Ratovitch; Avishai Hendel; Shmuel Peleg

Millions of surveillance cameras record video around the clock, producing huge video archives. Even when a video archive is known to include critical activities, finding them is like finding a needle in a haystack, making the archive almost worthless. Two main approaches have been proposed to address this problem: action recognition and video summarization. Methods for automatic detection of activities still face problems in many scenarios. The video synopsis approach to video summarization is very effective, but may produce confusing summaries by the simultaneous display of multiple activities. A new methodology for the generation of short and coherent video summaries is presented, based on clustering of similar activities. Objects with similar activities are easy to watch simultaneously, and outliers can be spotted instantly. Clustered synopsis is also suitable for efficient creation of ground truth data.
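A sketch of the clustering step under stated assumptions: object tracks are reduced to small feature vectors and grouped with k-means. The feature choice (mean position, mean velocity, duration) and the use of scikit-learn are ours for illustration; the paper's activity similarity measure is not reproduced here:

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_activities(tracks, k=4):
        """Group object tracks with similar activities.

        tracks : list of T_i x 2 arrays of (x, y) object centroids.
        Returns one cluster label per track; tracks in the same
        cluster would be displayed together in the synopsis.
        """
        feats = []
        for t in tracks:
            vel = np.diff(t, axis=0).mean(axis=0) if len(t) > 1 else np.zeros(2)
            feats.append(np.concatenate([t.mean(axis=0), vel, [len(t)]]))
        return KMeans(n_clusters=k, n_init=10).fit_predict(np.array(feats))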


Computer Vision and Pattern Recognition | 2000

Cameras for stereo panoramic imaging

Shmuel Peleg; Yael Pritch; Moshe Ben-Ezra

A panorama for visual stereo consists of a pair of panoramic images, where one panorama is for the left eye, and another panorama is for the right eye. A panoramic stereo pair provides a stereo sensation up to a full 360 degrees. A stereo panorama cannot be photographed by two omnidirectional cameras from two viewpoints. It is normally constructed by mosaicing together images from a rotating stereo pair, or from a single moving camera. Capturing stereo panoramic images with a rotating camera makes it impossible to capture dynamic scenes at video rates, and limits stereo panoramic imaging to stationary scenes. This paper presents two possibilities for capturing stereo panoramic images using optics, without any moving parts. A special mirror is introduced such that viewing the scene through this mirror creates the same rays as those used with the rotating cameras. Such a mirror enables the capture of stereo panoramic movies with a regular video camera. A lens for stereo panoramas is also introduced. The designs of the mirror and of the lens are based on curves whose caustic is a circle.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Dynamosaicing: Mosaicing of Dynamic Scenes

Alex Rav-Acha; Yael Pritch; Dani Lischinski; Shmuel Peleg

This paper explores the manipulation of time in video editing, which allows us to control the chronological time of events. These time manipulations include slowing down (or postponing) some dynamic events while speeding up (or advancing) others. When a video camera scans a scene, aligning all the events to a single time interval will result in a panoramic movie. Time manipulations are obtained by first constructing an aligned space-time volume from the input video, and then sweeping a continuous 2D slice (time front) through that volume, generating a new sequence of images. For dynamic scenes, aligning the input video frames poses an important challenge. We propose to align dynamic scenes using a new notion of dynamics constancy, which is more appropriate for this task than the traditional assumption of brightness constancy. Another challenge is to avoid visual seams inside moving objects and other visual artifacts resulting from sweeping the space-time volumes with time fronts of arbitrary geometry. To avoid such artifacts, we formulate the problem of finding optimal time front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods.
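A minimal sketch of the slicing step, assuming NumPy and an already-aligned grayscale space-time volume: a time front is a per-pixel time map, and each output frame is the 2D slice obtained by advancing that map. The alignment under dynamics constancy and the min-cut optimization of the front's geometry are the paper's contributions and are not shown:

    import numpy as np

    def sweep_time_front(volume, time_front, n_out):
        """Generate n_out frames by sweeping a time front through an
        aligned space-time volume.

        volume     : T x H x W array (aligned grayscale video).
        time_front : H x W integer map of per-pixel time offsets;
                     a constant map reproduces the input frames.
        """
        T, h, w = volume.shape
        ys, xs = np.mgrid[0:h, 0:w]
        frames = []
        for t in range(n_out):
            idx = np.clip(time_front + t, 0, T - 1)
            frames.append(volume[idx, ys, xs])  # one evolving time slice
        return np.stack(frames)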


Computer Vision and Pattern Recognition | 2005

Dynamosaics: video mosaics with non-chronological time

Alex Rav-Acha; Yael Pritch; Dani Lischinski; Shmuel Peleg

With the limited field of view of human vision, our perception of most scenes is built over time while our eyes are scanning the scene. In the case of static scenes, this process can be modeled by panoramic mosaicing: stitching together images into a panoramic view. Can a dynamic scene, scanned by a video camera, be represented with a dynamic panoramic video even though different regions were visible at different times? In this paper, we explore time flow manipulation in video, such as the creation of new videos in which events that occurred at different times are displayed simultaneously. More general changes in the time flow are also possible, which enable re-scheduling the order of dynamic events in the video, for example. We generate dynamic mosaics by sweeping the aligned space-time volume of the input video by a time front surface and generating a sequence of time slices in the process. Various sweeping strategies and different time front evolutions manipulate the time flow in the video, enabling many unexplored and powerful effects, such as panoramic movies.


IEEE Workshop on Omnidirectional Vision | 2000

Automatic disparity control in stereo panoramas (OmniStereo)

Yael Pritch; Moshe Ben-Ezra; Shmuel Peleg

An omnistereo panorama consists of a pair of panoramic images, where one panorama is for the left eye, and another panorama is for the right eye. An omnistereo pair provides a stereo sensation up to a full 360 degrees. Omnistereo panoramas can be created by mosaicing images from a rotating video camera, or by specially designed cameras. The stereo sensation is a function of the disparity between the left and right images. This disparity depends on the ratio of the distance between the cameras (the baseline) to the distance to the object: disparity is larger with a longer baseline and closer objects. Since our eyes are a fixed distance apart, we lose stereo sensation for faraway objects. It is possible to control the disparity in omnistereo panoramas that are generated by mosaicing images from a rotating camera: the baseline can be made larger for faraway scenes, and smaller for nearer scenes. A method is described for the construction of omnistereo panoramas with such adaptive baselines; the baseline can change within the panorama from directions with closer objects to directions with farther objects.
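The baseline selection follows from the standard stereo relation disparity ≈ f * B / Z: to hold disparity near a target value, the baseline B must grow in proportion to the scene distance Z in each viewing direction. A small sketch (symbols are the usual stereo quantities, not notation from the paper):

    def baseline_for_direction(depth_m, focal_px, target_disparity_px):
        """Baseline (meters) that yields the target disparity for a
        scene at depth_m, using disparity = focal_px * B / depth_m."""
        return target_disparity_px * depth_m / focal_px

    # Keep ~20 px of disparity: a nearby object (5 m) needs a baseline
    # ten times smaller than a faraway facade (50 m).
    for z in (5.0, 50.0):
        print(z, baseline_for_direction(z, focal_px=800, target_disparity_px=20))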

Collaboration


Dive into Yael Pritch's collaborations.

Top Co-Authors

Shmuel Peleg (Hebrew University of Jerusalem)
Alex Rav-Acha (Weizmann Institute of Science)
Moshe Ben-Ezra (Hebrew University of Jerusalem)
Dani Lischinski (Hebrew University of Jerusalem)
Avishai Hendel (Hebrew University of Jerusalem)
Avital Gutman (Hebrew University of Jerusalem)
Sarit Ratovitch (Hebrew University of Jerusalem)
Daphna Weinshall (Hebrew University of Jerusalem)
Doron Feldman (Hebrew University of Jerusalem)
Eitam Kav-Venaki (Hebrew University of Jerusalem)