
Publications

Featured research published by Yuri Ivanov.


EURASIP Journal on Advances in Signal Processing | 2008

Robust abandoned object detection using dual foregrounds

Fatih Porikli; Yuri Ivanov; Tetsuji Haga

As an alternative to tracking-based approaches, which heavily depend on accurate detection of moving objects and often fail in crowded scenes, we present a pixelwise method that employs dual foregrounds to extract temporally static image regions. Depending on the application, these regions indicate objects that were not part of the original background but were brought into the scene at a later time, such as abandoned and removed items or illegally parked vehicles. We construct separate long- and short-term backgrounds, each implemented as a pixelwise multivariate Gaussian model. Background parameters are adapted online using a Bayesian update mechanism applied at different learning rates. By comparing each frame with these models, we estimate two foregrounds. We infer an evidence score at each pixel by applying a set of hypotheses to the foreground responses, and then aggregate the evidence over time to provide temporal consistency. Unlike optical-flow-based approaches that smear boundaries, our method can accurately segment objects even if they are fully occluded. It does not require on-site training to compensate for particular imaging conditions. While having a low computational load, it readily lends itself to parallelization if further speed improvement is necessary.
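The dual-foreground idea described in the abstract can be sketched in a few lines. This is a minimal, hypothetical sketch assuming grayscale frames and scalar per-pixel Gaussians; the paper itself uses multivariate models and a hypothesis-based evidence aggregation step, and all names here are illustrative.

```python
import numpy as np

def update_background(mean, var, frame, rho):
    """Online update of a per-pixel Gaussian background at learning rate rho."""
    diff = frame - mean
    mean = mean + rho * diff
    var = (1 - rho) * var + rho * diff ** 2
    return mean, var

def dual_foreground_evidence(frame, long_bg, short_bg, k=2.5):
    """A pixel is evidence of a temporally static object when it disagrees
    with the long-term background but agrees with the short-term one."""
    lm, lv = long_bg
    sm, sv = short_bg
    fg_long = np.abs(frame - lm) > k * np.sqrt(lv)    # changed vs. old scene
    fg_short = np.abs(frame - sm) > k * np.sqrt(sv)   # still changing now?
    return fg_long & ~fg_short
```

In practice the long-term model would be updated with a much smaller `rho` than the short-term one, and the per-frame evidence maps would be accumulated over time before declaring an abandoned object.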


european conference on computer vision | 2010

Fast approximate nearest neighbor methods for non-Euclidean manifolds with applications to human activity analysis in videos

Rizwan Chaudhry; Yuri Ivanov

Approximate Nearest Neighbor (ANN) methods such as Locality Sensitive Hashing, Semantic Hashing, and Spectral Hashing provide computationally efficient procedures for finding objects similar to a query object in large datasets. These methods have been successfully applied to search web-scale datasets that can contain millions of images. Unfortunately, the key assumption in these procedures is that objects in the dataset lie in a Euclidean space. This assumption is not always valid and poses a challenge for several computer vision applications where data commonly lie on complex non-Euclidean manifolds. In particular, dynamic data such as human activities are commonly represented as distributions over bags of video words or as dynamical systems. In this paper, we propose two new algorithms that extend Spectral Hashing to non-Euclidean spaces. The first method considers the Riemannian geometry of the manifold and performs Spectral Hashing in the tangent space of the manifold at several points. The second method divides the data into subsets and takes advantage of the kernel trick to perform non-Euclidean Spectral Hashing. For a dataset of N samples, the proposed methods can retrieve similar objects in as little as O(K) time, where K is the number of clusters in the data. Since K ≪ N, our methods are extremely efficient. We test and evaluate our methods on synthetic data generated from the unit hypersphere and the Grassmann manifold. Finally, we show promising results on a human action database.
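The O(K) retrieval claim can be illustrated with a toy cluster-based index. This Euclidean sketch is not the paper's hashing method; it only shows why scanning K cluster representatives, rather than all N points, makes query time depend on K. For manifold-valued data, the squared Euclidean distance below would be replaced by a geodesic or kernel-induced distance.

```python
import numpy as np

def build_index(data, k, iters=20, seed=0):
    """Cluster the dataset with plain k-means; queries then scan only
    the K cluster centers plus one cluster's members."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # final assignment so labels are consistent with the returned centers
    labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

def query(x, data, centers, labels):
    """O(K) scan over centers, then exact search within the closest cluster."""
    j = np.argmin(((centers - x) ** 2).sum(-1))
    idx = np.flatnonzero(labels == j)
    return idx[np.argmin(((data[idx] - x) ** 2).sum(-1))]
```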


Proceedings of the 2007 workshop on Massive datasets | 2007

The MERL motion detector dataset

Christopher R. Wren; Yuri Ivanov; Darren Leigh; Jonathan Westhues

Looking into the future of residential and office buildings, Mitsubishi Electric Research Labs (MERL) has been collecting motion sensor data from a network of over 200 sensors for a year. The data is the residual trace of a year in the life of a research laboratory. It contains interesting spatio-temporal structure ranging all the way from the seconds of individuals walking down hallways, to the minutes spent chatting with colleagues in lobbies, the hours of dozens of people attending talks and meetings, the days and weeks that drive the patterns of life, and the months and seasons with their ebb and flow of visiting employees. This document describes that dataset, which contains well over 30 million raw motion records spanning a calendar year and two floors of our research laboratory, as well as calendar, weather, and some intermediate analytic results. The dataset was originally released as part of the 2007 Workshop on Massive Datasets. It can be obtained from http://www.merl.com/wmd.


visual communications and image processing | 2007

Tracking people in mixed modality systems

Yuri Ivanov; Alexander Sorokin; Christopher R. Wren; Ishwinder Kaur

In traditional surveillance systems, tracking of objects is achieved by means of image and video processing. The disadvantage of such systems is that any object to be tracked has to be observed by a video camera. However, the geometries of indoor spaces typically require a large number of cameras to provide the coverage necessary for robust operation of video-based tracking algorithms, and the increased number of video streams raises the computational burden on the surveillance system. In this paper we present an approach to tracking in mixed-modality systems with a variety of sensors. The system described here includes over 200 motion sensors as well as 6 moving cameras. We track individuals in the entire space and across cameras using contextual information available from the motion sensors. The motion sensors allow us to almost instantaneously find plausible tracks in a very large volume of data, spanning months, which would be virtually impossible for traditional video search approaches. We describe a method that allows us to evaluate when the tracking system is unreliable and to present the data to a human operator for disambiguation.


IEEE Sensors Journal | 2011

Diamond Sentry: Integrating Sensors and Cameras for Real-Time Monitoring of Indoor Spaces

Pavan K. Turaga; Yuri Ivanov

Video-based surveillance and monitoring of indoor spaces such as offices, airports, and convenience stores has attracted increasing interest in recent years. While video proves useful for inferring information pertaining to identities and activities, it incurs large data overheads. On the other hand, motion sensors are much more data-efficient and far less expensive, but possess limited recognition capabilities. In this paper, we describe a system that integrates a large number of wireless motion sensors with a few strategically placed cameras, and its application to real-time monitoring of indoor spaces. The system responds to an event immediately as it happens and provides visual evidence of the event's location, thereby establishing awareness of events in the entire monitored location and supplying the user with information about "when," "where," and "what" happens in the space as events unfold. The system is designed to maximize the utility of the video data recorded from a location. It achieves this goal by following a minimal-commitment strategy, in which no data is discarded and no particular hypothesis is pursued until an interpretation is actually needed. Additionally, we employ an alternative modality to help index the video data in real time, as well as for possible future use in forensic mode. We use the motion sensor data to specify camera-control policies. These policies make it simple to apply machine learning and computer vision techniques to online surveillance tasks in a fast, accurate, and scalable way.
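The camera-control policies driven by motion sensor data could be as simple as a lookup from sensor identifiers to camera actions. The real system's policies and identifiers are not public; everything in this sketch is hypothetical.

```python
# Hypothetical mapping from motion-sensor ids to (camera, preset) pairs.
POLICY = {
    "corridor_2F_11": ("cam_east", "preset_corridor"),
    "lobby_1F_03": ("cam_lobby", "preset_entrance"),
}

def on_motion(sensor_id, point_camera):
    """When a sensor fires, steer the associated camera to its preset view.

    point_camera is a callback (camera_id, preset) -> None supplied by the
    camera-control layer; sensors with no policy entry are ignored.
    """
    action = POLICY.get(sensor_id)
    if action is not None:
        point_camera(*action)
    return action
```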


location and context awareness | 2007

Socialmotion: measuring the hidden social life of a building

Christopher R. Wren; Yuri Ivanov; Ishwinder Kaur; Darren Leigh; Jonathan Westhues

In this paper we present an approach to analyzing the social behaviors that occur in a typical office space. We describe a system consisting of over 200 motion sensors connected in a wireless network, observing a medium-sized office space populated with almost 100 people for a period of almost a year. We use a tracklet graph representation of the sensor network data, which allows us to efficiently evaluate gross patterns of office-wide social behavior during expected seasonal changes in the workforce as well as unexpected social events that affect the entire population of the space. We present experiments with a method based on the Kullback-Leibler divergence applied to office activity modelled as a Markov process. Using this approach we detect gross deviations of short-term office-wide behavior patterns from long-term patterns spanning various time intervals. We compare the detected deviations to the company calendar and provide quantitative analysis of the relative impact of those disruptions across a range of temporal scales. We also present a favorable comparison to results achieved by applying the same analysis to email logs.
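The deviation detection rests on comparing two Markov models of office activity, one fit on a short recent window and one on a long-term history. A minimal sketch of the Kullback-Leibler divergence rate between two transition matrices (the paper's exact formulation may differ) might look like:

```python
import numpy as np

def stationary(P, iters=200):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(iters):
        pi = pi @ P
    return pi

def kl_rate(P, Q, eps=1e-12):
    """KL divergence rate between two Markov chains:
    sum_i pi_i * sum_j P_ij * log(P_ij / Q_ij),
    weighting each state's row divergence by P's stationary distribution."""
    pi = stationary(P)
    return float((pi[:, None] * P * np.log((P + eps) / (Q + eps))).sum())
```

A large `kl_rate(P_short, P_long)` would flag a week whose movement statistics depart from the established long-term pattern.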


international conference on computer graphics and interactive techniques | 2007

Buzz: measuring and visualizing conference crowds

Christopher R. Wren; Yuri Ivanov; Darren Leigh; Jonathan Westhues

This exhibition explores the idea of using technology to understand the movement of people. Not just on a small stage, but in an expansive environment. Not the fine details of movement of individuals, but the gross patterns of a population. Not the identifying biometrics, but patterns of group behavior that evolve from the structure of the environment and the points of interest embedded in that structure. In this instance: a marketplace, and in particular, the marketplace of ideas called SIGGRAPH 2007 Emerging Technologies (ETech).


Video Analytics for Business Intelligence | 2012

Fast Approximate Nearest Neighbor Methods for Example-Based Video Search

Rizwan Chaudhry; Yuri Ivanov

The cost of computer storage is steadily decreasing. Many terabytes of video data can easily be collected by cameras in public places for modern surveillance applications, or stored on video-sharing websites. However, the growth in CPU speeds has recently slowed to a crawl. This implies that data can be collected faster than it can be cheaply processed. Searching such vast collections of video data for useful information requires radically different approaches, calling for algorithms with sub-linear time complexity.


international conference on multimodal interfaces | 2007

Interfacing life: a year in the life of a research lab

Yuri Ivanov

Humans perceive life around them through a variety of sensory inputs. Some, such as vision or audition, have high information content, while others, such as touch and smell, do not. Humans and other animals use this gradation of senses to know how to attend to what's important. In contrast, it is widely accepted that in tasks of monitoring living spaces the modalities with high information content hold the key to decoding the behavior and intentions of the space occupants. In surveillance, video cameras are used to record everything they can possibly see, in the hope that if something happens, it can later be found in the recorded data. Unfortunately, the latter has proved to be harder than it sounds. In our work we challenge this idea and introduce a monitoring system built as a combination of channels with varying information content. The system has been deployed for over a year in our lab space and consists of a large motion sensor network combined with several video cameras. While the sensors give a general context of the events in the entire 3000 square meters of the space, the cameras attend only to selected occurrences of office activity. The system demonstrates several monitoring tasks that are all but impossible to perform in a traditional camera-only setting. In the talk we share our experiences, challenges, and solutions in building and maintaining the system. We show some results from the data we have collected over a period of more than a year and introduce some other successful and novel applications of the system.


international conference on multimodal interfaces | 2007

Workshop on massive datasets

Christopher R. Wren; Yuri Ivanov

Are the tools we use to understand our data scalable to the tens of millions of records, huge spans of time, minute details of behavior, and large geographic extent that future sensor networks will generate? In the future, buildings will be studded with sensors. Every movement will generate a few bits of data. Every fluctuation in temperature will be recorded. Every deviation in lighting will be noticed. These large and complex datasets will challenge the tools we use today. Looking into the future of residential and office buildings, Mitsubishi Electric Research Labs (MERL) has been collecting motion sensor data from a network of over 200 sensors for a year. The data is the residual trace of a year in the life of a research laboratory. It contains interesting spatiotemporal structure ranging all the way from the seconds of individuals walking down hallways, to the minutes spent chatting with colleagues in lobbies, the hours of dozens of people attending talks and meetings, the days and weeks that drive the patterns of life, and the months and seasons with their ebb and flow of visiting employees. The dataset contains well over 30 million raw motion records, spanning a calendar year and two floors of our research laboratory. As such, it presents a significant challenge for behavior analysis, search, manipulation, and visualization. We have also prepared accompanying analytics such as partial tracks and behavior detections, as well as map data and anonymous calendar data marking the pattern of meetings, vacations, and holidays. Please see the technical report for more information [1].

Collaboration


Yuri Ivanov's top co-authors.

Top Co-Authors

Christopher R. Wren, Mitsubishi Electric Research Laboratories
Jonathan Westhues, Mitsubishi Electric Research Laboratories
Darren Leigh, Mitsubishi Electric Research Laboratories
Abraham Goldsmith, Mitsubishi Electric Research Laboratories
Ishwinder Kaur, Massachusetts Institute of Technology
John C. Barnwell, Mitsubishi Electric Research Laboratories
Alex P. Pentland, Mitsubishi Electric Research Laboratories