
Publication


Featured research published by Susanna Ricco.


Medical Image Computing and Computer-Assisted Intervention | 2009

Correcting Motion Artifacts in Retinal Spectral Domain Optical Coherence Tomography via Image Registration

Susanna Ricco; Mei Chen; Hiroshi Ishikawa; Gadi Wollstein; Joel S. Schuman

Spectral domain optical coherence tomography (SD-OCT) is an important tool for the diagnosis of various retinal diseases. The measurements available from SD-OCT volumes can be used to detect structural changes in glaucoma patients before the resulting vision loss becomes noticeable. Eye movement during the imaging process corrupts the data, making measurements unreliable. We propose a method to correct for transverse motion artifacts in SD-OCT volumes after scan acquisition by registering the volume to an instantaneous, and therefore artifact-free, reference image. Our procedure corrects for smooth deformations resulting from ocular tremor and drift as well as the abrupt discontinuities in vessels resulting from microsaccades. We test our performance on 48 scans of healthy eyes and 116 scans of glaucomatous eyes, improving scan quality in 96% of healthy and 73% of glaucomatous eyes.
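The transverse correction described above can be illustrated with a toy registration step. The sketch below aligns each row of a motion-corrupted projection to an artifact-free reference via 1-D cross-correlation; it is a generic registration sketch under simplifying assumptions (integer shifts, circular boundaries), not the authors' exact procedure, and all function names are invented for illustration.

```python
import numpy as np

def estimate_shift(row, ref_row):
    """Estimate the integer transverse shift of `row` relative to `ref_row`
    by maximizing the (zero-mean) cross-correlation between them."""
    corr = np.correlate(row - row.mean(), ref_row - ref_row.mean(), mode="full")
    return int(np.argmax(corr)) - (len(ref_row) - 1)

def correct_transverse_motion(projection, reference):
    """Undo per-row transverse motion by registering each scan row of
    `projection` to the corresponding row of the `reference` image."""
    corrected = np.empty_like(projection)
    for i, row in enumerate(projection):
        s = estimate_shift(row, reference[i])
        corrected[i] = np.roll(row, -s)  # shift the row back into alignment
    return corrected
```

In the real setting the reference is an instantaneous fundus image and the deformations are smooth rather than per-row integer shifts, but the match-to-reference structure is the same.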


Computer Vision and Pattern Recognition | 2012

Dense Lagrangian motion estimation with occlusions

Susanna Ricco; Carlo Tomasi

We couple occlusion modeling and multi-frame motion estimation to compute dense, temporally extended point trajectories in video with significant occlusions. Our approach combines robust spatial regularization with spatially and temporally global occlusion labeling in a variational, Lagrangian framework with subspace constraints. We track points even through ephemeral occlusions. Experiments demonstrate accuracy superior to the state of the art while tracking more points through more frames.


Asian Conference on Computer Vision | 2009

Fingerspelling recognition through classification of letter-to-letter transitions

Susanna Ricco; Carlo Tomasi

We propose a new principle for recognizing fingerspelling sequences from American Sign Language (ASL). Instead of training a system to recognize the static posture for each letter from an isolated frame, we recognize the dynamic gestures corresponding to transitions between letters. This eliminates the need for an explicit temporal segmentation step, which we show is error-prone at speeds used by native signers. We present results from our system recognizing 82 different words signed by a single signer, using more than an hour of training and test video. We demonstrate that recognizing letter-to-letter transitions without temporal segmentation is feasible and results in improved performance.


International Conference on Robotics and Automation | 2011

Textured occupancy grids for monocular localization without features

Julian Mason; Susanna Ricco; Ronald Parr

A textured occupancy grid map is an extremely versatile data structure. It can be used to render human-readable views and for laser rangefinder localization algorithms. For camera-based localization, landmark or feature-based maps tend to be favored in current research. This may be because of a tacit assumption that working with a textured occupancy grid with a camera would be impractical. We demonstrate that a textured occupancy grid can be combined with an extremely simple monocular localization algorithm to produce a viable localization solution. Our approach is simple, efficient, and produces localization results comparable to laser localization results. A consequence of this result is that a single map representation, the textured occupancy grid, can now be used for humans, robots with laser rangefinders, and robots with just a single camera.
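As a toy illustration of the render-and-compare idea (not the paper's implementation), the sketch below localizes a camera along a 1-D textured map by rendering the expected view at every candidate pose and scoring it against the observed image; the map, the camera model, and all names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "textured map": brightness values along a corridor wall.
texture = rng.random(200)

def render_view(pos, width=10):
    """Render the view a camera at map position `pos` would observe."""
    return texture[pos:pos + width]

def localize(image):
    """Score every candidate pose by the squared difference between its
    rendered view and the observed image; return the best-scoring pose."""
    candidates = np.arange(len(texture) - len(image))
    errors = np.array([np.sum((render_view(p, len(image)) - image) ** 2)
                       for p in candidates])
    return int(candidates[np.argmin(errors)])
```

The real system renders 2-D views of a 3-D textured occupancy grid and tracks the pose with a particle filter, but the core comparison between a rendered map view and the camera image is the same.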


International Conference on Computer Vision | 2013

Video Motion for Every Visible Point

Susanna Ricco; Carlo Tomasi

Dense motion of image points over many video frames can provide important information about the world. However, occlusions and drift make it impossible to compute long motion paths by merely concatenating optical flow vectors between consecutive frames. Instead, we solve for entire paths directly, and flag the frames in which each is visible. As in previous work, we anchor each path to a unique pixel which guarantees an even spatial distribution of paths. Unlike earlier methods, we allow paths to be anchored in any frame. By explicitly requiring that at least one visible path passes within a small neighborhood of every pixel, we guarantee complete coverage of all visible points in all frames. We achieve state-of-the-art results on real sequences including both rigid and non-rigid motions with significant occlusions.


European Conference on Computer Vision | 2012

Simultaneous compaction and factorization of sparse image motion matrices

Susanna Ricco; Carlo Tomasi

Matrices that collect the image coordinates of point features tracked through video --- one column per feature --- often have low rank, either exactly or approximately. This observation has led to many matrix factorization methods for 3D reconstruction, motion segmentation, or regularization of feature trajectories. However, temporary occlusions, image noise, and variations in lighting, pose, or object geometry often confound trackers. A feature that reappears after a temporary tracking failure --- whatever the cause --- is regarded as a new feature by typical tracking systems, resulting in very sparse matrices with many columns and rendering factorization problematic. We propose a method to simultaneously factor and compact such a matrix by merging groups of columns that correspond to the same feature into single columns. This combination of compaction and factorization makes trackers more resilient to changes in appearance and short occlusions. Preliminary experiments show that imputation of missing matrix entries --- and therefore matrix factorization --- becomes significantly more reliable as a result. Clean column merging also required us to develop a history-sensitive feature reinitialization method we call feature snapping that aligns merged feature trajectory segments precisely to each other.
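Imputing the missing entries of such a sparse trajectory matrix can be sketched with standard low-rank "hard imputation": alternate a truncated SVD with re-imposing the observed entries. This is a generic technique, not necessarily the paper's solver, and the function name is invented for illustration.

```python
import numpy as np

def impute_low_rank(W, mask, rank=3, iters=50):
    """Fill the missing entries of W (mask == False) by alternating between
    a rank-`rank` SVD approximation and restoring the observed entries."""
    X = np.where(mask, W, 0.0)  # initialize missing entries to zero
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, W, low_rank)  # keep observed entries fixed
    return X
```

When enough entries are observed and the underlying matrix is genuinely low rank, the missing entries converge toward the values the factorization implies, which is what makes the subsequent factorization reliable.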


Computer Vision and Pattern Recognition | 2015

Articulated motion discovery using pairs of trajectories

Luca Del Pero; Susanna Ricco; Rahul Sukthankar; Vittorio Ferrari

We propose an unsupervised approach for discovering characteristic motion patterns in videos of highly articulated objects performing natural, unscripted behaviors, such as tigers in the wild. We discover consistent patterns in a bottom-up manner by analyzing the relative displacements of large numbers of ordered trajectory pairs through time, such that each trajectory is attached to a different moving part on the object. The pairs of trajectories descriptor relies entirely on motion and is more discriminative than state-of-the-art features that employ single trajectories. Our method generates temporal video intervals, each automatically trimmed to one instance of the discovered behavior, and clusters them by type (e.g., running, turning head, drinking water). We present experiments on two datasets: dogs from YouTube-Objects and a new dataset of National Geographic tiger videos. Results confirm that our proposed descriptor outperforms existing appearance- and trajectory-based descriptors (e.g., HOG and DTFs) on both datasets and enables us to segment unconstrained animal video into intervals containing single behaviors.
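The core of the pairs-of-trajectories idea can be sketched as follows: for an ordered pair of point trajectories, record how their relative position changes from frame to frame. This simplified descriptor (with invented names, and omitting the paper's normalization and ordering details) is purely motion-based, as the abstract describes.

```python
import numpy as np

def pair_descriptor(traj_a, traj_b):
    """Relative-displacement descriptor for an ordered trajectory pair.
    Each trajectory is a (T, 2) array of (x, y) positions over T frames."""
    rel = traj_b - traj_a                # relative position in each frame
    return np.diff(rel, axis=0).ravel()  # frame-to-frame relative displacement
```

A useful property falls out directly: two trajectories riding on the same rigidly moving part produce an all-zero descriptor, while trajectories on different articulating parts produce a distinctive nonzero pattern.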


International Journal of Computer Vision | 2017

Behavior Discovery and Alignment of Articulated Object Classes from Unstructured Video

Luca Del Pero; Susanna Ricco; Rahul Sukthankar; Vittorio Ferrari

We propose an automatic system for organizing the content of a collection of unstructured videos of an articulated object class (e.g., tiger, horse). By exploiting the recurring motion patterns of the class across videos, our system: (1) identifies its characteristic behaviors, and (2) recovers pixel-to-pixel alignments across different instances. Our system can be useful for organizing video collections for indexing and retrieval. Moreover, it can be a platform for learning the appearance or behaviors of object classes from Internet video. Traditional supervised techniques cannot exploit this wealth of data directly, as they require a large amount of time-consuming manual annotations. The behavior discovery stage generates temporal video intervals, each automatically trimmed to one instance of the discovered behavior, clustered by type. It relies on our novel motion representation for articulated motion based on the displacement of ordered pairs of trajectories. The alignment stage aligns hundreds of instances of the class with high accuracy despite considerable appearance variations (e.g., an adult tiger and a cub). It uses a flexible thin plate spline deformation model that can vary through time. We carefully evaluate each step of our system on a new, fully annotated dataset. On behavior discovery, we outperform the state-of-the-art improved dense trajectory feature descriptor. On spatial alignment, we outperform the popular SIFT Flow algorithm.


Computer Vision and Pattern Recognition | 2016

Discovering the Physical Parts of an Articulated Object Class from Multiple Videos

Luca Del Pero; Susanna Ricco; Rahul Sukthankar; Vittorio Ferrari

We propose a motion-based method to discover the physical parts of an articulated object class (e.g. head/torso/leg of a horse) from multiple videos. The key is to find object regions that exhibit consistent motion relative to the rest of the object, across multiple videos. We can then learn a location model for the parts and segment them accurately in the individual videos using an energy function that also enforces temporal and spatial consistency in part motion. Unlike our approach, traditional methods for motion segmentation or non-rigid structure from motion operate on one video at a time. Hence they cannot discover a part unless it displays independent motion in that particular video. We evaluate our method on a new dataset of 32 videos of tigers and horses, where we significantly outperform a recent motion segmentation method on the task of part discovery (obtaining roughly twice the accuracy).


International Symposium on Biomedical Imaging | 2009

Classification of scan location in retinal optical coherence tomography

Susanna Ricco; Mei Chen

Spectral-domain optical coherence tomography is a new imaging tool to aid the diagnosis of various diseases of the eye. Two commonly used scan patterns focus on different areas of the retina and are used to measure different properties. We developed an efficient automated classification technique that distinguishes scans of the two types so that algorithms tuned to the specific scan type can be applied during computer-aided analysis. Our algorithm differentiates between scan types based on the presence or absence of vessels converging on the optic disc. We tested its performance on an extensive dataset containing a total of 1015 scans from both healthy and diseased subjects and achieved a sensitivity of 100% and a specificity of 99.7%.

Collaboration


Top Co-Authors

Mei Chen (State University of New York System)
