

Publication


Featured research published by Thiago Vallin Spina.


IEEE Transactions on Image Processing | 2012

Riverbed: A Novel User-Steered Image Segmentation Method Based on Optimum Boundary Tracking

Paulo A. V. Miranda; Alexandre X. Falcão; Thiago Vallin Spina

This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a previously unexploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods, such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points) compared to live wire for objects with complex shapes. This paper also includes a discussion of how to combine different methods in order to take advantage of their complementary strengths.
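Riverbed belongs to the family of boundary-tracking methods that compute a minimal-cost path between user-placed anchor points on the image graph. A minimal sketch of that idea, using a classic additive live-wire-style cost on a 4-connected grid rather than the paper's riverbed connectivity function (the cost map and anchors below are illustrative):

```python
import heapq

def boundary_track(cost, src, dst):
    """Minimum-cost 4-connected path between two anchor pixels.

    cost[y][x] is a per-pixel arc weight (low on the object's
    boundary, high elsewhere).  This is the classic additive
    live-wire-style formulation, not Riverbed's connectivity
    function, which the paper defines differently.
    """
    h, w = len(cost), len(cost[0])
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == dst:
            break
        if d > dist[(y, x)]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # Walk back from dst to src to recover the boundary segment.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost map: a cheap "boundary" runs along column 1.
cost = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
print(boundary_track(cost, (0, 1), (2, 1)))
# → [(0, 1), (1, 1), (2, 1)]
```

With an additive cost, the tracked segment snaps to the low-cost ridge between the two anchors; the paper's contribution is a different connectivity function with properties better suited to complex silhouettes.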


international conference on development and learning | 2012

A computer vision approach for the assessment of autism-related behavioral markers

Jordan Hashemi; Thiago Vallin Spina; Mariano Tepper; Amy Esler; Vassilios Morellas; Nikolaos Papanikolopoulos; Guillermo Sapiro

The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms involves assessing head motion by facial feature tracking, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.


international conference on digital signal processing | 2009

Fast interactive segmentation of natural images using the image foresting transform

Thiago Vallin Spina; Javier A. Montoya-Zegarra; Alexandre X. Falcão; Paulo A. V. Miranda

This paper presents a unified framework for fast interactive segmentation of natural images using the image foresting transform (IFT), a tool for the design of image processing operators based on connectivity functions (path-value functions) in graphs derived from the image. It mainly consists of three tasks: recognition, enhancement, and extraction. Recognition is the only interactive task, where representative image properties for enhancement and the object's location for extraction are indicated by drawing a few markers in the image. Enhancement increases the dissimilarities between object and background for more effective object extraction, which completes segmentation. We show through extensive experiments that, by exploiting the synergism between user and computer, and by treating enhancement as a step separate from recognition and extraction, one can reduce user involvement while improving accuracy. We also describe new methods for enhancement based on fuzzy classification by IFT and for feature selection and/or combination by genetic programming.
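The extraction step can be sketched as IFT seed competition on the pixel graph: marked seeds propagate optimum paths, and each pixel receives the label of the seed that conquers it. The version below uses the fmax connectivity function (path cost is the maximum arc weight along the path), one common choice in IFT-based segmentation; the weight map and seed placement are illustrative, and the enhancement step is assumed to have already produced the dissimilarity map:

```python
import heapq

def ift_seed_competition(weight, seeds):
    """Label every pixel by optimum seed competition (IFT).

    weight[y][x] is a dissimilarity map (e.g. an enhanced gradient);
    seeds maps pixel -> label.  Each pixel receives the label of the
    seed reaching it through the path whose maximum arc weight is
    smallest (the fmax connectivity function; the papers explore
    several such functions).
    """
    h, w = len(weight), len(weight[0])
    cost = {p: 0 for p in seeds}
    label = dict(seeds)
    heap = [(0, p) for p in seeds]
    heapq.heapify(heap)
    while heap:
        c, (y, x) = heapq.heappop(heap)
        if c > cost.get((y, x), float("inf")):
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (y + dy, x + dx)
            if 0 <= q[0] < h and 0 <= q[1] < w:
                nc = max(c, weight[q[0]][q[1]])
                if nc < cost.get(q, float("inf")):
                    cost[q] = nc
                    label[q] = label[(y, x)]
                    heapq.heappush(heap, (nc, q))
    return label

# A high-weight "barrier" in the middle column separates two seeds.
weight = [[1, 9, 1],
          [1, 9, 1],
          [1, 9, 1]]
labels = ift_seed_competition(weight, {(1, 0): "object", (1, 2): "background"})
print(labels[(0, 0)], labels[(2, 2)])
# → object background
```

The barrier column carries the high path cost, so each seed conquers only its own side, which is exactly why enhancement (raising dissimilarity on the true boundary) improves extraction.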


IEEE Transactions on Image Processing | 2014

Hybrid Approaches for Interactive Image Segmentation Using the Live Markers Paradigm

Thiago Vallin Spina; Paulo A. V. Miranda; Alexandre X. Falcão

Interactive image segmentation methods normally rely on cues about the foreground imposed by the user as region constraints (markers/brush strokes) or boundary constraints (anchor points). These paradigms often have complementary strengths and weaknesses, which can be addressed to improve the interactive experience by reducing the user's effort. We propose a novel hybrid paradigm based on a new form of interaction called live markers, where optimum boundary-tracking segments are turned into internal and external markers for region-based delineation to effectively extract the object. We present four techniques within this paradigm: 1) LiveMarkers; 2) RiverCut; 3) LiveCut; and 4) RiverMarkers. The homonym LiveMarkers couples boundary tracking via live-wire-on-the-fly (LWOF) with optimum seed competition by the image foresting transform (IFT-SC). The IFT-SC can cope with complex object silhouettes, but presents a leaking problem on weaker parts of the boundary that is solved by the effective live markers produced by LWOF. Conversely, in RiverCut, the long boundary segments computed by Riverbed around complex shapes provide markers for Graph Cuts by the Min-Cut/Max-Flow algorithm (GCMF) to complete segmentation on poorly defined sections of the object's border. LiveCut and RiverMarkers further demonstrate that live markers can improve segmentation even when the combined approaches are not complementary (e.g., GCMF's shrinking bias is also dramatically prevented when using it with LWOF). Moreover, since delineation is always region based, our methodology subsumes both paradigms, representing a new way of extending boundary tracking to the 3D image domain, while speeding up the addition of markers close to the object's boundary, a necessary but time-consuming task when done manually. We justify our claims through an extensive experimental evaluation on natural and medical image data sets, using recently proposed robot users for boundary-tracking methods.
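The core live-markers conversion, turning a tracked boundary segment into internal and external markers, can be sketched as offsetting each boundary pixel along its discrete normal to the local path direction. The offset distance and the sign-based normal below are illustrative choices, not taken from the paper:

```python
def segment_to_markers(boundary, offset=2):
    """Turn an optimum boundary segment into marker pixels.

    For each boundary pixel, the perpendicular to the local path
    direction gives one side and the other; pixels `offset` steps
    away on each side become internal/external markers for a
    region-based delineation step.  Which side is "internal"
    depends on the segment's orientation.
    """
    internal, external = [], []
    for i in range(1, len(boundary) - 1):
        (y0, x0), (y1, x1) = boundary[i - 1], boundary[i + 1]
        dy, dx = y1 - y0, x1 - x0          # local path direction
        ny, nx = -dx, dy                   # 90-degree rotation: normal
        # Reduce the discrete normal to unit steps via its signs.
        sy = (ny > 0) - (ny < 0)
        sx = (nx > 0) - (nx < 0)
        y, x = boundary[i]
        internal.append((y - offset * sy, x - offset * sx))
        external.append((y + offset * sy, x + offset * sx))
    return internal, external

# Vertical boundary segment: markers land 2 pixels to each side.
boundary = [(0, 5), (1, 5), (2, 5), (3, 5)]
inner, outer = segment_to_markers(boundary)
print(inner, outer)
# → [(1, 3), (2, 3)] [(1, 7), (2, 7)]
```

The generated marker pairs then seed a region-based delineation such as IFT seed competition or graph cuts, which is what lets the hybrid methods inherit the strengths of both paradigms.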


Autism Research and Treatment | 2014

Computer Vision Tools for Low-Cost and Noninvasive Measurement of Autism-Related Behaviors in Infants

Jordan Hashemi; Mariano Tepper; Thiago Vallin Spina; Amy Esler; Vassilios Morellas; Nikolaos Papanikolopoulos; Helen L. Egger; Geraldine Dawson; Guillermo Sapiro

The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated which promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.


Computers & Geosciences | 2013

Segmentation of sandstone thin section images with separation of touching grains using optimum path forest operators

Ivan Mingireanov Filho; Thiago Vallin Spina; Alexandre X. Falcão; Alexandre Campane Vidal

The segmentation of detrital sedimentary rock images is still a challenge for characterization of grain morphology in sedimentary petrography. We propose a fast and effective approach that first segments the grains from pores in sandstone thin section images and separates the touching grains automatically, and second lets the user correct the misclassified grains with minimum interaction. The method is mostly based on the image foresting transform (IFT), a tool for the design of image processing operators using optimum connectivity. The IFT interprets an image as a graph, whose nodes are the image pixels, whose arcs are defined by an adjacency relation between pixels, and whose paths are valued by a connectivity function. The IFT algorithm transforms the image graph into an optimum-path forest, and distinct operators are designed by suitable choice of the IFT parameters and post-processing of the attributes of that forest. The solution involves a sequence of three IFT-based image operators for automatic segmentation, while the interactive correction combines region- and boundary-based object delineation using two IFT operators. Tests with thin section images of two different sandstone samples have shown very satisfactory results, yielding r² and accuracy values of 0.8712 and 94.8% on average, respectively. The main sources of error were the presence of the matrix and rock fragments.


computer analysis of images and patterns | 2011

User-steered image segmentation using live markers

Thiago Vallin Spina; Alexandre X. Falcão; Paulo A. V. Miranda

Interactive image segmentation methods have been proposed based on region constraints (user-drawn markers) and boundary constraints (anchor points). However, they have complementary strengths and weaknesses, which can be addressed to further reduce user involvement. We achieve this goal by combining two popular methods in the Image Foresting Transform (IFT) framework, the differential IFT with optimum seed competition (DIFT-SC) and live-wire-on-the-fly (LWOF), resulting in a new method called Live Markers (LM). DIFT-SC can cope with complex object silhouettes, but presents a leaking problem on weaker parts of the boundary. LWOF provides smoother segmentations and blocks the DIFT-SC leaking, but requires more user interaction. LM combines their strengths and eliminates their weaknesses at the same time, by transforming optimum boundary segments from LWOF into internal and external markers for DIFT-SC. This hybrid approach allows linear-time execution in the first interaction and sublinear-time corrections in the subsequent ones. We demonstrate its ability to reduce user involvement with respect to LWOF and DIFT-SC using several natural and medical images.


brazilian symposium on computer graphics and image processing | 2013

Interactive Segmentation by Image Foresting Transform on Superpixel Graphs

Paulo E. Rauber; Alexandre X. Falcão; Thiago Vallin Spina; Pedro Jussieu de Rezende

There are many scenarios in which user interaction is essential for effective image segmentation. In this paper, we present a new interactive segmentation method based on the Image Foresting Transform (IFT). The method oversegments the input image, creates a graph based on these segments (superpixels), receives markers (labels) drawn by the user on some superpixels, and organizes a competition to label every pixel in the image. Our method has several interesting properties: it is effective, efficient, capable of segmenting multiple objects in almost linear time in the number of superpixels, readily extendable through previously published techniques, and benefits from domain-specific feature extraction. We also present a comparison with another technique based on the IFT, which can be seen as its pixel-based counterpart. Another contribution of this paper is the description of automatic (robot) users. Given a ground truth image, these robots simulate interactive segmentation by trained and untrained users, reducing the costs and biases involved in comparing segmentation techniques.
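The label competition on the superpixel graph can be sketched as seed competition over a region adjacency graph instead of a pixel grid, which is what makes the method almost linear in the number of superpixels. The adjacency structure, arc weights, and seed placement below are illustrative stand-ins for the oversegmentation and feature extraction described in the paper:

```python
import heapq

def label_superpixels(adjacency, dissimilarity, seeds):
    """Optimum seed competition on a superpixel graph.

    adjacency maps each superpixel id to its neighbours,
    dissimilarity[(a, b)] is the arc weight between two adjacent
    superpixels (e.g. a colour-histogram distance), and seeds maps
    a few user-marked superpixels to labels.  Competition uses the
    fmax connectivity function; the oversegmentation and feature
    extraction steps are outside this sketch.
    """
    cost = {s: 0 for s in seeds}
    label = dict(seeds)
    heap = [(0, s) for s in seeds]
    heapq.heapify(heap)
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost.get(u, float("inf")):
            continue
        for v in adjacency[u]:
            w = dissimilarity[(min(u, v), max(u, v))]
            nc = max(c, w)
            if nc < cost.get(v, float("inf")):
                cost[v] = nc
                label[v] = label[u]
                heapq.heappush(heap, (nc, v))
    return label

# Four superpixels in a row; a strong edge sits between 1 and 2.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dissimilarity = {(0, 1): 1, (1, 2): 9, (2, 3): 1}
print(label_superpixels(adjacency, dissimilarity, {0: "fg", 3: "bg"}))
```

Because the competition runs on a few hundred regions rather than millions of pixels, each user correction re-labels the graph almost instantly; the final per-pixel mask is recovered by painting each superpixel with its region label.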


Computer Vision and Image Understanding | 2012

IFTrace: Video segmentation of deformable objects using the Image Foresting Transform

Rodrigo Minetto; Thiago Vallin Spina; Alexandre X. Falcão; Neucimar J. Leite; João Paulo Papa; Jorge Stolfi

We introduce IFTrace, a method for video segmentation of deformable objects. The algorithm makes minimal assumptions about the nature of the tracked object: basically, that it consists of a few connected regions and has a well-defined border. The objects to be tracked are interactively segmented in the first frame of the video, and a set of markers is then automatically selected in the interior and immediate surroundings of the object. These markers are then located in the next frame by a combination of KLT feature finding and motion extrapolation. Object boundaries are then identified from these markers by the Image Foresting Transform (IFT). These steps are repeated for all subsequent frames until the end of the movie. Thanks to the IFT and a special boundary detection operator, IFTrace can reliably track deformable objects in the presence of partial and total occlusions, camera motion, lighting and color changes, and other complications. Tests on real videos show that the IFT is better suited to this task than Graph-Cut methods, and that IFTrace is more robust than other state-of-the-art algorithms, namely the OpenCV Snake and CamShift algorithms, Hess's particle filter, and Zhong and Chang's method based on spatio-temporal consistency.
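The motion-extrapolation step that predicts where each marker should be searched for in the next frame can be sketched as constant-velocity prediction; the KLT refinement around each predicted position is omitted, and the marker coordinates below are illustrative:

```python
def extrapolate_markers(prev, curr):
    """Predict marker positions for the next frame.

    Constant-velocity extrapolation: each marker is assumed to keep
    the displacement it had between the two previous frames.  In a
    pipeline like IFTrace, the prediction would seed a KLT-style
    local search that snaps each marker onto its tracked feature.
    """
    return [(2 * yc - yp, 2 * xc - xp)
            for (yp, xp), (yc, xc) in zip(prev, curr)]

prev = [(10, 10), (20, 30)]   # marker positions in frame t-1
curr = [(12, 11), (21, 33)]   # marker positions in frame t
print(extrapolate_markers(prev, curr))
# → [(14, 12), (22, 36)]
```

The predicted positions keep the search windows small even under fast motion, which is part of what lets the tracker survive camera motion between frames.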


International Journal of Pattern Recognition and Artificial Intelligence | 2011

INTELLIGENT UNDERSTANDING OF USER INTERACTION IN IMAGE SEGMENTATION

Thiago Vallin Spina; Paulo A. V. Miranda; Alexandre X. Falcão

We have developed interactive tools for graph-based segmentation of natural images, in which the user guides object delineation by drawing strokes (markers) inside and outside the object. A suitable arc-weight estimation is paramount to minimize user time and maximize segmentation accuracy in these tools. However, it depends on discriminative image properties for object and background. These properties can be obtained from some marker pixels, but their identification is a hard problem during delineation. Careless arc-weight re-estimation reduces user control and drops performance, while interactive arc-weight estimation in a step before interactive object extraction is the best option so far, although it is not intuitive for nonexpert users. We present an effective solution using the unified framework of the image foresting transform (IFT) with three operators: clustering for interpreting user interaction and determining when and where arc weights need to be re-estimated; fuzzy classification for arc-weight estimation; and marker competition based on optimum connectivity for object extraction. For validation, we compared the proposed approach with another interactive IFT-based method, which computes arc weights before extraction. Evaluation involved multiple users (experts and nonexperts), a dataset with several natural images, and measurements to quantify accuracy, precision, efficiency (user time and computation time), and user control, some of which are novel measurements proposed in this work.

Collaboration


Thiago Vallin Spina's main collaborators.

Top Co-Authors

Alexandre X. Falcão (State University of Campinas)
Paulo A. V. Miranda (State University of Campinas)
Amy Esler (University of Minnesota)
Alexandre Cunha (California Institute of Technology)
Elliot M. Meyerowitz (California Institute of Technology)