
Publication


Featured research published by Bernardo Rodrigues Pires.


workshop on applications of computer vision | 2013

Unwrapping the eye for visible-spectrum gaze tracking on wearable devices

Bernardo Rodrigues Pires; Michael Devyver; Akihiro Tsukada; Takeo Kanade

Wearable devices with gaze tracking can assist users in many daily-life tasks. When used for extended periods of time, it is desirable that such devices do not employ active illumination, for safety reasons and to minimize interference from other light sources such as the sun. Most non-active-illumination methods for gaze tracking attempt to locate the iris contour by fitting an ellipse. Although the camera projection causes the iris to appear as an ellipse in the eye image, it is actually a circle on the eye surface. Instead of searching for an ellipse in the eye image, the method proposed in this paper searches for a circle on the eye surface. To this end, the method calibrates a three-dimensional eye model based on the location of the corners of the eye. Using the 3D eye model, an input image is first transformed so that the eye's spherical surface is warped into a plane, thus “unwrapping” the eye. The iris circle is then detected on the unwrapped image by a three-step robust circle-fitting procedure. The location of the circle corresponds to the gaze orientation on the outside image. The method is fast to calibrate and runs in real time. Extensive experimentation on embedded hardware and comparisons with alternative methods demonstrate the effectiveness of the proposed solution.
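
The abstract does not spell out the three-step circle-fitting procedure, so the sketch below only illustrates the general idea of robustly fitting a circle to candidate iris-edge points on the unwrapped image: an algebraic least-squares fit repeated after trimming the points with the largest radial residuals. Function names and the trimming schedule are placeholders, not the paper's method.

```python
import numpy as np

def fit_circle_kasa(pts):
    """Algebraic (Kasa) least-squares circle fit to 2-D points.

    Solves x^2 + y^2 = 2ax + 2by + c, with center (a, b) and
    radius r = sqrt(c + a^2 + b^2).
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), np.sqrt(c + a ** 2 + b ** 2)

def robust_circle_fit(pts, n_iter=3, keep_ratio=0.8):
    """Re-fit after discarding the points with the largest radial residuals."""
    inliers = pts
    for _ in range(n_iter):
        center, r = fit_circle_kasa(inliers)
        resid = np.abs(np.linalg.norm(inliers - center, axis=1) - r)
        inliers = inliers[resid.argsort()[: int(keep_ratio * len(inliers))]]
    return fit_circle_kasa(inliers)

# Toy usage: noisy points on a circle plus a few outliers.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([50 + 20 * np.cos(theta), 40 + 20 * np.sin(theta)])
pts += rng.normal(scale=0.5, size=pts.shape)
pts = np.vstack([pts, rng.uniform(0, 100, size=(20, 2))])  # outliers
center, radius = robust_circle_fit(pts)
print(center, radius)
```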


intelligent robots and systems | 2014

Vision based robot localization by ground to satellite matching in GPS-denied situations

Anirudh Viswanathan; Bernardo Rodrigues Pires; Daniel Huber

This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's-eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS, enabling us to improve the location estimates of Google Street View.
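
As a rough, hypothetical illustration of the two-step pipeline (warp to a bird's-eye view, then compare whole-image descriptors against satellite tiles), the sketch below uses OpenCV's homography warp and a simple color-histogram descriptor. The paper evaluates several descriptors; the quad of ground-plane points and the tile grid here are assumed inputs, not the paper's configuration.

```python
import cv2
import numpy as np

def birds_eye_view(ground_img, src_quad, out_size=(256, 256)):
    """Warp a ground-image region to a top-down view via a homography.

    src_quad: four image points (corners of the ground-plane region),
    ordered to match the corners of the output square.
    """
    dst_quad = np.float32([[0, 0], [out_size[0] - 1, 0],
                           [out_size[0] - 1, out_size[1] - 1],
                           [0, out_size[1] - 1]])
    H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(ground_img, H, out_size)

def color_histogram(img, bins=8):
    """A simple whole-image descriptor: a normalized 3-D color histogram."""
    h = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                     [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

def match_to_satellite_grid(view_desc, tile_descs):
    """Score the warped ground view against every satellite tile descriptor."""
    scores = [cv2.compareHist(view_desc.astype(np.float32),
                              d.astype(np.float32), cv2.HISTCMP_CORREL)
              for d in tile_descs]
    return int(np.argmax(scores)), scores
```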


Journal of Healthcare Engineering | 2015

Usability of a Wearable Camera System for Dementia Family Caregivers

Judith T. Matthews; Jennifer H. Lingler; Grace Campbell; Amanda Hunsaker; Lu Hu; Bernardo Rodrigues Pires; Martial Hebert; Richard M. Schulz

Health care providers typically rely on family caregivers (CG) of persons with dementia (PWD) to describe difficult behaviors manifested by their underlying disease. Although invaluable, such reports may be selective or biased during brief medical encounters. Our team explored the usability of a wearable camera system with 9 caregiving dyads (CGs: 3 males, 6 females, 67.00 ± 14.95 years; PWDs: 2 males, 7 females, 80.00 ± 3.81 years, MMSE 17.33 ± 8.86) who recorded 79 salient events over a combined total of 140 hours of data capture, from 3 to 7 days of wear per CG. Prior to using the system, CGs assessed its benefits to be worth the invasion of privacy; post-wear privacy concerns did not differ significantly. CGs rated the system easy to learn to use, although cumbersome and obtrusive. Few negative reactions by PWDs were reported or evident in resulting video. Our findings suggest that CGs can and will wear a camera system to reveal their daily caregiving challenges to health care providers.


international conference on image processing | 2011

Approximating image filters with box filters

Bernardo Rodrigues Pires; Karanhaar Singh; José M. F. Moura

Box filters have been used to speed up many computation-intensive operations in Image Processing and Computer Vision. They have the advantage of being fast to compute, but their adoption has been hampered by the fact that they impose serious restrictions on filter construction. This paper relaxes these restrictions by presenting a method for automatically approximating an arbitrary 2-D filter by a box filter. To develop our method, we first formulate the approximation as a minimization problem and show that it is possible to find a closed-form solution for a subset of the parameters of the box filter. To solve for the remaining parameters of the approximation, we develop two algorithms: Exhaustive Search for small filters and Directed Search for large filters. Experimental results show the validity of the proposed method.
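
A minimal sketch of the closed-form part of the idea, under the simplifying assumption that the box locations are already fixed: the weights that best approximate a given kernel by a sum of boxes follow from ordinary least squares, and each box can then be evaluated in constant time from an integral image. The search over box positions (Exhaustive or Directed Search in the paper) is not reproduced here.

```python
import numpy as np

def box_indicator(shape, top, left, height, width):
    """Indicator image of an axis-aligned rectangle inside a kernel of `shape`."""
    b = np.zeros(shape)
    b[top:top + height, left:left + width] = 1.0
    return b

def fit_box_weights(kernel, boxes):
    """Closed-form (least-squares) weights for a fixed set of boxes.

    Approximates `kernel` by sum_i w_i * box_i in the Frobenius norm.
    """
    A = np.stack([b.ravel() for b in boxes], axis=1)
    w, *_ = np.linalg.lstsq(A, kernel.ravel(), rcond=None)
    approx = (A @ w).reshape(kernel.shape)
    return w, approx, np.linalg.norm(kernel - approx)

def box_sum(integral, top, left, height, width):
    """Sum over an image rectangle in O(1) from a zero-padded integral image,
    e.g. integral = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))."""
    b, r = top + height, left + width
    return (integral[b, r] - integral[top, r]
            - integral[b, left] + integral[top, left])

# Usage sketch: approximate a 5x5 binomial kernel with two nested boxes.
kernel = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0
boxes = [box_indicator((5, 5), 0, 0, 5, 5), box_indicator((5, 5), 1, 1, 3, 3)]
w, approx, err = fit_box_weights(kernel, boxes)
print(w, err)

img = np.arange(16.0).reshape(4, 4)
integral = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
print(box_sum(integral, 1, 1, 2, 2))  # sum of img[1:3, 1:3]
```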


computer vision and pattern recognition | 2013

Visible-Spectrum Gaze Tracking for Sports

Bernardo Rodrigues Pires; Myung Hwangbo; Michael Devyver; Takeo Kanade

In sports, wearable gaze tracking devices can enrich the viewer experience and be a powerful training tool. Because devices can be used for long periods of time, often outdoors, it is desirable that they do not use active illumination (infra-red light sources), for safety reasons and to minimize interference from the sun. Unlike traditional wearable devices, in sports the gaze tracking method must be robust to (often dramatic) movements of the user in relation to the device (i.e., the common assumption that, because the device is wearable, the eye does not move with respect to the camera no longer holds). This paper extends a visible-spectrum gaze tracker from the literature to handle the requirements of a motor-sports application. Specifically, the method presented removes the assumption (in the original method) that the eye position is fixed, and proposes the use of template matching to allow for changes in the eye location from frame to frame. Experimental results demonstrate that the proposed method can handle severe changes in the eye location and is very fast to compute (up to 60 frames per second on modern hardware).
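
The abstract's key addition is template matching to re-locate the eye in every frame. A minimal sketch using OpenCV's normalized cross-correlation follows; the template, search window, and parameters are assumptions, not the paper's configuration.

```python
import cv2

def track_eye_region(frame_gray, eye_template, search_window=None):
    """Locate the eye template in the current frame by normalized cross-correlation.

    search_window: optional (x, y, w, h) restricting the search, e.g. around
    the previous detection; None searches the whole frame.
    """
    x0, y0 = 0, 0
    roi = frame_gray
    if search_window is not None:
        x0, y0, w, h = search_window
        roi = frame_gray[y0:y0 + h, x0:x0 + w]
    scores = cv2.matchTemplate(roi, eye_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    top_left = (x0 + max_loc[0], y0 + max_loc[1])
    return top_left, max_val  # a low max_val can signal a lost track
```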


international conference on image processing | 2008

LASIC: A model invariant framework for correspondence

Bernardo Rodrigues Pires; José M. F. Moura; João M. F. Xavier

In this paper we address two closely related problems. The first is the object detection problem, i.e., the automatic decision of whether a given image represents a known object or not. The second is the correspondence problem, i.e., the automatic matching of points of an object in two views. In the first problem, we assume object rigidity and model the distortions by a linear shape model. To solve the decision problem, we derive the uniformly most powerful (UMP) hypothesis test that is invariant to the linear shape model. We use the UMP statistic to formulate the correspondence problem in a model invariant framework. We show that it is equivalent to a quadratic maximization on the space of permutation matrices. We derive LASIC, an iterative computationally feasible solution to the quadratic maximization problem for the particular case where the linear shape model is the affine model. Simulations benchmark LASIC against two standard algorithms.
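
LASIC itself is defined in the paper; as a loose illustration of the alternating structure such a method can take (fit an affine map for a fixed assignment, then re-solve the assignment for the fixed map), here is a sketch built on SciPy's linear assignment solver. It is a generic alternation, not the UMP-invariant statistic or the paper's iteration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fit_affine(src, dst):
    """Least-squares 2-D affine map (A, t) with dst ~= src @ A.T + t."""
    src_h = np.column_stack([src, np.ones(len(src))])   # (N, 3)
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)      # (3, 2)
    return M[:2].T, M[2]

def alternate_affine_correspondence(src, dst, n_iter=10):
    """Alternate between affine fitting and optimal point assignment.

    Returns perm such that dst[perm[i]] corresponds to src[i], plus the
    final affine estimate.
    """
    perm = np.arange(len(src))
    for _ in range(n_iter):
        A, t = fit_affine(src, dst[perm])
        mapped = src @ A.T + t
        cost = np.linalg.norm(mapped[:, None, :] - dst[None, :, :], axis=2)
        _, perm = linear_sum_assignment(cost)
    return perm, (A, t)
```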


workshop on applications of computer vision | 2016

Vision-based counting of pedestrians and cyclists

Mehmet Kemal Kocamaz; Jian Gong; Bernardo Rodrigues Pires

This paper describes a vision-based cyclist and pedestrian counting method. It presents a data collection prototype system, as well as pedestrian and cyclist detection, tracking, and counting methodology. The prototype was used to collect approximately 50 hours of data which have been used for training and testing. Counting is done using a cascaded classifier. The first stage of the cascade detects the pedestrians or cyclists, whereas the second stage discriminates between these two classes. The system is based on a state-of-the-art pedestrian detector from the literature, which was augmented to explore the geometry and constraints of the target application. Namely, foreground detection, geometry prior information, and temporal moving direction (optical flow) are used as inputs to a multi-cue clustering algorithm. In this way, false alarms of the detector are reduced and better fitted detection windows are obtained. The presented project was the result of a partnership with the City of Pittsburgh with the objective of providing actionable data for government officials and advocates that promote bicycling and walking.
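
A hypothetical skeleton of the two-stage cascade: OpenCV's stock HOG person detector stands in for the paper's augmented first-stage detector, and the second stage, which the paper trains on collected data, is replaced by a trivial placeholder rule so the control flow is runnable.

```python
import cv2

# Stage 1: generic person detector (OpenCV's default HOG + linear SVM),
# a stand-in for the augmented detector described in the paper.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_candidates(frame):
    """Return candidate bounding boxes (x, y, w, h) for person-like objects."""
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return list(rects)

def classify_cyclist_vs_pedestrian(crop):
    """Stage 2 placeholder: the real system trains a classifier on collected
    data; a crude aspect-ratio rule stands in here."""
    h, w = crop.shape[:2]
    return "cyclist" if w > 0.6 * h else "pedestrian"

def count_frame(frame):
    counts = {"pedestrian": 0, "cyclist": 0}
    for (x, y, w, h) in detect_candidates(frame):
        label = classify_cyclist_vs_pedestrian(frame[y:y + h, x:x + w])
        counts[label] += 1
    return counts
```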


international conference on robotics and automation | 2016

Vision-based robot localization across seasons and in remote locations

Anirudh Viswanathan; Bernardo Rodrigues Pires; Daniel Huber

This paper studies the problem of GPS-denied unmanned ground vehicle (UGV) localization by matching ground images to a satellite map. We examine the realistic, but particularly challenging problem of navigation in remote areas using maps that may correspond to a different season of the year. The problem is difficult due to the limited UGV sensor horizon, the drastic shift in perspective between ground and aerial views, the absence of discriminative features in the environment due to the remote location, and the high variation in appearance of the satellite map caused by the change in seasons. We present an approach to image matching using semantic information that is invariant to seasonal change. This semantics-based matching is incorporated into a particle filter framework and successful localization of the ground vehicle is demonstrated for satellite maps captured in summer, spring, and winter.
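
A minimal sketch of how semantic descriptors can drive the measurement update of a particle filter, assuming per-pixel semantic labels are available for both the ground view and the satellite map; `satellite_desc_at` is a hypothetical helper, and the similarity and resampling choices are illustrative rather than the paper's.

```python
import numpy as np

def semantic_histogram(label_img, n_classes):
    """Descriptor: normalized frequency of semantic labels (e.g. road, tree, field)."""
    h = np.bincount(label_img.ravel(), minlength=n_classes).astype(float)
    return h / max(h.sum(), 1.0)

def particle_filter_step(particles, weights, motion, ground_desc,
                         satellite_desc_at, motion_noise=2.0):
    """One predict/update cycle of a planar particle filter over map positions."""
    # Predict: apply odometry plus noise.
    particles = particles + motion + np.random.normal(0, motion_noise, particles.shape)
    # Update: weight by similarity of ground and map semantic histograms.
    sims = np.array([1.0 - 0.5 * np.abs(ground_desc - satellite_desc_at(p)).sum()
                     for p in particles])
    weights = weights * np.clip(sims, 1e-6, None)
    weights = weights / weights.sum()
    # Systematic resampling when the effective sample size collapses.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        positions = (np.arange(n) + np.random.rand()) / n
        idx = np.clip(np.searchsorted(np.cumsum(weights), positions), 0, n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```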


international conference on image processing | 2012

Feature matching in growing databases

Bernardo Rodrigues Pires; José M. F. Moura

As feature-based image matching is applied to increasingly large-scale problems, it becomes necessary to match features across increasingly large databases. Current approaches are able to conduct such feature matching, but are not flexible enough to be applied to databases that may grow at runtime. As a solution to this problem, we present the Iterative k-d tree, which allows for the insertion of new features into the database at any time and stores information about previous queries so that previously searched features can be updated without having to be re-run. This new data structure was successfully used in the Spry algorithm to achieve better and faster results in situations where there is large movement between images. Additionally, experimental results show that the proposed method is significantly faster than current state-of-the-art algorithms when the database of features grows at runtime.
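
The Iterative k-d tree's internals are not given in the abstract; the sketch below shows the generic static-to-dynamic pattern the same interface suggests (a static tree plus a buffer of fresh insertions, merged periodically), using SciPy's cKDTree. It is a stand-in, not the paper's data structure, and it omits the caching of previous query results.

```python
import numpy as np
from scipy.spatial import cKDTree

class GrowingKDTree:
    """Nearest-neighbour index over a feature set that grows at runtime.

    Keeps a static k-d tree over the bulk of the data plus a small buffer of
    newly inserted features; the buffer is merged and the tree rebuilt once
    it grows past `rebuild_at`.
    """

    def __init__(self, data, rebuild_at=256):
        self.data = np.asarray(data, dtype=float)
        self.tree = cKDTree(self.data)
        self.buffer = []
        self.rebuild_at = rebuild_at

    def insert(self, feature):
        self.buffer.append(np.asarray(feature, dtype=float))
        if len(self.buffer) >= self.rebuild_at:
            self.data = np.vstack([self.data, np.array(self.buffer)])
            self.tree = cKDTree(self.data)
            self.buffer = []

    def nearest(self, query):
        """Nearest neighbour over both the indexed data and the raw buffer."""
        dist, idx = self.tree.query(query)
        best = (dist, self.data[idx])
        for f in self.buffer:
            d = np.linalg.norm(np.asarray(query, dtype=float) - f)
            if d < best[0]:
                best = (d, f)
        return best
```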


international conference on image processing | 2009

Shapes as empirical distributions

Bernardo Rodrigues Pires; José M. F. Moura

We address the problem of shape-based classification. We interpret the shape of an object as a probability distribution governing the location of the points of the object. An image of the object, represented as an arbitrary set of unlabeled points, corresponds to a random drawing from the shape probability distribution and can thus be analyzed as an empirical distribution. Using this framework, classification of shapes is robust to the number of points in the image and there is no need to solve the correspondence problem when comparing two images. The framework allows us to estimate geometrical transformations between images in a statistically meaningful way using maximum likelihood. We formulate the decision problem associated with shape classification as a hypothesis test for which we can characterize the performance. We particularize this framework to two-dimensional shapes related by an affine transformation. Under this assumption, we develop a descriptor invariant to affine movement, permutations, and sampling density, and robust to noise, occlusion, and reasonable non-linear deformations. Experimental results demonstrate the quality of our approach.
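
As an illustration of the kind of invariance the abstract describes (affine, permutation, and sampling density), the sketch below whitens a point cloud to a canonical frame and then histograms radial distances; this is a generic construction, not the paper's descriptor or its maximum-likelihood machinery.

```python
import numpy as np

def affine_normalize(points):
    """Map a 2-D point cloud to a canonical frame: zero mean, identity covariance.

    Any affine transform of the cloud maps to the same frame up to an
    orthogonal transform, so descriptors built on top only need rotation
    invariance.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    # Whitening transform: inverse matrix square root of the covariance.
    W = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return centered @ W

def radial_histogram(points, bins=16):
    """Permutation- and sampling-density-invariant descriptor: normalized
    histogram of distances from the centroid of the whitened cloud."""
    r = np.linalg.norm(affine_normalize(points), axis=1)
    h, _ = np.histogram(r, bins=bins, range=(0, 4), density=True)
    return h
```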

Collaboration


Dive into Bernardo Rodrigues Pires's collaborations.

Top Co-Authors

José M. F. Moura, Carnegie Mellon University
Martial Hebert, Carnegie Mellon University
Takeo Kanade, Carnegie Mellon University
Akihiro Tsukada, Carnegie Mellon University
Amanda Hunsaker, City University of New York
Daniel Huber, Carnegie Mellon University
Grace Campbell, University of Pittsburgh
Jian Gong, Carnegie Mellon University