
Publication


Featured research published by Gedas Bertasius.


Computer Vision and Pattern Recognition | 2015

DeepEdge: A multi-scale bifurcated deep network for top-down contour detection

Gedas Bertasius; Jianbo Shi; Lorenzo Torresani

Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then uses them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection.


Computer Vision and Pattern Recognition | 2016

Semantic Segmentation with Boundary Neural Fields

Gedas Bertasius; Jianbo Shi; Lorenzo Torresani

The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result, FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use color-based pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively and qualitatively.
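To make the abstract's idea of boundary-modulated pairwise potentials concrete, here is a heavily simplified sketch of a global energy of this flavor: unary costs from FCN class scores, plus a penalty for label disagreement between neighboring pixels that weakens where a strong boundary is predicted. The exact potentials, weights, and optimization in the paper differ; everything below (the function, the toy arrays) is an illustrative assumption.

```python
import numpy as np

def boundary_energy(labels, unary, pairs, boundary_strength):
    """Simplified boundary-modulated energy: sum of unary class costs plus
    a pairwise penalty for neighbors with different labels, discounted
    where a strong predicted boundary separates them."""
    e = sum(unary[i, labels[i]] for i in range(len(labels)))
    for i, j in pairs:
        if labels[i] != labels[j]:
            # weaker penalty across strong predicted boundaries
            e += 1.0 - max(boundary_strength[i], boundary_strength[j])
    return e

# Toy 3-pixel chain: pixels 0,1 prefer class 0; pixel 2 prefers class 1.
unary = np.array([[0.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])
pairs = [(0, 1), (1, 2)]
boundary = np.array([0.0, 0.0, 0.9])  # strong predicted boundary at pixel 2
print(boundary_energy([0, 0, 1], unary, pairs, boundary))
print(boundary_energy([0, 1, 1], unary, pairs, boundary))
```

The labeling that switches class exactly at the strong boundary receives the lower energy, which is the behavior the pairwise terms are meant to encourage.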


Computer Vision and Pattern Recognition | 2017

Convolutional Random Walk Networks for Semantic Image Segmentation

Gedas Bertasius; Lorenzo Torresani; Stella X. Yu; Jianbo Shi

Most current semantic segmentation methods rely on fully convolutional networks (FCNs). However, their use of large receptive fields and many pooling layers causes low spatial resolution inside the deep layers. This leads to predictions with poor localization around the boundaries. Prior work has attempted to address this issue by post-processing predictions with CRFs or MRFs. But such models often fail to capture semantic relationships between objects, which causes spatially disjoint predictions. To overcome these problems, recent methods integrated CRFs or MRFs into an FCN framework. The downside of these new models is that they have much higher complexity than traditional FCNs, which renders training and testing more challenging. In this work we introduce a simple yet effective Convolutional Random Walk Network (RWN) that addresses the issues of poor boundary localization and spatially fragmented predictions with very little increase in model complexity. Our proposed RWN jointly optimizes the objectives of pixelwise affinity and semantic segmentation. It combines these two objectives via a novel random walk layer that enforces consistent spatial grouping in the deep layers of the network. Our RWN is implemented using standard convolution and matrix multiplication. This allows easy integration into existing FCN frameworks and enables end-to-end training of the whole network via standard back-propagation. Our implementation of RWN requires just 131 additional parameters compared to traditional FCNs, and yet it consistently produces an improvement over the FCNs on semantic segmentation and scene labeling.
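The random walk layer the abstract describes amounts to row-normalizing a pixel affinity matrix and multiplying it with the per-pixel class scores, so that similar pixels pull each other toward consistent labels. A minimal NumPy sketch of that single diffusion step; the affinity values and scores below are invented toy numbers, not the paper's learned affinities.

```python
import numpy as np

def random_walk_step(affinity, scores):
    """One random-walk diffusion step: row-normalize the (N, N) pixel
    affinity matrix into a row-stochastic transition matrix, then
    propagate the (N, C) per-pixel class scores between similar pixels."""
    row_sums = affinity.sum(axis=1, keepdims=True)
    transition = affinity / np.maximum(row_sums, 1e-12)  # rows sum to 1
    return transition @ scores  # plain matrix multiplication

# Toy example: 3 pixels, 2 classes; pixels 0 and 1 are strongly affine,
# pixel 2 is isolated and keeps its scores unchanged.
A = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
S = np.array([[5.0, 0.0],
              [0.0, 1.0],   # noisy pixel pulled toward its neighbor
              [0.0, 3.0]])
print(random_walk_step(A, S))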


Medical Image Computing and Computer Assisted Intervention | 2016

Automatic Lymph Node Cluster Segmentation Using Holistically-Nested Neural Networks and Structured Optimization in CT Images

Isabella Nogues; Le Lu; Xiaosong Wang; Holger R. Roth; Gedas Bertasius; Nathan Lay; Jianbo Shi; Yohannes Tsehay; Ronald M. Summers

Lymph node segmentation is an important yet challenging problem in medical image analysis. The presence of enlarged lymph nodes (LNs) signals the onset or progression of a malignant disease or infection. In the thoracoabdominal (TA) body region, neighboring enlarged LNs often spatially collapse into “swollen” lymph node clusters (LNCs) (up to 9 LNs in our dataset). Accurate segmentation of TA LNCs is complicated by the noticeably poor intensity and texture contrast among neighboring LNs and surrounding tissues, and has not been addressed in previous work. This paper presents a novel approach to TA LNC segmentation that combines holistically-nested neural networks (HNNs) and structured optimization (SO). Two HNNs, built upon recent fully convolutional networks (FCNs) and deeply supervised networks (DSNs), are trained to learn the LNC appearance (HNN-A) or contour (HNN-C) probabilistic output maps, respectively. The HNN first produces class label maps with the same resolution as the input image, like an FCN. Afterwards, HNN predictions for LNC appearance and contour cues are formulated into the unary and pairwise terms of conditional random fields (CRFs), which are subsequently solved using one of three different SO methods: dense CRF, graph cuts, and boundary neural fields (BNF). BNF yields the highest quantitative results. Its mean Dice coefficient between segmented and ground truth LN volumes is 82.1% ± 9.6%, compared to 73.0% ± 17.6% for HNN-A alone. The LNC relative volume (cm³) difference is 13.7% ± 13.1%, a promising result for the development of LN imaging biomarkers based on volumetric measurements.
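The Dice coefficient reported above is the standard overlap measure 2|A∩B| / (|A| + |B|) between the segmented and ground-truth volumes. A minimal NumPy sketch on toy binary volumes; the arrays below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice overlap between two binary volumes: 2|A∩B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    if total == 0:
        return 1.0  # both volumes empty: treat as perfect overlap
    return 2.0 * intersection / total

# Toy 3D volumes standing in for a segmented and a ground-truth LN cluster.
seg = np.zeros((4, 4, 4), dtype=np.uint8)
gt = np.zeros((4, 4, 4), dtype=np.uint8)
seg[1:3, 1:3, 1:3] = 1   # 8 voxels
gt[1:3, 1:3, 1:4] = 1    # 12 voxels, 8 of them shared with seg
print(dice_coefficient(seg, gt))  # 2*8 / (8+12) = 0.8
```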


International Conference on Computer Vision | 2017

Am I a Baller? Basketball Performance Assessment from First-Person Videos

Gedas Bertasius; Hyun Soo Park; Stella X. Yu; Jianbo Shi

This paper presents a method to assess a basketball player's performance from his/her first-person video. A key challenge lies in the fact that the evaluation metric is highly subjective and specific to a particular evaluator. We leverage the first-person camera to address this challenge. The spatiotemporal visual semantics provided by a first-person view allows us to reason about the camera wearer's actions while he/she is participating in an unscripted basketball game. Our method takes a player's first-person video and provides a performance measure that is specific to an evaluator's preference. To achieve this goal, we first use a convolutional LSTM network to detect atomic basketball events from first-person videos. Our network's ability to zoom in to the salient regions addresses the issue of severe camera-wearer head movement in first-person videos. The detected atomic events are then passed through Gaussian mixtures to construct a highly non-linear visual spatiotemporal basketball assessment feature. Finally, we use this feature to learn a basketball assessment model from pairs of labeled first-person basketball videos, for which a basketball expert indicates which of the two players is better. We demonstrate that despite not knowing the basketball evaluator's criterion, our model learns to accurately assess the players in real-world games. Furthermore, our model can also discover basketball events that contribute positively and negatively to a player's performance.
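The learning-from-pairs step ("which of the two players is better") can be illustrated with a much-simplified linear stand-in: fit a weight vector so that the better player of each labeled pair scores higher, via a logistic loss on the score difference. The features, dimensions, and loss below are assumptions for illustration, not the paper's actual assessment model.

```python
import numpy as np

def learn_assessment_model(pairs, lr=0.1, steps=500):
    """Fit a linear skill model w from pairwise labels: for each pair
    (better_feat, worse_feat), take a gradient-ascent step that pushes
    w toward scoring the better player higher under a logistic model."""
    dim = len(pairs[0][0])
    w = np.zeros(dim)
    for _ in range(steps):
        for better, worse in pairs:
            diff = np.asarray(better) - np.asarray(worse)
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(better beats worse)
            w += lr * (1.0 - p) * diff            # logistic gradient step
    return w

# Hypothetical 2-D features, e.g. (made-shot rate, turnover rate).
pairs = [([0.8, 0.1], [0.3, 0.4]),
         ([0.6, 0.2], [0.2, 0.5])]
w = learn_assessment_model(pairs)
print(w @ np.array([0.8, 0.1]) > w @ np.array([0.3, 0.4]))  # True
```

The learned weights also hint at which feature dimensions (here, hypothetical event statistics) contribute positively or negatively to the score, loosely mirroring the paper's event-discovery observation.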


International Conference on Computer Vision | 2015

High-for-Low and Low-for-High: Efficient Boundary Detection from Deep Object Features and Its Applications to High-Level Vision

Gedas Bertasius; Jianbo Shi; Lorenzo Torresani


Robotics: Science and Systems | 2017

First-Person Action-Object Detection with EgoNet.

Gedas Bertasius; Hyun Soo Park; Stella X. Yu; Jianbo Shi


arXiv: Computer Vision and Pattern Recognition | 2018

Object Detection in Video with Spatiotemporal Sampling Networks.

Gedas Bertasius; Lorenzo Torresani; Jianbo Shi


arXiv: Computer Vision and Pattern Recognition | 2015

Exploiting Egocentric Object Prior for 3D Saliency Detection

Gedas Bertasius; Hyun Soo Park; Jianbo Shi


Computer Vision and Pattern Recognition | 2018

Egocentric Basketball Motion Planning From a Single First-Person Image

Gedas Bertasius; Aaron Chan; Jianbo Shi

Collaboration


Dive into Gedas Bertasius's collaborations.

Top Co-Authors

Jianbo Shi (University of Pennsylvania)
Stella X. Yu (University of California)
Hyun Soo Park (Carnegie Mellon University)
Aaron Chan (University of Southern California)
Isabella Nogues (National Institutes of Health)
Le Lu (National Institutes of Health)
Nathan Lay (National Institutes of Health)
Ronald M. Summers (National Institutes of Health)