Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bernard Ghanem is active.

Publication


Featured research published by Bernard Ghanem.


Computer Vision and Pattern Recognition | 2012

Robust visual tracking via multi-task sparse learning

Tianzhu Zhang; Bernard Ghanem; Si Liu; Narendra Ahuja

In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
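
The joint sparse learning step at the heart of MTT can be illustrated with a plain proximal gradient loop for the ℓ2,1 case (p = 2, q = 1), whose proximal operator is a closed-form row-wise shrinkage. A minimal NumPy sketch, not the authors' implementation: the names `mtt_joint_sparse` and `prox_l21` are illustrative, the accelerated (APG) momentum step is omitted, and the dynamic template update is left out.

```python
import numpy as np

def prox_l21(X, t):
    """Row-wise group soft-thresholding: the proximal operator of t * ||X||_{2,1}."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)) * X

def mtt_joint_sparse(D, Y, lam=0.1, n_iter=300):
    """Minimize 0.5 * ||D @ X - Y||_F^2 + lam * ||X||_{2,1} by proximal gradient.

    Columns of Y are particle observations and columns of X their codes;
    the l2,1 penalty couples the rows of X, so all particles are encouraged
    to select the same small set of dictionary templates (joint sparsity).
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = D.T @ (D @ X - Y)              # gradient of the smooth data term
        X = prox_l21(X - G / L, lam / L)   # forward-backward (closed-form) step
    return X
```

Replacing the plain iteration with Nesterov-style momentum gives the accelerated (APG) scheme; each update above is already a sequence of closed-form operations, which is what makes the formulation computationally attractive.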


Computer Vision and Pattern Recognition | 2015

ActivityNet: A large-scale video benchmark for human activity understanding

Fabian Caba Heilbron; Victor Escorcia; Bernard Ghanem; Juan Carlos Niebles

In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.
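
For the activity detection scenario, predicted temporal segments are typically matched to ground truth by temporal intersection-over-union (tIoU). This is the standard criterion for untrimmed-video detection benchmarks, not code from the paper; a minimal sketch:

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments, in seconds."""
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))  # overlap length
    union = (pe - ps) + (ge - gs) - inter        # combined extent
    return inter / union if union > 0 else 0.0
```

A predicted segment counts as a true positive when its tIoU with an unmatched ground-truth instance exceeds a threshold (commonly 0.5); averaging precision over recall then yields detection mAP.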


European Conference on Computer Vision | 2012

Low-rank sparse learning for robust visual tracking

Tianzhu Zhang; Bernard Ghanem; Si Liu; Narendra Ahuja

In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1].
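
The closed-form update central to this kind of low-rank learning is singular value thresholding, the proximal operator of the nuclear norm. A minimal NumPy sketch of that one step; the full LRST solver, with its background templates and sparse error term, is more involved:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||M||_*.

    Shrinking each singular value toward zero by tau promotes low rank;
    iterating this step inside a proximal solver is what lets low-rank
    representation learning proceed through closed-form updates.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Singular values at or below tau are zeroed outright, so the output's rank never exceeds the number of singular values above the threshold.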


International Journal of Computer Vision | 2015

Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

Tianzhu Zhang; Si Liu; Narendra Ahuja; Ming-Hsuan Yang; Bernard Ghanem

Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences.


Computer Vision and Pattern Recognition | 2015

Structural Sparse Tracking

Tianzhu Zhang; Si Liu; Changsheng Xu; Shuicheng Yan; Bernard Ghanem; Narendra Ahuja; Ming-Hsuan Yang


International Conference on Computer Vision | 2013

Low-Rank Sparse Coding for Image Classification

Tianzhu Zhang; Bernard Ghanem; Si Liu; Changsheng Xu; Narendra Ahuja


European Conference on Computer Vision | 2016

A Benchmark and Simulator for UAV Tracking

Matthias Mueller; Neil Smith; Bernard Ghanem


European Conference on Computer Vision | 2016

DAPs: Deep Action Proposals for Action Understanding

Victor Escorcia; Fabian Caba Heilbron; Juan Carlos Niebles; Bernard Ghanem


Computer Vision and Pattern Recognition | 2016

In Defense of Sparse Tracking: Circulant Sparse Tracker

Tianzhu Zhang; Adel Bibi; Bernard Ghanem


Computer Vision and Pattern Recognition | 2016

Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos

Fabian Caba Heilbron; Juan Carlos Niebles; Bernard Ghanem

Collaboration


Dive into Bernard Ghanem's collaborations.

Top Co-Authors

Tianzhu Zhang, Chinese Academy of Sciences
Fabian Caba Heilbron, King Abdullah University of Science and Technology
Victor Escorcia, King Abdullah University of Science and Technology
Ganzhao Yuan, South China University of Technology
Adel Bibi, King Abdullah University of Science and Technology
Neil Smith, King Abdullah University of Science and Technology
Baoyuan Wu, King Abdullah University of Science and Technology
Matthias Mueller, King Abdullah University of Science and Technology
Si Liu, Chinese Academy of Sciences
Peter Wonka, Arizona State University