Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Guillermo Garcia-Hernando is active.

Publications


Featured research published by Guillermo Garcia-Hernando.


Computer Vision and Image Understanding | 2016

Spatio-Temporal Hough Forest for efficient detection–localisation–recognition of fingerwriting in egocentric camera

Hyung Jin Chang; Guillermo Garcia-Hernando; Danhang Tang; Tae-Kyun Kim

Recognising fingerwriting in mid-air is a useful input tool for wearable egocentric cameras. In this paper we propose a novel framework for this purpose. Our method first detects a writing hand posture and locates the index fingertip in each frame. From the trajectory of the fingertip, the written character is localised and recognised simultaneously. To achieve this challenging task, we first present a contour-based, view-independent hand posture descriptor extracted with a novel signature function. The proposed descriptor serves both posture recognition and fingertip detection. To recognise characters from trajectories, we propose the Spatio-Temporal Hough Forest, which takes sequential data as input and performs regression in both the spatial and temporal domains; our method can therefore perform character recognition and localisation simultaneously. To evaluate our contributions, we introduce a new handwriting-in-mid-air dataset with labels for postures, fingertips and character locations. We design and conduct experiments on posture estimation, fingertip detection, and character recognition and localisation. In all experiments our method demonstrates superior accuracy and robustness compared to prior art.
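To make the voting scheme concrete, here is a minimal sketch of spatio-temporal Hough voting for simultaneous character localisation and recognition. It is an illustration under assumed inputs rather than the paper's implementation: `trajectory` holds normalised per-frame fingertip positions, and `votes_per_frame` stands in for the spatio-temporal offset votes that trained trees would store in their leaves.

```python
import numpy as np

def hough_vote_character(trajectory, votes_per_frame, n_classes,
                         grid=(64, 64, 64)):
    """Sketch of spatio-temporal Hough voting (hypothetical interface).

    trajectory      : (T, 2) fingertip (x, y) positions, normalised to [0, 1).
    votes_per_frame : for each frame t, a list of (dx, dy, dt, w, c) tuples:
                      spatial/temporal offsets to the character centre, a
                      vote weight and a class label, as would be stored in
                      the tree leaves reached by the frame's descriptor.
    """
    acc = np.zeros((n_classes,) + grid)  # one Hough accumulator per class
    T = len(trajectory)
    for t, (x, y) in enumerate(trajectory):
        for dx, dy, dt, w, c in votes_per_frame[t]:
            # each leaf vote points from the current frame towards the
            # character's spatio-temporal centre
            cx, cy, ct = x + dx, y + dy, (t + dt) / T
            if 0 <= cx < 1 and 0 <= cy < 1 and 0 <= ct < 1:
                i = int(cx * grid[0])
                j = int(cy * grid[1])
                k = int(ct * grid[2])
                acc[c, i, j, k] += w
    # the strongest peak gives the character class and its (x, y, t) centre
    c, i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return c, (i / grid[0], j / grid[1], k / grid[2])
```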


Computer Vision and Pattern Recognition | 2017

Transition Forests: Learning Discriminative Temporal Transitions for Action Recognition and Detection

Guillermo Garcia-Hernando; Tae-Kyun Kim

A human action can be seen as a sequence of transitions between body poses over time, where each transition depicts a temporal relation between two poses. Recognizing actions thus involves learning a classifier sensitive to these pose transitions as well as to static poses. In this paper, we introduce a novel method called transition forests: an ensemble of decision trees that learns to discriminate both static poses and transitions between pairs of frames. During training, node splitting is driven by alternating two criteria: the standard classification objective that maximizes discriminative power on individual frames, and a proposed objective on pairwise frame transitions. Growing the trees tends to group frames that have similar associated transitions and share the same action label, incorporating temporal information that would not otherwise be available. Unlike conventional decision trees, where the best split in a node is determined independently of other nodes, transition forests find the best splits of nodes jointly (within a layer) in order to incorporate distant node transitions. When inferring the class label of a new frame, it is passed down the trees and the prediction is made from the previous frames' predictions and the current one, in an efficient and online manner. We apply our method to varied skeleton-based action recognition and online detection datasets, demonstrating its advantages over several baselines and state-of-the-art approaches.
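The alternating split criterion can be sketched as follows. This is a simplification for illustration, not the authors' code: transitions are summarised here as one discrete label per training sample, whereas the paper defines them on pairs of frames, and the joint layer-wise split optimisation is omitted.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a discrete label set."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def split_gain(labels, go_left):
    """Information gain of a candidate binary split."""
    left, right = labels[go_left], labels[~go_left]
    n = len(labels)
    return (entropy(labels)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

def node_objective(features, frame_labels, transition_labels,
                   feat, threshold, use_transitions):
    """Alternate between the standard per-frame classification gain and a
    gain computed on labels that summarise pairwise frame transitions."""
    go_left = features[:, feat] < threshold
    target = transition_labels if use_transitions else frame_labels
    return split_gain(target, go_left)
```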


International Conference on Machine Vision | 2015

Novel spatio-temporal features for fingertip writing recognition in egocentric viewpoint

Muhammad Zaid Hameed; Guillermo Garcia-Hernando

In this paper, we propose a novel feature extraction scheme for fingertip writing recognition in mid-air from an egocentric viewpoint. The inherent challenges of egocentric vision, e.g. rapid camera motion and objects appearing and disappearing from the scene, may cause the fingertip to be detected in frames that are non-uniformly separated in time. Most existing approaches discard this temporal information during feature extraction, although it could be used to improve performance in ego-vision tasks. The proposed scheme extracts spatio-temporal features from the trajectory of the hand movement, which are then used with hidden Markov models for classification. The proposed feature set outperforms current trajectory-based feature schemes and achieves a 96.7% recognition rate on a novel fingertip trajectory dataset.
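The sketch below illustrates the core idea under our own assumptions (it is not the paper's exact feature set): the time gap between consecutive fingertip detections is kept as a feature instead of being discarded, and one Gaussian HMM per character class, here trained with the hmmlearn library, classifies a trajectory by maximum log-likelihood.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; any HMM library would do

def trajectory_features(points, timestamps):
    """Spatio-temporal features for a fingertip trajectory (illustrative).

    points     : (T, 2) fingertip positions.
    timestamps : (T,) detection times; gaps are non-uniform in egocentric
                 video, and we keep them as an explicit feature.
    """
    d = np.diff(points, axis=0)                      # displacement vectors
    dt = np.diff(timestamps)[:, None]                # non-uniform time gaps
    angle = np.arctan2(d[:, 1], d[:, 0])[:, None]    # movement direction
    speed = np.linalg.norm(d, axis=1, keepdims=True) / dt
    return np.hstack([angle, speed, dt])

def train(sequences_by_class):
    """Fit one Gaussian HMM per character class."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                          # stacked feature rows
        lengths = [len(s) for s in seqs]             # per-sequence lengths
        models[label] = hmm.GaussianHMM(
            n_components=6, covariance_type="diag", n_iter=50).fit(X, lengths)
    return models

def classify(models, seq):
    """Pick the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))
```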


Machine Vision and Applications | 2018

Spatio-temporal elastic cuboid trajectories for efficient fight recognition using Hough forests

Ismael Serrano; Oscar Déniz; Gloria Bueno; Guillermo Garcia-Hernando; Tae-Kyun Kim

While action recognition has become an important line of research in computer vision, the recognition of particular events such as aggressive behaviors, or fights, has received comparatively little attention. Such detectors may be exceedingly useful in video surveillance scenarios such as psychiatric centers and prisons, or even in personal smartphone cameras, and their potential usability has caused a surge of interest in developing fight or violence detectors. The key requirement in this setting is efficiency: these methods should be computationally very fast. In this paper, spatio-temporal elastic cuboid trajectories are proposed for fight recognition. The method uses blob movements to create trajectories that capture and model the different motions specific to a fight, and it is robust to the specific shapes and positions of the individuals. Additionally, the standard Hough forests classifier is adapted to work with this descriptor. The method is compared to nine other related methods on four datasets. The results show that the proposed method obtains the best accuracy on each dataset while remaining computationally efficient.
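As a rough illustration of the blob-movement idea (not the paper's descriptor), the sketch below uses simple frame differencing with OpenCV to find motion blobs and collects their centroids per frame; tracking those centroids over time yields the raw trajectories that a descriptor such as spatio-temporal elastic cuboids would then model.

```python
import cv2

def blob_centroids(frames, min_area=200, diff_thresh=25):
    """Per-frame centroids of motion blobs found by frame differencing.

    frames : list of BGR images from one video clip.
    Returns, for each frame after the first, the centroids of motion
    blobs larger than min_area pixels.
    """
    per_frame = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)               # motion magnitude
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        n, _, stats, cents = cv2.connectedComponentsWithStats(mask)
        # keep centroids of sufficiently large blobs (label 0 is background)
        per_frame.append([tuple(cents[i]) for i in range(1, n)
                          if stats[i, cv2.CC_STAT_AREA] >= min_area])
        prev = gray
    return per_frame
```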


Computer Vision and Pattern Recognition | 2018

First-Person Hand Action Benchmark With RGB-D Videos and 3D Hand Pose Annotations

Guillermo Garcia-Hernando; Shanxin Yuan; Seungryul Baek; Tae-Kyun Kim


Computer Vision and Pattern Recognition | 2018

Depth-Based 3D Hand Pose Estimation: From Current Achievements to Future Goals

Shanxin Yuan; Guillermo Garcia-Hernando; Björn Stenger; Gyeongsik Moon; Ju Yong Chang; Kyoung Mu Lee; Pavlo Molchanov; Jan Kautz; Sina Honari; Liuhao Ge; Junsong Yuan; Xinghao Chen; Guijin Wang; Fan Yang; Kai Akiyama; Yang Wu; Qingfu Wan; Meysam Madadi; Sergio Escalera; Shile Li; Dongheui Lee; Iason Oikonomidis; Antonis A. Argyros; Tae-Kyun Kim


Workshop on Applications of Computer Vision | 2016

Transition Hough forest for trajectory-based action recognition

Guillermo Garcia-Hernando; Hyung Jin Chang; Ismael Serrano; Oscar Déniz; Tae-Kyun Kim


arXiv: Computer Vision and Pattern Recognition | 2017

The 2017 Hands in the Million Challenge on 3D Hand Pose Estimation

Shanxin Yuan; Qi Ye; Guillermo Garcia-Hernando; Tae-Kyun Kim


arXiv: Computer Vision and Pattern Recognition | 2017

3D Hand Pose Estimation: From Current Achievements to Future Goals.

Shanxin Yuan; Guillermo Garcia-Hernando; Björn Stenger; Gyeongsik Moon; Ju Yong Chang; Kyoung Mu Lee; Pavlo Molchanov; Jan Kautz; Sina Honari; Liuhao Ge; Junsong Yuan; Xinghao Chen; Guijin Wang; Fan Yang; Kai Akiyama; Yang Wu; Qingfu Wan; Meysam Madadi; Sergio Escalera; Shile Li; Dongheui Lee; Iason Oikonomidis; Antonis A. Argyros; Tae-Kyun Kim


arXiv: Computer Vision and Pattern Recognition | 2018

Task-Oriented Hand Motion Retargeting for Dexterous Manipulation Imitation.

Dafni Antotsiou; Guillermo Garcia-Hernando; Tae-Kyun Kim

Collaboration


Dive into Guillermo Garcia-Hernando's collaborations.

Top Co-Authors

Tae-Kyun Kim (Imperial College London)
Shanxin Yuan (Imperial College London)
Fan Yang (Nara Institute of Science and Technology)
Kai Akiyama (Nara Institute of Science and Technology)
Yang Wu (Nara Institute of Science and Technology)
Gyeongsik Moon (Seoul National University)
Ju Yong Chang (Seoul National University)