Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hua Gao is active.

Publication


Featured research published by Hua Gao.


International Conference on Image Processing | 2014

Detecting emotional stress from facial expressions for driving safety

Hua Gao; Anıl Yüce; Jean-Philippe Thiran

Monitoring the attentive and emotional state of the driver is critical for driving safety and comfort. In this work, a real-time, non-intrusive monitoring system is developed that detects the driver's emotional states by analyzing facial expressions. The system considers two negative basic emotions, anger and disgust, as stress-related emotions. We detect an individual emotion in each video frame, and the decision on the stress level is made at the sequence level. Experimental results show that the developed system performs very well on simulated data, even with generic models. An additional pose normalization step reduces the impact of pose mismatch due to camera setup and pose variation, and hence further improves the detection accuracy.
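The frame-to-sequence aggregation described above can be sketched as a simple ratio threshold over per-frame emotion labels; the function name and the threshold value are assumptions for illustration, not values from the paper:

```python
def sequence_stress_decision(frame_emotions, stress_labels=("anger", "disgust"), threshold=0.3):
    """Aggregate per-frame emotion detections into a sequence-level stress decision.

    frame_emotions: per-frame emotion labels, e.g. ["neutral", "anger", ...]
    stress_labels:  emotions treated as stress-related (anger and disgust in the paper)
    threshold:      fraction of stress frames above which the sequence is flagged (assumed)
    """
    if not frame_emotions:
        return False
    stress_frames = sum(1 for e in frame_emotions if e in stress_labels)
    return stress_frames / len(frame_emotions) > threshold
```

A sequence is flagged only when stress-related frames dominate, which makes the decision robust to isolated per-frame misclassifications.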


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Discriminant multi-label manifold embedding for facial Action Unit detection

Anıl Yüce; Hua Gao; Jean-Philippe Thiran

This article describes a system for participation in the Facial Expression Recognition and Analysis (FERA2015) sub-challenge for spontaneous action unit occurrence detection. AU detection is by nature a multi-label classification problem, a fact overlooked by most existing work. The correlation information between AUs has the potential to increase detection accuracy. We investigate the multi-label AU detection problem by embedding the data on low-dimensional manifolds that preserve multi-label correlation. For this, we apply the multi-label Discriminant Laplacian Embedding (DLE) method as an extension to our base system. The system uses SIFT features around a set of facial landmarks that is enhanced with additional non-salient points around transient facial features. Both the base system and the DLE extension outperform the challenge baseline results for the two databases in the challenge, and achieve an F1-measure close to 50% on average on the testing partition (9.9% higher than the baseline, in the best case). The DLE extension proves useful for certain AUs, but also shows the need for more analysis to assess its benefits in general.
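The AU correlation information that the method exploits can be illustrated by computing a label correlation matrix over binary training annotations; this is only a sketch of the co-occurrence statistics a multi-label method can draw on, not the Discriminant Laplacian Embedding itself:

```python
import numpy as np

def au_label_correlation(Y):
    """Pearson correlation between AU occurrence labels across a training set.

    Y: (n_samples, n_aus) binary matrix; Y[i, j] = 1 if AU j occurs in sample i.
    Returns the (n_aus, n_aus) correlation matrix that multi-label methods
    such as DLE can use to couple related AUs.
    """
    Yc = Y - Y.mean(axis=0)                  # center each AU column
    cov = Yc.T @ Yc / max(len(Y) - 1, 1)     # sample covariance
    std = np.sqrt(np.diag(cov))
    std[std == 0] = 1.0                      # guard against constant labels
    return cov / np.outer(std, std)
```

Strongly positive entries mark AUs that tend to fire together (e.g. within one expression), which is exactly the structure a multi-label embedding can preserve.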


Computer Vision and Pattern Recognition | 2015

Towards robust cascaded regression for face alignment in the wild

Chengchao Qu; Hua Gao; Eduardo Monari; Jürgen Beyerer; Jean-Philippe Thiran

Most state-of-the-art solutions for localizing facial feature landmarks build on the recent success of the cascaded regression framework [7, 15, 34], which progressively predicts the shape update based on the previous shape estimate and its feature calculation. We revisit several core aspects of this framework and show that proper selection of regression method, local image feature and fine-tuning of further fitting strategies can achieve top performance for face alignment using the generic cascaded regression algorithm. In particular, our strongest model features Iteratively Reweighted Least Squares (IRLS) [18] for training robust regressors in the presence of outliers in the training data, RootSIFT [2] as the image patch descriptor that replaces the original Euclidean distance in SIFT [24] with the Hellinger distance, as well as coarse-to-fine fitting and in-plane pose normalization during shape update. We show the benefit of each proposed improvement by extensive individual experiments compared to the baseline approach [34] on the LFPW dataset [4]. On the currently most challenging 300-W dataset [28] and COFW dataset [4], we report state-of-the-art results that are superior to or on par with recently published algorithms.
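The RootSIFT descriptor mentioned above has a compact closed form: L1-normalize each SIFT vector, then take the element-wise square root, after which Euclidean distance between the transformed vectors corresponds to the Hellinger distance between the originals. A minimal sketch (function name assumed):

```python
import numpy as np

def root_sift(descriptors, eps=1e-7):
    """RootSIFT transform: L1-normalize each descriptor, then take the
    element-wise square root. Euclidean distance on the result equals the
    Hellinger distance on the L1-normalized inputs."""
    d = np.asarray(descriptors, dtype=np.float64)
    d = d / (np.abs(d).sum(axis=-1, keepdims=True) + eps)  # L1 normalization
    return np.sqrt(d)
```

Because the transform is a one-line post-processing step, it can be dropped into any SIFT-based pipeline without retraining the feature extractor.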


Workshop on Applications of Computer Vision | 2015

Towards Convenient Calibration for Cross-Ratio Based Gaze Estimation

Nuri Murat Arar; Hua Gao; Jean-Philippe Thiran

Eye gaze movements are considered a salient modality for human-computer interaction applications. Recently, cross-ratio (CR) based eye tracking methods have attracted increasing interest because they provide remote gaze estimation with a single uncalibrated camera. However, due to the simplifying assumptions in CR-based methods, their performance is lower than that of model-based approaches [8]. Several efforts have been made to improve accuracy by compensating for these assumptions with subject-specific calibration. This paper presents a CR-based automatic gaze estimation system that works accurately under natural head movements. A subject-specific calibration method based on regularized least-squares regression (LSR) is introduced that achieves higher accuracy than other state-of-the-art calibration methods. Experimental results also show that the proposed calibration method generalizes better when fewer calibration points are used. This enables user-friendly applications with minimal calibration effort without sacrificing too much accuracy. In addition, we adaptively fuse the estimates of the point of regard (PoR) from both eyes based on the visibility of eye features. The adaptive fusion scheme reduces the accuracy error by around 20% and also increases the estimation coverage under natural head movements.
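A regularized least-squares calibration of the kind described can be sketched as a ridge-regularized affine map from raw CR-based gaze estimates to known screen targets; the regularization weight, the affine form, and the function names here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fit_ridge_calibration(G, T, lam=1e-3):
    """Fit a regularized least-squares (ridge) map from raw cross-ratio gaze
    estimates G (n, 2) to calibration targets T (n, 2).

    Returns a (3, 2) weight matrix for the affine map [x, y, 1] -> [tx, ty].
    The regularization (lam) is what lets the fit generalize from only a
    few calibration points instead of overfitting them.
    """
    X = np.hstack([G, np.ones((len(G), 1))])      # augment with a bias term
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ T)

def apply_calibration(W, g):
    """Map one raw gaze estimate g = (x, y) through the fitted calibration."""
    x = np.append(np.asarray(g, dtype=float), 1.0)
    return x @ W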


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Robust gaze estimation based on adaptive fusion of multiple cameras

Nuri Murat Arar; Hua Gao; Jean-Philippe Thiran

Gaze movements play a crucial role in human-computer interaction (HCI) applications. Recently, gaze tracking systems with a wide variety of applications have attracted much interest from industry as well as the scientific community. The state-of-the-art gaze trackers are mostly non-intrusive and report high estimation accuracies. However, they require complex setups, such as camera and geometric calibration in addition to subject-specific calibration. In this paper, we introduce a multi-camera gaze estimation system that requires less setup and calibration effort from the user. The system is based on an adaptive fusion of multiple independent camera systems in which the gaze estimation relies on simple cross-ratio (CR) geometry. Experimental results on real data show that the proposed system achieves a significant accuracy improvement, of around 25%, over traditional CR-based single-camera systems through the novel adaptive multi-camera fusion scheme. The real-time system achieves <0.9° accuracy error with very little calibration data (5 points) under natural head movements, which is competitive with more complex systems. Hence, the proposed system enables fast and user-friendly gaze tracking with minimal user effort without sacrificing too much accuracy.
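The adaptive fusion idea, weighting each camera's point-of-regard estimate by how reliably it was produced, can be sketched as a weighted mean with failed estimates masked out; the paper's actual weighting scheme is not reproduced here, so the weights are an assumption:

```python
import numpy as np

def fuse_por(estimates, weights):
    """Adaptively fuse point-of-regard (PoR) estimates from several cameras.

    estimates: (k, 2) PoR estimates, one per camera; rows may contain np.nan
               when that camera failed to produce an estimate.
    weights:   (k,) non-negative reliability weights, e.g. derived from
               eye-feature visibility (assumed weighting for illustration).
    Returns the weighted mean over valid estimates, or None if none are valid.
    """
    est = np.asarray(estimates, dtype=float)
    w = np.asarray(weights, dtype=float).copy()
    valid = ~np.isnan(est).any(axis=1)
    w[~valid] = 0.0                      # drop cameras with no estimate
    if w.sum() == 0:
        return None
    return (w[:, None] * np.nan_to_num(est)).sum(axis=0) / w.sum()
```

Masking invalid cameras rather than failing outright is what increases estimation coverage under natural head movement, since the fused output degrades gracefully as cameras lose the eyes.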


IEEE Transactions on Circuits and Systems for Video Technology | 2017

A Regression-Based User Calibration Framework for Real-Time Gaze Estimation

Nuri Murat Arar; Hua Gao; Jean-Philippe Thiran

Eye movements play a very significant role in human-computer interaction (HCI), as they are natural and fast and contain important cues about human cognitive state and visual attention. Over the last two decades, many techniques have been proposed to accurately estimate gaze. Among these, video-based remote eye trackers have attracted much interest, since they enable non-intrusive gaze estimation. To achieve high estimation accuracies with remote systems, user calibration is inevitable in order to compensate for the estimation bias caused by person-specific eye parameters. Although several explicit and implicit user calibration methods have been proposed to ease the calibration burden, the procedure is still cumbersome and needs further improvement. In this paper, we present a comprehensive analysis of regression-based user calibration techniques. We propose a novel weighted least-squares regression-based user calibration method together with a real-time cross-ratio based gaze estimation framework. The proposed system achieves high estimation accuracy with minimum user effort, which leads to user-friendly HCI applications. Experimental results on both simulations and user experiments show that our framework achieves a significant performance improvement over state-of-the-art user calibration methods when only a few points are available for calibration.
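A weighted least-squares calibration of the kind proposed has a closed form in which each calibration sample's influence is scaled by a reliability weight; the function name, the affine form, and the uniform weights below are illustrative assumptions rather than the paper's exact method:

```python
import numpy as np

def fit_weighted_lsq(G, T, w):
    """Weighted least-squares calibration from raw gaze estimates to targets.

    Solves argmin_W  sum_i w_i * ||[g_i, 1] @ W - t_i||^2  in closed form.
    G: (n, 2) raw gaze estimates, T: (n, 2) true targets, w: (n,) weights
    giving more reliable calibration samples a larger say in the fit.
    Returns a (3, 2) affine weight matrix.
    """
    X = np.hstack([np.asarray(G, float), np.ones((len(G), 1))])
    Wd = np.diag(np.asarray(w, float))
    return np.linalg.solve(X.T @ Wd @ X, X.T @ Wd @ np.asarray(T, float))
```

With all weights equal this reduces to ordinary least squares; down-weighting noisy calibration points is what makes the fit usable when only a few points are collected.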


IEEE Transactions on Affective Computing | 2017

Action Units and Their Cross-Correlations for Prediction of Cognitive Load during Driving

Anıl Yüce; Hua Gao; Gabriel Louis Cuendet; Jean-Philippe Thiran

Driving requires the constant coordination of many body systems and the full attention of the person. Cognitive distraction (subsidiary mental load) of the driver is an important factor that decreases attention and responsiveness, which may result in human error and accidents. In this paper, we present a study of facial expressions of such mental diversion of attention. First, we introduce a multi-camera database of 46 people recorded while driving a simulator in two conditions, baseline and induced cognitive load using a secondary task. Then, we present an automatic system to differentiate between the two conditions, in which we use features extracted from Facial Action Unit (AU) values and their cross-correlations in order to exploit recurring synchronization and causality patterns. Both the recording and detection system are suitable for integration in a vehicle and a real-world application, e.g., an early warning system. We show that when the system is trained individually on each subject we achieve a mean accuracy and F-score of ~95 percent, and for the subject-independent tests ~68 percent.
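The cross-correlation features over AU time series can be sketched as normalized correlations at several temporal lags, which is one way to capture the synchronization and lead/lag patterns between two action units; the paper's actual descriptor is richer, so treat this as an illustration:

```python
import numpy as np

def au_cross_correlation(a, b, max_lag=5):
    """Normalized cross-correlation between two AU intensity time series
    at lags -max_lag..max_lag (a feature vector of length 2*max_lag + 1).
    A peak at a nonzero lag suggests one AU systematically leads the other."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = (a - a.mean()) / (a.std() + 1e-12)   # standardize each series
    b = (b - b.mean()) / (b.std() + 1e-12)
    feats = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]         # b delayed relative to a
        elif lag > 0:
            x, y = a[lag:], b[:-lag]         # a delayed relative to b
        else:
            x, y = a, b
        feats.append(float(np.mean(x * y)))
    return np.array(feats)
```

Concatenating such lag profiles over AU pairs yields a fixed-length feature vector that a standard classifier can use to separate the baseline and cognitive-load conditions.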


International Conference on Biometrics | 2015

Combining view-based pose normalization and feature transform for cross-pose face recognition

Hua Gao; Hazim Kemal Ekenel; Rainer Stiefelhagen



Workshop on Applications of Computer Vision | 2014

Extending explicit shape regression with mixed feature channels and pose priors

Matthias Richter; Hua Gao; Hazim Kemal Ekenel



IEEE Transactions on Biomedical Engineering | 2016

Facial Image Analysis for Fully Automatic Prediction of Difficult Endotracheal Intubation

Gabriel Louis Cuendet; Patrick Schoettker; Anıl Yüce; Matteo Sorci; Hua Gao; Christophe Perruchoud; Jean-Philippe Thiran


Collaboration


Dive into Hua Gao's collaborations.

Top Co-Authors

Jean-Philippe Thiran
École Polytechnique Fédérale de Lausanne

Hazim Kemal Ekenel
Istanbul Technical University

Anıl Yüce
École Polytechnique Fédérale de Lausanne

Nuri Murat Arar
École Polytechnique Fédérale de Lausanne

Gabriel Louis Cuendet
École Polytechnique Fédérale de Lausanne

Chengchao Qu
Karlsruhe Institute of Technology

Rainer Stiefelhagen
Karlsruhe Institute of Technology

Matteo Sorci
École Polytechnique Fédérale de Lausanne

Volker Gass
École Polytechnique Fédérale de Lausanne