Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ninghang Hu is active.

Publication


Featured research published by Ninghang Hu.


International Conference on Robotics and Automation | 2014

Learning latent structure for activity recognition

Ninghang Hu; Gwenn Englebienne; Zhongyu Lou; Ben J. A. Kröse

We present a novel latent discriminative model for human activity recognition. Unlike approaches that require conditional independence assumptions, our model is very flexible in encoding the full connectivity among observations, latent states, and activity states. The model is able to capture a richer class of contextual information in both state-state and observation-state pairs. Although loops are present in the model, we can treat the graphical model as a linear-chain structure, where exact inference is tractable. The model is therefore very efficient in both inference and learning. The parameters of the graphical model are learned with the Structured Support Vector Machine (Structured-SVM). A data-driven approach is used to initialize the latent variables, so no hand labeling of the latent states is required. Experimental results on the CAD-120 benchmark dataset show that our model outperforms the state-of-the-art approach by over 5% in both precision and recall, while being more efficient in computation.
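As a hedged illustration of the linear-chain inference the abstract refers to, the sketch below runs max-sum (Viterbi) decoding over per-frame state scores and pairwise transition scores. The arrays, scores, and function name are hypothetical, not the paper's learned Structured-SVM parameters.

```python
import numpy as np

def chain_decode(emission, transition):
    """Max-sum (Viterbi) decoding over a linear-chain model.

    emission:   (T, S) array of per-frame scores for each state
    transition: (S, S) array of pairwise state-state scores
    Returns the highest-scoring state sequence as a list of ints.
    """
    T, S = emission.shape
    score = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    score[0] = emission[0]
    for t in range(1, T):
        # score of every (previous state, current state) pair at step t
        total = score[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = np.argmax(total, axis=0)   # best predecessor per state
        score[t] = np.max(total, axis=0)
    # backtrack from the best final state
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Because the chain is decoded exactly in a single forward-backward pass, inference cost is linear in the sequence length, which is what makes training and prediction efficient.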


Paladyn: Journal of Behavioral Robotics | 2013

Assistive technology design and development for acceptable robotics companions for ageing years

Farshid Amirabdollahian; R. op den Akker; Sandra Bedaf; Richard Bormann; Heather Draper; Vanessa Evers; J. Gallego Pérez; GertJan Gelderblom; C. Gutierrez Ruiz; David J. Hewson; Ninghang Hu; Ben J. A. Kröse; Hagen Lehmann; Patrizia Marti; H. Michel; H. Prevot-Huille; Ulrich Reiser; Joe Saunders; Tom Sorell; J. Stienstra; Dag Sverre Syrdal; Mick L. Walters; Kerstin Dautenhahn

A new stream of research and development responds to changes in life expectancy across the world. It includes technologies which enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and the issues surrounding technology development for assistive purposes. The project responds to some overlooked aspects of technology design, divided into multiple areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and monitoring persons' activities at home. To bring these aspects together, a dedicated task ensures technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot®3, in the context of a smart-home environment utilising a multitude of sensor arrays. Formative and summative evaluation cycles are then used to assess the emerging prototype, identifying acceptable behaviours and roles for the robot (for example, a role as butler or trainer) while also comparing user requirements to achieved progress. In a novel approach, the project considers ethical concerns: by highlighting principles such as autonomy, independence, enablement, safety and privacy, it provides a discussion medium where user views on these principles, and the existing tensions between some of them (for example, between privacy and autonomy on the one hand and safety on the other), can be captured and considered in design cycles and throughout project developments.


International Conference on Human System Interactions | 2013

Accompany: Acceptable robotiCs COMPanions for AgeiNG Years — Multidimensional aspects of human-system interactions

Farshid Amirabdollahian; Rieks op den Akker; Sandra Bedaf; Richard Bormann; Heather Draper; Vanessa Evers; Gert Jan Gelderblom; Carolina Gutierrez Ruiz; David J. Hewson; Ninghang Hu; Iolanda Iacono; Kheng Lee Koay; Ben J. A. Kröse; Patrizia Marti; H. Michel; Hélène Prevot-Huille; Ulrich Reiser; Joe Saunders; Tom Sorell; Kerstin Dautenhahn

With changes in life expectancy across the world, technologies enhancing the well-being of individuals, specifically older people, are the subject of a new stream of research and development. In this paper we present the ACCOMPANY project, a pan-European project which focuses on home companion technologies. The project aims to progress beyond the state of the art in multiple areas such as empathic and social human-robot interaction, robot learning and memory visualisation, monitoring persons and chores at home, and technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot®3, in the context of a smart-home environment utilising a multitude of sensor arrays. The prototype resulting from integrating these developments undergoes multiple formative cycles and a summative evaluation cycle to identify acceptable behaviours and roles for the robot, for example as a butler or a trainer. Furthermore, the evaluation activities will use an evaluation grid to assess achievement of the identified user requirements, formulated in the form of distinct scenarios. Finally, the project considers ethical concerns: by highlighting principles such as autonomy, independence, enablement, safety and privacy, it provides a discussion medium where user views on these principles, and the existing tensions between some of them (for example, between privacy and autonomy on the one hand and safety on the other), can be captured and considered in design cycles and throughout project developments.


Lecture Notes in Computer Science | 2012

Bayesian fusion of ceiling mounted camera and laser range finder on a mobile robot for people detection and localization

Ninghang Hu; Gwenn Englebienne; Ben J. A. Kröse

Robust people detection and localization is a prerequisite for many applications where service robots interact with humans. Future robots will no longer be stand-alone, but will operate in smart environments equipped with sensor systems for context awareness and activity recognition. This paper describes a probabilistic framework for the fusion of data from a laser range finder on a mobile robot and an overhead camera fixed in a domestic environment. The contribution of the framework is that it enables seamless integration with other sensors. For tracking multiple people, a probabilistic particle filter tracker can be used. We show that the fusion improves on the results of the individual subsystems.
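The kind of probabilistic fusion described above can be sketched in one dimension: assuming the two sensors' measurements are conditionally independent, the posterior over positions is proportional to the product of their likelihoods. All numbers (means, spreads, grid extent) are hypothetical values for illustration, not the paper's actual sensor models.

```python
import numpy as np

def gauss(x, mean, std):
    """Unnormalised Gaussian likelihood over the grid."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2)

# Discretised positions along a corridor (metres); hypothetical grid.
x = np.linspace(0.0, 10.0, 1001)

# Independent position likelihoods from the two sensors.
p_laser = gauss(x, mean=4.2, std=0.6)    # laser range finder on the robot
p_camera = gauss(x, mean=4.6, std=0.3)   # fixed overhead camera

prior = np.ones_like(x)                  # uninformative prior over positions

# Bayes: posterior ∝ prior × product of independent sensor likelihoods.
posterior = prior * p_laser * p_camera
posterior /= posterior.sum()             # normalise over the grid

estimate = x[np.argmax(posterior)]       # fused MAP position estimate
```

Note how the fused estimate lies between the two sensor readings but closer to the more certain one (the camera, with the smaller spread), which is the qualitative behaviour that makes fusion outperform either subsystem alone.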


British Machine Vision Conference | 2015

Color Constancy by Deep Learning

Zhongyu Lou; Theo Gevers; Ninghang Hu; Marcel P. Lucassen

Computational color constancy aims to estimate the color of the light source. The performance of many vision tasks, such as object detection and scene understanding, may benefit from color constancy by estimating the correct object colors. Since traditional color constancy methods are based on specific assumptions, none of those methods can be used as a universal predictor. Further, training-based color constancy approaches use shallow learning schemes, which suffer from limited learning capacity. In this paper, we propose a framework using Deep Neural Networks (DNNs) to obtain an accurate light source estimator to achieve color constancy. We formulate color constancy as a DNN-based regression approach to estimate the color of the light source. The model is trained using datasets of more than a million images. Experiments show that the proposed algorithm outperforms the state-of-the-art by 9%; in cross-dataset validation in particular, it reduces the median angular error by 35%. Further, in our implementation, the algorithm operates at more than 100 fps.


Robot and Human Interactive Communication | 2014

A two-layered approach to recognize high-level human activities

Ninghang Hu; Gwenn Englebienne; Ben J. A. Kröse

Automated human activity recognition is an essential task for Human Robot Interaction (HRI). A successful activity recognition system enables an assistant robot to provide precise services. In this paper, we present a two-layered approach that can recognize sub-level activities and high-level activities successively. In the first layer, the low-level activities are recognized based on the RGB-D video. In the second layer, we use the recognized low-level activities as input features for estimating high-level activities. Our model is embedded with a latent node, so that it can capture a richer class of sub-level semantics compared with the traditional approach. Our model is evaluated on a challenging benchmark dataset. We show that the proposed approach outperforms the single-layered approach, suggesting that the hierarchical nature of the model is able to better explain the observed data. The results also show that our model outperforms the state-of-the-art approach in accuracy, precision and recall.


International Conference on Robotics and Automation | 2014

Multi-user identification and efficient user approaching by fusing robot and ambient sensors

Ninghang Hu; Richard Bormann; Thomas Zwolfer; Ben J. A. Kröse

We describe a novel framework that combines an overhead camera and a robot RGB-D sensor for real-time people finding. Finding people is one of the most fundamental tasks in robot home care scenarios and it consists of many components, e.g. people detection, people tracking, face recognition, and robot navigation. Researchers have extensively worked on these components, but as isolated tasks. Surprisingly, little attention has been paid to bridging these components into an entire system. In this paper, we integrate the separate modules seamlessly, and evaluate the entire system in a robot-care scenario. The results show largely improved efficiency when the robot system is aided by the localization system of the overhead cameras.


Intelligent Robots and Systems | 2016

Human intent forecasting using intrinsic kinematic constraints

Ninghang Hu; Aaron M. Bestick; Gwenn Englebienne; Ruzena Bajcsy; Ben J. A. Kröse

The performance of human-robot collaboration tasks can be improved by incorporating predictions of the human collaborator's movement intentions. These predictions allow a collaborative robot both to provide appropriate assistance and to plan its own motion so it does not interfere with the human. In the specific case of human reach intent prediction, prior work has divided the task into two pieces: recognition of human activities and prediction of reach intent. In this work, we propose a joint model for simultaneous recognition of human activities and prediction of reach intent based on skeletal pose. Since future reach intent is tightly linked to the action a person is performing at present, we hypothesize that this joint model will produce better performance on the recognition and prediction tasks than past approaches. In addition, our approach incorporates a simple human kinematic model which allows us to generate features that compactly capture the reachability of objects in the environment and the motion cost to reach those objects, which we anticipate will improve performance. Experiments using the CAD-120 benchmark dataset show that both the joint modeling approach and the human kinematic features give improved F1 scores versus the previous state of the art.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Expression-Invariant Age Estimation Using Structured Learning

Zhongyu Lou; Fares Alnajar; Jose M. Alvarez; Ninghang Hu; Theo Gevers


Intelligent Robots and Systems | 2015

A hierarchical representation for human activity recognition with noisy labels

Ninghang Hu; Gwenn Englebienne; Zhongyu Lou; Ben J. A. Kröse
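The reachability and motion-cost features mentioned in the intent-forecasting abstract above can be illustrated with a minimal sketch. The feature definitions, names, and numbers below are assumptions for illustration, not the paper's actual kinematic model.

```python
import numpy as np

def reach_features(shoulder, hand, objects, arm_length):
    """Per-object reach features from a toy kinematic arm model.

    shoulder, hand: 3-D joint positions from a skeletal pose
    objects:        (N, 3) array of object positions
    arm_length:     maximum shoulder-to-hand extension (metres)
    Returns an (N, 3) array: [distance, reachable flag, motion cost].
    All definitions here are illustrative, not from the paper.
    """
    feats = []
    for obj in objects:
        dist = np.linalg.norm(obj - shoulder)     # shoulder-to-object distance
        reachable = float(dist <= arm_length)     # inside the arm's workspace?
        motion_cost = np.linalg.norm(obj - hand)  # how far the hand must travel
        feats.append([dist, reachable, motion_cost])
    return np.array(feats)
```

Features of this shape are compact (a few numbers per object regardless of scene complexity), which is the property the abstract highlights: they summarise which objects are plausibly reachable and how costly each reach would be.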

Collaboration


Dive into Ninghang Hu's collaborations.

Top Co-Authors

Zhongyu Lou
University of Amsterdam

Theo Gevers
University of Amsterdam

Sandra Bedaf
Zuyd University of Applied Sciences

Heather Draper
University of Birmingham

Joe Saunders
University of Hertfordshire

Kerstin Dautenhahn
University of Hertfordshire