Haibin Cai
University of Portsmouth
Publication
Featured research published by Haibin Cai.
Paladyn: Journal of Behavioral Robotics | 2017
Pablo Gómez Esteban; Paul Baxter; Tony Belpaeme; Erik Billing; Haibin Cai; Hoang-Long Cao; Mark Coeckelbergh; Cristina Costescu; Daniel David; Albert De Beir; Yinfeng Fang; Zhaojie Ju; James Kennedy; Honghai Liu; Alexandre Mazel; Amit Kumar Pandey; Kathleen Richardson; Emmanuel Senft; Serge Thill; Greet Van de Perre; Bram Vanderborght; David Vernon; Hui Yu; Tom Ziemke
Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot, both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
Sensors | 2017
Yajie Liao; Ying Sun; Gongfa Li; Jianyi Kong; Guozhang Jiang; Du Jiang; Haibin Cai; Zhaojie Ju; Hui Yu; Honghai Liu
Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed over the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing a joint cost function that assigns appropriate weights to the external cameras at their different locations, an effective joint calibration of multiple devices is obtained. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time system, with higher accuracy than the manufacturer's calibration.
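The weighted joint cost described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole model, the function names, and the per-camera weighting scheme are assumptions for exposition.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates."""
    cam = points_3d @ R.T + t          # world -> camera frame
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def joint_cost(points_3d, observations, cameras, weights):
    """Weighted sum of squared reprojection errors over all cameras.

    observations: list of Nx2 pixel measurements, one array per camera
    cameras:      list of (K, R, t) tuples, one pose per camera
    weights:      per-camera weights (e.g. favouring well-placed cameras)
    """
    total = 0.0
    for w, obs, (K, R, t) in zip(weights, observations, cameras):
        residual = project(points_3d, K, R, t) - obs
        total += w * np.sum(residual ** 2)
    return total
```

A nonlinear optimizer would then minimize `joint_cost` over all camera poses simultaneously, which is what makes the calibration "joint" rather than pairwise.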
IEEE Systems Journal | 2017
Haibin Cai; Bangli Liu; Jianhua Zhang; Shengyong Chen; Honghai Liu
Estimating a person's visual focus of attention (VFOA) plays a crucial role in practical systems such as human–robot interaction. Extracting VFOA cues is challenging because of the difficulty of recognizing gaze directionality. In this paper, we propose an improved integrodifferential approach to represent gaze by efficiently and accurately localizing the eye center in low-resolution images. The proposed method exploits both the drastic intensity changes between the iris and the sclera and the grayscale of the eye center. The number of kernels used to convolve the original eye-region image is optimized, and the eye center is located by searching for the maximum ratio derivative of neighboring curve magnitudes in the convolved image. Experimental results confirm that the algorithm outperforms state-of-the-art methods in terms of computational cost, accuracy, and robustness to illumination changes.
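The search over radial intensity changes can be illustrated with a simplified integro-differential operator. This sketch is not the paper's optimized-kernel implementation: it samples circular mean intensities directly and scores each candidate centre by the sharpest jump across radii (the iris/sclera boundary).

```python
import numpy as np

def circle_mean(img, cy, cx, r, n=64):
    """Mean intensity sampled along a circle of radius r (nearest pixel)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def locate_eye_center(img, radii):
    """Integro-differential search: the eye centre is the pixel whose
    circular mean intensity changes most sharply across some radius."""
    best, best_score = (0, 0), -np.inf
    h, w = img.shape
    for cy in range(h):
        for cx in range(w):
            means = np.array([circle_mean(img, cy, cx, r) for r in radii])
            score = np.max(np.abs(np.diff(means)))   # sharpest radial jump
            if score > best_score:
                best_score, best = score, (cy, cx)
    return best
```

A dark disk (iris) on a bright background produces the largest radial jump exactly when the sampling circles are concentric with the disk, which is why the argmax lands on the eye centre.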
Robotics and Biomimetics | 2016
Xiaolong Zhou; Haibin Cai; Zhanpeng Shao; Hui Yu; Honghai Liu
In this paper, we address the 3D eye-gaze estimation problem using a low-cost, simple-setup, and non-intrusive consumer depth sensor (Kinect). We present an effective and accurate method based on a 3D eye model to estimate a subject's point of gaze while tolerating free head movement. To determine the parameters of the proposed eye model, we propose: i) an improved convolution-based means-of-gradients iris-center localization method to accurately and efficiently locate the iris center in 3D space; ii) a geometric-constraints-based method to estimate the eyeball center, under the constraints that all iris-center points lie on a sphere centered at the eyeball center and that a subject's two eyeballs are identical in size; iii) an effective Kappa-angle calculation method based on the fact that the visual axes of both eyes intersect the screen plane at the same point. The final point of gaze is calculated using the estimated eye-model parameters. We experimentally evaluate our gaze estimation method on five subjects. The experimental results show the good performance of the proposed method, with an average estimation accuracy of 3.78°, which outperforms several state-of-the-art methods.
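Constraint ii) amounts to fitting a sphere to the observed 3D iris-centre points. A minimal least-squares sketch of that idea (not the paper's method, which additionally couples the two eyeballs' sizes) using the standard linearisation |p|² = 2 p·c + (r² − |c|²):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit for Nx3 points; returns (center, radius).

    Linearises |p - c|^2 = r^2 into A x = b with x = [c, r^2 - |c|^2]."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius
```

With iris centres collected over several head poses, the fitted centre is the eyeball centre and the fitted radius the eyeball radius.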
International Symposium on Micro-NanoMechatronics and Human Science | 2015
Haibin Cai; Xiaolong Zhou; Hui Yu; Honghai Liu
This paper investigates gaze estimation solutions for interacting with children with Autism Spectrum Disorders (ASD). Previous research shows that satisfactory gaze estimation accuracy can be achieved in constrained settings. However, most existing methods cannot deal with the large head movement (LHM) that frequently occurs in interaction scenarios with children with ASD. We propose a gaze estimation method that handles large head movement and achieves real-time performance. An intervention table equipped with multiple sensors is designed to capture images with LHM. Firstly, reliable facial features and head poses are tracked using the supervised descent method. Secondly, a convolution-based integrodifferential eye localization approach is used to locate the eye center efficiently and accurately. Thirdly, a rotation-invariant gaze estimation model is built from the located facial features, eye center, head pose, and depth data captured by the Kinect. Finally, a multi-sensor fusion strategy is proposed to adaptively select the optimal camera for gaze estimation and to fuse the Kinect's depth information with the web camera. Experimental results show that the gaze estimation method achieves acceptable accuracy even under LHM and could potentially be applied in therapy for children with ASD.
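The camera-selection part of the fusion step can be sketched as picking the camera that views the face most frontally given the tracked head pose. The scoring rule below is an assumption for illustration, not the paper's strategy:

```python
import numpy as np

def select_camera(head_dir, camera_dirs):
    """Pick the camera that views the face most frontally.

    head_dir:    unit 3-vector, the direction the head is facing
    camera_dirs: list of unit 3-vectors, each camera's viewing direction
    The most frontal camera is the one whose viewing direction is most
    anti-parallel to the head's facing direction.
    """
    scores = [float(np.dot(head_dir, c)) for c in camera_dirs]
    return int(np.argmin(scores))   # most negative dot product wins
```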
International Conference on Intelligent Robotics and Applications | 2016
Haibin Cai; Hui Yu; Xiaolong Zhou; Honghai Liu
Gaze estimation plays an important role in many practical scenarios such as human–robot interaction. Although highly accurate gaze estimation can be obtained in constrained settings with additional IR sources or depth sensors, gaze estimation from a single webcam remains challenging. This paper proposes a normalized iris center–eye corner (NIC-EC) vector-based gaze estimation method using a single, low-cost webcam. Firstly, reliable facial features and pupil centers are extracted. Then, the NIC-EC vector is introduced to improve the robustness and accuracy of pupil center–eye corner vector-based gaze estimation. Finally, an interpolation method maps the constructed vectors to points of regard. Experimental results show that the proposed method significantly improves accuracy over the pupil center–eye corner vector-based method, with an average accuracy of \(1.66^\circ\) under slight head movement.
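The interpolation from eye vectors to points of regard is commonly realised in the gaze literature as a second-order polynomial regression; the sketch below assumes that form (the paper's exact interpolant may differ):

```python
import numpy as np

def poly_features(v):
    """Second-order polynomial features of Nx2 gaze vectors."""
    x, y = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_gaze_mapping(vectors, screen_points):
    """Least-squares fit of the vector -> point-of-regard mapping.

    vectors:       Nx2 normalized iris-center/eye-corner vectors
    screen_points: Nx2 calibration targets on the screen
    """
    W, *_ = np.linalg.lstsq(poly_features(vectors), screen_points, rcond=None)
    return W

def predict_gaze(W, vectors):
    """Map new eye vectors to screen coordinates."""
    return poly_features(vectors) @ W
```

Calibration collects vector/target pairs while the subject fixates known screen points; at run time `predict_gaze` turns each new vector into a point of regard.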
Systems, Man and Cybernetics | 2017
Haibin Cai; Donghee Lee; Hwang Joonkoo; Yinfeng Fang; Song Li; Honghai Liu
Motor vehicle theft causes massive economic losses worldwide. This paper proposes an embedded vision system to detect automotive interior intrusion. The system fuses an acceleration module and a vision module to meet the low power-consumption requirements of most motor vehicles. Furthermore, an effective intrusion detection algorithm is developed for the on-board vision module. Thanks to infrared illumination, the vision system can detect intrusions even at night. Experimental evaluation is conducted under a variety of illumination conditions, including daytime, nighttime and even strong direct light. An intrusion detection accuracy of 91.7% is achieved, showing that the developed embedded vision system is reliable for motor vehicle intrusion detection.
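A minimal frame-differencing detector illustrates the kind of on-board test such a vision module might run; the thresholds and decision rule here are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def intrusion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.01):
    """Flag an intrusion when enough pixels change between frames.

    pixel_thresh: per-pixel absolute intensity change counted as motion
    area_thresh:  fraction of changed pixels that triggers the alarm
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed / diff.size > area_thresh
```

In a low-power design the accelerometer would wake the camera, and only then would a test like this run on consecutive frames.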
International Conference on Robotics and Automation | 2017
Xiaolong Zhou; Haibin Cai; Youfu Li; Honghai Liu
In this paper, we present an effective and accurate gaze estimation method based on a two-eye model, tolerant of free head movement, using a Kinect sensor. To accurately and efficiently determine the point of gaze, i) we employ a two-eye model to improve estimation accuracy; ii) we propose an improved convolution-based means-of-gradients method to localize the iris center in 3D space; iii) we present a new personal calibration method that needs only one calibration point. The method approximates the visual axis as the line from the iris center to the gaze point in order to determine the eyeball centers and the Kappa angles. The final point of gaze is then calculated using the calibrated personal eye parameters. We experimentally evaluate the proposed gaze estimation method on eleven subjects. Experimental results demonstrate an average estimation accuracy of around 1.99°, which outperforms many state-of-the-art methods.
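Under the stated approximation, the Kappa angle reduces to the angle between the optical axis (eyeball centre to iris centre) and the line from the iris centre to the known calibration point. A sketch of just that geometric step, with illustrative names:

```python
import numpy as np

def kappa_angle(eyeball_center, iris_center, gaze_point):
    """Angle (radians) between the optical axis (eyeball -> iris) and the
    visual axis, approximated as the line from the iris centre to a known
    calibration gaze point."""
    optical = iris_center - eyeball_center
    visual = gaze_point - iris_center
    cosang = optical @ visual / (np.linalg.norm(optical) * np.linalg.norm(visual))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

One fixation on a single known screen point therefore suffices to recover the per-subject Kappa offset, which is what makes one-point calibration possible.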
International Conference on Machine Learning and Cybernetics | 2015
Haibin Cai; Hui Yu; Chun-Yan Yao; Shen-Yong Chen; Honghai Liu
Localizing the eye center is a primary challenge for applications involving gaze estimation, face recognition and human–machine interaction. The challenge stems from the significant variability of eye appearance in illumination, shape, color, viewing angle and dynamics, as well as from computational issues. In this paper, we propose a convolution-based means-of-gradients method to efficiently and accurately locate the eye center in low-resolution images. Computational efficiency is achieved through the FFT and by evaluating fewer identified pixels on the circular boundary of potential eye centres. The proposed algorithm is validated on the BioID face database. The experimental results confirm that the proposed method outperforms state-of-the-art methods and show its potential in real-time eye-gaze-tracking applications.
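The means-of-gradients objective (the eye centre is the point whose displacement vectors best align with the image gradients) can be sketched in its brute-force form; the paper's FFT-based convolution and boundary-pixel pruning are accelerations of this search, not shown here:

```python
import numpy as np

def means_of_gradients(img, grad_thresh=0.1):
    """Naive means-of-gradients eye-centre localisation.

    For a dark iris on a bright background, boundary gradients point
    radially outward; the true centre maximises their agreement with the
    displacement vectors from the candidate centre to each gradient pixel.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh * mag.max())
    gxn, gyn = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]

    h, w = img.shape
    best, best_score = (0, 0), -np.inf
    for cy in range(h):
        for cx in range(w):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0                    # skip the self-pixel
            dots = (dx / norm) * gxn + (dy / norm) * gyn
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best = score, (cy, cx)
    return best
```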
International Conference on Image Processing | 2017
Bangli Liu; Haibin Cai; Xiaofei Ji; Honghai Liu