Qijie Zhao
Shanghai University
Publication
Featured researches published by Qijie Zhao.
International Journal of Advanced Robotic Systems | 2012
Qijie Zhao; Xinming Yuan; Dawei Tu; Jianxia Lu
In order to adaptively calibrate the work parameters of the infrared-TV based eye gaze tracking Human-Robot Interaction (HRI) system, a gaze direction sensing model has been provided for detecting the eye-gaze identification parameters. Particular attention is paid to situations where the user's head is in a different position relative to the interaction interface. Furthermore, an algorithm for automatically correcting the system's work parameters has been developed by defining certain initial reference system states and analysing the historical information of the interaction between the user and the system. Moreover, considering several application cases and factors, and relying on minimum-error-rate Bayesian decision theory, a mechanism for identifying the system state and adaptively calibrating the parameters has been proposed. Finally, experiments with the established system suggest that the proposed mechanism and algorithm can identify the system work state in multiple situations and automatically correct the work parameters to meet the demands of a gaze tracking HRI system.
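The decision mechanism above can be illustrated with a minimal sketch of minimum-error-rate Bayesian classification. The two states, their priors, and the Gaussian likelihood models over a gaze-estimation residual are all hypothetical stand-ins, not the paper's actual models:

```python
import math

def gaussian(mu, sigma):
    """Return a 1-D Gaussian density function N(mu, sigma^2)."""
    return lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_decide(priors, likelihoods, observation):
    """Minimum-error-rate rule: pick the state maximizing P(state | obs)."""
    posteriors = {s: likelihoods[s](observation) * priors[s] for s in priors}
    total = sum(posteriors.values())
    posteriors = {s: p / total for s, p in posteriors.items()}
    return max(posteriors, key=posteriors.get), posteriors

# Hypothetical states: parameters still valid vs. needing recalibration,
# observed through the residual of the gaze estimate.
priors = {"calibrated": 0.8, "needs_recalibration": 0.2}
likelihoods = {
    "calibrated": gaussian(0.5, 0.3),          # small residuals expected
    "needs_recalibration": gaussian(2.0, 0.8), # large residuals expected
}

state, post = bayes_decide(priors, likelihoods, observation=2.2)
```

With a residual of 2.2 the large-residual likelihood dominates the prior, so the rule selects recalibration; choosing the maximum posterior is exactly what minimizes expected error rate under 0-1 loss.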
Journal on Multimodal User Interfaces | 2015
Qijie Zhao; Xinming Yuan; Dawei Tu; Jianxia Lu
Correct identification of eye movement behaviors is an important foundation for a gaze tracking interaction system. After analyzing the characteristics of eye movement behaviors in gaze tracking human-computer interaction, a method for classifying eye movement behaviors has been presented, in light of the fact that blinks, saccades, and fixations take place in certain sequences, and a blink is usually the beginning or end of a saccade or fixation sequence. This paper makes several further contributions. Firstly, a blink recognition algorithm has been proposed based on the eye's height-width aspect ratio and the fitted curvature of the iris or eyelid edge. Secondly, taking a recognized blink as the starting point, the mean and standard deviation of the eye's displacements over a certain period are calculated, and saccades and fixations are then identified in terms of these parameters. Finally, experiments show that the proposed method, by considering the relationships between eye behaviors, can accurately classify blinks, saccades, and fixations, especially large-scale saccades and long fixations. Moreover, the present work also provides a new reference for designing an accessible interface that reduces the impact on reliability caused by the randomness of eye movements.
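The two-stage idea can be sketched as follows. The sketch uses fixed illustrative thresholds where the paper derives them from the mean and standard deviation of displacements after each blink; the frame data and threshold values are hypothetical:

```python
def classify_eye_behavior(samples, blink_ratio=0.2, saccade_thresh=15.0):
    """Label each frame as blink, saccade, or fixation.

    samples: list of (aspect_ratio, displacement) per frame, where
    aspect_ratio is the eye's height/width ratio and displacement is
    the eye's movement (pixels) since the previous frame.
    """
    labels = []
    for ratio, disp in samples:
        if ratio < blink_ratio:        # eye nearly closed -> blink
            labels.append("blink")
        elif disp > saccade_thresh:    # large jump -> saccade
            labels.append("saccade")
        else:                          # small, stable motion -> fixation
            labels.append("fixation")
    return labels

frames = [(0.45, 2.0), (0.10, 0.5), (0.42, 30.0), (0.44, 1.2)]
print(classify_eye_behavior(frames))
# -> ['fixation', 'blink', 'saccade', 'fixation']
```

Checking the blink condition first mirrors the paper's observation that a blink typically bounds a saccade or fixation sequence, so it anchors the segmentation.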
intelligent environments | 2014
Hui Shao; Qijie Zhao; Dawei Tu
In order to enable the intelligent virtual classroom to distinguish the rotation direction of the user's head, a head behavior detection method based on optical flow is proposed in this paper. Firstly, key feature points in the user's facial area are selected, and the user's head movement parameters are calculated from the recorded feature-point data. Secondly, a head direction sensing model is provided for detecting the head identification parameters. Finally, according to the interaction mechanism of the interaction scenario, the intelligent virtual classroom responds to the user's head movement based on the result of the head direction judgment. Online tests show that the method achieves a high recognition rate and good real-time performance.
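The final judgment step can be sketched as below: given the optical-flow vectors of the tracked facial feature points, the dominant mean flow determines the rotation direction. The thresholds and flow values are hypothetical, and the flow vectors themselves would come from an optical-flow tracker not shown here:

```python
def head_direction(flow_vectors, min_motion=1.0):
    """Judge head rotation from mean optical flow of facial points.

    flow_vectors: list of (dx, dy) displacements of tracked features.
    Returns one of "still", "left", "right", "up", "down".
    """
    n = len(flow_vectors)
    mean_dx = sum(dx for dx, _ in flow_vectors) / n
    mean_dy = sum(dy for _, dy in flow_vectors) / n
    if abs(mean_dx) < min_motion and abs(mean_dy) < min_motion:
        return "still"                      # motion below noise floor
    if abs(mean_dx) >= abs(mean_dy):
        return "right" if mean_dx > 0 else "left"
    return "down" if mean_dy > 0 else "up"  # image y grows downward

print(head_direction([(3.2, 0.4), (2.8, -0.1), (3.5, 0.2)]))  # -> right
```

Averaging over many feature points is what makes the decision robust to individual tracking errors, which matters for the real-time performance reported above.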
international conference on intelligent human-machine systems and cybernetics | 2013
Xinming Yuan; Qijie Zhao; Dawei Tu; Hui Shao
In order to realize a head-free eye-gaze tracking human-computer interaction (HCI) system, a novel gaze direction estimation method is proposed in this paper. The point of gaze (PoG) is estimated by building up the geometric mapping relationship between the vector from the infrared light source to the eye-gaze point and the vector from the Purkinje spot to the pupil center in 3D space. To calculate the positions of the Purkinje spot and the pupil center in 3D space, user-dependent parameters, such as the radius of the cornea and the inter-pupillary distance, should be calibrated before using the eye-gaze tracking HCI system. Different from most monocular (2D) vision techniques that simplify the 3D eye-gaze tracking problem into a 2D one, the proposed method uses a single camera to estimate the PoG on a 2D plane by means of 3D spatial geometric mapping. Experiments on PoG estimation have been done, and the results suggest the proposed method is feasible.
international conference on intelligent human-machine systems and cybernetics | 2013
Peng Cao; Qijie Zhao; Dawei Tu; Hui Shao
Eye characteristics carry significant information for pattern recognition. They are widely applied in many fields, such as human-computer interaction, face recognition, and 3D face modeling from 2D images. Generally, eye information detection is based on gray images; however, color images provide more information than gray images. Depending on the application, choosing a suitable color space is very important. In this paper, several eye features are compared from the viewpoint of different color spaces, such as RGB, HSI, and HSV, and experiments have been done in the three color spaces. The extraordinarily bright area of the pupil in the H channel of the HSV color space is confirmed. Moreover, the pupil center is detected and estimated using morphological processing in the H channel. The results show that eye features are more distinct in the HSV space and feature extraction is more convenient. Furthermore, the experimental analysis can provide a reference for future research on eye characteristic recognition.
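The basic operation of isolating the H channel can be sketched with the standard library's `colorsys` module; the pixel values here are synthetic primaries rather than real eye-region data:

```python
import colorsys

def h_channel(pixels):
    """Extract the hue (H) channel from RGB pixels.

    pixels: list of (r, g, b) with components in [0, 255].
    Returns hues in colorsys's normalized [0, 1] range.
    """
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0]
            for r, g, b in pixels]

# Pure red, green, and blue map to hues 0, 1/3, and 2/3 respectively.
hues = h_channel([(255, 0, 0), (0, 255, 0), (0, 0, 255)])
```

Once the image is reduced to its H channel, a simple threshold plus the morphological processing mentioned above can isolate the bright pupil region; libraries such as OpenCV provide the same HSV conversion for full images.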
international congress on image and signal processing | 2010
Defu Cheng; Qijie Zhao; Jianxia Lu; Dawei Tu
A spatial model of an infrared-TV eye gaze tracking Human-Computer Interaction (HCI) system has been established based on an analysis of the modeling method and the spatial distribution of the model's components. With the model, the detected parameters of gaze direction and the variations of eye position are analyzed when the eye is at different locations in the interactive space. According to these parameters and variations, the changes of the pupil-Purkinje vector in the image coordinate system are determined when the eye moves, and the gaze direction is calculated. Simulations have been done with MATLAB Simulink tools, and the results show that the deviation of the determined gaze direction is within 7 mm when the eye moves in any direction in the interactive space between 0 and 80 mm relative to the optical center of the camera.
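The core mapping idea — an image-space pupil-Purkinje vector determining a gaze point — can be reduced to a toy sketch. A simple per-axis linear map with made-up calibration gains stands in for the full spatial model; the gain and offset values are purely illustrative:

```python
def gaze_point(vector, gain=(1500.0, 1200.0), offset=(960.0, 540.0)):
    """Map a pupil-Purkinje vector to a screen-plane gaze point.

    vector: (vx, vy), pupil center minus Purkinje spot, in normalized
    image units. gain/offset: hypothetical per-axis calibration values
    (offset = screen center in pixels for a 1920x1080 display).
    """
    vx, vy = vector
    gx, gy = gain
    ox, oy = offset
    return (ox + gx * vx, oy + gy * vy)

print(gaze_point((0.0, 0.0)))  # zero vector -> screen center (960.0, 540.0)
```

The spatial model in the paper exists precisely because this mapping is not a fixed linear one: the gains drift as the eye moves through the interactive space, which is what the simulated 0–80 mm displacement analysis quantifies.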
Archive | 2017
Xudong Zhang; Qijie Zhao; Qingxu Meng; Dawei Tu; Jin-Gang Yi
Scene segmentation is the basis of an autonomous robot's environmental understanding. Since scene objects exhibit distinct color clustering in the indoor environments where mobile service robots operate, this paper proposes a scene segmentation method based on color layering and multi-size filtering. The scene is sliced by constructing a color layering model, and a multi-size filter, designed according to the detected number and sizes of the connected domains in each layer, is then used to segment the target. A robot operating system is also built, and experiments on global environments and local scenes achieve average accuracies of scene segmentation and layering of 96.2% and 92.5%, respectively. The results show that the method can effectively segment scenes with salient color features.
Archive | 2017
Qingxu Meng; Qijie Zhao; Da-Wei Tu; Jin-Gang Yi
In order to improve the efficiency and adaptability of a 3-D laser sensor measurement system, this paper proposes a calibration method based on structural parameters. In this method, by scanning and measuring a structural calibration target with known structural parameters, models are established in the laser scanning coordinate system and the inertial measurement sensor coordinate system respectively; linear features are extracted, and the relative position and attitude are then solved using the constraints of the structural parameters. The calibration results are applied to the 3-D measurement system, which is used to conduct a simulated measurement experiment. In the experiment, the relative errors of the measured objects' lengths and angles are less than 1.0% and 0.5% on average, which indicates the accuracy of the calibration method. As noise increases, the relative errors remain stable, which indicates the effectiveness of the calibration method.
international conference on intelligent computing for sustainable energy and environment | 2014
Qijie Zhao; Hui Shao; Xudong Zhang; Dawei Tu
Eye gaze plays a very important role in identifying human attention, so it has been considered for application in human-computer interaction; one of the main factors hindering its application is the complexity of the systems and of the gaze direction detection methods. To build an eye gaze tracking human-computer interaction system with simple infrastructure and good usability, a gaze direction evaluation approach based on the eyes' moving trend has been proposed. The eye images and feature information are captured and extracted with a web camera and a computer, and the quantity of the eyes' moving trend is defined by the ratio of the distances from the iris center to the two corners of one eye. Moreover, image processing algorithms have been provided to detect the characteristics in the eye-area image, and an equivalent-position detection method for the eye corners has been developed with respect to the shape of the corners. Experiments in the test system show that the proposed methods and algorithms can meet the communication demands of different subjects under multiple types of work conditions. After completing initialization, subjects can freely interact with the computer within a certain work range without frequently recalibrating the work parameters, so the constraints on the subjects are decreased and the system is easy to use; furthermore, it provides a new way to apply eye gaze tracking technology to caring for the elderly and the disabled.
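The "eyes moving trend" quantity described above — the ratio of distances from the iris center to the two eye corners — can be sketched directly. The coordinates and the dead-zone threshold are hypothetical 2D image values:

```python
import math

def corner_ratio(iris, inner_corner, outer_corner):
    """Ratio of iris-to-inner-corner distance over iris-to-outer-corner
    distance; approximately 1.0 when the iris sits midway."""
    d_in = math.dist(iris, inner_corner)
    d_out = math.dist(iris, outer_corner)
    return d_in / d_out

def gaze_trend(ratio, dead_zone=0.15):
    """Classify the horizontal gaze trend from the corner ratio."""
    if abs(ratio - 1.0) <= dead_zone:
        return "center"
    # ratio > 1: iris is farther from the inner corner, i.e. shifted
    # toward the outer corner, and vice versa.
    return "toward_outer" if ratio > 1.0 else "toward_inner"

print(gaze_trend(corner_ratio((5, 0), (0, 0), (10, 0))))  # -> center
```

Because this quantity is a ratio of distances within the same eye, it is largely invariant to the eye's apparent size in the image, which is why the approach tolerates moderate head movement without recalibration.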
international conference on intelligent computing | 2008
Qijie Zhao; Dawei Tu; Jianxia Lu; Zhihua Huang
On the basis of analyzing information fusion methods for human-robot interaction, an information feedback structure for a human-robot interaction platform has been provided: the source feedback information is handled by a multi-layer processing structure, and the processed multi-modal information is integrated with interaction rules and knowledge from an interaction knowledge database. A set of feedback information expression and fusion methods has been presented for the information integration process: the abstract information from each feedback modality is expressed by the status of the task execution process and the stability of the feedback modality; this information is then fused in light of the task execution result, and finally the fusion result is delivered to the user through the most stable feedback modality. Experiments with the provided methods in the human-robot interaction system show that users' cognitive load can be decreased by this semantic information fusion and feedback method, and high work efficiency can also be achieved in human-robot interaction.
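The final fusion-and-delivery step can be sketched as a stability-weighted combination, with the result routed to the most stable modality. The modality names, stability scores, and weighting scheme are all illustrative assumptions, not the paper's actual formulation:

```python
def fuse_feedback(modalities):
    """Fuse per-modality feedback and pick the delivery channel.

    modalities: {name: {"stability": float, "task_score": float}}, where
    task_score reflects the task-execution status reported by that
    modality. Returns (most_stable_channel, stability-weighted score).
    """
    total_stability = sum(m["stability"] for m in modalities.values())
    fused = sum(m["stability"] * m["task_score"]
                for m in modalities.values()) / total_stability
    channel = max(modalities, key=lambda n: modalities[n]["stability"])
    return channel, fused

channel, score = fuse_feedback({
    "speech":  {"stability": 0.9, "task_score": 1.0},
    "display": {"stability": 0.6, "task_score": 0.8},
})
```

Weighting by stability means an unreliable modality cannot dominate the fused judgment, while delivering through the single most stable channel is what keeps the user's cognitive load low.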