Junqiu Wang
Osaka University
Publication
Featured research published by Junqiu Wang.
IEEE Transactions on Image Processing | 2008
Junqiu Wang; Yasushi Yagi
We extend the standard mean-shift tracking algorithm to an adaptive tracker by selecting reliable features from color and shape-texture cues according to their descriptive ability. The target model is updated according to the similarity between the initial and current models, which makes the tracker more robust. The proposed algorithm has been compared with other trackers on challenging image sequences and provides better performance.
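The similarity-gated model update described above can be sketched as follows. The Bhattacharyya coefficient is the usual histogram similarity in mean-shift tracking, but the threshold and blend rate below are illustrative values, not taken from the paper:

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def update_model(initial, current, alpha=0.1, threshold=0.8):
    """Blend the current histogram into the model only when it is
    sufficiently similar to the initial model, guarding against drift."""
    if bhattacharyya(initial, current) > threshold:
        updated = (1 - alpha) * initial + alpha * current
        return updated / updated.sum()
    return initial  # keep the old model when similarity is low

# toy 8-bin color histograms
init = np.ones(8) / 8
curr = np.array([0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.15, 0.15])
model = update_model(init, curr)
```

Updating only when similarity is high protects the model from drifting onto the background during occlusions or bad frames.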
IEEE Transactions on Systems, Man, and Cybernetics | 2006
Junqiu Wang; Hongbin Zha; Roberto Cipolla
This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast, but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow, but more accurate. The integration of coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable.
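A minimal stand-in for coarse localization in a location vector space model: visual-word histograms weighted by inverse document frequency and ranked by cosine similarity. The database and vocabulary below are toy values, not the paper's data:

```python
import numpy as np

def coarse_localize(db_counts, query_counts):
    """Rank database locations by cosine similarity in a tf-idf weighted
    visual-word space (a simplified sketch of an LVSM lookup)."""
    db = np.asarray(db_counts, float)
    q = np.asarray(query_counts, float)
    df = (db > 0).sum(axis=0)                       # document frequency per word
    idf = np.log(db.shape[0] / np.maximum(df, 1.0)) # rare words weigh more
    db_w, q_w = db * idf, q * idf
    sims = db_w @ q_w / (np.linalg.norm(db_w, axis=1)
                         * np.linalg.norm(q_w) + 1e-12)
    return np.argsort(-sims)                        # best candidates first

# 3 locations x 4 visual words; the query shares its words with location 0
db = [[5, 0, 1, 0], [0, 4, 0, 2], [1, 1, 3, 0]]
ranking = coarse_localize(db, [4, 0, 1, 0])
```

The top-ranked candidates would then be passed on to the slower, more accurate fine stage.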
Pattern Recognition | 2010
Md. Altab Hossain; Yasushi Makihara; Junqiu Wang; Yasushi Yagi
Variations in clothing alter an individual's appearance, making the problem of gait identification much more difficult. If the type of clothing differs between the gallery and a probe, certain parts of the silhouettes are likely to change and the ability to discriminate subjects decreases with respect to these parts. A part-based approach, therefore, has the potential of selecting the appropriate parts. This paper proposes a method for part-based gait identification in light of substantial clothing variations. We divide the human body into eight sections, including four overlapping ones, since the larger parts have a higher discrimination capability, while the smaller parts are more likely to be unaffected by clothing variations. Furthermore, as certain clothes are common to different parts, we present a categorization that groups similar items of clothing. Next, we exploit the discrimination capability as a matching weight for each part and control the weights adaptively based on the distribution of distances between the probe and all the galleries. The results of experiments using our large-scale gait dataset with clothing variations show that the proposed method achieves far better performance than other approaches.
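The adaptive part weighting can be sketched as below. The gap-based weight (distance of the best gallery from the mean) is an illustrative proxy for the paper's distribution-based weights, and the part/gallery values are toy numbers:

```python
import numpy as np

def adaptive_part_match(dists):
    """dists[p, g]: distance of the probe to gallery g for body part p.
    Parts whose best gallery stands out clearly from the rest receive a
    larger matching weight; ambiguous parts are down-weighted."""
    d = np.asarray(dists, float)
    gap = d.mean(axis=1) - d.min(axis=1)   # larger gap -> more reliable part
    w = gap / max(gap.sum(), 1e-12)
    combined = w @ d                       # weighted distance per gallery
    return int(np.argmin(combined)), combined

# part 0 (say, legs) is hardly affected by clothing and separates subject 0
# clearly; part 1 (say, torso) is ambiguous and so gets a small weight
subject, dist = adaptive_part_match([[0.1, 0.9, 0.8],
                                     [0.5, 0.45, 0.55]])
```

The ambiguous part contributes little to the combined distance, so the clothing-invariant part decides the match.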
International Conference on Robotics and Automation | 2005
Junqiu Wang; Roberto Cipolla; Hongbin Zha
This paper presents a novel coarse-to-fine global localization approach that is inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built based on a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable.
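The voting scheme over an inverted index can be sketched with plain dictionaries; the word ids and locations below are toy values:

```python
from collections import defaultdict

def build_inverted_index(location_words):
    """location_words[i]: iterable of visual-word ids observed at location i."""
    index = defaultdict(set)
    for loc, words in enumerate(location_words):
        for w in words:
            index[w].add(loc)
    return index

def vote(index, query_words):
    """Every query word casts one vote for each location containing it;
    the location with the most votes wins."""
    votes = defaultdict(int)
    for w in query_words:
        for loc in index.get(w, ()):
            votes[loc] += 1
    return max(votes, key=votes.get) if votes else None

index = build_inverted_index([{1, 2, 3}, {3, 4}, {5, 6}])
best = vote(index, [1, 2, 3])
```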
IEEE Transactions on Systems, Man, and Cybernetics | 2009
Junqiu Wang; Yasushi Yagi
We present a new approach for robust and efficient tracking that incorporates the efficiency of the mean-shift algorithm with the multihypothesis characteristics of particle filtering in an adaptive manner. The aim of the proposed algorithm is to cope with problems brought about by sudden motions and distractions. The mean-shift tracking algorithm is robust and effective when the representation of a target is sufficiently discriminative, the target does not jump beyond the bandwidth, and no serious distractions exist. We propose a novel two-stage motion estimation method that is efficient and reliable. If a sudden motion is detected by the motion estimator, particle-filtering-based trackers can outperform the mean-shift algorithm, at the expense of using a large particle set. In our approach, the mean-shift algorithm is used as long as it provides reasonable performance. Auxiliary particles are introduced to cope with distractions and sudden motions when such threats are detected. Moreover, discriminative features are selected according to the separation of the foreground and background distributions when threats do not exist. This strategy is important because it is dangerous to update the target model when the tracking is in an unsteady state. We demonstrate the performance of our approach by comparing it with other trackers on several challenging image sequences.
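The dispatch between the two trackers might look like the sketch below; `mean_shift_fn` and `particle_fn` are placeholder callables, and the bandwidth test is a simplified stand-in for the paper's two-stage sudden-motion detector:

```python
import numpy as np

def track_step(prev_pos, motion_estimate, bandwidth,
               mean_shift_fn, particle_fn):
    """Use mean shift while the predicted displacement stays inside the
    kernel bandwidth; hand over to auxiliary particles on sudden motion."""
    if np.linalg.norm(motion_estimate) <= bandwidth:
        return mean_shift_fn(prev_pos), "mean-shift"
    # sudden motion detected: seed particles around the predicted position
    return particle_fn(np.asarray(prev_pos) + motion_estimate), "particles"

pos = np.array([10.0, 10.0])
# small predicted motion -> stay with the cheap mean-shift iteration
p1, mode1 = track_step(pos, np.array([1.0, 0.5]), bandwidth=5.0,
                       mean_shift_fn=lambda p: p + 0.5,
                       particle_fn=lambda p: p)
# large predicted motion -> auxiliary particles take over
p2, mode2 = track_step(pos, np.array([12.0, 0.0]), bandwidth=5.0,
                       mean_shift_fn=lambda p: p,
                       particle_fn=lambda p: p)
```

Keeping the expensive particle set dormant until a threat is detected is what preserves the mean-shift tracker's efficiency in steady states.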
International Conference on Image Processing | 2005
Junqiu Wang; Hongbin Zha; Roberto Cipolla
This paper presents a novel approach using combined features to retrieve images containing specific objects, scenes or buildings. The content of an image is characterized by two kinds of features: Harris-Laplace interest points described by the SIFT descriptor, and edges described by the edge color histogram. Edges and corners contain the maximal amount of information necessary for image retrieval. The feature detection in this work is an integrated process: edges are detected directly based on the Harris function; Harris interest points are detected at several scales, and Harris-Laplace interest points are found using the Laplace function. The combination of edges and interest points brings efficient feature detection and a high recognition ratio to the image retrieval system. Experimental results show that the system performs well.
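One simple way to fuse the two cues is a weighted sum of histogram intersection and a descriptor-match ratio; the weights, threshold and toy data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return float(np.minimum(h1, h2).sum())

def combined_score(edge_q, edge_db, desc_q, desc_db,
                   w_edge=0.5, w_pts=0.5, match_thresh=0.4):
    """Fuse edge-color-histogram similarity with the fraction of query
    descriptors that have a close nearest neighbour in the database image."""
    s_edge = hist_intersection(edge_q, edge_db)
    d = np.linalg.norm(desc_q[:, None, :] - desc_db[None, :, :], axis=2)
    s_pts = float((d.min(axis=1) < match_thresh).mean())
    return w_edge * s_edge + w_pts * s_pts

h = np.array([0.25, 0.25, 0.25, 0.25])      # normalized edge color histogram
descs = np.array([[0.1, 0.9], [0.8, 0.2]])  # toy 2-D "descriptors"
score_same = combined_score(h, h, descs, descs)
```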
Robotics and Biomimetics | 2006
Junqiu Wang; Yasushi Yagi
We extend the standard mean shift tracking algorithm to an adaptive tracker by selecting reliable features from color and shape cues. The standard mean shift algorithm assumes that the representation of tracking targets is always sufficiently discriminative against the background. Most tracking algorithms based on the mean shift algorithm use only one cue (such as color) throughout the tracking process. The widely used color features are not always discriminative enough for target localization because illumination and viewpoint tend to change. Moreover, the background may be of a color similar to that of the target. We present an adaptive tracking algorithm that integrates color and shape features. Good features are selected and applied to represent the target according to their descriptive ability. The proposed method has been implemented and tested on different kinds of image sequences. The experimental results demonstrate that our tracking algorithm is robust and efficient in challenging image sequences.
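Descriptive ability can be approximated with a variance-ratio style score on the log-likelihood ratio of foreground and background histograms. This particular score is a common stand-in for separability-based feature selection, not necessarily the paper's exact measure, and the histograms below are toy data:

```python
import numpy as np

def descriptive_ability(fg_hist, bg_hist, eps=1e-6):
    """Score a feature by how well its log-likelihood-ratio values separate
    target from background: high between-class spread relative to the
    within-class spread means a discriminative feature."""
    p = fg_hist / max(fg_hist.sum(), eps)
    q = bg_hist / max(bg_hist.sum(), eps)
    L = np.log((p + eps) / (q + eps))      # log-likelihood ratio per bin
    total = 0.5 * (p + q)
    var = lambda w: float(np.sum(w * L**2) - np.sum(w * L)**2)
    return var(total) / max(var(p) + var(q), eps)

fg = np.array([10., 1., 1., 0.])
bg = np.array([0., 1., 1., 10.])
good = descriptive_ability(fg, bg)   # foreground and background separated
bad = descriptive_ability(fg, fg)    # identical distributions: useless cue
```

The tracker would periodically score each candidate cue this way and localize with the highest-scoring one.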
International Conference on Robotics and Automation | 2008
Junqiu Wang; Yasushi Makihara; Yasushi Yagi
Gait recognition has recently gained attention as an effective approach to identify individuals at a distance from a camera. Most existing gait recognition algorithms assume that people have been tracked and silhouettes have been segmented successfully. Tracking and segmentation are, however, very difficult, especially for articulated objects such as human beings. Therefore, we present an integrated algorithm for tracking and segmentation supported by gait recognition. After the tracking module produces initial results consisting of bounding boxes and foreground likelihood images, the gait recognition module searches for the optimal silhouette-based gait models corresponding to the results. Then, the segmentation module segments people out using the provided gait silhouette sequence as shape priors. Experiments on real video sequences show the effectiveness of the proposed approach.
International Conference on Pattern Recognition | 2008
Junqiu Wang; Yasushi Yagi
The covariance tracker finds the target in consecutive frames by global search. Covariance tracking has achieved impressive success thanks to its ability to capture spatial and statistical properties as well as the correlations between them. Nevertheless, the covariance tracker is relatively inefficient due to the heavy computational cost of updating the model and comparing it with the covariance matrices of the candidate regions. Moreover, it is not good at articulated object tracking, since integral histograms are employed to accelerate the search process. In this work, we aim to alleviate the computational burden by selecting appropriate tracking approaches. We compute foreground probabilities of pixels and localize the target by local search when the tracking is in a steady state. Covariance tracking is performed when distractions, sudden motions or occlusions are detected. Unlike the traditional covariance tracker, we use log-Euclidean metrics instead of the more computationally expensive affine-invariant Riemannian metrics. The proposed tracking algorithm has been verified on many video sequences. It proves more efficient than the covariance tracker, and it is also effective in dealing with occlusions, which are an obstacle for local mode-seeking trackers such as the mean-shift tracker.
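The log-Euclidean distance between covariance descriptors reduces to a Frobenius norm of matrix logarithms, which for symmetric positive-definite matrices can be computed by eigendecomposition. A minimal sketch with toy 2x2 covariances:

```python
import numpy as np

def log_euclidean_dist(C1, C2):
    """||logm(C1) - logm(C2)||_F for symmetric positive-definite matrices.
    The matrix logarithm is taken on the eigenvalues, which is why this
    metric is much cheaper than the affine-invariant alternative: the
    logs can even be precomputed once per model."""
    def logm_spd(C):
        w, V = np.linalg.eigh(C)           # eigenvalues are real and positive
        return V @ np.diag(np.log(w)) @ V.T
    return float(np.linalg.norm(logm_spd(C1) - logm_spd(C2)))

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[2.1, 0.2], [0.2, 1.1]])
d_close = log_euclidean_dist(A, B)         # small: similar covariances
d_self = log_euclidean_dist(A, A)          # zero: identical covariances
```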
Robotics and Biomimetics | 2004
Junqiu Wang; Roberto Cipolla; Hongbin Zha
In this paper, we propose a vision-based mobile robot localization strategy. Local scale-invariant features are used as natural landmarks in unstructured and unmodified environments. The local character of these features makes them robust to occlusion and outliers. In addition, their invariance to viewpoint change makes them suitable landmarks for mobile robot localization. Scale-invariant features detected during a first exploration are indexed into a location database. Indexing and voting allow efficient global localization. The localization result is verified by the epipolar geometry between the representative view in the database and the view to be localized, which decreases the probability of false localization. The localization system recovers the pose of the camera mounted on the robot by essential matrix decomposition, from which the position of the robot is computed easily. Both calibrated and uncalibrated cases are discussed, and relative position estimation based on a calibrated camera turns out to be the better choice. Experimental results show that our approach is effective and reliable under illumination changes, similarity transformations and extraneous features.
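Essential matrix decomposition into candidate poses follows the standard SVD recipe, sketched below; the cheirality (points-in-front-of-both-cameras) test that selects the single correct candidate is omitted:

```python
import numpy as np

def decompose_essential(E):
    """Recover the four candidate (R, t) pairs from an essential matrix
    via SVD. E = [t]_x R admits two rotations and a translation known
    only up to sign (and scale)."""
    U, _, Vt = np.linalg.svd(E)
    # flip signs if needed so that R comes out as a proper rotation
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# build an essential matrix from a known pose: E = [t]_x R
R_true = np.eye(3)
t_true = np.array([1.0, 0.0, 0.0])
tx = np.array([[0., -t_true[2], t_true[1]],
               [t_true[2], 0., -t_true[0]],
               [-t_true[1], t_true[0], 0.]])
E = tx @ R_true
candidates = decompose_essential(E)
```

With a calibrated camera the recovered (R, t) gives the relative pose directly, which is why the calibrated case is the better choice above.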