Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shuhan Shen is active.

Publication


Featured research published by Shuhan Shen.


International Conference on Intelligent Transportation Systems | 2009

Automatic generation of road network map from massive GPS vehicle trajectories

Wenhuan Shi; Shuhan Shen; Yuncai Liu

Intelligent transportation systems (ITS) and navigation systems usually demand up-to-date vector road network maps. Unfortunately, existing approaches to road network map generation, such as surveying and satellite image digitizing, make it difficult to keep such maps current. This paper presents a method for automatically generating a road network map from massive GPS vehicle trajectories, producing an up-to-date map from recent GPS trajectory data that is widely available. Given a set of GPS vehicle trajectories, the input is first processed to construct a road network bitmap that depicts the road network. The road network skeleton is then computed on this bitmap. Finally, road network graph extraction is performed on the skeleton to generate the vector road network map. To evaluate the method, we implemented and tested it on massive GPS vehicle trajectory data collected in Jilin City, China. The results demonstrate that the presented method is promising for practical applications.
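
The three-stage pipeline (rasterize trajectories into a bitmap, thin it to a skeleton, extract a graph) can be sketched roughly as follows; the grid resolution, hit threshold, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the bitmap -> skeleton -> graph pipeline described
# above; cell size, hit threshold, and function names are assumptions.
import numpy as np
from skimage.morphology import skeletonize

def trajectories_to_bitmap(points, cell_size=5.0, min_hits=3):
    """Rasterize GPS points (N x 2, metres) into a binary road bitmap."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / cell_size).astype(int)
    h, w = idx.max(axis=0) + 1
    counts = np.zeros((h, w), dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    return counts >= min_hits          # keep cells crossed by enough vehicles

def bitmap_to_graph(bitmap):
    """Thin the bitmap and return skeleton pixels plus 8-connected edges."""
    skel = skeletonize(bitmap)
    ys, xs = np.nonzero(skel)
    nodes = list(zip(ys.tolist(), xs.tolist()))
    node_set = set(nodes)
    edges = []
    for y, x in nodes:                 # link neighbouring skeleton pixels
        for dy, dx in [(0, 1), (1, -1), (1, 0), (1, 1)]:
            if (y + dy, x + dx) in node_set:
                edges.append(((y, x), (y + dy, x + dx)))
    return nodes, edges

if __name__ == "__main__":
    pts = np.random.rand(10000, 2) * 500            # fake trajectory points
    nodes, edges = bitmap_to_graph(trajectories_to_bitmap(pts))
    print(len(nodes), "skeleton nodes,", len(edges), "edges")
```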


IEEE Transactions on Image Processing | 2013

Accurate Multiple View 3D Reconstruction Using Patch-Based Stereo for Large-Scale Scenes

Shuhan Shen

In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes that takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate a depth map for each image with acceptable errors, followed by a depth-map refinement process that enforces consistency over neighboring views. Compared with state-of-the-art methods, the proposed method reconstructs accurate and dense point clouds with high computational efficiency. In addition, the method can be easily parallelized at the image level, i.e., each depth map is computed independently, which makes it suitable for large-scale scene reconstruction with high-resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.
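
The per-view depth estimation followed by cross-view consistency refinement could look roughly like the sketch below; the camera conventions, relative-depth threshold, and function names are assumptions for illustration, not the paper's actual code.

```python
# Minimal sketch of the depth-map consistency filtering step: each per-view
# depth map is checked against a neighbouring view, and pixels whose
# re-projected depths disagree are discarded.  Thresholds and layouts are
# assumptions.
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H x W) to camera-space 3D points (H x W x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T
    return rays * depth[..., None]

def consistency_mask(depth_ref, depth_nbr, K, R, t, rel_tol=0.01):
    """Keep reference pixels whose depth agrees with the neighbouring view."""
    pts = backproject(depth_ref, K)                  # ref-camera coordinates
    pts_nbr = pts @ R.T + t                          # into neighbour camera
    proj = pts_nbr @ K.T
    u = np.round(proj[..., 0] / proj[..., 2]).astype(int)
    v = np.round(proj[..., 1] / proj[..., 2]).astype(int)
    h, w = depth_nbr.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts_nbr[..., 2] > 0)
    mask = np.zeros_like(valid)
    d_pred = pts_nbr[..., 2]
    d_obs = np.where(valid, depth_nbr[np.clip(v, 0, h - 1),
                                      np.clip(u, 0, w - 1)], 0.0)
    mask[valid] = np.abs(d_pred[valid] - d_obs[valid]) < rel_tol * d_obs[valid]
    return mask
```

Because each reference depth map only reads (never writes) its neighbours, this kind of check preserves the image-level parallelism the abstract highlights.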


IEEE Transactions on Image Processing | 2010

Monocular 3-D Tracking of Inextensible Deformable Surfaces Under L2-Norm

Shuhan Shen; Wenhuan Shi; Yuncai Liu

We present a method for recovering the 3-D shape of an inextensible deformable surface from a monocular image sequence. State-of-the-art methods for this problem use the L∞-norm of the reprojection residual vectors and formulate tracking as a Second-Order Cone Programming (SOCP) problem. Instead of the outlier-sensitive L∞-norm, we use the L2-norm of the reprojection errors. In general, using L2 leads to a nonconvex optimization problem that is difficult to minimize. Instead of solving the nonconvex problem directly, we design an iterative L2-norm approximation process that approximates the nonconvex objective function, in which only a linear system needs to be solved at each iteration. Furthermore, we introduce a shape regularization term into this iterative process to preserve the inextensibility of the recovered mesh. Compared with previous methods, ours is more robust to image noise, outliers, and large interframe motions, with high computational efficiency. The robustness and accuracy of our approach are evaluated quantitatively on synthetic data and qualitatively on real data.
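
A rough sketch of the "one linear system per iteration" idea: each step combines a linearized reprojection term with an edge-length (inextensibility) penalty. The residual and Jacobian callbacks are placeholders the caller would supply, and the weighting scheme is an assumption rather than the paper's exact formulation.

```python
# Hedged sketch of an iterative least-squares step with an inextensibility
# regularizer; residual_fn / jacobian_fn are user-supplied placeholders.
import numpy as np

def track_step(x, residual_fn, jacobian_fn, edges, rest_len, lam=1.0):
    """One iteration: solve (J^T J + lam A^T A) dx = -(J^T r + lam A^T b)."""
    r = residual_fn(x)                    # reprojection residuals (L2 data term)
    J = jacobian_fn(x)                    # Jacobian of residuals w.r.t. x
    # Linearized inextensibility: keep each mesh edge near its rest length.
    A_rows, b_rows = [], []
    verts = x.reshape(-1, 3)              # x stacks vertices as (v0, v1, ...)
    for k, (i, j) in enumerate(edges):
        d = verts[i] - verts[j]
        n = np.linalg.norm(d) + 1e-12
        row = np.zeros(x.size)
        row[3 * i:3 * i + 3] = d / n      # gradient of edge length w.r.t. v_i
        row[3 * j:3 * j + 3] = -d / n
        A_rows.append(row)
        b_rows.append(n - rest_len[k])
    A = np.array(A_rows)
    b = np.array(b_rows)
    H = J.T @ J + lam * A.T @ A           # lam balances data vs. inextensibility
    g = J.T @ r + lam * A.T @ b
    return x - np.linalg.solve(H, g)
```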


Pattern Recognition Letters | 2008

Model based human motion tracking using probability evolutionary algorithm

Shuhan Shen; Minglei Tong; Haolong Deng; Yuncai Liu; Xiaojun Wu; Kaoru Wakabayashi; Hideki Koike

A novel evolutionary algorithm called the probability evolutionary algorithm (PEA), and a PEA-based method for visual tracking of human motion, are presented. PEA is inspired by estimation of distribution algorithms and the quantum-inspired evolutionary algorithm, and it achieves a good balance between exploration and exploitation with very fast computation. An individual in PEA is encoded by a probability vector, defined as the smallest unit of information for the probabilistic representation. An observation step obtains the observed states of the individual, and an update operator evolves the individual. In the PEA-based human tracking framework, tracking is treated as a function optimization problem, so the aim is to optimize the matching function between the model and the image observation. Since the matching function is very complex and high-dimensional, PEA is used to optimize it. Experiments on 2D and 3D human motion tracking demonstrate the effectiveness, significance, and computational efficiency of the proposed tracking method.
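
A minimal sketch of a probability-vector optimizer in this spirit (closer to a compact GA/PBIL than to the paper's exact PEA): each individual is a vector of bit probabilities that is repeatedly sampled ("observed") and nudged toward the better sample. The learning rate, clipping bounds, and encoding are assumptions.

```python
# Rough probability-vector evolutionary loop; not the paper's exact PEA.
import numpy as np

def pea_minimize(cost, n_bits, decode, iters=500, lr=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    p = np.full(n_bits, 0.5)                     # probability-vector individual
    best_bits, best_cost = None, np.inf
    for _ in range(iters):
        a = rng.random(n_bits) < p               # observation step: two samples
        b = rng.random(n_bits) < p
        ca, cb = cost(decode(a)), cost(decode(b))
        winner = a if ca <= cb else b
        p += lr * (winner.astype(float) - p)     # update operator
        p = np.clip(p, 0.01, 0.99)               # keep some exploration
        w_cost = min(ca, cb)
        if w_cost < best_cost:
            best_bits, best_cost = winner.copy(), w_cost
    return decode(best_bits), best_cost

# Toy usage: minimise a 1-D quadratic whose argument is encoded in 16 bits.
decode = lambda bits: bits @ (2.0 ** np.arange(16)) / 65535.0 * 10.0 - 5.0
x, c = pea_minimize(lambda x: (x - 1.3) ** 2, 16, decode)
```

In the tracking setting, the decoded vector would parameterize the body pose and the cost would be the model-to-image matching function described above.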


Industrial Robot: An International Journal | 2012

A screw axis identification method for serial robot calibration based on the POE model

Haixia Wang; Shuhan Shen; Xiao Lu

Purpose - The purpose of this paper is to propose a screw axis identification (SAI) method based on the product of exponentials (POE) model, which is concerned with calibrating a serial robot with m joints equipped with a stereo-camera vision system.
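
For context, a brief sketch of the product-of-exponentials (POE) forward-kinematics model that such a calibration builds on: the end-effector pose is the product of the matrix exponentials of the joint twists times the home configuration. The twist construction below assumes purely revolute joints and is illustrative only, not the paper's identification procedure.

```python
# POE forward kinematics sketch: pose = exp(xi_1 th_1) ... exp(xi_m th_m) @ M.
import numpy as np
from scipy.linalg import expm

def twist_matrix(s, q):
    """4x4 se(3) matrix for a unit rotation axis s passing through point q."""
    S = np.array([[0, -s[2], s[1]],
                  [s[2], 0, -s[0]],
                  [-s[1], s[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3] = S
    T[:3, 3] = -S @ q          # v = -s x q for a revolute joint
    return T

def poe_forward(axes, points, thetas, M):
    """Chain the joint exponentials and apply the home configuration M."""
    T = np.eye(4)
    for s, q, th in zip(axes, points, thetas):
        T = T @ expm(twist_matrix(np.asarray(s, float), np.asarray(q, float)) * th)
    return T @ M
```

Calibration then amounts to adjusting the screw-axis parameters (s, q) so that the predicted poses match the stereo-camera measurements.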


IEEE Transactions on Image Processing | 2015

Efficient Large-Scale Structure From Motion by Fusing Auxiliary Imaging Information

Hainan Cui; Shuhan Shen; Wei Gao; Zhanyi Hu

One of the potentially effective means for large-scale 3D scene reconstruction is to reconstruct the scene globally, rather than incrementally, by fully exploiting available auxiliary information on the imaging conditions, such as camera location from the Global Positioning System (GPS), orientation from an inertial measurement unit (or compass), focal length from EXIF, and so on. However, such auxiliary information, though informative and valuable, is usually too noisy to be directly usable. In this paper, we present an approach that takes advantage of such noisy auxiliary information to improve structure-from-motion solving. More specifically, we introduce two effective iterative global optimization algorithms initialized with this noisy auxiliary information. One is a robust rotation averaging algorithm that deals with a contaminated epipolar graph; the other is a robust scene reconstruction algorithm that deals with noisy GPS data for camera-center initialization. We found that, by exclusively focusing on the estimated inliers at the current iteration, the optimization process initialized with such noisy auxiliary information converges well and efficiently. The proposed method is evaluated on real images captured by an unmanned aerial vehicle, a StreetView car, and conventional digital cameras. Extensive experimental results show that our method performs comparably to or better than many state-of-the-art reconstruction approaches in terms of reconstruction accuracy and completeness, while being more efficient and scalable for large-scale image data sets.
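
The "iterate while focusing on the current inliers" idea, used for both the rotation-averaging and the GPS-initialized reconstruction steps, can be illustrated with an IRLS-style robust average; the Huber-like weighting and threshold below are assumptions, not the paper's exact scheme.

```python
# IRLS-style robust averaging: large residuals are progressively down-weighted,
# so each iteration effectively focuses on the current inliers.
import numpy as np

def robust_mean(measurements, inlier_thresh=1.0, iters=20):
    """Robust estimate of a vector quantity from noisy/outlying measurements."""
    x = np.median(measurements, axis=0)          # robust initialisation
    for _ in range(iters):
        res = np.linalg.norm(measurements - x, axis=1)
        w = np.where(res < inlier_thresh, 1.0,   # inliers keep full weight,
                     inlier_thresh / (res + 1e-12))  # outliers are down-weighted
        x = (w[:, None] * measurements).sum(axis=0) / w.sum()
    return x, res < inlier_thresh                # estimate + final inlier mask

# Toy usage: noisy GPS positions of one camera centre plus two gross outliers.
gps = np.vstack([np.random.randn(20, 3) * 0.1 + [5, 2, 1],
                 [[50, 0, 0], [0, 80, 0]]])
centre, inliers = robust_mean(gps)
```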


Pattern Recognition Letters | 2015

Deep neural network based image annotation

Songhao Zhu; Zhe Shi; Chengjian Sun; Shuhan Shen

Highlights: (i) A multimodal deep learning framework is proposed to improve annotation performance. (ii) The framework learns to fine-tune the parameters of each individual modality. (iii) The framework learns to find the optimal combination of diverse modalities. (iv) Experiments on NUS-WIDE evaluate the performance of the proposed framework.

Multilabel image annotation is one of the most important open problems in computer vision. Unlike existing works that usually use conventional visual features to annotate images, features based on deep learning have shown the potential to achieve outstanding performance. In this work, we propose a multimodal deep learning framework that aims to optimally integrate multiple deep neural networks pretrained with convolutional neural networks. In particular, the proposed framework explores a unified two-stage learning scheme that consists of (i) learning to fine-tune the parameters of the deep neural network for each individual modality, and (ii) learning to find the optimal combination of diverse modalities in a coherent process. Experiments conducted on a variety of public datasets evaluate the performance of the proposed framework for multilabel image annotation, and the encouraging results validate the effectiveness of the proposed algorithms.
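
The two-stage scheme could be sketched as below: per-modality branches tuned in stage one are frozen while a small fusion head learns how to combine their predictions for multilabel annotation. Module names, layer sizes, and the softmax mixing weights are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of stage-1 per-modality branches plus a stage-2 fusion head.
import torch
import torch.nn as nn

class ModalityNet(nn.Module):
    """Stand-in for a CNN-pretrained branch producing per-label scores."""
    def __init__(self, feat_dim, n_labels):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, n_labels))

    def forward(self, x):
        return self.head(x)

class LateFusion(nn.Module):
    """Stage 2: learn a weighted combination of the (frozen) modality scores."""
    def __init__(self, branches, n_labels):
        super().__init__()
        self.branches = nn.ModuleList(branches)
        for p in self.branches.parameters():
            p.requires_grad_(False)              # branches were tuned in stage 1
        self.mix = nn.Parameter(torch.ones(len(branches)) / len(branches))

    def forward(self, inputs):                   # one feature tensor per modality
        scores = torch.stack([b(x) for b, x in zip(self.branches, inputs)])
        w = torch.softmax(self.mix, dim=0)
        return torch.sigmoid((w[:, None, None] * scores).sum(dim=0))

# Toy usage: two modalities with 512-/128-d features, 81 labels (NUS-WIDE size).
model = LateFusion([ModalityNet(512, 81), ModalityNet(128, 81)], 81)
probs = model([torch.randn(4, 512), torch.randn(4, 128)])
```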


Asian Conference on Computer Vision | 2009

Monocular Template-Based Tracking of Inextensible Deformable Surfaces Under L2-Norm

Shuhan Shen; Wenhuan Shi; Yuncai Liu

We present a method for recovering the 3D shape of an inextensible deformable surface from a monocular image sequence. The state-of-the-art method for this problem [1] uses the L∞-norm of the reprojection residual vectors and formulates tracking as a Second Order Cone Programming (SOCP) problem. Instead of the outlier-sensitive L∞-norm, we use the L2-norm of the reprojection errors. In general, using L2 leads to a non-convex optimization problem that is difficult to minimize. Instead of solving the non-convex problem directly, we design an iterative L2-norm approximation process that approximates the non-convex objective function, in which only a linear system needs to be solved at each iteration. Furthermore, we introduce a shape regularization term into this iterative process to preserve the inextensibility of the recovered mesh. Compared with previous methods, ours is more robust to outliers and large inter-frame motions, with high computational efficiency. The robustness and accuracy of our approach are evaluated quantitatively on synthetic data and qualitatively on real data.


Computer Vision and Pattern Recognition | 2017

HSfM: Hybrid Structure-from-Motion

Hainan Cui; Xiang Gao; Shuhan Shen; Zhanyi Hu

Structure-from-Motion (SfM) methods can be broadly categorized as incremental or global according to how they estimate initial camera poses. While incremental systems have advanced in robustness and accuracy, efficiency remains their key challenge. To address this, global reconstruction systems simultaneously estimate all camera poses from the epipolar-geometry graph, but they are usually sensitive to outliers. In this work, we propose a new hybrid SfM method that tackles efficiency, accuracy, and robustness in a unified framework. More specifically, we first propose an adaptive community-based rotation averaging method to estimate camera rotations in a global manner. Then, based on these estimated rotations, camera centers are computed incrementally. Extensive experiments show that our hybrid method performs comparably to or better than many state-of-the-art global SfM approaches in terms of computational efficiency, while achieving reconstruction accuracy and robustness similar to two state-of-the-art incremental SfM approaches.
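
The global first stage can be illustrated by propagating pairwise relative rotations over a spanning tree of the epipolar-geometry graph to obtain initial absolute rotations; a real system would then refine these with (community-based) rotation averaging. The graph layout and rotation conventions below are assumptions for illustration only.

```python
# Spanning-tree initialisation of absolute camera rotations from pairwise
# relative rotations; refinement by rotation averaging is omitted.
import numpy as np
from collections import defaultdict, deque

def spanning_tree_rotations(n_cams, rel_rots):
    """rel_rots: dict {(i, j): R_ij} with R_j = R_ij @ R_i."""
    graph = defaultdict(list)
    for (i, j), R in rel_rots.items():
        graph[i].append((j, R))
        graph[j].append((i, R.T))                # reverse edge uses the inverse
    R_abs = {0: np.eye(3)}                       # anchor camera 0 at identity
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j, R_ij in graph[i]:
            if j not in R_abs:
                R_abs[j] = R_ij @ R_abs[i]       # chain along the tree
                queue.append(j)
    return [R_abs.get(k, np.eye(3)) for k in range(n_cams)]

# Toy usage: three cameras rotated about the z-axis in 10-degree steps.
def rz(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

rots = spanning_tree_rotations(3, {(0, 1): rz(10), (1, 2): rz(10)})
```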


IEEE Transactions on Image Processing | 2014

How to Select Good Neighboring Images in Depth-Map Merging Based 3D Modeling

Shuhan Shen; Zhanyi Hu

Depth-map merging based 3D modeling is an effective approach for reconstructing large-scale scenes from multiple images. In addition to generating a high-quality depth map for each image, selecting suitable neighboring images for each image is also an important step in the reconstruction pipeline, one to which, unfortunately, little attention has been paid in the literature until now. This paper tackles this issue for large-scale scene reconstruction, where many unordered images are captured with substantially varying scales and view angles. We formulate neighboring image selection as a combinatorial optimization problem and use the quantum-inspired evolutionary algorithm to seek its optimal solution. Experimental results on a ground-truth data set show that our approach significantly improves the quality of the depth maps as well as the final 3D reconstruction results, with high computational efficiency.
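
A simplified sketch of the neighbor-selection criterion: score each candidate view against the reference by baseline angle and scale compatibility and keep the best k. The paper instead optimizes the selection jointly with a quantum-inspired evolutionary algorithm; the scoring terms, preferred angle, and weights here are assumptions.

```python
# Greedy neighbour selection by baseline-angle and scale scoring (simplified).
import numpy as np

def view_score(c_ref, c_cand, scene_pt, f_ref, f_cand,
               best_angle_deg=15.0, sigma_angle=10.0):
    """Higher is better: favour moderate baselines and similar image scales."""
    v1 = c_ref - scene_pt
    v2 = c_cand - scene_pt
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    angle_term = np.exp(-(angle - best_angle_deg) ** 2 / (2 * sigma_angle ** 2))
    scale_term = min(f_ref, f_cand) / max(f_ref, f_cand)   # penalise scale gaps
    return angle_term * scale_term

def select_neighbours(ref_centre, ref_focal, cands, scene_pt, k=4):
    """cands: list of (centre, focal).  Returns indices of the k best views."""
    scored = sorted(((view_score(ref_centre, c, scene_pt, ref_focal, f), i)
                     for i, (c, f) in enumerate(cands)), reverse=True)
    return [i for _, i in scored[:k]]

# Toy usage: pick 2 neighbours for a reference camera at the origin.
cams = [(np.array([1.0, 0, 0]), 1000.0), (np.array([0, 5.0, 0]), 400.0),
        (np.array([0.5, 0.5, 0]), 950.0)]
best = select_neighbours(np.zeros(3), 1000.0, cams, np.array([0, 0, 10.0]), k=2)
```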

Collaboration


Dive into Shuhan Shen's collaborations.

Top Co-Authors

Yuncai Liu (Shanghai Jiao Tong University)
Zhanyi Hu (Chinese Academy of Sciences)
Hainan Cui (Chinese Academy of Sciences)
Xiang Gao (Chinese Academy of Sciences)
Chenhao Wang (Shanghai Jiao Tong University)
Wenhuan Shi (Shanghai Jiao Tong University)
Wenjuan Ma (Shanghai Jiao Tong University)
Xiaofeng Sun (Chinese Academy of Sciences)
Yang Zhou (Chinese Academy of Sciences)
Haolong Deng (Shanghai Jiao Tong University)