Nanning Zheng
Xi'an Jiaotong University
Publications
Featured research published by Nanning Zheng.
IEEE Transactions on Intelligent Transportation Systems | 2004
Qing Li; Nanning Zheng; Hong Cheng
This work presents the current status of the Springrobot autonomous vehicle project, whose main objective is to develop a safety-warning and driver-assistance system and an automatic pilot for rural and urban traffic environments. The system uses a highly precise digital map and a combination of various sensors. The architecture and strategy of the system are briefly described, and the details of the lane-marking detection algorithms are presented. The R and G channels of the color image are used to form gray-level images. The size of the resulting gray image is reduced, and the Sobel operator with a very low threshold is applied to obtain a grayscale edge image. In the adaptive randomized Hough transform, pixels of the gray edge image are sampled randomly with weights proportional to their gradient magnitudes. The three-dimensional (3-D) parameter space of the curve is reduced to a two-dimensional (2-D) and a one-dimensional (1-D) space. The paired parameters in two dimensions are estimated from gradient directions, and the remaining one-dimensional parameter is used to verify the estimates via a histogram. The parameters are first determined coarsely, and a multiresolution strategy then increases the quantization accuracy. Experimental results in different road scenes and a comparison with other methods demonstrate the validity of the proposed method.
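As an illustration of the weighted sampling step described above, the following minimal Python sketch (not the authors' implementation) draws edge pixels with probability proportional to their Sobel gradient magnitude; the function name, threshold, and sample count are illustrative assumptions.

```python
# Minimal sketch of gradient-magnitude-weighted edge-pixel sampling, the first
# step of an adaptive randomized Hough transform. I is assumed to be a 2-D
# float gray image built from the R and G channels as described above.
import numpy as np
from scipy import ndimage

def sample_edge_pixels(I, n_samples=500, low_threshold=10.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    gx = ndimage.sobel(I, axis=1)                 # Sobel gradient along x
    gy = ndimage.sobel(I, axis=0)                 # Sobel gradient along y
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > low_threshold)      # very low threshold, as in the paper
    weights = mag[ys, xs]
    weights = weights / weights.sum()             # sampling weights proportional to magnitude
    idx = rng.choice(len(xs), size=n_samples, replace=True, p=weights)
    # Gradient directions of the sampled pixels; these would constrain the
    # paired curve parameters in the reduced 2-D space.
    theta = np.arctan2(gy[ys[idx], xs[idx]], gx[ys[idx], xs[idx]])
    return xs[idx], ys[idx], theta
```

The sampled coordinates and gradient directions would then be used to estimate the paired parameters, with the remaining parameter verified by a histogram as the abstract describes.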
International Conference on Computer Vision | 2001
Hong Chen; Ying-Qing Xu; Heung-Yeung Shum; Song-Chun Zhu; Nanning Zheng
In this paper, we present an example-based facial sketch system. Our system automatically generates a sketch from an input image, by learning from example sketches drawn with a particular style by an artist. There are two key elements in our system: a non-parametric sampling method and a flexible sketch model. Given an input image pixel and its neighborhood, the conditional distribution of a sketch point is computed by querying the examples and finding all similar neighborhoods. An expected sketch image is then drawn from the distribution to reflect the drawing style. Finally, facial sketches are obtained by incorporating the sketch model. Experimental results demonstrate the effectiveness of our techniques.
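The core non-parametric sampling step can be pictured with a small sketch like the following (a simplified illustration, not the paper's actual system): for each input patch, the k most similar example neighborhoods are retrieved and their sketch values form a weighted conditional distribution. The `examples` structure, neighborhood size, and weighting scheme are all assumptions.

```python
# Simplified, hypothetical sketch of non-parametric sampling for sketch generation.
# `examples` is assumed to be a list of (image_patch, sketch_value) pairs taken
# from artist-drawn training sketches.
import numpy as np

def conditional_sketch_distribution(patch, examples, k=20):
    patch = patch.ravel().astype(float)
    dists = np.array([np.linalg.norm(patch - ex_patch.ravel()) for ex_patch, _ in examples])
    nearest = np.argsort(dists)[:k]                         # k most similar neighborhoods
    weights = np.exp(-dists[nearest] / (dists[nearest].mean() + 1e-8))
    weights /= weights.sum()
    values = np.array([examples[i][1] for i in nearest], dtype=float)
    return values, weights                                  # empirical conditional distribution

def expected_sketch_value(patch, examples, k=20):
    values, weights = conditional_sketch_distribution(patch, examples, k)
    return float(values @ weights)                          # expectation used for the sketch image
```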
IEEE Transactions on Intelligent Transportation Systems | 2007
Hong Cheng; Nanning Zheng; Xuetao Zhang; Junjie Qin; H. van de Wetering
Road situation analysis in Interactive Intelligent Driver-Assistance and Safety Warning (I2DASW) systems involves estimating and predicting the position and size of various on-road obstacles. Real-time processing given incomplete and uncertain information is a challenge for current object detection and tracking technologies. This paper proposes a development framework and novel algorithms for road situation analysis based on driving behavior, where the safety situation is analyzed by simulating real driving actions. First, we review recent developments and trends in road situation analysis to provide perspective on the related research. Second, we introduce a road situation analysis framework in which onboard sensors provide information about drivers, the traffic environment, and vehicles. Finally, on the basis of this framework, we propose multiple-obstacle detection and tracking algorithms using multiple sensors, including radar, lidar, and a camera, where a decentralized track-to-track fusion approach is introduced to fuse these sensors. To reduce the effect of obstacle shape and appearance, we cluster the lidar data and then classify obstacles into two categories: static and moving objects. Future collisions are assessed by computing local tracks of moving obstacles with an extended Kalman filter, fusing the distributed local tracks into global tracks with maximum-likelihood estimation, and finally computing the future collision distribution from the global tracks. Our experimental results show that our approach is efficient for road situation evaluation and prediction.
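To make the fusion step concrete, here is a minimal sketch of maximum-likelihood track-to-track fusion in information form, assuming the cross-covariance between local tracks is negligible; the sensor names and numbers are illustrative, not taken from the paper.

```python
# Minimal sketch of ML track-to-track fusion: two local tracks of the same
# obstacle (state estimate + covariance), e.g. from radar and lidar trackers,
# are fused in information form. Cross-covariance is ignored for simplicity.
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two local track estimates into one global track."""
    info1, info2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P_fused = np.linalg.inv(info1 + info2)
    x_fused = P_fused @ (info1 @ x1 + info2 @ x2)
    return x_fused, P_fused

# Illustrative example: fusing 2-D position estimates from two sensors.
x_radar, P_radar = np.array([12.3, 4.1]), np.diag([0.5, 0.8])
x_lidar, P_lidar = np.array([12.0, 4.3]), np.diag([0.2, 0.2])
x_global, P_global = fuse_tracks(x_radar, P_radar, x_lidar, P_lidar)
```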
IEEE Intelligent Systems | 2005
Li Li; Jingyan Song; Fei-Yue Wang; Wolfgang Niehsen; Nanning Zheng
We discuss several selected topics from IVS 05 to provide a broad overview of intelligent-vehicle research perspectives and innovative projects. Specifically, we focus on advances in vehicle sensing, vehicle motion control and communications, and driver assistance and monitoring.
IEEE Transactions on Intelligent Transportation Systems | 2012
Li Li; Ding Wen; Nanning Zheng; Lin-Cheng Shen
This paper provides a survey of recent works on cognitive cars with a focus on driver-oriented intelligent vehicle motion control. The main objective here is to clarify the goals and guidelines for future development in the area of advanced driver-assistance systems (ADASs). Two major research directions are investigated and discussed in detail: (1) stimuli-decisions-actions, which focuses on the driver side, and (2) perception enhancement-action-suggestion-function-delegation, which emphasizes the ADAS side. This paper addresses the important achievements and major difficulties of each direction and discusses how to combine the two directions into a single integrated system to obtain safety and comfort while driving. Other related topics, including driver training and infrastructure design, are also studied.
ACM Multimedia | 2002
Hong Chen; Nanning Zheng; Lin Liang; Yan Li; Ying-Qing Xu; Heung-Yeung Shum
In this paper, we present PicToon, a cartoon system that can generate a personalized cartoon face from an input picture. PicToon is easy to use and requires little user interaction. Our system consists of three major components: an image-based Cartoon Generator, an interactive Cartoon Editor for exaggeration, and a speech-driven Cartoon Animator. First, to capture an artistic style, cartoon generation is decoupled into two processes: sketch generation and stroke rendering. An example-based approach is taken to automatically generate sketch lines that depict the facial structure. Inhomogeneous non-parametric sampling combined with a flexible facial template is employed to extract the vector-based facial sketch. Various styles of strokes can then be applied. Second, with the pre-designed templates in the Cartoon Editor, the user can easily make the cartoon exaggerated or more expressive. Third, a real-time lip-syncing algorithm is developed that recovers a statistical audio-visual mapping between the character's voice and the corresponding lip configuration. Experimental results demonstrate the effectiveness of our system.
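As a toy illustration of what a statistical audio-visual mapping could look like (not PicToon's actual algorithm), the sketch below fits a linear map from per-frame audio features to lip-shape parameters by least squares and uses it to predict lip configurations frame by frame; the feature choices and function names are assumptions.

```python
# Toy audio-visual mapping: learn a linear regression from per-frame audio
# features (e.g. MFCC vectors) to lip-shape parameters, then predict lip
# configurations for new audio frames.
import numpy as np

def fit_audio_to_lip_map(audio_feats, lip_params):
    """audio_feats: (T, d_audio); lip_params: (T, d_lip)."""
    A = np.hstack([audio_feats, np.ones((len(audio_feats), 1))])   # append bias term
    W, *_ = np.linalg.lstsq(A, lip_params, rcond=None)             # least-squares fit
    return W

def predict_lip(audio_frame, W):
    a = np.append(audio_frame, 1.0)
    return a @ W                                                   # predicted lip configuration
```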
IEEE Intelligent Systems | 2011
Ding Wen; Gongjun Yan; Nanning Zheng; Lin-Cheng Shen; Li Li
With more cars on the road, traffic becomes more congested and streets become riskier. In addition, new communication and entertainment applications leave drivers ever more overburdened and distracted. To relieve the continually increasing stress on drivers and reduce the number of accidents, current intelligent vehicle research attempts to understand and model drivers. This article surveys recent work on cognitive vehicles, which models drivers in a stimuli-decision-reaction mode and, on the vehicle-system side, improves perception of the traffic environment, action suggestion, and function delegation. The authors illustrate the relationships between recent models and methods and list related research challenges, while introducing applications of driver-cognition models in intelligent vehicle control systems.
International Symposium on Neural Networks | 2006
Hong Cheng; Nanning Zheng; Chong Sun; Huub van de Wetering
Robust and reliable vehicle detection is a challenging task under conditions of variable size and distance, varying weather and illumination, cluttered backgrounds, and relative motion between the host vehicle and the background. In this paper we investigate real-time vehicle detection using machine vision for active safety in vehicle applications. The conventional search method for vehicle detection is a full search over an image pyramid, which processes every image patch in the same way and at the same computational cost, even for regions that, according to prior knowledge, contain no vehicles. Our vehicle detection approach includes two basic phases. In the hypothesis generation phase, we determine the Regions of Interest (ROI) in an image according to lane vanishing points; furthermore, near, middle, and far ROIs, each with a different resolution, are extracted from the image. From the analysis of horizontal and vertical edges in the image, vehicle hypothesis lists are generated for each ROI. Finally, a hypothesis list for the whole image is obtained by combining these three lists. In the hypothesis validation phase, we propose a vehicle validation approach using a Support Vector Machine (SVM) and Gabor features. The experimental results show that the average correct-detection rate reaches 90% and the average execution time is 30 ms on a 2.4-GHz Pentium 4 CPU.
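A simplified sketch of the hypothesis-validation idea follows: Gabor filter responses are extracted from each candidate region and classified with an SVM as vehicle or non-vehicle. The patch size, filter bank, per-filter statistics, and SVM settings here are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of SVM + Gabor hypothesis validation. Candidate patches are assumed
# to be 2-D grayscale arrays cropped from the hypothesis-generation ROIs.
import numpy as np
from skimage.filters import gabor
from skimage.transform import resize
from sklearn.svm import SVC

def gabor_features(patch, frequencies=(0.1, 0.2), n_orientations=4):
    patch = resize(patch, (32, 32))                     # normalize candidate size
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, _ = gabor(patch, frequency=f, theta=k * np.pi / n_orientations)
            feats += [real.mean(), real.var()]          # simple per-filter statistics
    return np.array(feats)

def train_validator(patches, labels):
    """patches: list of candidate regions; labels: 1 = vehicle, 0 = non-vehicle."""
    X = np.stack([gabor_features(p) for p in patches])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, labels)
    return clf
```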
IEEE Transactions on Intelligent Transportation Systems | 2014
Wuling Huang; Ding Wen; Jason Geng; Nanning Zheng
Performance evaluation is an important part of unmanned ground vehicle (UGV) development; it helps to uncover research problems and improves driving safety. In this paper, a task-specific performance evaluation model for UGVs applied in the Intelligent Vehicle Future Challenge (IVFC) annual competitions is discussed. The model is defined at the functional level with a formal evaluation process, including metric analysis, metric preprocessing, weight calculation, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) combined with fuzzy comprehensive evaluation methods. IVFC 2012 is selected as a case study, and the overall performance of five UGVs is evaluated on specific autonomous-driving tasks: environment perception, structured on-road driving, unstructured-zone driving, and dynamic path planning. The model proves helpful for evaluating UGV performance across the IVFC competition series.
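For reference, a minimal TOPSIS sketch for ranking vehicles from a (vehicles x metrics) decision matrix is shown below; the weights and the assumption that all metrics are benefit-type (larger is better) are illustrative simplifications, not the paper's setup.

```python
# Minimal TOPSIS ranking: vector-normalize the decision matrix, weight it,
# then score each vehicle by its relative closeness to the ideal solution.
import numpy as np

def topsis(decision_matrix, weights):
    X = np.asarray(decision_matrix, dtype=float)        # shape (n_vehicles, n_metrics)
    norm = X / np.linalg.norm(X, axis=0)                # normalize each metric column
    V = norm * weights                                  # weighted normalized matrix
    ideal, anti_ideal = V.max(axis=0), V.min(axis=0)    # benefit-type metrics assumed
    d_pos = np.linalg.norm(V - ideal, axis=1)           # distance to ideal solution
    d_neg = np.linalg.norm(V - anti_ideal, axis=1)      # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)                 # relative closeness in [0, 1]
    return np.argsort(-closeness), closeness            # ranking (best first) and scores

# Illustrative use: five vehicles scored on three metrics with equal weights.
ranking, scores = topsis(np.random.rand(5, 3), np.array([1/3, 1/3, 1/3]))
```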
IEEE Intelligent Vehicles Symposium | 2009
Xuetao Zhang; Nanning Zheng; Fan Mu; Yongjian He
In this paper, we consider the problem of estimating the pose of a driver from video data. We propose to use isophote features to improve classification performance when illumination varies. In particular, we use both direction and curvature features, which are encoded into direction and curvature histograms over blocks of the image. Experimental results show that the proposed features describe the structure of the driver's face well and perform well in real scenes.
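A minimal sketch of block-wise isophote direction and curvature histograms, under illustrative assumptions about block size, bin counts, and smoothing scale, is shown below; it uses the standard isophote curvature formula k = -(Ly^2 Lxx - 2 Lx Ly Lxy + Lx^2 Lyy) / (Lx^2 + Ly^2)^(3/2).

```python
# Sketch of block-wise isophote direction and curvature histograms for a
# grayscale face image I; block size, bin counts, and histogram ranges are
# illustrative choices, not the paper's exact settings.
import numpy as np
from scipy import ndimage

def isophote_histograms(I, block=16, n_bins=8, sigma=1.0):
    Lx  = ndimage.gaussian_filter(I, sigma, order=(0, 1))   # d/dx
    Ly  = ndimage.gaussian_filter(I, sigma, order=(1, 0))   # d/dy
    Lxx = ndimage.gaussian_filter(I, sigma, order=(0, 2))
    Lyy = ndimage.gaussian_filter(I, sigma, order=(2, 0))
    Lxy = ndimage.gaussian_filter(I, sigma, order=(1, 1))
    grad2 = Lx**2 + Ly**2 + 1e-8
    curvature = -(Ly**2 * Lxx - 2 * Lx * Ly * Lxy + Lx**2 * Lyy) / grad2**1.5
    direction = np.arctan2(-Lx, Ly)                          # isophote (tangent) direction
    feats = []
    H, W = I.shape
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            d = direction[y:y+block, x:x+block].ravel()
            c = curvature[y:y+block, x:x+block].ravel()
            hd, _ = np.histogram(d, bins=n_bins, range=(-np.pi, np.pi))
            hc, _ = np.histogram(c, bins=n_bins, range=(-1.0, 1.0))
            feats.append(np.concatenate([hd, hc]))
    return np.concatenate(feats)                             # concatenated block descriptors
```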