Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Clement Chun Cheong Pang is active.

Publication


Featured research published by Clement Chun Cheong Pang.


IEEE Transactions on Intelligent Transportation Systems | 2004

A novel method for resolving vehicle occlusion in a monocular traffic-image sequence

Clement Chun Cheong Pang; William Wai Leung Lam; Nelson Hon Ching Yung

This paper presents a novel method for resolving the occlusion of vehicles seen in a sequence of traffic images taken from a single roadside-mounted camera. Its concept is built upon a previously proposed vehicle-segmentation method, which extracts the vehicle shape from the background accurately, free of shadows and other visual artifacts. Based on the segmented shape and the assumption that it can be represented by a simple cuboid model, we propose a two-step method: first, detect the curvature of the shape contour to generate a set of curvature points for the occluded vehicles and, second, decompose the composite model into individual vehicle models using a vanishing point in three dimensions and that set of curvature points. The proposed method has been tested on a number of monocular traffic-image sequences and is found to detect the presence of occlusion correctly and to resolve most occlusion cases involving two vehicles; it fails only when the occlusion is very severe. Further analysis of the vehicle dimensions shows that the average estimation accuracies for vehicle width, length, and height are 94.78%, 94.09%, and 95.44%, respectively.
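
For illustration only, the sketch below shows one way the curvature-detection step could look in code: it flags high-curvature points on a polygonal mask contour by thresholding the turning angle at each vertex. It is a minimal sketch, not the authors' implementation, and the 45-degree threshold is an arbitrary placeholder.

```python
import numpy as np

def curvature_points(contour, angle_thresh_deg=45.0):
    """Flag contour vertices whose turning angle exceeds a threshold.

    contour: (N, 2) array of (x, y) points ordered along a closed contour.
    Returns the indices of high-curvature (candidate corner) points.
    """
    pts = np.asarray(contour, dtype=float)
    prev_vec = pts - np.roll(pts, 1, axis=0)    # segment arriving at each vertex
    next_vec = np.roll(pts, -1, axis=0) - pts   # segment leaving each vertex
    # Unsigned turning angle between the arriving and leaving segments.
    cross = prev_vec[:, 0] * next_vec[:, 1] - prev_vec[:, 1] * next_vec[:, 0]
    dot = (prev_vec * next_vec).sum(axis=1)
    angles = np.degrees(np.abs(np.arctan2(cross, dot)))
    return np.where(angles > angle_thresh_deg)[0]

# A rectangular mask contour yields its four corners as curvature points.
rect = np.array([[0, 0], [5, 0], [10, 0], [10, 4], [5, 4], [0, 4]])
print(curvature_points(rect))  # -> [0 2 3 5]
```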


IEEE Transactions on Intelligent Transportation Systems | 2007

A Method for Vehicle Count in the Presence of Multiple-Vehicle Occlusions in Traffic Images

Clement Chun Cheong Pang; William Wai Leung Lam; Nelson Hon Ching Yung

This paper proposes a novel method for accurately counting the number of vehicles that are involved in multiple-vehicle occlusions, based on the resolvability of each occluded vehicle, as seen in a monocular traffic image sequence. Assuming that the occluded vehicles are segmented from the road background by a previously proposed vehicle segmentation method and that a deformable model is geometrically fitted onto the occluded vehicles, the proposed method first deduces the number of vertices per individual vehicle from the camera configuration. Second, a contour description model is utilized to describe the direction of the contour segments with respect to their vanishing points, from which individual contour descriptions and the vehicle count are determined. Third, it assigns a resolvability index to each occluded vehicle based on a resolvability model, from which each occluded vehicle model is resolved and the vehicle dimensions are measured. The proposed method has been tested on 267 sets of real-world monocular traffic images containing 3074 vehicles with multiple-vehicle occlusions and is found to be 100% accurate in calculating the vehicle count, in comparison with human inspection. By comparing the estimated dimensions of the resolved generalized deformable model of the vehicle with the actual dimensions published by the manufacturers, the root-mean-square errors for the width, length, and height estimations are found to be 48, 279, and 76 mm, respectively.
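
The resolvability model itself is not reproduced here, so the toy sketch below only illustrates the general idea of scoring how resolvable an occluded vehicle is: the fraction of expected cuboid-model edges that remain visible, compared against a cut-off. The numbers and the rule are hypothetical, not the paper's model.

```python
def resolvability_index(visible_edges, expected_edges, min_resolvable=0.5):
    """Toy resolvability score (hypothetical, not the paper's model):
    the fraction of expected cuboid-model edges still visible for an
    occluded vehicle, compared against an arbitrary cut-off."""
    index = visible_edges / float(expected_edges)
    return index, index >= min_resolvable

# Two occluded vehicles: 7 of 9 expected edges visible vs. only 3 of 9.
for visible in (7, 3):
    score, ok = resolvability_index(visible, expected_edges=9)
    print(f"visible edges = {visible}: index = {score:.2f}, resolvable = {ok}")
```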


Optical Engineering | 2004

Highly accurate texture-based vehicle segmentation method

William Wai Leung Lam; Clement Chun Cheong Pang; Nelson Hon Ching Yung

In modern traffic surveillance, computer vision methods have often been employed to detect vehicles of interest because of the rich information content contained in an image. Segmentation of moving vehicles using image processing and analysis algorithms has been an important research topic in the past decade. However, segmentation results are strongly affected by two issues: moving cast shadows and reflective regions, both of which reduce accuracy and require postprocessing to alleviate the degradation. We propose an efficient and highly accurate texture-based method for extracting the boundary of vehicles from the stationary background that is free from the effect of moving cast shadows and reflective regions. The segmentation method utilizes the differences in textural property between the road, vehicle cast shadow, reflection on the vehicle, and the vehicle itself, rather than just the intensity differences between them. By further combining the luminance and chrominance properties into an OR map, a number of foreground vehicle masks are constructed through a series of morphological operations, where each mask describes the outline of a moving vehicle. The proposed method has been tested on real-world traffic image sequences and achieved an average error rate of 3.44% for 50 tested vehicle images.
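
As a rough sketch of the general idea (local texture cues combined into an OR map and cleaned up morphologically), the code below segments a synthetic frame by thresholding localized variance on luminance and chrominance channels. The window size and thresholds are arbitrary assumptions; this is not the published algorithm.

```python
import numpy as np
from scipy import ndimage

def texture_or_mask(luma, chroma, win=7, t_luma=40.0, t_chroma=15.0):
    """Illustrative OR-map segmentation (not the published method):
    threshold localized-variance (texture) maps on the luminance and
    chrominance channels, OR the two masks, then clean up morphologically."""
    def local_var(img):
        img = img.astype(float)
        mean = ndimage.uniform_filter(img, size=win)
        mean_sq = ndimage.uniform_filter(img * img, size=win)
        return mean_sq - mean * mean

    mask = (local_var(luma) > t_luma) | (local_var(chroma) > t_chroma)
    mask = ndimage.binary_opening(mask, iterations=2)   # remove speckle
    mask = ndimage.binary_closing(mask, iterations=2)   # fill small holes
    return mask

# Synthetic example: a textured "vehicle" patch on a flat background.
rng = np.random.default_rng(0)
luma = np.full((80, 80), 120.0)
luma[20:60, 20:60] += rng.normal(0, 30, size=(40, 40))  # textured region
chroma = np.zeros((80, 80))
print(texture_or_mask(luma, chroma).sum(), "foreground pixels")
```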


IEEE Transactions on Intelligent Transportation Systems | 2007

Vehicle-Component Identification Based on Multiscale Textural Couriers

William Wai Leung Lam; Clement Chun Cheong Pang; Nelson Hon Ching Yung

This paper presents a novel method for identifying vehicle components in a monocular traffic image sequence. In the proposed method, the vehicles are first divided into multiscale regions based on the center of gravity of the foreground vehicle mask and the calibrated-camera parameters. From these multiscale regions, textural couriers are generated based on the localized variances of the foreground vehicle image. A new scale-space model is subsequently created from the textural couriers to provide a topological structure of the vehicle. In this model, key feature points of the vehicle can be distinctly described based on the topological structure to determine the regions that are homogeneous in texture, from which vehicle components can be identified by segmenting the key feature points. Since no motion information is required to segment the vehicles prior to recognition, the proposed system can be used in situations where extensive observation time is not available or motion information is unreliable. The method can be used in real-world systems such as vehicle-shape reconstruction, vehicle classification, and vehicle recognition. It was demonstrated and tested on 200 different vehicle samples captured in routine outdoor traffic images and achieved an average error rate of 6.8% over a variety of vehicles and traffic scenes.
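
The sketch below gives a rough sense of what a localized-variance scale-space stack might look like, with each level computed over a larger window. It is a simplified stand-in for the textural couriers described above, and the window sizes are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

def variance_scale_space(image, scales=(3, 7, 15, 31)):
    """Stack of localized-variance maps at increasing window sizes,
    a rough stand-in for a multiscale textural-courier representation."""
    image = image.astype(float)
    stack = []
    for win in scales:
        mean = ndimage.uniform_filter(image, size=win)
        mean_sq = ndimage.uniform_filter(image * image, size=win)
        stack.append(mean_sq - mean * mean)   # localized variance at this scale
    return np.stack(stack)                    # shape: (num_scales, H, W)

rng = np.random.default_rng(1)
img = rng.normal(128, 20, size=(64, 64))      # synthetic textured patch
print(variance_scale_space(img).shape)        # -> (4, 64, 64)
```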


IEEE Transactions on Intelligent Transportation Systems | 2015

Real-Time Estimation of Lane-to-Lane Turning Flows at Isolated Signalized Junctions

Seunghyeon Lee; Sc Wong; Clement Chun Cheong Pang; Keechoo Choi

In this paper, we develop rule- and model-based approaches for the real-time estimation of lane-to-lane turning flows. Our aim is to determine the turning proportions of vehicles based on detector information at isolated signalized junctions and thereby establish effective control strategies for adaptive traffic control systems. The key concept involves identifying the entrance lane of a vehicle detected in an exit lane at the signalized junction. Lane-to-lane turning flows are estimated by tracing the corresponding entrance lanes of the vehicle based on the detector and signal information from the set of potential entrance lanes at the junction. In the rule-based approach, the entrance lane of a vehicle detected in an exit lane is identified according to a set of specified rules. The model-based approach, which is based on utility maximization, is used to identify the most probable turns in a set of potential upstream entrance lanes. Both computer simulations and real-world traffic data show that the model-based approach outperforms the rule-based approach, particularly when turning on red is allowed, and is capable of accurate estimation under a wide range of traffic conditions in real time. However, the rule-based approach is simpler and does not require calibration, which are positive assets when no prior data are available for calibration.
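
As a toy illustration of the rule-based idea, the sketch below matches a vehicle detected on an exit lane to the candidate entrance lane whose green interval, shifted by an assumed junction travel time, contains the detection time. The lane names, green windows, and travel time are hypothetical, and the published rule set is considerably richer.

```python
def match_entrance_lane(exit_time, candidates, travel_time=4.0):
    """Toy rule-based matcher (not the published rule set): pick the
    entrance lane whose green interval, shifted by an assumed junction
    travel time, contains the exit-detector actuation time."""
    feasible = []
    for lane, (green_start, green_end) in candidates.items():
        if green_start + travel_time <= exit_time <= green_end + travel_time:
            feasible.append(lane)
    # Ambiguous cases (several feasible lanes) would call for the
    # model-based, utility-maximizing approach described in the paper.
    return feasible

# Hypothetical green windows (seconds into the cycle) for three entrance lanes.
greens = {"N-through": (0, 30), "E-left": (35, 55), "S-right": (60, 85)}
print(match_entrance_lane(exit_time=42.0, candidates=greens))  # -> ['E-left']
```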


Electronic Imaging | 2003

Novel method for handling vehicle occlusion in visual traffic surveillance

Clement Chun Cheong Pang; William Wai Leung Lam; Nelson Hon Ching Yung

This paper presents a novel algorithm for handling occlusion in visual traffic surveillance (VTS) by geometrically splitting the model that has been fitted onto the composite binary vehicle mask of two occluded vehicles. The proposed algorithm consists of a critical points detection step, a critical points clustering step and a model partition step using the vanishing point of the road. The critical points detection step detects the major critical points on the contour of the binary vehicle mask. The critical points clustering step selects the best critical points among the detected critical points as the reference points for the model partition. The model partition step partitions the model by exploiting the information of the vanishing point of the road and the selected critical points. The proposed algorithm was tested on a number of real traffic image sequences, and has demonstrated that it can successfully partition the model that has been fitted onto two occluded vehicles. To evaluate the accuracy, the dimensions of each individual vehicle are estimated based on the partitioned model. The estimation accuracies in vehicle width, length and height are 95.5%, 93.4% and 97.7% respectively.
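
The model-partition step can be pictured as cutting the composite mask along a line from a selected critical point toward the road's vanishing point. The sketch below does exactly that for a handful of made-up pixel coordinates; it illustrates the geometry only, not the authors' partition algorithm.

```python
import numpy as np

def split_by_vanishing_line(points, critical_pt, vanishing_pt):
    """Partition 2-D points by the line through a chosen critical point
    and the road's vanishing point (an illustrative stand-in for the
    model-partition step)."""
    p = np.asarray(critical_pt, dtype=float)
    v = np.asarray(vanishing_pt, dtype=float)
    d = v - p                                    # direction of the cut line
    rel = np.asarray(points, dtype=float) - p
    side = d[0] * rel[:, 1] - d[1] * rel[:, 0]   # sign of the cross product
    pts = np.asarray(points)
    return pts[side > 0], pts[side <= 0]

# Hypothetical composite-mask pixels and a vanishing point above the scene.
pixels = np.array([[2, 10], [3, 11], [8, 10], [9, 11]])
left, right = split_by_vanishing_line(pixels, critical_pt=(5, 12),
                                      vanishing_pt=(5, -100))
print(len(left), len(right))  # -> 2 2
```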


International Conference on Image Processing | 2004

Multiscale space vehicle component identification

William Wai Leung Lam; Clement Chun Cheong Pang; Nelson Hon Ching Yung

Vision-based vehicle recognition systems play an important role in traffic surveillance. Most of these systems, however, fail to distinguish vehicles of similar dimensions because other details are lacking. This paper presents a new scale-space method for identifying components of moving vehicles to eventually enable recognition. In the proposed method, vehicles are first divided into multiscale regions based on the center of gravity of the foreground vehicle mask. The method utilizes both the texture scale space and the intensity scale space to determine regions that are homogeneous in texture and intensity, from which vehicle components are identified based on the relations between these regions. It was tested on over a hundred outdoor traffic images, and the results are very promising.
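
A minimal way to picture the first step, dividing a foreground mask into multiscale regions about its centre of gravity, is to label pixels by concentric distance bands, as in the sketch below. The band radii are arbitrary, and the published method additionally uses texture and intensity scale spaces rather than plain distance bands.

```python
import numpy as np

def multiscale_regions(mask, radii=(5, 10, 20, 40)):
    """Label foreground pixels by concentric distance bands around the
    mask's centre of gravity (an illustrative take on 'multiscale regions';
    the radii here are arbitrary placeholders)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()            # centre of gravity
    dist = np.hypot(ys - cy, xs - cx)
    labels = np.digitize(dist, radii)        # 0 = innermost band
    region_map = np.full(mask.shape, -1, dtype=int)   # -1 = background
    region_map[ys, xs] = labels
    return region_map

mask = np.zeros((60, 60), dtype=bool)
mask[10:50, 15:45] = True                    # toy vehicle mask
print(np.unique(multiscale_regions(mask)))   # -> [-1  0  1  2  3]
```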


Optical Engineering | 2013

Generalized camera calibration model for trapezoidal patterns on the road

Clement Chun Cheong Pang; Seakay Siqi Xie; Sc Wong; Keechoo Choi

We introduce a generalized camera calibration model that is able to determine the camera parameters without requiring perfect rectangular road-lane markings, thus overcoming the limitations of state-of-the-art calibration models. The advantage of the new model is that it can cope with situations where road-lane markings do not form a perfect rectangle, making calibration from trapezoidal patterns or parallelograms possible. The model requires only four reference points, defined by the lane width and the lengths of the left and right lane markings, to determine the camera parameters. Through real-world surveying experiments, the new model is shown to be effective in defining the 2D-to-3D transformation (and its inverse) when there is no rectangular pattern on the road, and it can also cope with trapezoidal patterns, near-parallelograms, and imperfect rectangles. This development greatly increases the flexibility and generality of traditional camera calibration models.
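
For context, the core of any four-point ground-plane calibration is a plane-to-plane homography. The sketch below estimates it with a standard direct linear transform (DLT) from four hypothetical image/road correspondences describing a trapezoidal lane marking. This is generic textbook machinery, not the paper's specific model, which further recovers the physical camera parameters.

```python
import numpy as np

def homography_dlt(img_pts, road_pts):
    """Estimate the 3x3 homography mapping image points to road-plane
    coordinates from four correspondences (standard DLT)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, road_pts):
        A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)          # null-space vector gives the homography
    return H / H[2, 2]

def image_to_road(H, x, y):
    X, Y, W = H @ np.array([x, y, 1.0])
    return X / W, Y / W

# Hypothetical trapezoidal lane marking: image corners vs. road coordinates
# (lane width 3.5 m, left marking 10 m long, right marking 12 m long).
img_pts = [(210, 400), (430, 400), (395, 250), (265, 255)]
road_pts = [(0.0, 0.0), (3.5, 0.0), (3.5, 12.0), (0.0, 10.0)]
H = homography_dlt(img_pts, road_pts)
print(image_to_road(H, 430, 400))  # recovers (3.5, 0.0) up to rounding
```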


International Conference on Machine Vision | 2007

A methodology for resolving severely occluded vehicles based on component-based multi-resolution relational graph matching

Clement Chun Cheong Pang; Tan Zhigang; Nelson Hon Ching Yung

This paper presents a method for resolving severely occluded vehicles (SOVs), which frequently appear in images of congested traffic. The proposed method is based on the concept of modeling vehicle components graphically in an object hierarchy. By extracting the component description of a vehicle, constructing a representative partial graph, and matching it with a vehicle graph model defined a priori, the components missing due to visual occlusion can be identified. Experimental results show that the proposed method can partition the clustered graph of SOVs located far from the camera in the image, as well as identify the missing components of the vehicles. Moreover, it can classify the vehicle type based on the missing components and the vehicle graph model.
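
The component-graph idea can be pictured with a toy partial-graph comparison: model components absent from the observed partial graph are reported as occluded. The component names and the model graph below are invented, and the paper's multi-resolution relational matching is far more involved.

```python
# Hypothetical vehicle graph model: components and their adjacency.
MODEL = {
    "windshield": {"roof", "bonnet"},
    "roof": {"windshield", "rear_window"},
    "bonnet": {"windshield", "front_bumper"},
    "rear_window": {"roof"},
    "front_bumper": {"bonnet"},
}

def missing_components(observed):
    """Toy partial-graph match (not the paper's matching algorithm):
    model nodes absent from the observed partial graph are reported as
    occluded components, along with the model edges they make unverifiable."""
    observed = set(observed)
    missing = set(MODEL) - observed
    unverifiable = {(a, b) for a, nbrs in MODEL.items()
                    for b in nbrs if a in missing or b in missing}
    return missing, unverifiable

miss, edges = missing_components({"windshield", "roof", "bonnet"})
print(sorted(miss))   # -> ['front_bumper', 'rear_window']
print(len(edges))     # -> 4 unverifiable model edges
```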


Optical Science and Technology, the SPIE 49th Annual Meeting | 2004

A methodology for determining the resolvability of multiple vehicle occlusion in a monocular traffic image sequence

Clement Chun Cheong Pang; Nelson Hon Ching Yung

This paper proposes a knowledge-based methodology for determining the resolvability of N occluded vehicles seen in a monocular image sequence. The resolvability of each vehicle is determined by: first, deriving the relationship between the camera position and the number of vertices of a projected cuboid in the image; second, finding the direction of the edges of the projected cuboid in the image; and third, modeling the maximum number of occluded cuboid edges beyond which the occluded cuboid becomes irresolvable. The proposed methodology has been tested rigorously on a number of real-world monocular traffic image sequences that involve multiple vehicle occlusions, and is found to successfully determine the number of occluded vehicles as well as the resolvability of each vehicle. We believe the proposed methodology will form the foundation for a more accurate traffic flow estimation and recognition system.
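
The first step, relating camera position to the number of visible cuboid vertices, follows from convexity: a vertex is visible exactly when it lies on at least one camera-facing face, so one, two, or three visible faces give 4, 6, or 7 visible vertices. The sketch below counts them for an axis-aligned cuboid and an assumed camera position; it illustrates the geometry rather than reproducing the paper's derivation.

```python
import numpy as np
from itertools import product

def visible_vertex_count(camera, size=(4.5, 1.8, 1.5)):
    """Count the vertices of an axis-aligned cuboid (one corner at the
    origin, dimensions length x width x height) visible from a camera
    position; a vertex is visible if it lies on at least one face whose
    outward normal points toward the camera (valid because the cuboid
    is convex)."""
    L, W, H = size
    cam = np.asarray(camera, dtype=float)
    corners = np.array(list(product((0, L), (0, W), (0, H))), dtype=float)
    # (point on face, outward normal) for the six faces.
    faces = [((0, 0, 0), (-1, 0, 0)), ((L, 0, 0), (1, 0, 0)),
             ((0, 0, 0), (0, -1, 0)), ((0, W, 0), (0, 1, 0)),
             ((0, 0, 0), (0, 0, -1)), ((0, 0, H), (0, 0, 1))]
    visible = set()
    for point, normal in faces:
        if np.dot(cam - np.asarray(point, float), normal) > 0:  # camera-facing
            on_face = np.isclose(corners @ np.asarray(normal, float),
                                 np.dot(point, normal))
            visible.update(map(tuple, corners[on_face]))
    return len(visible)

# A typical elevated roadside camera sees three faces, hence seven vertices.
print(visible_vertex_count(camera=(10.0, 8.0, 6.0)))   # -> 7
print(visible_vertex_count(camera=(2.0, 0.9, 20.0)))   # -> 4 (top face only)
```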

Collaboration


Dive into Clement Chun Cheong Pang's collaborations.

Top Co-Authors

Sc Wong
University of Hong Kong

S Xie
University of Hong Kong

Tan Zhigang
University of Hong Kong