Yi Hsing Tseng
National Cheng Kung University
Publication
Featured research published by Yi Hsing Tseng.
Photogrammetric Engineering and Remote Sensing | 2003
Yi Hsing Tseng; Sendo Wang
Building extraction based on pre-established models has been recognized as a promising idea for acquiring 3D data for buildings from aerial images. This paper proposes a novel building extraction method developed from the concept of fitting CSG (Constructive Solid Geometry) primitives to aerial images. To be practicable, this method adopts a semiautomatic procedure, carrying out high-level tasks (building detection, model selection, and attribution) interactively by the operator and performing optimal model-image fitting automatically with a least-squares fitting algorithm. Buildings, represented by CSG models, can be reconstructed part by part after fitting each parameterized CSG primitive to the edge pixels of aerial images. Reconstructed building parts can then be combined using CSG Boolean set operators. Consequently, a building is represented by a CSG tree in which each node links two branches of combined parts. This paper demonstrates ten examples of building extraction from aerial photos taken at a scale of 1:5,000 and scanned at a pixel size of 25 μm. All of the tests were performed in a prototype system implemented in a CAD-based environment in cooperation with a number of specially designed programs. The processing time for each primitive is about 20 seconds, and the success rate of model-image fitting was about 90 percent. Evaluated with some check points, the fitting accuracy was about 0.3 m horizontally and 1 m vertically. The test results are encouraging and promote the theory of model-based building extraction.
Model-based building extraction is mostly implemented in a semi-automatic manner, solving the model-image fitting problem based on high-level information given by the operator. The spatial data of a building object are determined when model-image fitting is achieved. In contrast to the traditional point-by-point mapping procedure, model-based building extraction features object-based data acquisition. Although the idea and benefits of model-based building extraction have been acknowledged, the working principle is not well established; the focus of this study is therefore to establish a practical theory for model-based building extraction. Building modeling and model-image fitting are the key issues in model-based building extraction. The issue in building modeling is how to establish a set of representative and complete building models. This paper reviews some building model schemes known in the field of digital photogrammetry and discusses how CSG modeling is employed in the proposed method. The issue in model-image fitting is how to develop a computer algorithm that is able to determine the pose and shape parameters of an object model such that the edge lines of the wire frame, as projected into the images, are optimally coincident with the corresponding edge pixels. It is assumed that the image orientations are known and that the pose and shape parameters are approximately determined through an interactive manual process. To deal with this problem, this paper proposes a tailored least-squares model-image fitting algorithm as the key component of the building extraction framework.
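As a rough illustration of the least-squares model-image fitting idea, the sketch below adjusts the pose and shape parameters of a single box primitive so that its projected corners fall near detected edge pixels. The box parameterization, the pinhole projection, the corner-to-nearest-edge-pixel residual, and the use of scipy.optimize.least_squares are simplifications made for illustration; they are not the paper's algorithm, which fits the projected wire-frame edge lines rather than corners.

import numpy as np
from scipy.optimize import least_squares

def box_corners(params):
    # params: [x0, y0, z0, width, length, height] of a rectangular box primitive
    x0, y0, z0, w, l, h = params
    dx, dy, dz = np.meshgrid([0, w], [0, l], [0, h], indexing="ij")
    return np.column_stack([x0 + dx.ravel(), y0 + dy.ravel(), z0 + dz.ravel()])

def project(points, K, R, t):
    # Pinhole projection of 3D points into one image with known orientation (R, t).
    cam = R @ points.T + t[:, None]
    uv = (K @ cam)[:2] / cam[2]
    return uv.T

def residuals(params, edge_pixels, K, R, t):
    # Distance from each projected model corner to its nearest detected edge pixel.
    uv = project(box_corners(params), K, R, t)
    d = np.linalg.norm(uv[:, None, :] - edge_pixels[None, :, :], axis=2)
    return d.min(axis=1)

# Hypothetical inputs: camera calibration, known image orientation, detected edge
# pixels, and approximate parameters supplied interactively by an operator.
K = np.array([[1500.0, 0, 640], [0, 1500.0, 480], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])
edge_pixels = np.random.rand(200, 2) * [1280, 960]        # stand-in for edge detection output
approx = np.array([0.0, 0.0, 0.0, 10.0, 15.0, 6.0])       # initial pose/shape guess

fit = least_squares(residuals, approx, args=(edge_pixels, K, R, t))
print("refined pose/shape parameters:", fit.x)

In the semiautomatic procedure described above, the operator's interactively supplied model parameters would serve as the initial guess for this optimization.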
Cold Regions Science and Technology | 1995
I. M. Whillans; Yi Hsing Tseng
Abstract Measurements of glacier motion and deformation are obtained by automatically matching features, such as crevasses, on repeat images. A computer-based method identifies and tracks groups of features on successive images and calculates their displacement as well as the rotation and distortion of the ice. Ice deformation within each matched area is permitted and is calculated using a least-squares method. The method is applied to SPOT satellite images of Ice Stream B, Antarctica. A quality-checking scheme rejects inappropriate matches. The results compare satisfactorily with velocities obtained by manual methods from repeat photography of the same region. A simpler version of the method, similar to that used by Bindschadler and Scambos, also obtains very satisfactory results.
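A minimal sketch of the feature-tracking idea: locate a patch from the first image in the second image by normalized cross-correlation and convert the pixel offset into a displacement. The window sizes, the brute-force search, and the nominal SPOT pixel size below are illustrative assumptions, not the implementation of Whillans and Tseng; the least-squares estimation of rotation and distortion within each matched area is omitted.

import numpy as np

def match_patch(img1, img2, row, col, half=16, search=32):
    # Locate the patch of img1 centred at (row, col) in img2 by normalized cross-correlation.
    ref = img1[row - half:row + half, col - half:col + half].astype(float)
    ref = (ref - ref.mean()) / ref.std()
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = img2[row + dr - half:row + dr + half,
                       col + dc - half:col + dc + half].astype(float)
            score = np.sum(ref * (win - win.mean()) / win.std())
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc, best

# Hypothetical use: two co-registered repeat scenes; the pixel offset of the best
# match converts to an ice displacement, and dividing by the time separation gives velocity.
dr, dc, score = match_patch(np.random.rand(512, 512), np.random.rand(512, 512), 256, 256)
pixel_size_m, dt_years = 10.0, 1.0
velocity = np.hypot(dr, dc) * pixel_size_m / dt_years     # metres per year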
Photogrammetric Engineering and Remote Sensing | 2010
Miao Wang; Yi Hsing Tseng
Lidar (light detection and ranging) point cloud data contain abundant three-dimensional (3D) information. The dense distribution of scanned points on object surfaces strongly implies surface features; in particular, plane features commonly appear in a typical lidar dataset of artificial structures. To exploit this implicitly contained spatial information, this study developed an automatic scheme to segment a lidar point cloud dataset into coplanar point clusters. The central mechanism of the proposed method is a split-and-merge segmentation based on an octree structure. Plane fitting serves as the engine of the mechanism, evaluating how well a group of points fits a plane. Segmented coplanar points and the parameters of their best-fit plane are obtained through the process. This paper also provides algorithms to derive various geometric properties of segmented coplanar points, including inherent properties of a plane, intersections of planes, and properties of point distribution on a plane. Several successful cases of handling airborne and terrestrial lidar data, as well as a combination of the two, are demonstrated. This method should improve the efficiency of object modelling using lidar data.
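A minimal sketch of the split half of a split-and-merge segmentation, assuming an octree-style recursive subdivision: a cell is accepted as a coplanar cluster when a least-squares plane fits its points within a tolerance, and is otherwise divided into eight children. The thresholds, the SVD plane fit, and the omission of the merge step are simplifications, not the paper's full scheme.

import numpy as np

def fit_plane(points):
    # Least-squares plane through points; returns unit normal, centroid, and RMS distance.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
    return normal, centroid, rms

def split(points, bbox_min, bbox_max, tol=0.05, min_pts=30, out=None):
    # Octree-style split: keep a cell as a coplanar cluster if the plane fit is within tol,
    # otherwise divide it into 8 children and recurse.
    if out is None:
        out = []
    if len(points) < min_pts:
        return out
    normal, centroid, rms = fit_plane(points)
    if rms <= tol:
        out.append((points, normal, centroid))
        return out
    mid = (bbox_min + bbox_max) / 2
    for octant in range(8):
        lo = np.where([octant & 1, octant & 2, octant & 4], mid, bbox_min)
        hi = np.where([octant & 1, octant & 2, octant & 4], bbox_max, mid)
        mask = np.all((points >= lo) & (points < hi), axis=1)
        split(points[mask], lo, hi, tol, min_pts, out)
    return out

pts = np.random.rand(5000, 3)                       # stand-in for a lidar point cloud
clusters = split(pts, pts.min(axis=0), pts.max(axis=0) + 1e-9)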
Sensors | 2011
Jiann Yeou Rau; Ayman Habib; Ana Paula Kersting; Kai Wei Chiang; Ki In Bang; Yi Hsing Tseng; Yu Hua Li
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road-environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This dependence is the major drawback, owing to the elevated cost of high-end GPS/INS units, particularly the inertial system. The potential accuracy of direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as on the validity of the system calibration (i.e., calibration of the individual sensors as well as of the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with a relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step procedure and the traditional two-step procedure is carried out. Moreover, the mounting parameters estimated with the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system, using the single-step system calibration method, can achieve high 3D positioning accuracy.
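For context, the sketch below shows the standard direct-georeferencing relation in which the calibrated mounting parameters (the lever-arm offset and the boresight rotation between the camera and the GPS/INS body frame) appear. The numbers are placeholders, and this is not the paper's single-step calibration adjustment itself, only the relation in which its estimated parameters are used.

import numpy as np

def ground_point(p_ins, R_ins, lever_arm, R_boresight, ray_cam, scale):
    # Map a ray measured in the camera frame to mapping-frame coordinates using the
    # GPS/INS pose and the system mounting parameters estimated in calibration.
    return p_ins + R_ins @ (lever_arm + scale * (R_boresight @ ray_cam))

p_ins = np.array([250000.0, 2545000.0, 45.0])   # GPS/INS position in the mapping frame
R_ins = np.eye(3)                               # INS body-to-mapping rotation (placeholder)
lever_arm = np.array([0.5, 0.1, 1.2])           # camera offset in the body frame (m)
R_boresight = np.eye(3)                         # camera-to-body rotation (placeholder)
ray_cam = np.array([0.01, -0.02, 1.0])          # image ray in the camera frame
print(ground_point(p_ins, R_ins, lever_arm, R_boresight, ray_cam, scale=20.0))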
Journal of Photogrammetry and Remote Sensing (航測及遙測學刊) | 2006
Pai-Hui Hsu; Yi Hsing Tseng; Peng Gong
The rich and detailed spectral information provided by hyperspectral images can be used to identify and quantify a large range of surface materials that cannot be identified with multispectral images. However, the classification methods that have been successfully applied to multispectral data in the past are not as effective for hyperspectral data. The main problem is that the amount of training data does not grow in proportion to the increased dimensionality of hyperspectral data; the "curse of dimensionality" emerges when a statistical classification method is applied to hyperspectral data. A simpler, but sometimes very effective, way of dealing with hyperspectral data is to reduce its dimensionality. This can be done by feature extraction, in which a small number of salient features are extracted from the hyperspectral data when only a limited set of training samples is available. In this study, several methods based on wavelet transforms are developed to extract useful features for classification. First, wavelet or wavelet packet transforms are applied to the hyperspectral images, producing a sequence of wavelet coefficients. Then, a simple feature selection procedure associated with a criterion is used to select effective features for classification. Because the wavelet-based feature extraction optimizes the criterion in a lower-dimensional space, the problems of limited training sample size and the curse of dimensionality can be avoided. Finally, two AVIRIS data sets are used to test the performance of the proposed wavelet-based methods. The experimental results show that the wavelet-based methods perform well for dimensionality reduction and are also effective for classification.
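A minimal sketch of the wavelet-based feature extraction idea, using PyWavelets: each pixel's spectral curve is decomposed with a discrete wavelet transform, and a small subset of coefficients is kept as features. The wavelet, decomposition level, and the simple variance-based ranking below stand in for the selection criterion used in the study; they are illustrative assumptions.

import numpy as np
import pywt

def wavelet_features(spectra, wavelet="db4", level=3, keep=10):
    # spectra: (n_pixels, n_bands) hyperspectral signatures.
    # Decompose each spectral curve, concatenate the coefficients, and keep the
    # `keep` coefficients with the largest variance across pixels as features.
    coeffs = [np.concatenate(pywt.wavedec(s, wavelet, level=level)) for s in spectra]
    coeffs = np.array(coeffs)
    order = np.argsort(coeffs.var(axis=0))[::-1][:keep]
    return coeffs[:, order]

spectra = np.random.rand(1000, 220)          # stand-in for AVIRIS pixel spectra (220 bands)
features = wavelet_features(spectra)         # (1000, 10) reduced feature vectors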
Annals of GIS: Geographic Information Sciences | 2002
Pai Hui Hsu; Yi Hsing Tseng; Peng Gong
Abstract Hyperspectral images contain rich and fine spectral information, and an improvement in land use/cover classification accuracy is expected from the use of such images. However, due to the high dimensionality of the data and the high correlation between adjacent spectral bands, the classification process may require a large number of training samples, suffer from low efficiency, and be hard to improve in accuracy. In this paper, we tested several feature extraction methods based on the wavelet transform that reduce the high dimensionality without losing much discriminating power in the new feature space. An AVIRIS data set with 220 bands and an EO-1 data set with 193 bands were used to illustrate the performance of the wavelet-based methods and to compare them with existing feature extraction methods.
Remote Sensing | 2014
Cheng Kai Wang; Yi Hsing Tseng; Hone Jay Chu
This study demonstrated the potential of using dual-wavelength airborne light detection and ranging (LiDAR) data to classify land cover. Dual-wavelength LiDAR data were acquired from two airborne LiDAR systems that emitted laser pulses at near-infrared (NIR) and middle-infrared (MIR) wavelengths. The major features of the LiDAR data, such as surface height, echo width, and dual-wavelength amplitude, were used to represent the characteristics of land cover. Based on these features, a support vector machine was used to classify six types of suburban land cover: road and gravel, bare soil, low vegetation, high vegetation, roofs, and water bodies. Results show that using dual-wavelength LiDAR-derived information (e.g., amplitudes at the NIR and MIR wavelengths) can compensate for the limitations of single-wavelength LiDAR information (i.e., poor discrimination of low vegetation) when classifying land cover.
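A minimal scikit-learn sketch of the classification step: per-point LiDAR features (surface height, echo width, and NIR/MIR amplitudes) are fed to a support vector machine to predict one of six land-cover classes. The feature values, class encoding, and SVM settings are placeholders, not the study's configuration.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder per-point features: [surface height, echo width, NIR amplitude, MIR amplitude]
X = np.random.rand(3000, 4)
# Placeholder labels for the six suburban classes named in the abstract
y = np.random.randint(0, 6, size=3000)   # 0 = road and gravel .. 5 = water bodies

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))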
International Journal of Applied Earth Observation and Geoinformation | 2015
Yi Hsing Tseng; Cheng Kai Wang; Hone Jay Chu; Yu Chia Hung
Abstract Full-waveform topographic LiDAR data provide more detailed information about objects along the path of a laser pulse than discrete-return (echo) topographic LiDAR data. Full-waveform topographic LiDAR data consist of a succession of cross-section profiles of landscapes, and each waveform can be decomposed into a sum of echoes. The echo number reveals critical information for classifying land cover types. Most land covers return a single echo, whereas trees and roof edges produce multi-echo waveforms. To identify land-cover types, a waveform-based classifier integrating single-echo and multi-echo classifiers was used for point cloud classification. The experimental area was the Namasha district of Southern Taiwan, and the land-cover objects were categorized as roads, trees (canopy), grass (grass and crop), bare (bare ground), and buildings (buildings and roof edges). Waveform features were analyzed with respect to the single- and multi-echo laser-path samples, and the critical waveform features were selected according to the Bhattacharyya distance. The waveform-based classifiers were then implemented with a support vector machine (SVM) using local and spatial waveform features of the topographic LiDAR data together with optical image information. Results showed that, by fusing waveform and optical information, the waveform-based classifiers achieved the highest overall accuracy among the tested models in identifying land-cover point clouds, especially when compared with an echo-based classifier.
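A minimal sketch of feature selection with the Bhattacharyya distance, assuming each waveform feature is Gaussian-distributed within a class: the standard univariate formula ranks features by how well they separate two classes. The feature values and class samples below are placeholders.

import numpy as np

def bhattacharyya_1d(a, b):
    # Bhattacharyya distance between two sets of 1-D feature values,
    # assuming each class is Gaussian-distributed in that feature.
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(), b.var()
    return 0.25 * (m1 - m2) ** 2 / (v1 + v2) + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2)))

# Placeholder waveform features (e.g., echo amplitude, echo width, echo count)
# for samples of two land-cover classes; rank features by class separability.
class_a = np.random.randn(500, 3) + [1.0, 0.2, 0.0]
class_b = np.random.randn(500, 3)
scores = [bhattacharyya_1d(class_a[:, i], class_b[:, i]) for i in range(3)]
best_first = np.argsort(scores)[::-1]     # feature indices, most separable first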
Energy Minimization Methods in Computer Vision and Pattern Recognition | 2007
Yi Hsing Tseng; Kai Pei Tang; Fu Chen Chou
Surface reconstruction from sub-randomly distributed 3D points is the key step in extracting explicit information from LiDAR data. This paper proposes an approach based on an extension of snake theory for surface reconstruction from LiDAR data. The proposed algorithm approximates a surface with connected planar patches. Growing from an initial seed point, a surface is reconstructed by attaching new adjacent planar patches based on the concept of minimizing a deformable energy. A least-squares solution is sought that keeps a local balance between the internal and external forces: the internal (inertial) forces maintain the flatness of the surface, while the external pulls of the observed LiDAR points bend the growing surface toward the observations. Experiments with test data acquired with a ground-based LiDAR demonstrate the feasibility of the proposed algorithm. The effects of parameter settings on the delivered results are also investigated.
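A minimal sketch of the energy balance described above, under a simplified patch model z = a·x + b·y + c: the external force is the least-squares pull of nearby LiDAR points, and the internal force is a regularization term tying the new patch's slope to that of the patch it grows from. The weighting and parameterization are illustrative, not the paper's formulation.

import numpy as np

def grow_patch(points, prev_ab, lam=5.0):
    # Fit a local plane z = a*x + b*y + c to nearby LiDAR points (external force)
    # while penalizing deviation of its slope (a, b) from the slope of the patch it
    # grows from (internal force). Returns the balanced least-squares solution.
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    z = points[:, 2]
    # Regularization rows tie the new slopes to the previous patch's slopes.
    R = np.sqrt(lam) * np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    r = np.sqrt(lam) * np.asarray(prev_ab)
    sol, *_ = np.linalg.lstsq(np.vstack([A, R]), np.concatenate([z, r]), rcond=None)
    return sol                                   # (a, b, c) of the new patch

# Hypothetical use: grow from a near-horizontal seed patch into neighbouring points.
pts = np.column_stack([np.random.rand(100, 2) * 2, 0.05 * np.random.randn(100)])
print(grow_patch(pts, prev_ab=(0.0, 0.0)))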
Journal of the Chinese Institute of Engineers | 2009
Sendo Wang; Yi Hsing Tseng
Abstract Model‐based building extraction (MBBE) from images has received intensive research interest in the field of digital photogrammetry over the last decade. Model description and the fitting algorithm are the primary issues addressed in this paper. This paper proposes a novel approach, called the floating model, for modeling a variety of buildings by fitting primitive models onto images. Each building is represented by a combination of 3D primitive models, and each primitive model is associated with a set of shape and pose parameters. Building extraction is carried out by adjusting these model parameters until the projection of the model fits the building's edges in all images. A semi‐automated strategy is proposed to increase the efficiency of the adjustment. First, the model is manually dragged and dropped to the approximate position and shape in all images. Then, the optimal fit is computed automatically by means of the tailored Least‐squares Model‐image Fitting (LSMIF) algorithm, which is the focus of this paper. The accuracy of the algorithm is assessed, and additional constraints to ensure its robustness are introduced. Finally, the algorithm is tested on real datasets and compared with manually measured data to assess its empirical accuracy. The results reveal that LSMIF is stable and can generate satisfactory 3D building information comparable to manually measured data.
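A generic Gauss-Newton loop of the kind that underlies least-squares model-image fitting: residuals between the projected model and image edge observations are linearized numerically with respect to the pose and shape parameters, and the parameters are updated until convergence. The residual function below is a toy stand-in; this is not the LSMIF algorithm itself.

import numpy as np

def gauss_newton(residual_fn, params, iters=20, eps=1e-6):
    # Generic Gauss-Newton loop: numerically differentiate the residuals with respect
    # to the pose/shape parameters and apply the least-squares update until convergence.
    p = np.array(params, dtype=float)
    for _ in range(iters):
        r = residual_fn(p)
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):                 # numerical Jacobian, column by column
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual_fn(p + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p += step
        if np.linalg.norm(step) < 1e-8:
            break
    return p

# Hypothetical use: `residual_fn` would return, for the current shape/pose parameters,
# the distances between projected wire-frame edges and matched edge pixels in every
# image; here a toy quadratic residual stands in for it.
target = np.array([3.0, -1.0, 0.5])
print(gauss_newton(lambda p: p - target, np.zeros(3)))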