Publication


Featured research published by Changjae Kim.


Photogrammetric Engineering and Remote Sensing | 2007

New Methodologies for True Orthophoto Generation

Ayman Habib; Eui-Myoung Kim; Changjae Kim

Orthophoto production aims at the elimination of sensor tilt and terrain relief effects from captured perspective imagery. Uniform scale and the absence of relief displacement in orthophotos make them an important component of GIS databases, where the user can directly determine geographic locations, measure distances, compute areas, and derive other useful information about the area in question. Differential rectification has traditionally been used for orthophoto generation. For large-scale imagery over urban areas, differential rectification produces serious artifacts in the form of double-mapped areas at object space locations with sudden relief variations, e.g., in the vicinity of buildings. Such artifacts are removed through true orthophoto generation methodologies, which are based on the identification of occluded portions of the object space in the involved imagery. Existing methodologies suffer from several problems such as their sensitivity to the sampling interval of the digital surface model (DSM) as it relates to the ground sampling distance (GSD) of the imaging sensor. Moreover, current methodologies rely on the availability of a digital building model (DBM), which requires an additional and expensive pre-processing step. This paper presents new methodologies for true orthophoto generation while circumventing the problems associated with existing techniques. The feasibility and performance of the suggested techniques are verified through experimental results with simulated and real data.
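For readers unfamiliar with the double-mapping problem, the sketch below shows the classical Z-buffer visibility test that conventional true-orthophoto pipelines apply to a DSM, and whose sensitivity to the DSM sampling interval motivates this paper; it is not the authors' method. The nadir-camera geometry, the regular DSM grid, and all parameter names are illustrative assumptions.

```python
import numpy as np

def zbuffer_occlusion(dsm, x0, y0, cell, pc, f, cx, cy):
    """Flag occluded DSM cells with a simple Z-buffer visibility check
    for an idealized nadir-looking frame camera (hypothetical setup).

    dsm    : 2D array of elevations on a regular grid (assumed input)
    x0, y0 : ground coordinates of the grid origin; cell: grid spacing
    pc     : perspective center (X, Y, Z) of the exposure
    f, cx, cy : focal length and principal point in pixels
    """
    rows, cols = dsm.shape
    xs = x0 + cell * np.arange(cols)
    ys = y0 - cell * np.arange(rows)
    X, Y = np.meshgrid(xs, ys)
    Z = dsm
    # Collinearity for a vertical image: scale by the height above each cell
    h = pc[2] - Z
    u = np.round(cx + f * (X - pc[0]) / h).astype(int).ravel()
    v = np.round(cy + f * (Y - pc[1]) / h).astype(int).ravel()
    dist = np.sqrt((X - pc[0])**2 + (Y - pc[1])**2 + h**2).ravel()
    # Visit cells from nearest to farthest; the first cell to claim an image
    # pixel is visible, later claimants are occluded (the double-mapped areas).
    occluded = np.zeros(rows * cols, dtype=bool)
    claimed = {}
    for idx in np.argsort(dist):
        key = (u[idx], v[idx])
        if key in claimed:
            occluded[idx] = True
        else:
            claimed[key] = idx
    return occluded.reshape(rows, cols)
```

In a true-orthophoto workflow, cells flagged as occluded would typically be filled from other overlapping images rather than double-mapped.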


Photogrammetric Engineering and Remote Sensing | 2007

Comprehensive Analysis of Sensor Modeling Alternatives for High Resolution Imaging Satellites

Ayman Habib; Sung Woong Shin; Kyung-Ok Kim; Changjae Kim; Ki-In Bang; Eui-Myoung Kim; Dong-Cheon Lee

High-resolution imaging satellites are a valuable and cost-effective data acquisition tool for a variety of mapping and GIS applications such as topographic mapping, map updating, orthophoto generation, environmental monitoring, and change detection. Sensor modeling that describes the mathematical relationship between corresponding scene and object coordinates is a prerequisite to manipulating the acquired imagery from such systems for mapping purposes. Rigorous and approximate sensor models are the two alternatives for describing the mathematics of the involved imaging process. The former explicitly involves the internal and external characteristics of the imaging sensor to faithfully represent the geometry of the scene formation. On the other hand, approximate modeling can be divided into two categories. The first category simplifies the rigorous model after making some assumptions about the system’s trajectory and/or object space. Gupta and Hartley’s model, parallel projection, self-calibrating direct linear transformation, and modified parallel projection are examples of this category. Other approximate models are based on empirical formulation of the scene-to-ground mathematical relationship. This category includes, among others, the well-known Rational Function Model (RFM). This paper addresses several aspects of sensor modeling. Namely, it deals with the expected accuracy from rigorous modeling of imaging satellites as it relates to the number of available ground control points, comparative analysis of approximate and rigorous sensor models, robustness of the reconstruction process against biases in the available sensor characteristics, and impact of incorporating multi-source imagery in a single triangulation mechanism. Following a brief theoretical background, these issues will be presented through experimental results from real datasets captured by satellite and aerial imaging platforms.
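As context for the empirical category, the sketch below evaluates the Rational Function Model mentioned in the abstract: normalized image line and sample are ratios of 20-term cubic polynomials in normalized ground coordinates. The dictionary layout and the term ordering follow one common RPC convention and are assumptions, not a description of the datasets used in the paper.

```python
import numpy as np

def rfm_project(lat, lon, h, rpc):
    """Evaluate a Rational Function Model: normalized image line/sample as
    ratios of cubic polynomials in normalized ground coordinates.
    `rpc` is a hypothetical dict of offsets/scales and four 20-term coefficient
    arrays; the term ordering below is one common convention and may differ
    between vendors."""
    # Normalize ground coordinates to roughly [-1, 1]
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h   - rpc["h_off"])   / rpc["h_scale"]

    def poly(c):
        # 20-term cubic polynomial in (P, L, H)
        terms = np.array([
            1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
            P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
            L*L*H, P*P*H, H**3])
        return float(np.dot(c, terms))

    line_n   = poly(rpc["line_num"]) / poly(rpc["line_den"])
    sample_n = poly(rpc["samp_num"]) / poly(rpc["samp_den"])
    # De-normalize to image row/column
    line   = line_n   * rpc["line_scale"] + rpc["line_off"]
    sample = sample_n * rpc["samp_scale"] + rpc["samp_off"]
    return line, sample
```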


Sensors | 2009

Object-Based Integration of Photogrammetric and LiDAR Data for Automated Generation of Complex Polyhedral Building Models

Changjae Kim; Ayman Habib

This research is concerned with a methodology for automated generation of polyhedral building models for complex structures, whose rooftops are bounded by straight lines. The process starts by utilizing LiDAR data for building hypothesis generation and derivation of individual planar patches constituting building rooftops. Initial boundaries of these patches are then refined through the integration of LiDAR and photogrammetric data and hierarchical processing of the planar patches. Building models for complex structures are finally produced using the refined boundaries. The performance of the developed methodology is evaluated through qualitative and quantitative analysis of the generated building models from real data.


3D-GIS | 2006

Integration of Photogrammetric and LIDAR Data in a Multi-Primitive Triangulation Environment

Ayman Habib; Sung Woong Shin; Changjae Kim; Mohamad Al-Durgham

Photogrammetric mapping procedures have gone through major developments as a result of the significant improvements in their underlying technologies. For example, the continuous development of digital imaging systems has led to the steady adoption of digital frame and line cameras in mapping activities. Moreover, the availability of GPS/INS systems has facilitated the direct geo-referencing of the acquired imagery. Still, photogrammetric datasets taken without the aid of positioning and navigation systems need control information for the purpose of surface reconstruction. So far, distinct point features have been the primary source of control for photogrammetric triangulation, although other higher-order features are available and can be used. In addition to photogrammetric data, LIDAR systems supply dense geometric surface information in the form of three-dimensional coordinates of laser footprints with respect to a global reference system. Considering the accuracy improvements of LIDAR systems in recent years, propelled by the continuous advancement of GPS/INS technology, LIDAR data is considered a viable source of control for photogrammetric geo-referencing. In this paper, alternative methodologies will be devised for the purpose of integrating LIDAR data into the photogrammetric triangulation. Such methodologies will deal with two main issues: utilized primitives and the respective mathematical models. More specifically, two methodologies will be introduced that utilize straight-line and areal features derived from both datasets as the primitives. The first methodology directly incorporates LIDAR lines as control information in the photogrammetric triangulation, while in the second methodology, LIDAR patches are used to geo-reference the photogrammetric model. The feasibility of the devised methods will be investigated through experimental results with real data.


Sensors | 2016

Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

Changjae Kim; Ayman Habib; Mu-Wook Pyeon; Goo-Rak Kwon; Jaehoon Jung; Joon Heo

Diverse approaches to laser point segmentation have been proposed since the emergence of laser scanning systems. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser points simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes’ high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the sensitivity of the thresholds was evaluated. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.
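A minimal illustration of the attribute the title refers to: after a PCA plane fit to each point's neighborhood, the magnitude of the normal position vector is the distance from the coordinate origin to that local plane along its normal. The fixed k-nearest-neighbor search below stands in for the paper's adaptive, surface-shape-driven neighborhood and is only an assumption for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_position_magnitude(points, k=20):
    """For each laser point, fit a local plane to its k nearest neighbors by
    PCA and return the magnitude of the normal position vector, i.e. the
    distance from the coordinate origin to that plane along its normal.
    (Illustrative single-attribute computation; the adaptive neighborhood
    described in the paper is omitted here.)"""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    mags = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        centroid = nbr_pts.mean(axis=0)
        # Normal = eigenvector of the smallest eigenvalue of the local covariance
        cov = np.cov((nbr_pts - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        # Distance from the origin to the fitted plane through the centroid
        mags[i] = abs(np.dot(normal, centroid))
    return mags
```

Points lying on the same plane share this attribute value, which is what makes it usable as a low-dimensional clustering attribute.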


International Geoscience and Remote Sensing Symposium | 2005

Image georeferencing using LIDAR data

Ayman Habib; Mwafag Ghanma; Edson Aparecido Mitishita; Eui-Myoung Kim; Changjae Kim

LIDAR technology is increasingly becoming an industry-standard tool for collecting high-resolution data about physical surfaces. LIDAR is characterized by directly collecting numerical 3D coordinates of object space points. Still, the discrete and positional nature of LIDAR datasets makes it difficult to derive semantic surface information. Furthermore, reconstructed surfaces from LIDAR data lack any inherent redundancy that can be utilized to enhance the accuracy of acquired data. In comparison to LIDAR systems, photogrammetry produces surfaces rich in semantic information that can be easily identified in the captured imagery. The redundancy associated with photogrammetric intersection results in highly accurate surfaces. However, the extended amount of time needed by the photogrammetric procedure to manually identify conjugate points in overlapping images is a major disadvantage. The automation of the matching problem is still an unreliable task, especially when dealing with large-scale imagery over urban areas. Also, photogrammetric surface reconstruction demands adequate control in the form of control points and/or GPS/INS units. In view of the complementary characteristics of LIDAR and photogrammetric systems, a more complete surface description can be achieved through the integration of both datasets. The advantages of both systems can be fully utilized only after successful registration of the photogrammetric and LIDAR data relative to a common reference frame. The adopted registration methodology has to define a set of basic components, mainly: registration primitives, mathematical function, and similarity assessment. This paper presents the description and implementation of a registration approach that utilizes straight-line features derived from both datasets as the registration primitives. LIDAR lines are used as control for the imagery and are directly incorporated in the photogrammetric triangulation. The performance analysis is based on the quality of fit between the LIDAR and photogrammetric models, including derived orthophotos.
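The line-based control idea can be summarized by a coplanarity-type condition: the ray defined by an image point measured along a linear feature must lie in the plane spanned by the perspective center and the endpoints of the corresponding LiDAR line. The sketch below computes that scalar residual; the variable names and the exact parameterization are assumptions rather than the paper's implementation.

```python
import numpy as np

def line_coplanarity_residual(img_pt, R, pc, A, B):
    """Coplanarity-style constraint commonly used when object-space straight
    lines serve as control in photogrammetric triangulation.

    img_pt : (x, y, -f) coordinates of a point measured on the image line
             (camera frame; f = focal length)
    R      : 3x3 rotation from the camera frame to the object/mapping frame
    pc     : perspective center in object space
    A, B   : endpoints of the LiDAR-derived control line in object space
    Returns the scalar triple product, driven to zero in the adjustment."""
    ray = R @ np.asarray(img_pt, dtype=float)      # image ray in the object frame
    v1 = np.asarray(A, dtype=float) - np.asarray(pc, dtype=float)
    v2 = np.asarray(B, dtype=float) - np.asarray(pc, dtype=float)
    return float(np.dot(ray, np.cross(v1, v2)))
```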


International Geoscience and Remote Sensing Symposium | 2005

Comprehensive comparisons among alternative sensor models for high resolution satellite imagery

Eui-Myoung Kim; Michel Morgan; Changjae Kim; Kyung-Ok Kim; Soo Jeong; Ayman Habib

Geometric modeling of satellite imagery is a prerequisite for many mapping and GIS applications. The more valid the sensor modeling is, the more accurate the end products are. Two main categories of sensor models exist: rigorous and approximate modeling. The former resembles the true geometry of the image formation procedure. Such modeling requires the internal and external characteristics of the camera, which might not always be available. In addition, if these parameters are negatively affected by bias values, the accuracy of the rigorous model becomes questionable. Recently, there has been an increasing interest in approximate models, as they do not require the internal or external characteristics of the sensor. In this paper, a comparison between the rigorous and different approximate models is presented. Experimental results show the sensitivity of the rigorous model to bias values. Using an IKONOS dataset, it was found that the modified parallel projection model performs the best among all approximate models using a small number of control points.

Keywords: Satellite Imagery; Rigorous Modeling; Approximate Modeling; Interior Orientation Parameters; Bias


ISPRS International Journal of Geo-Information | 2017

Automatic Room Segmentation of 3D Laser Data Using Morphological Processing

Jaehoon Jung; Cyrill Stachniss; Changjae Kim

In this paper, we introduce an automatic room segmentation approach based on morphological processing. The inputs are registered point-clouds obtained from either a static laser scanner or a mobile scanning system, without any required prior information or initial labeling satisfying specific conditions. The proposed segmentation method’s main concept, based on the assumption that each room is bounded by vertical walls, is to project the 3D point cloud onto a 2D binary map and to close all openings (e.g., doorways) to other rooms. This is achieved by creating an initial segment map, skeletonizing the surrounding walls of each segment, and iteratively connecting the closest pixels between the skeletonized walls. By iterating this procedure for all initial segments, the algorithm produces a “watertight” floor map, on which each room can be segmented by a labeling process. Finally, the original 3D points are segmented according to their 2D locations as projected on the segment map. The novel features of our approach are: (1) its robustness against occlusions and clutter in point-cloud input; (2) high segmentation performance regardless of the number of rooms or architectural complexity; and (3) straight segmentation boundary generation, all of which were demonstrated in experiments with various sets of real-world, synthetic, and publicly available data. Additionally, comparisons with five popular existing methods through both qualitative and quantitative evaluations demonstrated the feasibility of the proposed approach.
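A toy version of the overall idea, assuming vertical walls and a registered point cloud: rasterize the points to a binary wall map, bridge doorway-sized gaps morphologically, and label the enclosed free-space regions as rooms. The simple morphological closing used here stands in for the paper's skeletonization-and-gap-connection step, and the cell size and kernel width are illustrative values.

```python
import numpy as np
from scipy import ndimage

def segment_rooms(points, cell=0.05, closing_px=15):
    """Toy 2D room labeling in the spirit of morphological room segmentation.
    `points` is an (n, 3) array of registered laser points; `cell` is the grid
    spacing in meters and `closing_px` the closing-kernel width in cells
    (both illustrative)."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    ij = np.floor((xy - mins) / cell).astype(int)
    shape = ij.max(axis=0) + 1
    walls = np.zeros(shape, dtype=bool)
    walls[ij[:, 0], ij[:, 1]] = True                  # occupied cells = walls/clutter
    # Morphological closing bridges doorway-sized openings between rooms
    closed = ndimage.binary_closing(
        walls, structure=np.ones((closing_px, closing_px)))
    free = ~closed
    labels, n_rooms = ndimage.label(free)             # connected free space = rooms
    # Back-project room labels to the points; points on wall cells keep label 0
    point_labels = labels[ij[:, 0], ij[:, 1]]
    return labels, n_rooms, point_labels
```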


Sensors | 2015

Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems

Jaehoon Jung; Jeonghyun Kim; Sanghyun Yoon; Sangmin Kim; Hyoungsig Cho; Changjae Kim; Joon Heo

The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system’s trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach.
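To make the estimation problem concrete, the sketch below fits one LRF's bore-sight rotation and lever arm by ordinary least squares, given scan points and the corresponding target coordinates (e.g., V-target centers) expressed in the body frame. The plain scipy solver stands in for the constrained, correlation-aware adjustment described in the abstract; the frame conventions and names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rot(roll, pitch, yaw):
    """Rotation matrix from bore-sight angles (x-y-z convention, assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def calibrate_boresight(scan_pts, target_pts):
    """Estimate one LRF's bore-sight rotation and lever arm by minimizing the
    misfit between scan points mapped into the body frame and known target
    coordinates in that frame. Plain least squares only; the constrained
    estimation of the paper is not reproduced here."""
    def residuals(x):
        R = rot(*x[:3])
        t = x[3:]
        mapped = scan_pts @ R.T + t        # sensor frame -> body frame
        return (mapped - target_pts).ravel()
    x0 = np.zeros(6)                       # start from zero angles and offsets
    sol = least_squares(residuals, x0)
    return sol.x                           # roll, pitch, yaw, tx, ty, tz
```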


3D-GIS | 2006

LIDAR-Aided True Orthophoto and DBM Generation System

Ayman Habib; Changjae Kim

Orthophotos have been utilized as basic components in various GIS applications due to the uniform scale and the absence of relief displacement. Differential rectification has traditionally been used for orthophoto generation. For large-scale imagery over urban areas, differential rectification produces severe artifacts in the form of double-mapped areas at object space locations with abrupt changes in slope. Such artifacts are removed through true orthophoto generation methodologies, which are based on the identification of occluded portions of the object space in the involved imagery. Basically, true orthophotos should have correct positional information and corresponding gray values. There are two requirements for achieving these characteristics of true orthophotos: there must be no false visibilities/occlusions, and the building boundaries must not be wavy. To satisfy the first requirement, a new method for occlusion detection and true orthophoto generation is introduced in this paper. The second requirement, which is for non-wavy building boundaries, can be fulfilled by generating and utilizing a DBM in the true orthophoto generation procedure. A new segmentation algorithm based on a neighborhood definition, which considers the physical shapes of the involved surfaces, is introduced to obtain the planar patches of which man-made structures mainly consist. The implementation of the DBM generation methodology using segmented planar patches is complicated and requires in-depth investigation; hence, research on DBM generation and on the refinement of true orthophotos is still in progress. This paper suggests a new system that achieves accurate true orthophotos without introducing any external DBM information. The feasibility and performance of the suggested techniques are verified through experimental results with real data.

Collaboration


Dive into Changjae Kim's collaboration.

Top Co-Authors

Sung Woong Shin, Electronics and Telecommunications Research Institute
Kyung-Ok Kim, Electronics and Telecommunications Research Institute
Yang-Dam Eo, Seoul National University