Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chang-Jo Chung is active.

Publication


Featured research published by Chang-Jo Chung.


Geographical information systems in assessing natural hazards: selected contributions from an international workshop held in Perugia on September 20-22, 1993 (Advances in Natural and Technological Hazards Research; 5) | 1995

Multivariate Regression Analysis for Landslide Hazard Zonation

Chang-Jo Chung; Andrea G. Fabbri; Cees J. van Westen

Based on several layers of spatial map patterns, multivariate regression methods have been developed for the construction of landslide hazard maps. The method proposed in this paper assumes that future landslides can be predicted by the statistical relationships established between the past landslides and the spatial data set of map patterns. The application of multivariate regression techniques for delineating landslide hazard areas runs into two critical problems using GIS (geographic information systems): (i) the need to handle thematic data; and (ii) the sample unit for the observations. To overcome the first problem related to the thematic data, favourability function approaches or dummy variable techniques can be used.
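The dummy-variable technique mentioned above for handling thematic (categorical) map layers can be sketched as follows. This is a minimal illustration, not the paper's implementation: the lithology classes, landslide labels, and the plain least-squares fit are all invented for the example.

```python
# Sketch of the dummy-variable approach for a thematic map layer,
# using hypothetical lithology classes and synthetic landslide labels.
import numpy as np

def one_hot(categories):
    """Encode a categorical map layer as 0/1 dummy variables."""
    classes = sorted(set(categories))
    return np.array([[1.0 if c == k else 0.0 for k in classes]
                     for c in categories]), classes

# Hypothetical pixels: lithology class and past-landslide indicator (1 = landslide).
lithology = ["shale", "granite", "shale", "till", "granite", "shale"]
landslide = np.array([1, 0, 1, 1, 0, 1], dtype=float)

X, classes = one_hot(lithology)
# Intercept plus k-1 dummies (first class is the baseline, avoiding collinearity).
X = np.column_stack([np.ones(len(lithology)), X[:, 1:]])
beta, *_ = np.linalg.lstsq(X, landslide, rcond=None)  # ordinary least squares
hazard_score = X @ beta                               # fitted relative hazard per pixel
```

Dropping one dummy column keeps the design matrix full rank; the fitted values then serve as relative hazard scores per map unit.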


Nonrenewable Resources | 1993

The representation of geoscience information for data integration

Chang-Jo Chung; Andrea G. Fabbri

In mineral exploration, resource assessment, or natural hazard assessment, many layers of geoscience maps such as lithology, structure, geophysics, geochemistry, hydrology, slope stability, mineral deposits, and preprocessed remotely sensed data can be used as evidence to delineate potential areas for further investigation. Today's PC-based data base management systems, statistical packages, spreadsheets, image processing systems, and geographical information systems provide almost unlimited capabilities for manipulating data. Generally such manipulations make a strategic separation of spatial and nonspatial attributes, which are conveniently linked in relational data bases. The first step in integration procedures usually consists of studying the individual characteristics of map features and their interrelationships, and then representing them in numerical form (statistics) for finding the areas of high potential (or impact).

Data representation is a transformation of our experience of the real world into a computational domain. As such, it must comply with models and rules to provide us with useful information. Quantitative representation of spatially distributed map patterns or phenomena plays a pivotal role in integration because it also determines the types of combination rules applied to them.

Three representation methods—probability measures, Dempster-Shafer belief functions, and membership functions in fuzzy sets—and their corresponding estimation procedures are presented here with analyses of the implications and of the assumptions required in each approach to thematic mapping. Difficulties associated with the construction of probability measures, belief functions, and membership functions are also discussed; alternative procedures to overcome these difficulties are proposed. The proposed techniques are illustrated using a simple, artificially constructed data set.
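Of the three representation methods named in the abstract, Dempster-Shafer belief functions come with an explicit combination rule. Below is a minimal sketch of Dempster's rule for two evidence layers over a two-hypothesis frame; the mass values are invented for illustration and do not come from the paper.

```python
# Dempster's rule of combination for two evidence layers, each assigning
# mass to "favourable" (F), "unfavourable" (U), and the whole frame
# "FU" (ignorance). Masses below are illustrative only.

def dempster_combine(m1, m2):
    """Combine two basic probability assignments over {F, U, FU}."""
    combined = {"F": 0.0, "U": 0.0, "FU": 0.0}
    conflict = 0.0
    # Intersection table for the focal elements.
    meets = {("F", "F"): "F", ("F", "FU"): "F", ("FU", "F"): "F",
             ("U", "U"): "U", ("U", "FU"): "U", ("FU", "U"): "U",
             ("FU", "FU"): "FU"}
    for a, pa in m1.items():
        for b, pb in m2.items():
            if (a, b) in meets:
                combined[meets[(a, b)]] += pa * pb
            else:
                conflict += pa * pb  # F vs U: contradictory evidence
    k = 1.0 - conflict               # normalize away the conflicting mass
    return {h: v / k for h, v in combined.items()}

geochem = {"F": 0.6, "U": 0.1, "FU": 0.3}  # hypothetical layer masses
geophys = {"F": 0.5, "U": 0.2, "FU": 0.3}
m = dempster_combine(geochem, geophys)
```

Combining the two layers concentrates mass on "favourable" while the residual mass on the frame records the remaining ignorance, which a single probability measure cannot express.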


Natural Hazards | 2003

Is Prediction of Future Landslides Possible with a GIS?

Andrea G. Fabbri; Chang-Jo Chung; Antonio Cendrero; Juan Remondo

This contribution explores a strategy for landslide hazard zonation in which layers of spatial data are used to represent typical settings in which given dynamic types of landslides are likely to occur. The concepts of assessment and prediction are defined to focus on the representation of future hazardous events and in particular on the myths that often provide obstacles in the application of quantitative methods. The prediction rate curves for different applications describe the support provided by the different data layers in experiments in which the typical setting of hazardous events is approximated by statistically integrating the spatial information.
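A prediction rate curve of the kind mentioned above can be computed in a few lines: rank cells by hazard score and track what fraction of the validation landslides falls in the most hazardous fraction of the area. The scores and landslide locations below are synthetic.

```python
import numpy as np

def prediction_rate_curve(scores, landslide_mask):
    """Fraction of validation landslides captured vs. fraction of the
    study area classified as most hazardous, for a hazard-score map."""
    order = np.argsort(scores)[::-1]              # most hazardous cells first
    hits = np.cumsum(landslide_mask[order])       # landslides captured so far
    area_frac = np.arange(1, len(scores) + 1) / len(scores)
    capture_frac = hits / landslide_mask.sum()
    return area_frac, capture_frac

# Synthetic example: 10 cells, 3 of which contain validation landslides.
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.7, 0.2, 0.4, 0.05, 0.6, 0.5])
slides = np.array([1,   0,   1,   0,   0,   0,   0,   0,    1,   0])
area, captured = prediction_rate_curve(scores, slides)
```

A curve that rises steeply (here, all three landslides are captured within the top 40% of the area) indicates that the data layers support the prediction well; a curve near the diagonal indicates no support.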


Computers & Geosciences | 2006

Using likelihood ratio functions for modeling the conditional probability of occurrence of future landslides for risk assessment

Chang-Jo Chung

The most crucial and difficult task in landslide hazard analysis is estimating the conditional probability of occurrence of future landslides in a study area within a specific time period, given specific geomorphic and topographic features. This task can be addressed with a mathematical model that estimates the required conditional probability in two stages: “relative hazard mapping” and “empirical probability estimation.” The first stage divides the study area into a number of “prediction” classes according to their relative likelihood of occurrence of future landslides, based on the geomorphic and topographic data. Each prediction class represents a relative level of hazard with respect to other prediction classes. The number of classes depends on the quantity and quality of input data. Several quantitative models have been developed and tested for use in this stage; the objective is to delineate typical settings in which future landslides are likely to occur. In this stage, problems related to different degrees of resolution in the input data layers are resolved. The second stage is to empirically estimate the conditional probability of landslide occurrence in each prediction class by a cross-validation technique. The basic strategy is to divide past occurrences of landslides into two groups, a “modeling group” and a “validation group”. The first mapping stage is repeated, but the prediction is limited to only those landslide occurrences in the modeling group that are used to construct a new set of prediction classes. The new set of prediction classes is compared to the distribution of landslide occurrences in the validation group. Statistics from the comparison provide a quantitative measure of the conditional probability of occurrence of future landslides.
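The second stage described above reduces to counting validation-group landslides inside each prediction class. A hedged sketch, with an entirely synthetic map: the prediction classes are assumed to have been built from the modeling group in the first stage.

```python
# Empirical probability estimation: given prediction classes built from a
# modeling group of landslides, the validation group measures how often
# each class actually contains landslides. All numbers are synthetic.
import numpy as np

def class_probabilities(pred_class, validation_mask, n_classes):
    """Empirical P(landslide | prediction class) from the validation group."""
    probs = np.zeros(n_classes)
    for c in range(n_classes):
        cells = pred_class == c
        probs[c] = validation_mask[cells].mean() if cells.any() else 0.0
    return probs

# Synthetic map: 8 cells assigned to 2 relative-hazard classes,
# plus validation-group landslide locations (1 = landslide observed).
pred_class = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = higher relative hazard
validation = np.array([1, 1, 0, 1, 0, 0, 1, 0])
p = class_probabilities(pred_class, validation, 2)
```

If the first-stage ranking is sound, the estimated probabilities should increase with the relative hazard level, as they do here.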


Environmental Impact Assessment Review | 2003

Accounting for uncertainty factors in biodiversity impact assessment: lessons from a case study

Davide Geneletti; E. Beinat; Chang-Jo Chung; A.G. Fabbri; H.J. Scholten

For an Environmental Impact Statement (EIS) to effectively contribute to decision-making, it must include one crucial step: the estimation of the uncertainty factors affecting the impact evaluation and of their effect on the evaluation results. Knowledge of the uncertainties better orients the strategy of the decision-makers and underlines the most critical data or methodological steps of the procedure. Accounting for uncertainty factors is particularly relevant when dealing with ecological impacts, whose forecasts are typically affected by a high degree of simplification. By means of a case study dealing with the evaluation of road alternatives, this paper explores and discusses the main uncertainties that are related to the typical stages of a biodiversity impact assessment: uncertainty in the data that are used, in the methodologies that are applied, and in the value judgments provided by the experts. Subsequently, the effects of such uncertainty factors are tracked back to the result of the evaluation, i.e., to the relative performance of the project alternatives under consideration. This makes it possible to test the sensitivity of the results, and consequently to provide a more informative ranking of the alternatives. The paper concludes by discussing the added value for decision-making provided by uncertainty analysis within EIA.


Computers & Geosciences | 2006

An empirical evaluation of spatial regression models

Xiaolu Gao; Yasushi Asami; Chang-Jo Chung

Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced. Thus, it is often impractical to evaluate them with traditional tests. Another reason, theoretically inherent in statistical methods, is that statistical criteria depend crucially on assumptions such as normality, independence, and homogeneity. This may create problems because these assumptions are themselves open to question. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple ordinary linear regression model and three spatial models. Their performance in predicting the price of the house and land is examined. With a cross-validation technique, the price at each sample point is predicted with a model estimated from all samples except the one in question. Empirical criteria are then established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method serves as an alternative way to test the significance of the spatial relationships considered in spatial regression models.
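The cross-validation scheme described above, applied to a single ordinary linear model, can be sketched as leave-one-out prediction. The "price vs. floor area" data below are a synthetic stand-in, not the paper's data set.

```python
# Leave-one-out cross-validation: each price is predicted from a model
# fitted on all other samples, then compared with the observed value.
import numpy as np

def loo_rmse(X, y):
    """Root-mean-square error of leave-one-out OLS predictions."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                       # drop sample i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errors[i] = y[i] - X[i] @ beta                 # held-out residual
    return np.sqrt(np.mean(errors ** 2))

# Synthetic "price vs. floor area" data with a little noise.
area = np.array([50.0, 60.0, 70.0, 80.0, 90.0, 100.0])
price = 2.0 * area + np.array([1.0, -1.0, 0.5, -0.5, 1.0, -1.0])
X = np.column_stack([np.ones_like(area), area])
score = loo_rmse(X, price)
```

Running the same routine for each candidate specification (ordinary vs. spatial) and comparing the scores gives the empirical criterion the paper argues for, without relying on normality or independence assumptions.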


Computers & Geosciences | 2006

Two models for evaluating landslide hazards

John C. Davis; Chang-Jo Chung; Gregory C. Ohlmacher

Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
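Under the conditional independence assumption named above, the empirical likelihood ratio model multiplies one ratio per data layer into a single relative hazard score. A minimal sketch with invented frequency tables (the slope and lithology distributions are not from the paper):

```python
# Empirical likelihood-ratio model: per-layer ratios of the frequency of a
# value in landslide cells to its frequency in stable cells, multiplied
# together under conditional independence. Frequencies are illustrative.

def likelihood_ratio(value, freq_slide, freq_stable):
    """Empirical ratio for one predictor value (e.g. a slope-angle bin)."""
    return freq_slide[value] / freq_stable[value]

# Hypothetical univariate frequency distributions for two layers.
slope_slide, slope_stable = {"steep": 0.7, "gentle": 0.3}, {"steep": 0.2, "gentle": 0.8}
litho_slide, litho_stable = {"shale": 0.6, "limestone": 0.4}, {"shale": 0.3, "limestone": 0.7}

# Conditional independence: multiply the layer ratios cell by cell.
cell = {"slope": "steep", "litho": "shale"}
score = (likelihood_ratio(cell["slope"], slope_slide, slope_stable)
         * likelihood_ratio(cell["litho"], litho_slide, litho_stable))
```

A score above 1 marks a cell whose attribute combination is more common among past landslides than among stable terrain; ranking cells by this product gives the relative hazard map.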


Geo-information for Disaster Management | 2005

Risk assessment using spatial prediction model for natural disaster preparedness

Chang-Jo Chung; A.G. Fabbri; Dong-Ho Jang; H.J. Scholten

The spatial mapping of risk is critical in planning for disaster preparedness. An application from a study area affected by mass movements is used as an example to portray the desirable relations between hazard prediction and disaster management. We have developed a three-stage procedure in spatial data analysis not only to estimate the probability of the occurrence of the natural hazardous events but also to evaluate the uncertainty of the estimators of that probability. The three-stage procedure consists of: (i) construction of a hazard prediction map of “future” hazardous events; (ii) validation/reliability of prediction results and estimation of the probability of occurrence for each predicted hazard level; and (iii) generation of risk maps with the introduction of socio-economic factors representing assumed or established vulnerability levels by combining the prediction map in the first stage and the estimated probabilities in the second stage with socio-economic data. Three-dimensional dynamic display techniques can be used to obtain the contextual setting of the risk space/time/level distribution and to plan measures for risk avoidance or mitigation, or for disaster preparedness and risk management. A software approach provides the analytical structure and modeling power as a fundamental tool for decision making.


Computers & Geosciences | 2006

Optimizing the use of aeromagnetic data for predictive geological interpretation: an example from the Grenville Province, Quebec

Sharon Parsons; Léopold Nadeau; Pierre Keating; Chang-Jo Chung

Predictive geological mapping relies largely on the empirical and statistical analysis of aeromagnetic data. However, in most applications the analysis remains essentially visual and unconstrained. The lithological and structural diversity of rock units underlying the Mingan Region makes it an ideal test area to apply more rigorous approaches to magnetic data processing and interpretation, and to assess their usefulness and limitations. In the application discussed here, various derivatives and transformations of the total field magnetic data are evaluated empirically by photo-interpretation using a Geographic Information System. We show that rock types are best represented using the total field and vertical derivative of the magnetic data, whereas contacts between rock types are best delineated using the horizontal derivative of the total field and the analytic signal. In addition, the maxima of the analytic signal are used to estimate the direction of dip of large-scale geological units. Statistical analyses show that the correlation between geology and magnetic data is not directly proportional. Finally, the sources of discrepancies between mapped geological units and magnetic response are evaluated through theoretical data modeling of representative geological bodies.
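Two of the transformations discussed above can be sketched on a synthetic grid: the total horizontal derivative (an edge detector for contacts) and the analytic-signal amplitude. This is a simplified illustration; production workflows compute the vertical derivative spectrally (e.g. via FFT), so here it is simply passed in as a given grid, and the anomaly is an invented Gaussian bump.

```python
import numpy as np

def total_horizontal_derivative(field, dx=1.0, dy=1.0):
    """Magnitude of the horizontal gradient of a gridded total field."""
    gy, gx = np.gradient(field, dy, dx)   # row (y) then column (x) derivatives
    return np.hypot(gx, gy)

def analytic_signal_amplitude(field, dz_field, dx=1.0, dy=1.0):
    """sqrt((dT/dx)^2 + (dT/dy)^2 + (dT/dz)^2), given a vertical derivative grid."""
    gy, gx = np.gradient(field, dy, dx)
    return np.sqrt(gx**2 + gy**2 + dz_field**2)

# Synthetic anomaly: a smooth bump on a 32 x 32 grid.
y, x = np.mgrid[0:32, 0:32]
field = np.exp(-((x - 16.0)**2 + (y - 16.0)**2) / 40.0)
thd = total_horizontal_derivative(field)
asa = analytic_signal_amplitude(field, np.zeros_like(field))
```

The horizontal-derivative magnitude vanishes at the anomaly's center and peaks on its flanks, which is why it outlines contacts rather than rock-unit interiors.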


Computers & Geosciences | 1989

FORTRAN 77 program for constructing and plotting confidence bands for the distribution and quantile functions for randomly censored data

Chang-Jo Chung

For randomly censored data, a FORTRAN 77 computer program has been developed for plotting the product-limit estimator for the distribution function, the product-limit quantile function for the quantile function, the Csörgő-Horváth confidence bands for the distribution function, and the Chung-Csörgő-Horváth confidence bands for the quantile function. The program is written for an IBM PC and HP plotter (or any compatible plotter, such as a Roland DG plotter).
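The product-limit (Kaplan-Meier) estimator that the program plots can be sketched compactly. The (time, event) pairs below are made up, with event = 0 marking a censored observation; ties are not handled in this minimal version.

```python
# Product-limit (Kaplan-Meier) survival estimate for randomly censored
# data. Each uncensored event multiplies the survival estimate by
# (at_risk - 1) / at_risk; censored observations only shrink the risk set.

def product_limit(times, events):
    """Survival estimate S(t) at each distinct uncensored event time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, surv, curve = n, 1.0, []
    for i in order:
        if events[i] == 1:                    # observed (uncensored) event
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                          # either way, leaves the risk set
    return curve

# Made-up censored sample: events at t = 2, 4, 5; censored at t = 3, 7.
curve = product_limit([2.0, 3.0, 4.0, 5.0, 7.0], [1, 0, 1, 1, 0])
```

The estimated distribution function is 1 - S(t); confidence bands of the Csörgő-Horváth type are then drawn around this step function.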

Collaboration


Dive into Chang-Jo Chung's collaboration.

Top Co-Authors

Andrea G. Fabbri (University of Milano-Bicocca)
A.G. Fabbri (VU University Amsterdam)
Angelo Cavallin (University of Milano-Bicocca)
Juan Remondo (University of Cantabria)
Byung-Doo Kwon (Seoul National University)
Pierre Keating (Geological Survey of Canada)