Alexander Gribov
Esri
Publication
Featured research published by Alexander Gribov.
graphics recognition | 2001
Eugene Bodansky; Alexander Gribov; Morakot Pilouk
This paper presents analyses of different methods of post-processing the lines that result from raster-to-vector conversion of black-and-white line drawings. Special attention was paid to the borders of connected components of maps. These methods are implemented with compression and smoothing algorithms. Because smoothing can enhance accuracy, applying smoothing and compression in succession gives a more accurate result than using a compression algorithm alone. The paper also shows that a map in vector format may require more memory than a map in raster format. The Appendix contains a detailed description of the new smoothing method (continuous local weighted averaging) suggested by the authors.
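The paper's smoothing method is defined on the continuous curve; as a rough discrete illustration (the function name, window size, and triangular weighting are choices made here, not taken from the paper), each vertex can be replaced by a weighted average of its neighbors:

```python
import numpy as np

def smooth_polyline(points, half_window=3):
    """Smooth a polyline by local weighted averaging.

    Each interior vertex is replaced by a weighted mean of nearby
    vertices, with triangular weights that fall off linearly with
    index distance. Endpoints are kept fixed so the ends do not drift.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    out = pts.copy()
    for i in range(1, n - 1):
        lo = max(0, i - half_window)
        hi = min(n - 1, i + half_window)
        idx = np.arange(lo, hi + 1)
        w = 1.0 - np.abs(idx - i) / (half_window + 1)  # triangular weights
        out[i] = (w[:, None] * pts[idx]).sum(axis=0) / w.sum()
    return out
```

Applied to a zig-zag polyline oscillating around a straight line, this pulls the interior vertices toward the line while leaving the endpoints in place.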
Lecture Notes in Computer Science | 2004
Alexander Gribov; Eugene Bodansky
Many raw vectorization methods produce lines with redundant vertices, so the results of vectorization usually need to be compressed. Approximation methods based on discarding inessential vertices are widespread. The result of any of these methods is a polyline whose vertices are a subset of the source polyline's vertices. When the vertices of the source polyline contain noise, the vertices of the resulting polyline carry the same noise. Reducing vertices without noise filtering can disfigure the shape of the source polyline. We suggest a new optimal method of piecewise linear approximation that performs noise filtering. Our method divides the source polyline into clusters and approximates each cluster with a straight line. Our optimal method of dividing the polyline into clusters guarantees that the functional (the integral square error of approximation plus a penalty for each cluster) is minimized.
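The clustering step can be sketched as a dynamic program. The version below is a simplified stand-in for the paper's method: it partitions vertex indices (rather than the continuous curve) and measures fit by total-least-squares residuals, minimizing the sum of per-cluster fit errors plus a fixed penalty per cluster:

```python
import numpy as np

def segment_cost(pts):
    """Sum of squared perpendicular distances of pts to their
    total-least-squares line: the smallest singular value of the
    centered coordinate matrix, squared."""
    if len(pts) < 3:
        return 0.0  # one or two points always fit a line exactly
    c = pts - pts.mean(axis=0)
    return float(np.linalg.svd(c, compute_uv=False)[-1] ** 2)

def optimal_split(points, penalty):
    """Dynamic program: minimize the sum of per-cluster fit errors
    plus `penalty` per cluster. Returns the (exclusive) end index
    of each cluster. O(n^2) subproblems; fine for small polylines."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    best = np.full(n + 1, np.inf)  # best[j]: optimal cost of pts[:j]
    best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + segment_cost(pts[i:j]) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i
    cuts, j = [], n
    while j > 0:          # backtrack cluster boundaries
        cuts.append(j)
        j = prev[j]
    return sorted(cuts)
```

On an L-shaped polyline made of two collinear runs, a small penalty yields exactly two clusters, since splitting at the corner makes both fit errors zero.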
Mathematical Geosciences | 2004
Alexander Gribov; Konstantin Krivoruchko
An issue that often arises in GIS applications such as digital elevation modeling (DEM) is how to create a continuous surface from a limited number of point observations. In hydrological applications, such as estimating drainage areas, the direction of water flow is easier to detect from a smooth DEM than from a grid created with standard interpolation programs. Another reason for continuous mapping is esthetic; like a picture, a map should be visually appealing, and for some GIS users this is more important than map accuracy. There are many methods for local smoothing. Spline algorithms are usually used to create a continuous map because they minimize the curvature of the surface. Geostatistical models are commonly used for spatial prediction and mapping in many scientific disciplines, but classical kriging models produce noncontinuous surfaces when a local neighborhood is used. This motivated us to develop a continuous version of kriging. We propose a modification of kriging that produces continuous prediction and prediction standard error surfaces. The idea is to modify the kriging systems so that data beyond a specified distance from the prediction location have zero weights. We discuss simple kriging and conditional geostatistical simulation, models that essentially use information about the mean value or trend surface. We also discuss how to modify ordinary and universal kriging models to produce continuous predictions, and the limitations of the proposed models.
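A minimal sketch of the idea, assuming an exponential covariance multiplied by a spherical taper that vanishes at radius R (the specific covariance, taper, and function names are choices made here, not the paper's formulation): only data within R of the prediction location enter the system, and their influence dies off continuously as they approach the boundary:

```python
import numpy as np

def taper(h, R):
    """Spherical correlation: 1 at h = 0, exactly 0 for h >= R."""
    t = np.clip(h / R, 0.0, 1.0)
    return 1.0 - 1.5 * t + 0.5 * t ** 3

def simple_krige(xy, z, x0, mean, sill=1.0, rng=5.0, R=5.0):
    """Simple kriging prediction at x0 with an exponential covariance
    multiplied by a compactly supported taper, so only data within R
    contribute and their weights vanish continuously at distance R."""
    d0 = np.linalg.norm(xy - x0, axis=1)
    use = d0 < R
    p, zz = xy[use], z[use]
    if len(p) == 0:
        return mean                      # no data in range: fall back to mean
    H = np.linalg.norm(p[:, None] - p[None, :], axis=2)
    C = sill * np.exp(-H / rng) * taper(H, R)       # tapered covariance matrix
    c0 = sill * np.exp(-d0[use] / rng) * taper(d0[use], R)
    w = np.linalg.solve(C + 1e-10 * np.eye(len(p)), c0)  # jitter for stability
    return mean + w @ (zz - mean)
```

As with ordinary simple kriging (no nugget), the prediction at a data location reproduces the data value, and far from all data it reverts to the mean.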
Archive | 2012
Alexander Gribov; Konstantin Krivoruchko
This paper proposes a new flexible non-parametric transformation of data to a Gaussian distribution. Such a transformation is often required because kriging is the best predictor under the squared-error minimization criterion only if the data follow a multivariate Gaussian distribution, while environmental data are often best described by skewed distributions with non-negative values and a heavy right tail. We assume that the random field being modeled is the result of a nonlinear transformation of a Gaussian random field. In this case, researchers commonly use a parametric monotone transformation (for example, power or logarithmic) or variants of the normal score transformation. We discuss the drawbacks of these methods and propose a new flexible non-parametric transformation. We compare the performance of simple kriging with the proposed data transformation to several other data transformation methods, including a transformation based on a mixture of Gaussian kernels and multiplicative skewing with several base distributions. Our method is flexible, and it can be used for automatic data transformation, for example, in black-box kriging models in emergency situations.
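For reference, the classical normal score transformation that such methods improve upon can be sketched as follows (a minimal version using plotting positions; ties and back-transformation details are ignored):

```python
import numpy as np
from statistics import NormalDist

def normal_score(z):
    """Classical normal score transform: map each value to the
    standard-normal quantile of its empirical CDF plotting position."""
    z = np.asarray(z, dtype=float)
    ranks = z.argsort().argsort() + 1      # ranks 1..n (distinct values)
    p = (ranks - 0.5) / len(z)             # plotting positions in (0, 1)
    nd = NormalDist()                      # standard normal, Python stdlib
    return np.array([nd.inv_cdf(pi) for pi in p])
```

The transform is monotone, so the spatial ordering of the data is preserved, and the output is symmetric around zero regardless of how skewed the input is.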
Archive | 2014
Konstantin Krivoruchko; Alexander Gribov
We discuss two flexible and fast empirical Bayesian kriging models: (1) intrinsic random functions of order zero and one and (2) kriging with local data transformation to a Gaussian distribution. In the case of large datasets, all calculations are made on data subsets, possibly overlapping, and predictions are made as weighted sums of predictions from the different subsets. The methodology is illustrated on 1.35 billion samples collected with LiDAR technology.
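The weighted combination of overlapping subset predictions can be illustrated as follows (a hedged sketch: the weight function and its parameters are assumptions made here, not the paper's actual weighting scheme):

```python
import numpy as np

def blend_predictions(x0, centers, radii, preds):
    """Blend predictions from overlapping local models: each subset's
    weight decays smoothly from 1 at its center to 0 at its radius,
    so the combined surface stays continuous across subset borders."""
    d = np.linalg.norm(np.asarray(centers, dtype=float) - x0, axis=1)
    t = np.clip(d / np.asarray(radii, dtype=float), 0.0, 1.0)
    w = (1.0 - t) ** 2                   # smooth, compactly supported weights
    if w.sum() == 0:
        return np.nan                    # x0 outside every subset
    return float(w @ np.asarray(preds, dtype=float) / w.sum())
```

At a subset's center (outside all other subsets) the blend returns that subset's prediction; midway between two equal subsets it returns their average.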
international conference on image analysis and recognition | 2006
Eugene Bodansky; Alexander Gribov
The problem of recognizing a polyline as a sequence of geometric primitives is important for solving applied tasks such as post-processing of lines obtained as a result of vectorization; polygonal line compression; recognition of characteristic features; noise filtering; and text, symbol, and shape recognition. Here, a method is proposed for approximating polylines with straight segments, circular arcs, and free curves.
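One common primitive for the circular-arc case is a least-squares circle fit; the algebraic (Kasa) fit below is a standard textbook choice, shown for illustration rather than as the paper's method:

```python
import numpy as np

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit. Rewrites the circle
    equation as the linear model x^2 + y^2 = D x + E y + F and solves
    for (D, E, F) in least squares. Returns (center, radius)."""
    pts = np.asarray(pts, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([D / 2.0, E / 2.0])
    radius = np.sqrt(F + center @ center)
    return center, radius
```

For points lying exactly on a circle the fit recovers the center and radius exactly; with noise it minimizes an algebraic rather than geometric error, which is usually adequate for classifying a polyline piece as an arc.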
document analysis systems | 2006
Alexander Gribov; Eugene Bodansky
An orthogonal polygonal line is a line consisting of adjacent straight segments with only two directions, orthogonal to each other. Because of noise and vectorization errors, the result of vectorizing such a line may differ from an orthogonal polygonal line. This paper describes an optimal method for the restoration of orthogonal polygonal lines, based on the method for restoring arbitrary ground truth lines from [1]. The distinctive feature of the suggested algorithm is that it filters vectorization errors using a priori information about the orthogonality of the ground truth contour. The suggested algorithm guarantees that the obtained polygonal lines are orthogonal and have minimal deviation from the ground truth line. The algorithm has low computational complexity and can be used for restoring orthogonal polygonal lines with many vertices. It was developed for the raster-to-vector conversion system ArcScan for ArcGIS and can be used for interactive vectorization of orthogonal polygonal lines.
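A minimal, non-optimal illustration of the idea (the paper's algorithm is optimal; this sketch merely estimates the dominant grid angle and snaps each segment to the nearer of the two orthogonal directions):

```python
import numpy as np

def snap_orthogonal(points):
    """Illustrative restoration of an orthogonal polyline: estimate the
    dominant grid angle by averaging segment directions modulo 90
    degrees (on the 4-theta circle), then snap every segment to the
    nearer orthogonal direction, accumulating projected displacements.
    Consecutive same-direction segments are not merged here."""
    pts = np.asarray(points, dtype=float)
    v = np.diff(pts, axis=0)
    ang = np.arctan2(v[:, 1], v[:, 0])
    L = np.linalg.norm(v, axis=1)
    # length-weighted mean direction, invariant to 90-degree rotations
    theta = 0.25 * np.arctan2((L * np.sin(4 * ang)).sum(),
                              (L * np.cos(4 * ang)).sum())
    axes = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])
    out = [pts[0]]
    for seg in v:
        proj = axes @ seg
        k = np.argmax(np.abs(proj))      # nearer orthogonal direction
        out.append(out[-1] + proj[k] * axes[k])
    return np.array(out)
```

Every restored segment is then parallel to one of two orthogonal axes, so adjacent segments on different axes are exactly perpendicular.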
international conference on document analysis and recognition | 2003
Alexander Gribov; Eugene Bodansky
A new precision vectorization method has been developed for building centerlines of plain shapes. First, a dense skeleton is computed. Centerlines are obtained as a subset of branches of the dense skeleton. The dense skeleton can also be used for obtaining medial axes of shapes. To obtain high precision, the distance transformation 12-17-38 was developed, which gives a good approximation of the Euclidean metric. The expression "Voronoi L-diagram" was coined.
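A two-pass chamfer distance transform shows how such integer masks approximate the Euclidean metric. The mask layout below is an assumption, not taken from the paper: 12 per axial step, 17 per diagonal step (about 12 times sqrt(2)), and 38 per (1, 3) move (about 12 times sqrt(10)):

```python
import numpy as np

def chamfer_dt(binary):
    """Two-pass chamfer distance transform from the True pixels of
    `binary`, using assumed 12-17-38 weights: 12 axial, 17 diagonal,
    38 for (1, 3) 'long knight' moves. Forward raster scan, then a
    mirrored backward scan."""
    INF = 10 ** 9
    d = np.where(binary, 0, INF).astype(np.int64)
    h, w = d.shape
    fwd = [(-1, -1, 17), (-1, 0, 12), (-1, 1, 17), (0, -1, 12),
           (-1, -3, 38), (-1, 3, 38), (-3, -1, 38), (-3, 1, 38)]
    bwd = [(-dy, -dx, c) for dy, dx, c in fwd]
    for offs, ys, xs in ((fwd, range(h), range(w)),
                         (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
        for y in ys:
            for x in xs:
                for dy, dx, c in offs:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and d[yy, xx] + c < d[y, x]:
                        d[y, x] = d[yy, xx] + c
    return d
```

From a single seed pixel, a neighbor one step away costs 12, a diagonal neighbor 17, and a (1, 3) neighbor 38, so dividing by 12 gives distances close to the true Euclidean values 1, sqrt(2), and sqrt(10).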
Archive | 2006
Alexander Gribov; Konstantin Krivoruchko; J. M. Ver Hoef
This chapter proposes some new methods for computing empirical semivariograms and covariances and for fitting semivariogram and covariance models to empirical data. Grid-based empirical semivariograms and covariances are described, in which the grid values are smoothed using triangular kernels. A model-fitting procedure using modified iterative weighted least squares is presented. This algorithm is shown to be reliable for a wide range of data types and conditions, and its implementation in commercial software is discussed. Comparisons to restricted maximum likelihood estimation are also discussed.
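A grid-based empirical semivariogram with triangular kernel smoothing can be sketched as follows (a simplified illustration; the bin layout and kernel bandwidth here are assumptions, not the chapter's exact formulation). Each squared pair difference is spread over nearby lag bins with triangular weights:

```python
import numpy as np

def empirical_semivariogram(xy, z, lag_width, n_lags):
    """Empirical semivariogram on a regular lag grid. Each pair's
    semivariance 0.5 * (z_i - z_j)^2 is distributed over the lag-bin
    centers with a triangular kernel of bandwidth `lag_width`."""
    xy, z = np.asarray(xy, dtype=float), np.asarray(z, dtype=float)
    n = len(z)
    num = np.zeros(n_lags)
    den = np.zeros(n_lags)
    centers = (np.arange(n_lags) + 0.5) * lag_width
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(xy[i] - xy[j])
            g = 0.5 * (z[i] - z[j]) ** 2
            w = np.maximum(0.0, 1.0 - np.abs(h - centers) / lag_width)
            num += w * g
            den += w
    gamma = np.where(den > 0, num / np.maximum(den, 1e-300), np.nan)
    return centers, gamma
```

For data with a linear trend along a transect, the smoothed semivariogram increases monotonically with lag, as expected for a non-stationary trend.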
graphics recognition | 2005
Alexander Gribov; Eugene Bodansky
In this paper, we analyze vectorization methods and the errors of vectorization of monochrome images obtained by scanning line drawings. We focus our attention on a widespread error inherent in many commercial and academic universal vectorization systems. This error, the parity error, depends on the scanning resolution, the thickness of the line, and the type of vectorization method. A method for removing parity errors is suggested. The problems of accuracy, required storage capacity, and admissible slowing of vectorization are discussed in the conclusion.