
Publication


Featured research published by Scott A. Starks.


Reliable Computing | 2006

Towards Combining Probabilistic and Interval Uncertainty in Engineering Calculations: Algorithms for Computing Statistics under Interval Uncertainty, and Their Computational Complexity

Vladik Kreinovich; Gang Xiang; Scott A. Starks; Luc Longpré; Martine Ceberio; Roberto Araiza; Jan Beck; Raj Kandathi; Asis Nayak; Roberto Torres; Janos Hajagos

In many engineering applications, we have to combine probabilistic and interval uncertainty. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as the mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we can often measure the values only with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. In this paper, we provide a survey of algorithms for computing various statistics under interval uncertainty and their computational complexity. The survey includes both known and new algorithms.
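
The simplest such statistic illustrates the setup well: because the sample mean is increasing in every input, its exact range under interval uncertainty comes from plugging in all lower endpoints and then all upper endpoints. A minimal Python sketch (helper name ours, not from the paper):

# Exact range of the sample mean when each measurement x_i is only
# known to lie in an interval [lo_i, hi_i]: the mean is increasing in
# every x_i, so its extremes are attained at the endpoints.
def mean_range(intervals):
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

# Example: three interval-valued pollution readings.
print(mean_range([(1.0, 1.2), (0.8, 1.1), (1.3, 1.4)]))  # (1.033..., 1.233...)

For nonlinear statistics such as the variance, this endpoint argument no longer suffices, which is where the complexity results surveyed in the paper come in.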


Reliable Computing | 2006

Monte-Carlo-Type Techniques for Processing Interval Uncertainty, and Their Potential Engineering Applications

Vladik Kreinovich; Jan Beck; Carlos M. Ferregut; Araceli Sanchez; G. Randy Keller; Matthew George Averill; Scott A. Starks

In engineering applications, we need to make decisions under uncertainty. Traditionally, statistical methods are used, which assume that we know the probability distribution of the uncertain parameters. Usually, we can safely linearize the dependence of the desired quantities y (e.g., stress at different structural points) on the uncertain parameters x_i, thus enabling sensitivity analysis. Often, the number n of uncertain parameters is huge, so sensitivity analysis requires a lot of computation time. To speed up the processing, we propose to use special Monte-Carlo-type simulations.
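
One Monte-Carlo-type technique developed in this line of work is the Cauchy-deviate method: if f is approximately linear on the box of possible inputs, simulating the inputs with Cauchy-distributed deviates makes the output deviation Cauchy-distributed with scale equal to the interval half-width of y. The sketch below (names and defaults ours) recovers that scale by maximum likelihood:

import math, random

def cauchy_halfwidth(f, midpoints, halfwidths, trials=200):
    # Sample Cauchy deviates around the midpoints: the tangent of a
    # uniform angle in (-pi/2, pi/2) is Cauchy-distributed.
    y0 = f(midpoints)
    devs = []
    for _ in range(trials):
        xs = [m + d * math.tan(math.pi * (random.random() - 0.5))
              for m, d in zip(midpoints, halfwidths)]
        devs.append(f(xs) - y0)
    # For linear f, devs are Cauchy with scale = half-width of y.
    # Recover the scale by maximum likelihood, solved by bisection:
    # sum 1 / (1 + (dev/scale)^2) = trials / 2.
    lo, hi = 1e-12, max(abs(d) for d in devs) + 1e-12
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sum(1.0 / (1.0 + (d / mid) ** 2) for d in devs) < trials / 2:
            lo = mid   # scale estimate too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: y = x1 + 2*x2 with half-widths 0.1 and 0.05; the true
# half-width of y is 0.1 + 2*0.05 = 0.2.
print(cauchy_halfwidth(lambda x: x[0] + 2 * x[1], [1.0, 1.0], [0.1, 0.05]))

The key point is that the number of calls to f depends on the desired accuracy, not on the number n of parameters, unlike one-at-a-time sensitivity analysis.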


Numerical Algorithms | 2004

Interval Arithmetic, Affine Arithmetic, Taylor Series Methods: Why, What Next?

Nedialko S. Nedialkov; Vladik Kreinovich; Scott A. Starks

In interval computations, the range of each intermediate result r is described by an enclosing interval. To decrease excess interval width, we can keep some information on how r depends on the input x = (x1, ..., xn). There are several successful methods for approximating this dependence; in these methods, the dependence is approximated by linear functions (affine arithmetic) or by general polynomials (Taylor series methods). Why linear functions and polynomials? What other classes can we try? These questions are answered in this paper.
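
The excess width that these methods fight comes from the dependency problem: plain interval arithmetic forgets that two occurrences of x are the same quantity. A toy Python illustration (ours, not from the paper), contrasting interval subtraction with an affine form that shares a noise symbol:

def interval_sub(a, b):
    # [a1, a2] - [b1, b2] = [a1 - b2, a2 - b1]
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print(interval_sub(x, x))   # (-1.0, 1.0): excess width; true range is [0, 0]

# Affine arithmetic keeps the linear dependence: x = 0.5 + 0.5*eps with
# eps in [-1, 1], so the coefficients of the shared symbol eps cancel.
x_aff = {"const": 0.5, "eps1": 0.5}
diff = {k: v - x_aff[k] for k, v in x_aff.items()}
print(diff)                 # {'const': 0.0, 'eps1': 0.0}: exactly [0, 0]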


North American Fuzzy Information Processing Society | 2005

Using expert knowledge in solving the seismic inverse problem

Matthew G. Averill; Kate C. Miller; George R. Keller; Vladik Kreinovich; Roberto Araiza; Scott A. Starks

In this talk, we analyze how expert knowledge can be used in solving the seismic inverse problem. To determine the geophysical structure of a region, we measure seismic travel times and reconstruct velocities at different depths from this data. There are several algorithms for solving this inverse problem.


Parallel Computing | 2004

New algorithms for statistical analysis of interval data

Gang Xiang; Scott A. Starks; Vladik Kreinovich; Luc Longpré

It is known that, in general, statistical analysis of interval data is an NP-hard problem: even computing the variance of interval data is, in general, NP-hard. Until now, only one case was known for which a feasible algorithm can compute the variance of interval data: the case when all the measurements are accurate enough that, even after the measurement, we can distinguish between different measured values x̃_i. In this paper, we describe several new cases in which feasible algorithms are possible, e.g., the case when all the measurements are made using the same (not necessarily very accurate) measurement instrument, or at least a limited number of different measuring instruments.
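
One way to see where the hardness comes from: the variance is convex in each x_i, so its maximum over the box of possible values is always attained at a vertex, and the only generic exact method is to enumerate all 2^n vertices. A brute-force Python sketch (ours, purely illustrative), with exactly the exponential cost that the feasible special-case algorithms avoid:

from itertools import product

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def max_variance(intervals):
    # Check every combination of interval endpoints: 2^n candidates.
    return max(variance(v) for v in product(*intervals))

print(max_variance([(0.0, 1.0), (0.4, 0.6), (0.9, 1.1)]))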


IEEE International Conference on Fuzzy Systems | 2002

Uncertainty in risk analysis: towards a general second-order approach combining interval, probabilistic, and fuzzy techniques

Scott Ferson; Lev R. Ginzburg; Vladik Kreinovich; Hung T. Nguyen; Scott A. Starks

Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general second-order formalism for handling different types of uncertainty.
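
One concrete carrier for such partial probabilistic information is a p-box: pointwise lower and upper bounds on the unknown cumulative distribution function. The sketch below (a simplified illustration of the general idea, not the paper's formalism) builds empirical CDF bounds from interval-valued samples:

def pbox(intervals, t):
    # Upper bound on P(X <= t): every value sits at its lower endpoint.
    # Lower bound on P(X <= t): every value sits at its upper endpoint.
    n = len(intervals)
    upper = sum(1 for lo, _ in intervals if lo <= t) / n
    lower = sum(1 for _, hi in intervals if hi <= t) / n
    return lower, upper

samples = [(0.9, 1.1), (1.0, 1.4), (0.5, 0.7)]
print(pbox(samples, 1.0))   # (0.333..., 1.0): bounds on P(X <= 1.0)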


Systems, Man and Cybernetics | 2001

Towards more realistic (e.g., non-associative) AND- and OR-operations in fuzzy logic

Jesus Martinez; Leopoldo Macias; Ammar J. Esper; Jesus Chaparro; Vick Alvarado; Scott A. Starks; Vladik Kreinovich

How is fuzzy logic usually formalized? There are many seemingly reasonable requirements that a logic should satisfy: e.g., since A∧B and B∧A mean the same, the corresponding and-operation should be commutative. Similarly, since A∧A means the same as A, we should expect the and-operation to satisfy this property (idempotence) as well, etc. It turns out to be impossible to satisfy all these seemingly natural requirements, so usually some requirements are picked as absolutely true (like commutativity or associativity), and others are ignored when they contradict the picked ones. This idea leads to a neat mathematical theory, but the analysis of real-life expert reasoning shows that all the requirements are only approximately satisfied; we should therefore require all of these requirements to be satisfied to some extent. In this paper, we show the preliminary results of analyzing such operations. In particular, we show that non-associative operations explain the empirical 7±2 law in psychology, according to which a person can normally distinguish between no more than 7 plus or minus 2 classes.
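
For a feel of what such an operation looks like, here is a commutative but non-associative and-operation: the average of the two classical t-norms min(a, b) and a*b (our illustrative choice, not necessarily one of the paper's operations):

def and_op(a, b):
    return (min(a, b) + a * b) / 2

a, b, c = 0.2, 0.8, 0.8
print(and_op(and_op(a, b), c))   # ≈ 0.162
print(and_op(a, and_op(b, c)))   # ≈ 0.172: grouping changes the result

Because the result depends on how items are grouped, repeatedly and-ing long lists of statements becomes unstable, which is in the spirit of the paper's explanation of the 7±2 bound.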


Reliable Computing | 2004

Eliminating Duplicates under Interval and Fuzzy Uncertainty: An Asymptotically Optimal Algorithm and Its Geospatial Applications

Roberto Torres; G. Randy Keller; Vladik Kreinovich; Luc Longpré; Scott A. Starks

Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly, and they often contain duplicate records, i.e., two or more close records representing the same measurement result. In this paper, we address the problem of detecting duplicates in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled. We show that a natural duplicate-deletion algorithm requires (in the worst case) quadratic time, and we propose a new asymptotically optimal O(n log n) algorithm. These algorithms have been successfully applied to gravity databases. We believe that they will prove useful when dealing with many other types of point data.
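
For intuition, a much-simplified stand-in for duplicate detection over point data (ours, not the paper's asymptotically optimal algorithm): hash each point into an eps-sized grid cell and compare only points in neighboring cells, which is near-linear when duplicates are sparse:

from collections import defaultdict
from math import dist, floor

def near_duplicates(points, eps):
    # Return index pairs (i, j), i < j, whose points lie within eps.
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(floor(x / eps), floor(y / eps))].append(i)
    pairs = set()
    for i, (x, y) in enumerate(points):
        cx, cy = floor(x / eps), floor(y / eps)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    if i < j and dist(points[i], points[j]) < eps:
                        pairs.add((i, j))
    return sorted(pairs)

pts = [(0.0, 0.0), (0.001, 0.0005), (5.0, 5.0)]
print(near_duplicates(pts, 0.01))   # [(0, 1)]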


Data Mining and Computational Intelligence | 2001

Intelligent mining in image databases, with applications to satellite imaging and to web search

Stephen Gibson; Vladik Kreinovich; Luc Longpré; Brian S. Penn; Scott A. Starks



North American Fuzzy Information Processing Society | 2003

Fast quantum algorithms for handling probabilistic, interval, and fuzzy uncertainty

Mark Martinez; Luc Longpré; Vladik Kreinovich; Scott A. Starks; Hung T. Nguyen


Collaboration


Dive into Scott A. Starks's collaborations.

Top Co-Authors

Vladik Kreinovich (University of Texas at El Paso)
Luc Longpré (University of Texas at El Paso)
Olga Kosheleva (University of Texas at El Paso)
Gang Xiang (University of Texas at El Paso)
Hung T. Nguyen (New Mexico State University)
Roberto Araiza (University of Texas at El Paso)
Bryan Usevitch (University of Texas at El Paso)
Karen Villaverde (New Mexico State University)