Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jennifer L. Davidson is active.

Publication


Featured research published by Jennifer L. Davidson.


IEEE Transactions on Power Systems | 1994

Application of artificial neural networks in power system security and vulnerability assessment

Qin Zhou; Jennifer L. Davidson; A. A. Fouad

In a companion paper by A.A. Fouad et al., the concept of system vulnerability is introduced as a new framework for power system dynamic security assessment. Using the transient energy function (TEF) method of transient stability analysis, the energy margin ΔV is used as an indicator of the level of security, and its sensitivity to a changing system parameter p (∂ΔV/∂p) as an indicator of its trend with changing system conditions. These two indicators are combined to determine the degree of system vulnerability to contingent disturbances in a stability-limited power system. Thresholds for acceptable levels of the security indicator and its trend are related to the stability limits of a critical system parameter (plant generation limits). Operating practices and policies are used to determine these thresholds. In this paper the artificial neural networks (ANNs) technique is applied to the concept of system vulnerability within the recently developed framework, for fast pattern recognition and classification of system dynamic security status. A suitable topology for the neural network is developed, and the appropriate training method and input and output signals are selected. The procedure developed is successfully applied to the IEEE 50-generator test system. Data previously obtained by heuristic techniques are used for training the ANN.
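
The two-indicator classification logic described in the abstract can be sketched roughly as follows; the function name and threshold values are illustrative assumptions, not those used in the paper:

```python
def vulnerability_status(delta_v, sensitivity, v_min=0.0, s_min=-1.0):
    # Sketch of the two-indicator logic: the energy margin (delta_v) measures
    # the current security level, and its sensitivity to a changing parameter p
    # measures the trend.  Thresholds here are illustrative placeholders.
    secure_now = delta_v > v_min        # positive margin: transiently stable
    secure_trend = sensitivity > s_min  # margin not eroding too quickly
    if secure_now and secure_trend:
        return "secure"
    if secure_now:
        return "vulnerable"             # stable now, but trending toward trouble
    return "insecure"
```

In the paper, an ANN replaces this hand-written rule, learning the classification from data previously obtained by heuristic techniques.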


Circuits Systems and Signal Processing | 1993

Morphology neural networks: an introduction with applications

Jennifer L. Davidson; Frank Hummer

The area of artificial neural networks has recently seen an explosion of theoretical and practical results. In this paper, we present an artificial neural network that is algebraically distinct from the classical artificial neural networks, and several applications which are different from the typical ones. In fact, this new class of networks, called morphology neural networks, is a special case of a general theory of artificial neural nets, which includes the classical neural nets. The main difference between a classical neural net and a morphology neural net lies in the way each node algebraically combines the numerical information. Each node in a classical neural net combines information by multiplying output values and corresponding weights and summing, while in a morphology neural net, the combining operation consists of adding values and corresponding weights, and taking the maximum value. We lay a theoretical foundation for morphology neural nets, describe their roots, and give several applications in image processing. In addition, theoretical results on the convergence issues for two networks are presented.
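
A minimal sketch of the two node types contrasted in the abstract; the inputs and weights are made up for illustration:

```python
import numpy as np

def classical_node(x, w):
    # Classical neural node: multiply inputs by weights and sum.
    return float(np.sum(x * w))

def morphology_node(x, w):
    # Morphology neural node: add inputs to weights and take the maximum
    # (the max-plus combination described in the abstract).
    return float(np.max(x + w))

x = np.array([1.0, 3.0, 2.0])
w = np.array([0.5, -1.0, 2.0])
print(classical_node(x, w))   # 1*0.5 + 3*(-1) + 2*2 = 1.5
print(morphology_node(x, w))  # max(1.5, 2.0, 4.0) = 4.0
```

The only change is the pair of algebraic operations at each node: (multiply, sum) becomes (add, maximum), which is what makes the network morphological rather than linear.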


Computational Statistics & Data Analysis | 1998

Image analysis with partially ordered Markov models

Noel A Cressie; Jennifer L. Davidson

Statistical approaches to image analysis, such as image restoration, segmentation, object classification, and reconstruction often require specification of a distributional model for the variability of the pixel intensities around the true image and a prior distributional model for the true image itself. Spatial dependence (i.e., nearby values tend to be more - or less - alike than those far apart) is often modeled by assuming a Markov random field (MRF) for the prior model and sometimes for the pixel-intensity model. When dealing with MRFs, there is typically an unwieldy normalizing constant that can cause inference to be either inefficient or computationally intensive. In this article, we propose a class of models that are a subset of the class of MRFs but whose members have probability distributions that can be written in closed form. This class, called the partially ordered Markov models (POMMs), contains as a special case the Markov mesh models (MMMs) and is seen to be an important subclass of graphical models used in the analysis of (Bayesian) networks. POMMs are used in experiments for both the forward problem of texture synthesis and the inverse problem of parameter estimation. Various images of textures are generated using POMMs and are seen not to exhibit any obvious directional patterns. Also, parameter estimates from maximum likelihood estimators are found using a real texture image, and the estimates are then used to generate a texture that is similar to the real data.
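
The closed-form factorization described above can be illustrated with a toy binary Markov mesh model, one of the POMM special cases the abstract mentions. The conditional used here (a majority rule with parameter `theta`) is a hypothetical choice for illustration, not a model from the paper:

```python
import numpy as np

def cond_prob(value, up, left, theta=0.8):
    # Hypothetical local conditional: a pixel equals the majority of its
    # "earlier" (up and left) neighbors with probability theta.  Border
    # pixels use only the neighbors that exist; the corner is Bernoulli(0.5).
    neighbors = [n for n in (up, left) if n is not None]
    if not neighbors:
        return 0.5
    p_one = theta if np.mean(neighbors) >= 0.5 else 1 - theta
    return p_one if value == 1 else 1 - p_one

def joint_prob(img, theta=0.8):
    # Closed-form joint probability: the product of spatially local
    # conditionals over the partial order (here a raster scan, i.e. a
    # Markov mesh).  No normalizing constant is needed.
    p = 1.0
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            up = img[i - 1, j] if i > 0 else None
            left = img[i, j - 1] if j > 0 else None
            p *= cond_prob(img[i, j], up, left, theta)
    return p
```

This is the practical payoff of POMMs over general MRFs: the joint probability is an explicit product of local terms, so likelihoods can be evaluated directly.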


OE/LASE '90, 14-19 Jan., Los Angeles, CA | 1990

Theory of morphological neural networks

Jennifer L. Davidson; Gerhard X. Ritter

The theory of classical artificial neural networks has been used to solve pattern recognition problems in image processing that is different from traditional pattern recognition approaches. In standard neural network theory, the first step in performing a neural network calculation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for non-linearity of the network. This paper presents the fundamental theory for a morphological neural network which, instead of multiplication and summation, uses the non-linear operation of addition and maximum. Several basic applications which are distinctly different from pattern recognition techniques are given, including a net which performs a sieving algorithm.


Pattern Recognition | 1999

Texture synthesis and pattern recognition for partially ordered Markov models

Jennifer L. Davidson; Noel A Cressie; Xia Hua

The uses of texture in image analysis are widespread, ranging from remotely sensed data to medical imaging to military applications. Image processing tasks that use texture characteristics include classification, region segmentation, and synthesis of data. While there are several approaches available for texture modeling, the research presented here is concerned with stochastic texture models. Stochastic approaches view a texture as the realization of a random field and are most useful when the texture appears noisy or when it lacks smooth geometric features. The model introduced in this paper is a subclass of Markov random fields (MRFs) called partially ordered Markov models (POMMs). Markov random fields are a class of stochastic models that incorporate spatial dependency between data points. One major disadvantage of MRFs is that, in general, an explicit form of the joint probability of the random variables describing the model is not obtainable. However, a popular subclass of MRFs, called Markov mesh models (MMMs), allows the explicit description of the joint probability in terms of spatially local conditional probabilities. We show how POMMs are a generalization of MMMs and demonstrate the versatility of POMMs to texture synthesis and pattern recognition in imaging. Specifically, we give a fast, one-pass algorithm for simulating textures using POMMs, and introduce examples of heterogeneous models that suggest potential applications for object recognition purposes. Then we address an inverse problem, where we present results from a series of statistical experiments designed to estimate parameters of stochastic texture models for both binary and gray value data. Although the applications in this paper focus on imaging, in their most general form, POMMs can be found in areas such as probabilistic expert systems, Bayesian hierarchical modeling, influence diagrams, and random graphs and networks.
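
The one-pass property can be sketched with a toy causal model: because each pixel is conditioned only on pixels earlier in the partial order, a single raster scan generates an exact sample. The `theta` majority-rule conditional below is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(rows, cols, theta=0.9, rng=rng):
    # One-pass texture synthesis with a causal (partially ordered) model:
    # every pixel is sampled exactly once, conditioned only on neighbors
    # already generated (up and left), so one raster scan suffices.
    img = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            earlier = []
            if i > 0:
                earlier.append(img[i - 1, j])
            if j > 0:
                earlier.append(img[i, j - 1])
            if not earlier:
                p_one = 0.5
            else:
                # Agree with the local majority with probability theta.
                p_one = theta if np.mean(earlier) >= 0.5 else 1 - theta
            img[i, j] = int(rng.random() < p_one)
    return img

tex = synthesize(16, 16)
```

Contrast this with general MRF sampling, which typically needs iterative schemes such as Gibbs sampling that revisit every pixel many times.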


Advances in electronics and electron physics | 1992

Foundation and Applications of Lattice Transforms in Image Processing

Jennifer L. Davidson

This chapter presents the background and history of the mathematical structures pertinent to lattice transforms—namely, mathematical morphology, the minimax algebra, and the image algebra. It introduces the theoretical foundation for lattice transformations in image processing and presents detailed discussions on the three algebras and the relationships among them. The four major applications of the theory to specific problems are also discussed in the chapter. First is the mapping of minimax algebra properties to image algebra, describing how a series of minimax algebra results can be readily formulated in an image processing environment, thus providing new tools for solving a certain class of image processing problems. Second is a general skeletonizing technique, which can be viewed as a division algorithm. Third is an application to image complexity measures. Fourth is the dual transportation problem in the context of lattice transforms.
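
The basic lattice transform shared by these algebras replaces the multiply-and-sum of linear convolution with add-and-maximum. A minimal sketch (a gray-scale dilation by an odd square template; the function name is ours, not the chapter's notation):

```python
import numpy as np

def maxplus_dilate(image, template):
    # Lattice (max-plus) transform of an image by a template: at each pixel,
    # add the template to the local window and take the maximum, instead of
    # multiplying and summing as in linear convolution.  Assumes a square
    # template with odd side length and a float-valued image.
    rows, cols = image.shape
    k = template.shape[0] // 2
    padded = np.pad(image, k, constant_values=-np.inf)
    out = np.empty_like(image, dtype=float)
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + template.shape[0], j:j + template.shape[1]]
            out[i, j] = np.max(window + template)
    return out
```

With an all-zero template this reduces to a local maximum filter, the simplest morphological dilation.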


Advances in Applied Probability | 1994

A statistical approach to identifying closed object boundaries in images

Jeffrey D. Helterbrand; Noel A Cressie; Jennifer L. Davidson

In this research, we present a statistical theory, and an algorithm, to identify one-pixel-wide closed object boundaries in gray-scale images. Closed-boundary identification is an important problem because boundaries of objects are major features in images. In spite of this, most statistical approaches to image restoration and texture identification place inappropriate stationary model assumptions on the image domain. One way to characterize the structural components present in images is to identify one-pixel-wide closed boundaries that delineate objects.


Image Algebra and Morphological Image Processing III | 1992

Simulated annealing and morphology neural networks

Jennifer L. Davidson

Artificial neural networks have proven to be quite useful for a variety of different applications. A recent addition to the arena of neural networks, morphology neural networks use a morphology-like operation as their basic nodal calculation, instead of the usual linear operation. Several morphology neural nets have been developed, and lattice-type learning rules have been used to train these networks. In this paper, we present a different kind of learning rule for morphology neural nets that is based on the simulated annealing algorithm. Simulated annealing has been applied to many different areas involving optimization.
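
A rough sketch of a simulated-annealing learning rule for a single morphology (max-plus) node; the cooling schedule, move size, and toy fitting task are illustrative assumptions, not the paper's settings:

```python
import math
import random
import numpy as np

def morph_node(x, w):
    # Morphology node: add weights to inputs and take the maximum.
    return np.max(x + w)

def anneal_weights(samples, targets, n, steps=2000, t0=1.0, seed=0):
    # Simulated-annealing sketch: perturb one weight at a time and accept
    # worse candidates with a temperature-dependent probability, so the
    # search can escape local minima of the (non-differentiable) error.
    rng = random.Random(seed)
    w = np.zeros(n)

    def error(w):
        return sum((morph_node(x, w) - t) ** 2 for x, t in zip(samples, targets))

    e = error(w)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-6   # linear cooling schedule
        cand = w.copy()
        cand[rng.randrange(n)] += rng.gauss(0.0, 0.5)
        e_cand = error(cand)
        if e_cand < e or rng.random() < math.exp((e - e_cand) / temp):
            w, e = cand, e_cand
    return w, e
```

Gradient-based rules do not apply directly here because the maximum operation is not smooth, which is one motivation for annealing-style search.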


Information Hiding | 2010

Steganalysis using partially ordered Markov models

Jennifer L. Davidson; Jaikishan Jalan

The field of steganalysis has blossomed prolifically in the past few years, providing the community with a number of very good blind steganalyzers. Features for blind steganalysis are generated in many different ways, typically using statistical measures. This paper presents a new image modeling technique for steganalysis that uses as features the conditional probabilities described by a stochastic model called a partially ordered Markov model (POMM). The POMM allows concise modeling of pixel dependencies among quantized discrete cosine transform coefficients. We develop a steganalyzer based on support vector machines that distinguishes between cover and stego JPEG images using 98 POMM features. We show that the proposed steganalyzer outperforms two comparative Markov-based steganalyzers [25,6] and outperforms a third steganalyzer [23] on half of the tested classes, by testing our approach with many different image databases on five embedding algorithms, with a total of 20,000 images.
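
A much-simplified sketch of POMM-style conditional-probability features; this is not the paper's 98-feature set, and the clipping threshold and left-neighbor choice are assumptions for illustration:

```python
import numpy as np

def pomm_features(coeffs, T=2):
    # Toy POMM-flavored features: clip quantized DCT coefficients to
    # [-T, T], then estimate the empirical conditional probability of a
    # coefficient given its left neighbor.  The flattened conditional
    # table serves as the feature vector for a classifier such as an SVM.
    c = np.clip(coeffs, -T, T) + T           # shift values into 0..2T
    k = 2 * T + 1
    counts = np.zeros((k, k))
    left, cur = c[:, :-1].ravel(), c[:, 1:].ravel()
    for a, b in zip(left, cur):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    cond = np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
    return cond.ravel()                      # k*k = 25 features for T=2
```

Embedding perturbs the dependency structure among coefficients, so the conditional-probability table shifts between cover and stego images; that shift is what the classifier learns to detect.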


Image Understanding in the '90s: Building Systems that Work | 1991

Recursion and feedback in image algebra

Gerhard X. Ritter; Jennifer L. Davidson

Recursion and feedback are two important processes in image processing. Image algebra, a unified algebraic structure developed for use in image processing and image analysis, provides a common mathematical environment for expressing image processing transforms. It is only recently that image algebra has been extended to include recursive operations [1]. Recently image algebra was shown to incorporate neural nets [2], including a new type of neural net, the morphological neural net [3]. This paper presents the relationship of the recursive image algebra to the field of fractions of the ring of matrices, and gives the two dimensional moving average filter as an example. Also, the popular multilayer perceptron with back propagation and a morphology neural network with learning rule are presented in image algebra notation. These examples show that image algebra can express these important feedback concepts in a succinct way.
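
The two-dimensional moving average mentioned as an example admits a recursive formulation via a summed-area table; a sketch in plain Python rather than image algebra notation:

```python
import numpy as np

def moving_average_2d(img, k=3):
    # Recursive 2-D moving average: build a summed-area table with the
    # recursion S[i,j] = I[i,j] + S[i-1,j] + S[i,j-1] - S[i-1,j-1]
    # (computed here with two cumulative sums), then each k-by-k window
    # mean comes from just four table lookups.  Windows are clipped at
    # the image borders.
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    r = k // 2
    rows, cols = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(i - r, 0), min(i + r + 1, rows)
            j0, j1 = max(j - r, 0), min(j + r + 1, cols)
            total = S[i1, j1] - S[i0, j1] - S[i1, j0] + S[i0, j0]
            out[i, j] = total / ((i1 - i0) * (j1 - j0))
    return out
```

The recursion makes the cost per output pixel constant regardless of window size, which is the kind of feedback structure the recursive image algebra is designed to express.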

Collaboration


Dive into Jennifer L. Davidson's collaborations.

Top Co-Authors

Noel A Cressie, University of Wollongong
Xia Hua, Iowa State University