Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kathryn Blackmond Laskey is active.

Publication


Featured research published by Kathryn Blackmond Laskey.


Social Networks | 1983

Stochastic blockmodels: First steps

Paul W. Holland; Kathryn Blackmond Laskey; Samuel Leinhardt

A stochastic model is proposed for social networks in which the actors in a network are partitioned into subgroups called blocks. The model provides a stochastic generalization of the blockmodel. Estimation techniques are developed for the special case of a single relation social network, with blocks specified a priori. An extension of the model allows for tendencies toward reciprocation of ties beyond those explained by the partition. The extended model provides a one degree-of-freedom test of the model. A numerical example from the social network literature is used to illustrate the methods.
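As a sketch of the model class, the block-dependent tie probabilities can be simulated directly; the block sizes and probability matrix below are illustrative choices, not values from the paper:

```python
import random

def sample_sbm(blocks, p, seed=0):
    """Sample a directed network from a simple stochastic blockmodel:
    the probability of a tie i -> j depends only on the blocks of i and j."""
    rng = random.Random(seed)
    n = len(blocks)
    ties = set()
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < p[blocks[i]][blocks[j]]:
                ties.add((i, j))
    return ties

# Two blocks of 10 actors: dense ties within blocks, sparse between.
blocks = [0] * 10 + [1] * 10
p = [[0.8, 0.05],
     [0.05, 0.8]]
net = sample_sbm(blocks, p)
within = sum(1 for i, j in net if blocks[i] == blocks[j])
between = len(net) - within
```

With these parameters the sampled network shows the classic blockmodel signature: far more ties within blocks than between them.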


Artificial Intelligence | 2008

MEBN: A language for first-order Bayesian knowledge bases

Kathryn Blackmond Laskey

Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
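A minimal sketch of the MFrag idea, using hypothetical predicates (Flu, Fever) and made-up probabilities: one shared parameterized fragment is instantiated per entity, and the instances are combined into a joint graphical model over all resulting random variables:

```python
from itertools import product

# Hypothetical MFrag-style template: for any person p, P(Fever(p) | Flu(p))
# is given by one shared local distribution (numbers are made up).
P_FLU = 0.1
P_FEVER_GIVEN_FLU = {True: 0.9, False: 0.05}

def joint(people):
    """Instantiate the template once per entity and combine the instances
    into a joint distribution over all (Flu(p), Fever(p)) assignments."""
    dist = {}
    for s in product([False, True], repeat=2 * len(people)):
        prob = 1.0
        for i in range(len(people)):
            flu, fever = s[2 * i], s[2 * i + 1]
            prob *= P_FLU if flu else 1 - P_FLU
            prob *= P_FEVER_GIVEN_FLU[flu] if fever else 1 - P_FEVER_GIVEN_FLU[flu]
        dist[s] = prob
    return dist

d = joint(["ann", "bob"])
p_fever_ann = sum(v for s, v in d.items() if s[1])
```

The same two-line template yields a four-variable model for two people, an arbitrarily larger model for more, which is the granularity benefit the abstract describes.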


Uncertainty in Artificial Intelligence | 1995

Sensitivity analysis for probability assessments in Bayesian networks

Kathryn Blackmond Laskey

When eliciting a probability model from experts, knowledge engineers may compare the results of the model with expert judgment on test scenarios, then adjust model parameters to bring the behavior of the model more in line with the experts' intuition. This paper presents a methodology for analytic computation of sensitivity values in Bayesian network models. Sensitivity values are partial derivatives of output probabilities with respect to parameters being varied in the sensitivity analysis. They measure the impact of small changes in a network parameter on a target probability value or distribution. Sensitivity values can be used to focus knowledge elicitation effort on those parameters having the most impact on outputs of concern. Analytic sensitivity values are computed for an example and compared to sensitivity analysis by direct variation of parameters.
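For a two-node network X -> Y (an illustrative example, not the paper's), an output probability is linear in any single parameter, so the analytic sensitivity value can be checked against direct variation of the parameter:

```python
def p_y(theta, py_x1=0.9, py_x0=0.2):
    """P(Y=1) in a two-node network X -> Y, where theta = P(X=1)."""
    return py_x1 * theta + py_x0 * (1 - theta)

# Analytic sensitivity value dP(Y=1)/dtheta: the output is linear in any
# single parameter, so the derivative is the coefficient of theta.
analytic = 0.9 - 0.2

# Sensitivity by direct variation of the parameter (central difference).
h = 1e-6
numeric = (p_y(0.3 + h) - p_y(0.3 - h)) / (2 * h)
```

The two values agree, mirroring the paper's comparison of analytic sensitivity values with direct parameter variation; a large value (here 0.7) flags a parameter worth careful elicitation.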


International Semantic Web Conference | 2005

PR-OWL: a Bayesian ontology language for the semantic web

Paulo C. G. Costa; Kathryn Blackmond Laskey; Kenneth J. Laskey

This paper addresses a major weakness of current technologies for the Semantic Web, namely the lack of a principled means to represent and reason about uncertainty. This not only hinders the realization of the original vision for the Semantic Web, but also creates a barrier to the development of new, powerful features for general knowledge applications that require proper treatment of uncertain phenomena. We present PR-OWL, a probabilistic extension to the OWL web ontology language that allows legacy ontologies to interoperate with newly developed probabilistic ontologies. PR-OWL moves beyond the current limitations of deterministic classical logic to a full first-order probabilistic logic. By providing a principled means of modeling uncertainty in ontologies, PR-OWL can be seen as a supporting tool for many applications that can benefit from probabilistic inference within an ontology language, thus representing an important step toward the W3C's vision for the Semantic Web. In order to fully present the concepts behind PR-OWL, we also cover Multi-Entity Bayesian Networks (MEBN), the Bayesian first-order logic supporting the language, and UnBBayes-MEBN, an open source GUI and reasoner that implements PR-OWL concepts. Finally, a use case of PR-OWL probabilistic ontologies is illustrated in order to provide a grasp of the potential of the framework.


Neural Computation | 2000

Neural Coding: Higher-Order Temporal Patterns in the Neurostatistics of Cell Assemblies

Laura Martignon; Gustavo Deco; Kathryn Blackmond Laskey; Mathew E. Diamond; Winrich A. Freiwald; Eilon Vaadia

Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron spike train data. We mention three possible measures for the presence of higher-order patterns of neural activation (coefficients of log-linear models, connected cumulants, and redundancies) and present arguments in favor of the coefficients of log-linear models. We present test statistics for detecting the presence of higher-order interactions in spike train data by parameterizing these interactions in terms of coefficients of log-linear models. We also present a Bayesian approach for inferring the existence or absence of interactions and estimating their strength. The two methods, the frequentist and the Bayesian one, are shown to be consistent in the sense that interactions that are detected by either method also tend to be detected by the other. A heuristic for the analysis of temporal patterns is also proposed. Finally, a Bayesian test is presented that establishes stochastic differences between recorded segments of data. The methods are applied to experimental data and synthetic data drawn from our statistical models. Our experimental data are drawn from multiunit recordings in the prefrontal cortex of behaving monkeys, the somatosensory cortex of anesthetized rats, and multiunit recordings in the visual cortex of behaving monkeys.
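The log-linear parameterization the authors argue for can be sketched in the two-neuron case, where the second-order coefficient reduces to the log odds ratio of the joint spike-pattern probabilities (the probability tables below are made up):

```python
from math import log

def interaction(p):
    """Second-order log-linear coefficient for two binary neurons.

    Under log p(x1, x2) = t0 + t1*x1 + t2*x2 + t12*x1*x2, the coefficient
    t12 is the log odds ratio of the joint spike-pattern probabilities."""
    return log(p[1, 1] * p[0, 0] / (p[1, 0] * p[0, 1]))

# Independent neurons (p(x1=1) = 0.2, p(x2=1) = 0.3): coefficient is zero.
independent = {(0, 0): 0.56, (0, 1): 0.24, (1, 0): 0.14, (1, 1): 0.06}
# Excess synchrony beyond the marginals: coefficient is positive.
synchronous = {(0, 0): 0.60, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.10}
```

A nonzero coefficient signals an interaction beyond what the marginal firing rates explain, which is the quantity the paper's test statistics target for higher-order patterns as well.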


Artificial Intelligence | 1989

Assumptions, beliefs and probabilities

Kathryn Blackmond Laskey; Paul E. Lehner

A formal equivalence is demonstrated between Shafer-Dempster belief theory and assumption-based truth maintenance with a probability calculus on the assumptions. This equivalence means that any Shafer-Dempster inference network can be represented as a set of ATMS justifications with probabilities attached to assumptions. A proposition's belief is equal to the probability of its label conditioned on label consistency. An algorithm is given for computing these beliefs. When the ATMS is used to manage beliefs, non-independencies between nodes are automatically and correctly accounted for. The approach described here unifies symbolic and numeric approaches to uncertainty management, thus facilitating dynamic construction of quantitative belief arguments, explanation of beliefs, and resolution of conflicts.
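Dempster's rule of combination illustrates the conditioning-on-consistency idea: joint mass falling on contradictory (empty) intersections is discarded and the remainder renormalized. A minimal sketch over a hypothetical two-hypothesis frame:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions (dicts keyed by frozenset
    focal elements). Mass falling on empty intersections is conflict; the
    rest is renormalized, i.e. conditioned on consistency."""
    out = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in out.items()}

# Two independent evidence sources over the frame {rain, sun}.
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"sun"}): 0.5, frozenset({"rain", "sun"}): 0.5}
m = combine(m1, m2)
```

Here 0.3 of the joint mass lands on the empty set and is discarded, so the combined masses are 3/7 on rain, 2/7 on sun, and 2/7 on the full frame, paralleling the paper's conditioning of a label's probability on label consistency.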


Machine Learning | 2003

Population Markov Chain Monte Carlo

Kathryn Blackmond Laskey; James W. Myers

Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings Sampler (MHS), an Evolutionary Algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than the MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs, which do not satisfy physics-inspired local reversibility conditions.
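The following toy contrast is not the paper's popMCMC algorithm, only an illustration of the motivation stated above: on a made-up score landscape with two optima, deterministic local search halts at the nearer, worse optimum, while a population of Metropolis samplers can accept downhill moves and spread across the state space:

```python
import math
import random

def score(x):
    """Toy model-score landscape on states 0..20 (made up): a local
    optimum at x = 5 (score 5) and a global one at x = 15 (score 7)."""
    return max(0, 5 - abs(x - 5)) + 1.4 * max(0, 5 - abs(x - 15))

def greedy(x):
    """Deterministic local search: moves to the best neighbor and halts
    at the nearest local optimum."""
    while True:
        best = max((max(0, x - 1), x, min(20, x + 1)), key=score)
        if best == x:
            return x
        x = best

def metropolis(x, steps, rng):
    """Metropolis sampler on exp(score): downhill moves are accepted with
    probability exp(score difference), so local optima are escapable."""
    for _ in range(steps):
        y = min(20, max(0, x + rng.choice((-1, 1))))
        if score(y) >= score(x) or rng.random() < math.exp(score(y) - score(x)):
            x = y
    return x

rng = random.Random(0)
stuck = greedy(3)                                # halts at x = 5
population = [metropolis(3, 500, rng) for _ in range(10)]
```

The samplers here run independently; popMCMC's contribution is to let population statistics shape each sampler's proposal distribution while preserving the chain's correctness.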


IEEE Transactions on Knowledge and Data Engineering | 2000

Network engineering for agile belief network models

Kathryn Blackmond Laskey; Suzanne M. Mahoney

The construction of a large, complex belief network model, like any major system development effort, requires a structured process to manage system design and development. This paper describes a belief network engineering process based on the spiral system lifecycle model. The problem of specifying numerical probability distributions for random variables in a belief network is best treated not in isolation, but within the broader context of the system development effort as a whole. Because structural assumptions determine which numerical probabilities or parameter values need to be specified, there is an interaction between specification of structure and parameters. Evaluation of successive prototypes serves to refine system requirements, ensure that modeling and elicitation effort are focused productively, and prioritize directions of enhancement and improvement for future prototypes. Explicit representation of semantic information associated with probability assessments facilitates tracing of the rationale for modeling decisions, as well as supporting maintenance and enhancement of the knowledge base.


Systems, Man, and Cybernetics | 1996

Model uncertainty: theory and practical implications

Kathryn Blackmond Laskey

A model is a representation of a system that can be used to answer questions about the system. In many situations in which models are used, there exists no set of universally accepted modeling assumptions. The term "model uncertainty" commonly refers to uncertainty about a model's structure, as distinguished from uncertainty about parameters. This paper presents alternative formal approaches to treating model uncertainty, discusses methods for using data to reduce model uncertainty, presents approaches for diagnosing inadequate models, and discusses appropriate use of models that are subject to model uncertainty.
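A minimal sketch of using data to reduce model uncertainty, on a hypothetical coin example rather than anything from the paper: two candidate model structures are reweighted by their marginal likelihoods under the observed data:

```python
from math import comb

def ml_fixed(k, n, theta):
    """Marginal likelihood of k heads in n flips under a structure that
    fixes the coin bias at theta."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def ml_uniform(k, n):
    """Marginal likelihood under a structure with a free bias and a
    uniform Beta(1,1) prior: integrates to 1 / (n + 1) for every k."""
    return 1.0 / (n + 1)

# Hypothetical data: 8 heads in 10 flips; equal prior model probabilities.
k, n = 8, 10
l_fair, l_free = ml_fixed(k, n, 0.5), ml_uniform(k, n)
p_fair = l_fair / (l_fair + l_free)
p_free = l_free / (l_fair + l_free)
```

The skewed data shift posterior probability toward the free-parameter structure, a small instance of structural uncertainty shrinking as evidence accrues.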


European Conference on Machine Learning | 2010

Nonparametric Bayesian clustering ensembles

Pu Wang; Carlotta Domeniconi; Kathryn Blackmond Laskey

Forming consensus clusters from multiple input clusterings can improve accuracy and robustness. Current clustering ensemble methods require specifying the number of consensus clusters. A poor choice can lead to underfitting or overfitting. This paper proposes a nonparametric Bayesian clustering ensemble (NBCE) method, which can discover the number of clusters in the consensus clustering. Three inference methods are considered: collapsed Gibbs sampling, variational Bayesian inference, and collapsed variational Bayesian inference. Comparison of NBCE with several other algorithms demonstrates its versatility and superior stability.
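The nonparametric prior that lets such a method discover its own number of clusters can be sketched with a Chinese restaurant process draw, under which the cluster count is random rather than fixed in advance (the parameters below are illustrative):

```python
import random

def crp(n, alpha, seed=1):
    """Chinese restaurant process draw: cluster labels for n items where
    the number of clusters is random, not fixed in advance."""
    rng = random.Random(seed)
    labels, counts = [], []
    for i in range(n):
        # New cluster with probability alpha / (i + alpha); otherwise an
        # existing cluster with probability proportional to its size.
        r = rng.random() * (i + alpha)
        if r < alpha:
            labels.append(len(counts))
            counts.append(1)
        else:
            r -= alpha
            k = 0
            while r >= counts[k]:
                r -= counts[k]
                k += 1
            labels.append(k)
            counts[k] += 1
    return labels

labels = crp(100, alpha=2.0)
num_clusters = len(set(labels))
```

Larger values of alpha tend to produce more clusters; inference (e.g. by collapsed Gibbs sampling, as in the paper) then lets the data determine the consensus cluster count.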

Collaboration


Dive into Kathryn Blackmond Laskey's collaborations.

Top Co-Authors

Erik Blasch

Air Force Research Laboratory
