
Publication


Featured research published by Diane Oyen.


International Conference on Data Mining | 2013

Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning

Diane Oyen; Terran Lane

Bayesian network structure learning algorithms with limited data are being used in domains such as systems biology and neuroscience to gain insight into the underlying processes that produce observed data. Learning reliable networks from limited data is difficult; therefore, transfer learning can improve the robustness of learned networks by leveraging data from related tasks. Existing transfer learning algorithms for Bayesian network structure learning give a single maximum a posteriori estimate of network models. Yet, many other models may be equally likely, and so a more informative result is provided by Bayesian structure discovery. Bayesian structure discovery algorithms estimate posterior probabilities of structural features, such as edges. We present transfer learning for Bayesian structure discovery which allows us to explore the shared and unique structural features among related tasks. Efficient computation requires that our transfer learning objective factors into local calculations, which we prove is given by a broad class of transfer biases. Theoretically, we show the efficiency of our approach. Empirically, we show that compared to single-task learning, transfer learning is better able to positively identify true edges. We apply the method to whole-brain neuroimaging data.
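The key computational point of this abstract is that the transfer bias factors into local, per-family terms, so the overall structure score stays decomposable. The toy sketch below (not the paper's algorithm; `data_fit`, `related_edges`, and `strength` are hypothetical names) illustrates a decomposable DAG score with a per-edge transfer reward:

```python
# Hypothetical sketch: a decomposable structure score with a transfer bias.
# The score of a DAG is a sum of local family scores; adding a per-edge
# transfer term keeps the objective factored into local calculations.

def local_score(child, parents, data_fit):
    # data_fit: dict mapping (child, frozenset(parents)) -> a log-likelihood term
    return data_fit[(child, frozenset(parents))]

def transfer_bias(child, parents, related_edges, strength=1.0):
    # Reward edges that also appear in a related task's learned network.
    return strength * sum(1 for p in parents if (p, child) in related_edges)

def structure_score(families, data_fit, related_edges, strength=1.0):
    # families: dict child -> set of parents, defining the candidate DAG
    return sum(local_score(c, ps, data_fit)
               + transfer_bias(c, ps, related_edges, strength)
               for c, ps in families.items())
```

Because each term depends only on one child and its parent set, candidate parent sets can still be scored independently, which is what makes efficient Bayesian discovery possible.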


Knowledge and Information Systems | 2015

Transfer learning for Bayesian discovery of multiple Bayesian networks

Diane Oyen; Terran Lane

Bayesian network structure learning algorithms with limited data are being used in domains such as systems biology and neuroscience to gain insight into the underlying processes that produce observed data. Learning reliable networks from limited data is difficult; therefore, transfer learning can improve the robustness of learned networks by leveraging data from related tasks. Existing transfer learning algorithms for Bayesian network structure learning give a single maximum a posteriori estimate of network models. Yet, many other models may be equally likely, and so a more informative result is provided by Bayesian structure discovery. Bayesian structure discovery algorithms estimate posterior probabilities of structural features, such as edges. We present transfer learning for Bayesian structure discovery which allows us to explore the shared and unique structural features among related tasks. Efficient computation requires that our transfer learning objective factors into local calculations, which we prove is given by a broad class of transfer biases. Theoretically, we show the efficiency of our approach. Empirically, we show that compared to single-task learning, transfer learning is better able to positively identify true edges. We apply the method to whole-brain neuroimaging data.


International Symposium on Memory Management | 2015

Learning Watershed Cuts Energy Functions

Reid B. Porter; Diane Oyen; Beate G. Zimmer

In recent work, several popular segmentation methods have been unified as energy minimization on a graph. In other work, supervised learning methods have been generalized from predicting labels to predicting structured, graph-like objects. A recent contribution to this second area showed how the Rand Index could be directly minimized when using Connected Components as a segmentation method. We build on this work and present an efficient mini-batch learning method for Connected Component segmentation and also show how it can be generalized to the Watershed Cuts segmentation method. We present initial results applying these new contributions to image segmentation problems in materials microscopy and discuss challenges and future directions.
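The Connected Components segmentation method referenced above can be stated very compactly: merge pixels whose connecting edge weight falls below a threshold, which union-find handles efficiently. This is a generic sketch of that baseline (not the authors' learning method; the threshold here is fixed, whereas the paper learns the energy function):

```python
# Minimal sketch of Connected Components segmentation on a weighted graph:
# merge nodes whose connecting edge weight is below a threshold, using
# union-find with path compression.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def connected_component_segmentation(n_nodes, edges, threshold):
    # edges: list of (u, v, weight); nodes are 0..n_nodes-1
    parent = list(range(n_nodes))
    for u, v, w in edges:
        if w < threshold:
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv  # union the two components
    # label each node by its component root
    return [find(parent, i) for i in range(n_nodes)]
```

Learning then amounts to choosing edge weights (or the threshold) so that the resulting components agree with ground-truth segments, e.g. under the Rand Index mentioned in the abstract.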


Statistical Analysis and Data Mining | 2017

Order priors for Bayesian network discovery with an application to malware phylogeny

Diane Oyen; Blake Anderson; Kari Sentz; Christine M. Anderson-Cook

Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.
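The distinction the abstract draws between order priors and edge priors can be made concrete: a partial-order prior penalizes an edge u -> v whenever the expert ordering says v precedes u, and because the penalty depends only on that single edge, it meets the local-modularity requirement. A small sketch under assumed names (`order_pairs` as a transitively closed set of precedence pairs, `penalty` as an assumed weight):

```python
# Hypothetical sketch of a partial-order prior that factors over edges.

def order_log_prior(u, v, order_pairs, penalty=2.0):
    # order_pairs: set of (a, b) meaning "a precedes b" (transitively closed).
    # An edge u -> v that contradicts the known ordering is penalized;
    # edges consistent with (or unconstrained by) the order are neutral.
    if (v, u) in order_pairs:
        return -penalty
    return 0.0

def admissible_edges(variables, order_pairs):
    # The hard-constraint version: keep only edges that do not violate the order.
    return {(u, v) for u in variables for v in variables
            if u != v and (v, u) not in order_pairs}
```

Since the prior is a sum of independent per-edge terms, it can be folded directly into the local family scores used by a Bayesian structure discovery algorithm.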


Applied Imagery Pattern Recognition Workshop | 2015

Discovering compositional trends in Mars rock targets from ChemCam spectroscopy and remote imaging

Diane Oyen; N. Lanza; Reid B. Porter

Onboard the Mars rover “Curiosity”, ChemCam contains two instruments that gather geological data in the form of remote micro images (RMI) for geologic context and laser-induced breakdown spectroscopy (LIBS) for chemical composition. By analyzing the geochemical compositional depth trends of rocks, surface layers are identified that provide clues to the past atmospheric and aqueous conditions of the planet. LIBS produces the necessary data of chemical depth profiles with successive laser shots. To quickly identify these surface layers, we fit a Gaussian graphical model (GGM) to LIBS depth profiles on rock targets. The learned GGM is a visual representation of conditional dependencies among the set of shots, making for faster identification of targets with interesting depth trends that warrant more in-depth analysis by experts. We show that our learned GGMs reveal information about the compositional trends present in rock targets that match observations made in more focused studies on these same targets. RMI images provide complementary details about the rock surface. Using RMI and LIBS features, we can cluster similar rock targets by the properties of the rock's surface texture and depth profile. We present results that show our machine learning methods can help analyze both the breadth and depth of data collected by ChemCam.
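A Gaussian graphical model encodes conditional dependencies in the zeros of the precision (inverse covariance) matrix. The rough sketch below (illustrative only, not the ChemCam pipeline; the data layout and threshold are assumptions) estimates GGM edges over successive shots by thresholding partial correlations from the inverse empirical covariance:

```python
import numpy as np

# Illustrative sketch: estimate a Gaussian graphical model over variables
# (e.g. successive LIBS shots) by thresholding partial correlations
# derived from the inverse empirical covariance matrix.

def ggm_edges(X, threshold=0.3):
    # X: (n_samples, n_variables) data matrix
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    partial_corr = -prec / np.outer(d, d)  # off-diagonal partial correlations
    np.fill_diagonal(partial_corr, 1.0)
    n = prec.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(partial_corr[i, j]) > threshold}
```

On chain-structured data (each variable driven by its predecessor), this recovers the chain edges while leaving out the indirect pair, which is exactly the conditional-dependence reading the abstract relies on.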


International Conference on Data Mining | 2011

Active Learning of Transfer Relationships for Multiple Related Bayesian Network Structures

Diane Oyen

Multitask network structure learning is an important problem in several scientific domains, such as computational neuroscience and bioinformatics. However, existing algorithms do not leverage valuable domain knowledge about the relatedness of tasks. We present the first multitask Bayesian network learning algorithm that incorporates task-relatedness. Empirical results demonstrate that our algorithm learns more robust networks than existing algorithms. Defining the tasks themselves is also a challenge for multitask learning. Typically, the data is a priori partitioned into tasks. However, domain experts often modify the splitting of data into tasks based on the learned networks and then re-run the multitask algorithm with a new data partitioning. We introduce a framework to actively learn the tasks as data partitions using feedback from a domain expert.


National Conference on Artificial Intelligence | 2012

Leveraging domain knowledge in multitask Bayesian network structure learning

Diane Oyen; Terran Lane


Journal of Machine Learning Research | 2016

Learning Planar Ising Models

Jason K. Johnson; Diane Oyen; Michael Chertkov; Praneeth Netrapalli


National Conference on Artificial Intelligence | 2016

Bayesian Networks with Prior Knowledge for Malware Phylogenetics

Diane Oyen; Blake Anderson; Christine M. Anderson-Cook


Archive | 2016

MAMA User Guide v2.0.1

Brian Keith Gaschen; Jeffrey J. Bloch; Reid B. Porter; Christy E. Ruggiero; Diane Oyen; Kevin M. Schaffer

Collaboration


Dive into Diane Oyen's collaborations.

Top Co-Authors


Reid B. Porter

Los Alamos National Laboratory


Christy E. Ruggiero

Los Alamos National Laboratory


Jason K. Johnson

Massachusetts Institute of Technology


Jeffrey J. Bloch

Los Alamos National Laboratory
