Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jarad Niemi is active.

Publication


Featured research published by Jarad Niemi.


Cytometry Part A | 2009

Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy

Quanli Wang; Jarad Niemi; Cheemeng Tan; Lingchong You; Mike West

An increasingly common component of studies in synthetic and systems biology is analysis of the dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images (segmentation and lineage reconstruction) to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast, and human cells demonstrate the portability and usability of these methods, whether using phase, bright-field, or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
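The frame-to-frame linking step lends itself to a compact illustration. The Python sketch below is a hypothetical, heavily simplified take on neighborhood-based lineage scoring; the function name, score weights, and distance threshold are invented for illustration, and this is not the authors' released implementation.

```python
# Hypothetical sketch of frame-to-frame lineage linking by
# neighborhood scoring; weights and threshold are illustrative.
import numpy as np

def link_frames(prev, curr, max_dist=20.0):
    """Match each cell in the current frame to its most plausible
    predecessor in the previous frame.

    prev, curr: arrays of shape (n, 3) with (x, y, area) per cell.
    Returns a list of (curr_index, prev_index or None) links.
    """
    links = []
    for i, (x, y, area) in enumerate(curr):
        dists = np.hypot(prev[:, 0] - x, prev[:, 1] - y)
        # Penalize large displacements and large relative area changes.
        area_change = np.abs(prev[:, 2] - area) / (prev[:, 2] + 1e-9)
        score = dists + 10.0 * area_change
        j = int(np.argmin(score))
        links.append((i, j if dists[j] <= max_dist else None))
    return links
```

A real tracker must additionally handle cell division (two current cells linking back to one predecessor), which is what turns simple linking into lineage reconstruction.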


Journal of Computational and Graphical Statistics | 2014

Estimation and Prediction in Spatial Models With Block Composite Likelihoods

Jo Eidsvik; Benjamin A. Shaby; Brian J. Reich; Matthew Wheeler; Jarad Niemi

This article develops a block composite likelihood for estimation and prediction in large spatial datasets. The composite likelihood (CL) is constructed from the joint densities of pairs of adjacent spatial blocks. This allows large datasets to be split into many smaller datasets, each of which can be evaluated separately, and combined through a simple summation. Estimates for unknown parameters are obtained by maximizing the block CL function. In addition, a new method for optimal spatial prediction under the block CL is presented. Asymptotic variances for both parameter estimates and predictions are computed using Godambe sandwich matrices. The approach considerably improves computational efficiency, and the composite structure obviates the need to load entire datasets into memory at once, completely avoiding memory limitations imposed by massive datasets. Moreover, computing time can be reduced even further by distributing the operations using parallel computing. A simulation study shows that CL estimates and predictions, as well as their corresponding asymptotic confidence intervals, are competitive with those based on the full likelihood. The procedure is demonstrated on one dataset from the mining industry and one dataset of satellite retrievals. The real-data examples show that the block composite results tend to outperform two competitors: the predictive process model and fixed-rank kriging. Supplementary materials for this article are available online on the journal web site.
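As a rough illustration of the construction, the hypothetical sketch below evaluates a block composite log-likelihood for a zero-mean Gaussian process with an exponential covariance plus a nugget; the blocking, the covariance choice, and all names are simplifying assumptions rather than the paper's exact formulation.

```python
# Illustrative block composite log-likelihood for a zero-mean
# Gaussian process; covariance model and blocking are assumptions.
import numpy as np
from scipy.stats import multivariate_normal

def exp_cov(A, B, sigma2, phi):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return sigma2 * np.exp(-d / phi)

def block_composite_loglik(y, coords, blocks, neighbor_pairs,
                           sigma2, phi, tau2):
    """Sum of log joint Gaussian densities over adjacent block pairs.

    blocks: list of index arrays partitioning the observations.
    neighbor_pairs: (i, j) ids of spatially adjacent blocks.
    """
    ll = 0.0
    for i, j in neighbor_pairs:
        idx = np.concatenate([blocks[i], blocks[j]])
        C = exp_cov(coords[idx], coords[idx], sigma2, phi)
        C += tau2 * np.eye(len(idx))  # nugget / measurement error
        ll += multivariate_normal.logpdf(y[idx], mean=np.zeros(len(idx)),
                                         cov=C)
    return ll
```

Each pair term touches only two blocks' worth of data, so the terms can be evaluated independently (and in parallel), and the full dataset never has to be factored, or even held in memory, at once.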


BMC Bioinformatics | 2012

Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

Bernie J. Daigle; Min K. Roh; Linda R. Petzold; Jarad Niemi

Background
A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence.

Results
We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

Conclusions
This work provides a novel, accelerated version of a likelihood-based parameter estimation method that can be readily applied to stochastic biochemical systems. In addition, our results suggest opportunities for added efficiency improvements that will further enhance our ability to mechanistically simulate biological processes.
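To make the maximum likelihood idea concrete, here is a toy Monte Carlo EM loop for a pure-birth process observed only at a final time point. It uses crude rejection sampling in the E-step where MCEM2 substitutes rare-event (cross-entropy) machinery, so it is a simplified stand-in, not the paper's method; all settings are illustrative assumptions.

```python
# Toy MCEM for a pure-birth process X -> X + 1 with rate k*X,
# observed only at time t_end; rejection sampling stands in for
# the rare-event simulation used by MCEM2.
import numpy as np
rng = np.random.default_rng(0)

def gillespie_pure_birth(k, x0, t_end):
    t, x, events, exposure = 0.0, x0, 0, 0.0
    while True:
        dt = rng.exponential(1.0 / (k * x))
        if t + dt > t_end:
            exposure += (t_end - t) * x
            return x, events, exposure
        exposure += dt * x
        t, x, events = t + dt, x + 1, events + 1

def mcem_step(k, x0, t_end, x_obs, n_sim=5000):
    # E-step: keep only trajectories consistent with the observation.
    ev = ex = 0.0
    for _ in range(n_sim):
        x, events, exposure = gillespie_pure_birth(k, x0, t_end)
        if x == x_obs:
            ev, ex = ev + events, ex + exposure
    # M-step: complete-data MLE is (# births) / (integral of X dt).
    return ev / ex if ex > 0 else k

k = 0.5
for _ in range(10):
    k = mcem_step(k, x0=10, t_end=1.0, x_obs=25)
```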


Journal of Computational and Graphical Statistics | 2010

Adaptive Mixture Modeling Metropolis Methods for Bayesian Analysis of Nonlinear State-Space Models

Jarad Niemi; Mike West

We describe a strategy for Markov chain Monte Carlo analysis of nonlinear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis–Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the nonlinearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.
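The core device, a normal-mixture approximation used as a global independence Metropolis-Hastings proposal, can be sketched in a few lines. The toy below targets a fixed bimodal 1-D density rather than a time series of states, and the pilot-draw construction of the mixture is an assumption for illustration.

```python
# Independence MH with a fitted normal-mixture proposal (toy 1-D
# stand-in for the paper's sequential mixture approximations).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized bimodal toy target.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

# Fit the mixture proposal to rough pilot draws.
pilot = np.concatenate([rng.normal(2, 1, 500), rng.normal(-2, 1, 500)])
gm = GaussianMixture(n_components=2, random_state=0).fit(pilot.reshape(-1, 1))

def log_q(z):
    return gm.score_samples(np.array([[z]]))[0]

x, chain = 0.0, []
for _ in range(5000):
    z = gm.sample(1)[0][0, 0]
    # Acceptance ratio for an independence proposal:
    # pi(z) q(x) / (pi(x) q(z)).
    log_alpha = (log_target(z) + log_q(x)) - (log_target(x) + log_q(z))
    if np.log(rng.uniform()) < log_alpha:
        x = z
    chain.append(x)
```

Because the proposal approximates the target globally, a good mixture fit yields high acceptance rates and fast mixing, which is what makes the construction useful for entire state paths.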


arXiv: Computation | 2014

Massively Parallel Approximate Gaussian Process Regression

Robert B. Gramacy; Jarad Niemi; Robin M. Weiss

We explore how the big-three computing paradigms---symmetric multiprocessor, graphical processing units (GPUs), and cluster computing---can together be brought to bear on large-data Gaussian processes (GP) regression problems via a careful implementation of a newly developed local approximation scheme. Our methodological contribution focuses primarily on GPU computation, as this requires the most care and also provides the largest performance boost. However, in our empirical work we study the relative merits of all three paradigms to determine how best to combine them. The paper concludes with two case studies. One is a real data fluid-dynamics computer experiment which benefits from the local nature of our approximation; the second is a synthetic example designed to find the largest data set for which (accurate) GP emulation can be performed on a commensurate predictive set in under an hour.
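A stripped-down serial version of local approximate GP prediction conveys the idea: each predictive location gets its own small GP fit to nearby data, and those independent solves are what can be distributed across cores, GPUs, and cluster nodes. The nearest-neighbor design rule and fixed hyperparameters below are illustrative assumptions (the scheme the paper implements selects local designs greedily).

```python
# Local GP prediction from the n_local nearest neighbors of x_star;
# hyperparameters are fixed, illustrative assumptions.
import numpy as np

def local_gp_predict(X, y, x_star, n_local=50,
                     sigma2=1.0, phi=0.5, nugget=1e-6):
    idx = np.argsort(np.linalg.norm(X - x_star, axis=1))[:n_local]
    Xl, yl = X[idx], y[idx]                      # local design
    def k(A, B):
        D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return sigma2 * np.exp(-D2 / phi)        # Gaussian kernel
    K = k(Xl, Xl) + nugget * np.eye(n_local)
    ks = k(Xl, x_star[None, :]).ravel()
    mean = ks @ np.linalg.solve(K, yl)
    var = sigma2 + nugget - ks @ np.linalg.solve(K, ks)
    return mean, var
```

Each call costs O(n_local^3) regardless of the full data size, and calls for different x_star are embarrassingly parallel.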


Information Fusion | 2012

Bayesian CAR models for syndromic surveillance on multiple data streams: Theory and practice

David Banks; Gauri Sankar Datta; Alan F. Karr; James Lynch; Jarad Niemi; Francisco Vera

Syndromic surveillance has, so far, considered only simple models for Bayesian inference. This paper details the methodology for a serious, scalable solution to the problem of combining symptom data from a network of US hospitals for early detection of disease outbreaks. The approach requires high-end Bayesian modeling and significant computation, but the strategy described in this paper appears to be feasible and offers attractive advantages over the methods that are currently used in this area. The method is illustrated by application to ten quarters' worth of data on opioid drug abuse surveillance from 636 reporting centers, and then compared to two other syndromic surveillance methods using simulation to create known signal in the drug abuse database.
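For readers unfamiliar with the conditional autoregressive (CAR) building block, the sketch below constructs the precision matrix of a proper CAR prior over a toy adjacency graph and draws one spatial effect vector; the counts model and the multiple-stream structure are omitted, and all parameter values are arbitrary assumptions.

```python
# Proper CAR prior: phi ~ N(0, Q^{-1}) with Q = tau * (D - rho * W).
import numpy as np

def car_precision(W, tau=1.0, rho=0.9):
    """W: symmetric 0/1 adjacency matrix over regions/centers."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Toy 4-region chain graph: 1-2-3-4.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W)
phi = np.random.default_rng(2).multivariate_normal(np.zeros(4),
                                                   np.linalg.inv(Q))
```

Neighboring regions get positively correlated effects, which is what lets a weak signal at one reporting center borrow strength from its neighbors.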


Proceedings of the National Academy of Sciences of the United States of America | 2017

Prairie strips improve biodiversity and the delivery of multiple ecosystem services from corn–soybean croplands

Lisa A. Schulte; Jarad Niemi; Matthew J. Helmers; Matt Liebman; J. Gordon Arbuckle; David E. James; Randall K. Kolka; Matthew E. O’Neal; Mark D. Tomer; John C. Tyndall; Heidi Asbjornsen; Pauline Drobney; Jeri Neal; Gary Van Ryswyk; Chris Witte

Significance
Prairie strips are a new conservation technology designed to alleviate biodiversity loss and environmental damage associated with row-crop agriculture. Results from a multiyear, catchment-scale experiment comparing corn and soybean fields with and without prairie vegetation indicated prairie strips raised pollinator and bird abundance, decreased water runoff, and increased soil and nutrient retention. These benefits accrued at levels disproportionately greater than the land area occupied by prairie strips. Social surveys revealed demand among both farm and nonfarm populations for the outcomes prairie strips produced. We estimated prairie strips could be used to improve biodiversity and ecosystem services across 3.9 million ha of cropland in Iowa and a large portion of the 69 million ha under similar management in the United States.

Loss of biodiversity and degradation of ecosystem services from agricultural lands remain important challenges in the United States despite decades of spending on natural resource management. To date, conservation investment has emphasized engineering practices or vegetative strategies centered on monocultural plantings of nonnative plants, largely excluding native species from cropland. In a catchment-scale experiment, we quantified the multiple effects of integrating strips of native prairie species amid corn and soybean crops, with prairie strips arranged to arrest run-off on slopes. Replacing 10% of cropland with prairie strips increased biodiversity and ecosystem services with minimal impacts on crop production. Compared with catchments containing only crops, integrating prairie strips into cropland led to greater catchment-level insect taxa richness (2.6-fold), pollinator abundance (3.5-fold), native bird species richness (2.1-fold), and abundance of bird species of greatest conservation need (2.1-fold). Use of prairie strips also reduced total water runoff from catchments by 37%, resulting in retention of 20 times more soil and 4.3 times more phosphorus. Corn and soybean yields for catchments with prairie strips decreased only by the amount of the area taken out of crop production. Social survey results indicated demand among both farming and nonfarming populations for the environmental outcomes produced by prairie strips. If federal and state policies were aligned to promote prairie strips, the practice would be applicable to 3.9 million ha of cropland in Iowa alone.


arXiv: Methodology | 2014

Bayesian Inference for a Covariance Matrix

Ignacio Alvarez; Jarad Niemi; Matt Simpson

Covariance matrix estimation arises in multivariate problems, including multivariate normal sampling models and regression models in which random effects are jointly modeled, e.g., random-intercept, random-slope models. A Bayesian analysis of these problems requires a prior on the covariance matrix. Here we assess, through a simulation study and a real data set, the impact this prior choice has on posterior inference for the covariance matrix. The inverse Wishart distribution is the natural choice of prior for a covariance matrix because of its conjugacy with the normal model and its simplicity, and it is usually available in Bayesian statistical software. However, the inverse Wishart distribution has some undesirable properties from a modeling point of view. It can be too restrictive because it assumes the same amount of prior information about every variance parameter and, more importantly, it induces a prior relationship between the variances and the correlations. Alternative distributions have been proposed. The scaled inverse Wishart distribution gives more flexibility in the variance priors while conserving the conjugacy property, but it does not eliminate the prior relationship between variances and correlations. Alternatively, separate priors can be placed on the individual correlations and standard deviations. This strategy eliminates any prior relationship among the covariance matrix parameters, but it is not conjugate and is therefore computationally slow.
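The variance-correlation dependence noted above is easy to check empirically. The snippet below, an illustrative check rather than the paper's simulation study, draws 2x2 matrices from an inverse Wishart prior and measures how the log variance co-moves with the absolute correlation across draws; a clearly nonzero value indicates the unwanted prior relationship.

```python
# Empirical look at the inverse Wishart's prior dependence between
# variances and correlations (illustrative settings).
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
S = invwishart.rvs(df=4, scale=np.eye(2), size=20000, random_state=rng)
log_var1 = np.log(S[:, 0, 0])
corr = S[:, 0, 1] / np.sqrt(S[:, 0, 0] * S[:, 1, 1])
print(np.corrcoef(log_var1, np.abs(corr))[0, 1])
```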


CBE- Life Sciences Education | 2014

Student Interpretations of Phylogenetic Trees in an Introductory Biology Course

Jonathan Dees; Jennifer L. Momsen; Jarad Niemi; Lisa Montplaisir

Phylogenetic trees are essential to understanding evolutionary relatedness, yet undergraduates struggle to interpret these visualizations. This research uses data from students enrolled in an introductory biology course for majors to characterize patterns in students’ tree thinking and how students’ reasoning changes over time and in response to instruction.


Chance | 2008

Contrarian Strategies for NCAA Tournament Pools: A Cure for March Madness?

Jarad Niemi; Bradley P. Carlin; Jonathan M. Alexander

Every March, the National Collegiate Athletic Association selects 65 Division I men’s basketball teams to compete in a single-elimination tournament to determine a single national champion. Due to the frequency of upsets that occur every year, this event has been dubbed “March Madness” by the media, who cover the much-hyped and much-wagered-upon event. The tournament tempts people to wager money in online or office pools in which the goal is to predict, prior to its onset, the outcome of every game. The scoring scheme is prespecified, typically assigning more points to correct picks in later tournament rounds.
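A minimal scorer for such a pool, assuming the common doubling scheme of 1, 2, 4, 8, 16, and 32 points per correct pick by round (actual pools vary in their weights), looks like this:

```python
# Score a bracket under a round-weighted scheme (weights assumed).
def bracket_score(picks, results, points=(1, 2, 4, 8, 16, 32)):
    """picks, results: lists of sets, one set of winning teams
    per round, from the first round through the championship."""
    return sum(p * len(pk & rs)
               for p, pk, rs in zip(points, picks, results))

# Example: 20 correct first-round picks plus the champion.
picks   = [set(range(20)) | {98, 99}, set(), set(), set(), set(), {0}]
results = [set(range(32)),            set(), set(), set(), set(), {0}]
print(bracket_score(picks, results))  # 20*1 + 1*32 = 52
```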

Collaboration


Dive into Jarad Niemi's collaborations.

Top Co-Authors

Brian J. Reich

North Carolina State University


Cheemeng Tan

University of California
