Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Russell Zaretzki is active.

Publication


Featured research published by Russell Zaretzki.


Computer Vision and Pattern Recognition | 2013

Beta Process Joint Dictionary Learning for Coupled Feature Spaces with Application to Single Image Super-Resolution

Li He; Hairong Qi; Russell Zaretzki

This paper addresses the problem of learning over-complete dictionaries for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. A Bayesian method using a beta process prior is applied to learn the over-complete dictionaries. Compared to previous coupled feature space dictionary learning algorithms, our algorithm not only provides dictionaries that are customized to each feature space, but also yields a more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in coupled feature spaces, thus bringing consistent and accurate mapping between coupled feature spaces. Another advantage of the proposed method is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. We compare the proposed approach to several state-of-the-art dictionary learning methods by applying it to single image super-resolution. The experimental results show that dictionaries learned by our method produce the best super-resolution results compared to other state-of-the-art methods.
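The decomposition the abstract describes can be illustrated with a small sketch (all sizes and prior parameters below are invented for illustration, not the paper's settings): a shared binary atom-indicator vector gives the codes in both feature spaces the same support, while the coefficient values differ.

```python
import numpy as np

# Toy sketch of the beta process decomposition: a sparse code splits into
# a shared binary indicator vector z (which atoms are used) and
# space-specific weight vectors (how much each atom contributes).
rng = np.random.default_rng(0)

K = 16                               # hypothetical number of dictionary atoms
pi = rng.beta(1.0, 4.0, size=K)      # beta process atom-usage probabilities
z = rng.random(K) < pi               # shared atom indicators (support)

w_low = rng.normal(size=K)           # weights in the low-res feature space
w_high = rng.normal(size=K)          # weights in the high-res feature space

s_low = z * w_low                    # sparse code in feature space 1
s_high = z * w_high                  # sparse code in feature space 2

# Both codes are supported on exactly the same atoms, with different values:
assert np.array_equal(s_low != 0, s_high != 0)
```

Because the supports match by construction, a mapping learned between the two spaces only has to relate coefficient values, which is the consistency property the abstract emphasizes.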


Biometrics | 2008

The Skill Plot: A Graphical Technique for Evaluating Continuous Diagnostic Tests

William M. Briggs; Russell Zaretzki

We introduce the Skill Plot, a method that is directly relevant to a decision maker who must use a diagnostic test. In contrast to ROC curves, the skill curve allows easy graphical inspection of the optimal cutoff or decision rule for a diagnostic test. The skill curve and test also determine whether diagnoses based on this cutoff improve upon a naive forecast (of always present or always absent). The skill measure makes it easy to directly compare the predictive utility of two different classifiers, in an analogy to the area under the curve statistic in ROC analysis. Finally, this article shows that the skill-based cutoff inferred from the plot is equivalent to the cutoff indicated by optimizing the posterior odds in accordance with Bayesian decision theory. A method for constructing a confidence interval for this optimal point is presented and briefly discussed.
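A hedged sketch of the skill-curve idea, using simulated data and a simple accuracy-based skill score as a stand-in for the paper's exact definition: each candidate cutoff is compared against the best naive forecast ("always present" or "always absent").

```python
import numpy as np

# Sketch only: scan cutoffs of a continuous test score and measure skill
# relative to the best constant forecast.  Skill > 0 means the cutoff
# genuinely improves on naive prediction.
rng = np.random.default_rng(1)
y = rng.random(500) < 0.3                      # simulated true status
x = rng.normal(loc=y.astype(float), scale=1.0) # simulated test scores

naive_acc = max(y.mean(), 1 - y.mean())        # best naive forecast accuracy
cutoffs = np.linspace(x.min(), x.max(), 101)
acc = np.array([((x > c) == y).mean() for c in cutoffs])
skill = (acc - naive_acc) / (1 - naive_acc)    # accuracy-based skill score

best = cutoffs[skill.argmax()]
print("best cutoff:", round(float(best), 2), "skill:", round(float(skill.max()), 2))
```

Plotting `skill` against `cutoffs` gives the graphical inspection the abstract describes: the optimal cutoff is simply the peak of the curve.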


Genetics | 2009

Measuring and Detecting Molecular Adaptation in Codon Usage Against Nonsense Errors During Protein Translation

Michael A. Gilchrist; Premal Shah; Russell Zaretzki

Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and of increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can be used as a starting point for developing other mechanistically derived measures of adaptation, such as for translational accuracy.
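The cost the abstract refers to can be illustrated with a toy calculation (the error rates and cost accounting below are invented, not the paper's parameterization): with a per-codon nonsense-error probability, the expected number of elongation steps spent per completed protein rises when error-prone synonyms are used.

```python
import numpy as np

# Toy nonsense-error cost: a ribosome reaches codon i with probability
# prod_{j<i}(1 - e_j) and finishes with probability prod(1 - e_j).
# Cost = expected elongation steps (including aborted chains) per
# completed protein.
def expected_cost(err):
    err = np.asarray(err, dtype=float)
    p_finish = np.prod(1 - err)          # chance a chain completes
    survive = np.cumprod(1 - err)        # P(ribosome passes codon i)
    steps = 1 + survive[:-1].sum()       # expected codons translated per attempt
    return steps / p_finish              # steps per COMPLETED protein

low_err = [0.001] * 100                  # "adapted" synonym choices
high_err = [0.01] * 100                  # error-prone synonyms
assert expected_cost(low_err) < expected_cost(high_err)
```

An NAI-like quantity would then compare an allele's cost to the mean and variance of costs across its synonymous alternatives, which is the "elevation on an adaptive landscape" interpretation.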


Signal, Image and Video Processing | 2015

Image color transfer to evoke different emotions based on color combinations

Li He; Hairong Qi; Russell Zaretzki

In this paper, a color transfer framework to evoke different emotions for images based on color combinations is proposed. The purpose of this color transfer is to change the “look and feel” of images, i.e., to evoke different emotions. Color has been identified as the most attractive factor in images. In addition, various studies in both art and science have concluded that, rather than a single color, color combinations are necessary to evoke specific emotions. Therefore, we propose a novel framework to transfer the color of images based on color combinations, using a predefined color emotion model. The contribution of this new framework is threefold. First, users do not need to provide reference images, as traditional color transfer algorithms require. In most situations, users may not have enough esthetic knowledge, or a practical way, to choose desired reference images. Second, because color combinations rather than a single color are used for emotions, a new color transfer algorithm that does not require an image library is proposed. Third, again because of the use of color combinations, artifacts that are normally seen in traditional single-color frameworks are avoided. We present encouraging results generated by this new framework and its potential in several possible applications, including color transfer of photos and paintings.
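For context, a common building block of color transfer is statistics matching, sketched below with invented target values; the paper's framework is more elaborate (combination-based, driven by an emotion model, and reference-free), so this is only the generic idea, not its method.

```python
import numpy as np

# Minimal mean/std color matching sketch (Reinhard-style): shift each
# channel of an image so its statistics match a target palette.  The
# target values here are made up, standing in for colors chosen from a
# color emotion model.
rng = np.random.default_rng(5)
img = rng.random((8, 8, 3))                  # stand-in image, channels last
target_mean = np.array([0.2, 0.5, 0.7])      # hypothetical emotion palette
target_std = np.array([0.1, 0.1, 0.1])

out = (img - img.mean((0, 1))) / img.std((0, 1)) * target_std + target_mean
assert np.allclose(out.mean((0, 1)), target_mean)
```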


BMC Bioinformatics | 2007

Modeling SAGE tag formation and its effects on data interpretation within a Bayesian framework

Michael A. Gilchrist; Hong Qin; Russell Zaretzki

Background: Serial Analysis of Gene Expression (SAGE) is a high-throughput method for inferring mRNA expression levels from experimentally generated sequence-based tags. Standard analyses of SAGE data, however, ignore the fact that the probability of generating an observable tag varies across genes and between experiments. As a consequence, these analyses result in biased estimators and posterior probability intervals for gene expression levels in the transcriptome. Results: Using the yeast Saccharomyces cerevisiae as an example, we introduce a new Bayesian method of data analysis which is based on a model of SAGE tag formation. Our approach incorporates the variation in the probability of tag formation into the interpretation of SAGE data and allows us to derive exact joint and approximate marginal posterior distributions for the mRNA frequency of genes detectable using SAGE. Our analysis of these distributions indicates that the frequency of a gene in the tag pool is influenced by its mRNA frequency, the cleavage efficiency of the anchoring enzyme (AE), and the number of informative and uninformative AE cleavage sites within its mRNA. Conclusion: With a mechanistic, model-based approach for SAGE data analysis, we find that inter-genic variation in SAGE tag formation is large. However, this variation can be estimated and, importantly, accounted for using the methods we develop here. As a result, SAGE-based estimates of mRNA frequencies can be adjusted to remove the bias introduced by the SAGE tag formation process.
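The debiasing idea can be sketched with a deliberately simplified tag-formation model (not the paper's full Bayesian treatment; efficiency and site counts below are invented): if each AE site is cleaved with some efficiency, transcripts with fewer sites are less likely to yield a tag, so their raw counts understate their abundance.

```python
import numpy as np

# Simplified tag-formation probability: a tag forms only if at least one
# of a transcript's k AE sites is cleaved, each with efficiency p.
def tag_prob(k_sites, p=0.9):
    return 1 - (1 - p) ** k_sites

counts = np.array([50, 50])           # observed tag counts for two genes
sites = np.array([1, 4])              # AE cleavage sites per transcript
adjusted = counts / tag_prob(sites)   # crude bias correction

# the single-site gene is actually more abundant than raw counts suggest
assert adjusted[0] > adjusted[1]
```

The real model also distinguishes informative from uninformative sites and propagates the uncertainty through a posterior distribution rather than a point correction.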


Traffic Injury Prevention | 2013

Two-Vehicle Injury Severity Models Based on Integration of Pavement Management and Traffic Engineering Factors

Ximiao Jiang; Baoshan Huang; Xuedong Yan; Russell Zaretzki; Stephen H Richards

Objective: The severity of traffic-related injuries has been studied by many researchers in recent decades. However, the evaluation of many factors is still in dispute and, until this point, few studies have taken pavement management factors into account as points of interest. The objective of this article is to evaluate the combined influences of pavement management factors and traditional traffic engineering factors on the injury severity of 2-vehicle crashes. Methods: This study examines 2-vehicle rear-end, sideswipe, and angle collisions that occurred on Tennessee state routes from 2004 to 2008. Both the traditional ordered probit (OP) model and a Bayesian ordered probit (BOP) model with a weakly informative prior were fitted for each collision type. The performances of these models were evaluated based on the parameter estimates and deviances. Results: The results indicated that pavement management factors played identical roles in all 3 collision types. Pavement serviceability produced significant positive effects on the severity of injuries. The pavement distress index (PDI), rutting depth (RD), and rutting depth difference between right and left wheels (RD_df) were not significant in any of these 3 collision types. The effects of traffic engineering factors varied across collision types, except that a few were consistently significant in all 3 collision types, such as annual average daily traffic (AADT), rural–urban location, speed limit, peak hour, and light condition. Conclusions: The findings of this study indicated that improved pavement quality does not necessarily lessen the severity of injuries when a 2-vehicle crash occurs. The effects of traffic engineering factors are not universal but vary by the type of crash. The study also found that the BOP model with a weakly informative prior can be used as an alternative but was not superior to the traditional OP model in terms of overall performance.
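The ordered probit structure used for injury severity can be sketched as follows (the coefficients and cutpoints are invented for illustration): severity category j is observed when a latent score x·β + ε, with ε ~ N(0, 1), falls between cutpoints c_{j-1} and c_j.

```python
import math

# Ordered probit category probabilities:
# P(y = j) = Phi(c_j - x·beta) - Phi(c_{j-1} - x·beta)
def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def ordered_probit_probs(xb, cuts):
    edges = [-math.inf] + list(cuts) + [math.inf]
    return [norm_cdf(hi - xb) - norm_cdf(lo - xb)
            for lo, hi in zip(edges[:-1], edges[1:])]

# hypothetical latent score and three cutpoints => four severity levels
probs = ordered_probit_probs(xb=0.4, cuts=[-0.5, 0.8, 1.6])
assert abs(sum(probs) - 1.0) < 1e-9      # categories partition the outcomes
```

The Bayesian variant places priors on β and the cutpoints and samples the posterior instead of maximizing this likelihood.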


Genome Biology and Evolution | 2015

Estimating gene expression and codon specific translational efficiencies, mutation biases, and selection coefficients from genomic data alone

Michael A. Gilchrist; Wei-Chen Chen; Premal Shah; Cedric Landerer; Russell Zaretzki

Extracting biologically meaningful information from the continuing flood of genomic data is a major challenge in the life sciences. Codon usage bias (CUB) is a general feature of most genomes and is thought to reflect the effects of both natural selection for efficient translation and mutation bias. Here we present a mechanistically interpretable, Bayesian model (ribosome overhead costs Stochastic Evolutionary Model of Protein Production Rate [ROC SEMPPR]) to extract meaningful information from patterns of CUB within a genome. ROC SEMPPR is grounded in population genetics and allows us to separate the contributions of mutational biases and natural selection against translational inefficiency on a gene-by-gene and codon-by-codon basis. Until now, the primary disadvantage of similar approaches was the need for genome-scale measurements of gene expression. Here, we demonstrate that it is possible to extract accurate estimates of codon-specific mutation biases and translational efficiencies while simultaneously generating accurate estimates of gene expression, rather than requiring such information. We demonstrate the utility of ROC SEMPPR using the Saccharomyces cerevisiae S288c genome. When we compare our model fits with previous approaches, we observe exceptionally high agreement between estimates of both codon-specific parameters and gene expression levels (ρ>0.99 in all cases). We also observe strong agreement between our parameter estimates and those derived from alternative data sets. For example, our estimates of mutation bias and those from mutation accumulation experiments are highly correlated (ρ=0.95). Our estimates of codon-specific translational inefficiencies are highly correlated with tRNA copy number-based estimates of ribosome pausing time (ρ=0.64), and our estimates of gene expression are highly correlated with mRNA and ribosome profiling footprint-based measurements (ρ=0.53−0.74), supporting the hypothesis that selection against translational inefficiency is an important force driving the evolution of CUB. Surprisingly, we find that for particular amino acids, codon usage in highly expressed genes can still be largely driven by mutation bias and that failing to take mutation bias into account can lead to the misidentification of an amino acid’s “optimal” codon. In conclusion, our method demonstrates that an enormous amount of biologically important information is encoded within genome-scale patterns of codon usage; accessing this information does not require gene expression measurements, only carefully formulated, biologically interpretable models.
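The core intuition of separating mutation bias from selection can be sketched as follows (parameter values invented, and this is only the flavor of the model, not its full population-genetic likelihood): synonymous codon frequencies follow a softmax over a mutation-bias term ΔM and a selection term Δη scaled by expression φ, so mutation dominates in lowly expressed genes and selection in highly expressed ones.

```python
import numpy as np

# Sketch of the ROC SEMPPR intuition: codon probability proportional to
# exp(-(dM + dEta * phi)) for each synonymous codon.
def codon_probs(dM, dEta, phi):
    w = np.exp(-(np.asarray(dM) + np.asarray(dEta) * phi))
    return w / w.sum()

dM = np.array([0.0, -0.5])     # mutation favors codon 2 (invented values)
dEta = np.array([0.0, 0.4])    # selection favors codon 1 (invented values)

low = codon_probs(dM, dEta, phi=0.1)    # low expression: mutation wins
high = codon_probs(dM, dEta, phi=5.0)   # high expression: selection wins
assert low[1] > low[0] and high[0] > high[1]
```

This is also why the abstract warns that ignoring mutation bias can misidentify an amino acid's "optimal" codon: at moderate expression the mutationally favored codon can still dominate.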


Traffic Injury Prevention | 2013

Estimating Safety Effects of Pavement Management Factors Utilizing Bayesian Random Effect Models

Ximiao Jiang; Baoshan Huang; Russell Zaretzki; Stephen H Richards; Xuedong Yan

Objective: Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on the crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. Methods: This research is based on the crash data and corresponding pavement quality data for Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall, 6 models were built accounting for no correlation, temporal correlation only, and both temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Results: Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. Conclusions: The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic-related crashes. Hence, maintaining a low level of pavement roughness is strongly suggested. In addition, the results suggested that the temporal correlation among observations was significant and that the ORENB model outperformed all other models.
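Why random effect count models fit crash data better than a plain Poisson can be shown with a small simulation (illustrative only, not the paper's Tennessee data): a segment-level gamma random effect on the Poisson rate produces overdispersed, negative binomial counts whose variance exceeds the mean.

```python
import numpy as np

# Simulate crash counts with an unobserved segment effect: each segment's
# Poisson rate is mu * theta with theta ~ Gamma(2, 1/2), so E[theta] = 1
# and the mixture is negative binomial (variance > mean).
rng = np.random.default_rng(2)
mu = 3.0
theta = rng.gamma(shape=2.0, scale=1 / 2.0, size=100_000)
crashes = rng.poisson(mu * theta)

assert abs(crashes.mean() - mu) < 0.1          # mean is preserved
assert crashes.var() > crashes.mean() * 1.5    # clear overdispersion
```

A plain Poisson forces variance equal to the mean, so ignoring the segment effect understates uncertainty; this is the motivation for the OREP/ORENB/TREP/TRENB family compared in the paper.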


Computational Statistics & Data Analysis | 2011

A note on mean-field variational approximations in Bayesian probit models

Artin Armagan; Russell Zaretzki

We correct some conclusions presented by Consonni and Marin (2007) on the performance of mean-field variational approximations to Bayesian inferences in the case of a simple probit model. We show that some of their presentations are misleading and thus their results do not fairly present the performance of such approximations in terms of point estimation under the specified model.
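A minimal mean-field variational sketch for the Bayesian probit setting of this note (data, prior variance, and iteration count below are invented): a factorized posterior q(β)q(z) alternates a ridge-like solve for β's mean with truncated-normal means for the latent variables z_i.

```python
import math
import numpy as np

# Mean-field VB for probit: y_i = 1{z_i > 0}, z_i ~ N(x_i·beta, 1),
# beta ~ N(0, v I).  E[z_i] under q is a truncated-normal mean.
def pdf(t): return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
def cdf(t): return 0.5 * (1 + math.erf(t / math.sqrt(2)))

rng = np.random.default_rng(3)
n, p, v = 400, 2, 10.0                       # v: prior variance on beta
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0])            # hypothetical true coefficients
y = (X @ beta_true + rng.normal(size=n)) > 0

A = np.linalg.inv(X.T @ X + np.eye(p) / v)   # fixed posterior covariance
m = np.zeros(p)                              # mean of q(beta)
for _ in range(50):
    eta = X @ m
    Ez = np.array([e + pdf(e) / cdf(e) if yi else e - pdf(e) / cdf(-e)
                   for e, yi in zip(eta, y)])
    m = A @ X.T @ Ez                         # update q(beta) mean

assert m[0] > 0 and m[1] < 0                 # coefficient signs recovered
```

The note's point concerns how such approximations should be judged for point estimation; this sketch only shows the update scheme being discussed, not the corrected comparison.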


2011 Future of Instrumentation International Workshop (FIIW) Proceedings | 2011

Non-parametric Bayesian dictionary learning for image super resolution

Li He; Hairong Qi; Russell Zaretzki

This paper addresses the problem of generating a super-resolution (SR) image from a single low-resolution input image. A non-parametric Bayesian method is implemented to train the over-complete dictionary. The first advantage of the non-parametric Bayesian approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. In addition, the sparsity level of the coefficients may be inferred automatically. Finally, the non-parametric Bayesian approach may learn the dictionary in situ. Two previous state-of-the-art methods, the efficient l1 method and K-SVD, are implemented for comparison. Although the efficient l1 method overall produces the best-quality super-resolution images, the 837-atom dictionary trained by the non-parametric Bayesian method produces super-resolution images very close in quality to those produced by the 1024-atom efficient l1 dictionary. It is also the fastest at training the over-complete dictionary.
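The reconstruction step these dictionary-based SR methods share can be sketched as follows (the dictionaries here are random stand-ins, and the sparse support is assumed known; in the paper both are inferred by the Bayesian model): code a low-resolution patch against a low-resolution dictionary, then synthesize the high-resolution patch from the same code with the high-resolution dictionary.

```python
import numpy as np

# Coupled-dictionary SR reconstruction sketch: the same sparse code maps
# a low-res patch (9-dim) to a high-res patch (36-dim).
rng = np.random.default_rng(4)
n_low, n_high, K = 9, 36, 20
D_low = rng.normal(size=(n_low, K))          # stand-in learned dictionaries
D_high = rng.normal(size=(n_high, K))

code_true = np.zeros(K)
code_true[[3, 11]] = [1.0, -0.5]             # sparse ground-truth code
patch_low = D_low @ code_true                # observed low-res patch

# least-squares coefficients on the (here, given) support, standing in
# for the Bayesian sparse coding used in the paper
support = [3, 11]
coef, *_ = np.linalg.lstsq(D_low[:, support], patch_low, rcond=None)
patch_high = D_high[:, support] @ coef       # synthesized high-res patch

assert np.allclose(coef, [1.0, -0.5], atol=1e-8)
```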

Collaboration


Dive into Russell Zaretzki's collaborations.

Top Co-Authors

Hairong Qi

University of Tennessee


Li He

University of Tennessee


William M. Briggs

Central Michigan University


Ximiao Jiang

University of Tennessee


Xuedong Yan

Beijing Jiaotong University
