
Publication


Featured research published by Richard A. Levine.


American Journal of Human Genetics | 2003

Fragile X premutation tremor/ataxia syndrome: molecular, clinical, and neuroimaging correlates.

Sébastien Jacquemont; Randi J. Hagerman; Maureen A. Leehey; Jim Grigsby; Lin Zhang; James A. Brunberg; Claudia M. Greco; Vincent Des Portes; Tristan Jardini; Richard A. Levine; Elizabeth Berry-Kravis; W. Ted Brown; Stephane Schaeffer; John T. Kissel; Flora Tassone; Paul J. Hagerman

We present a series of 26 patients, all >50 years of age, who are carriers of the fragile X premutation and are affected by a multisystem, progressive neurological disorder. The two main clinical features of this new syndrome are cerebellar ataxia and/or intention tremor, which were chosen as clinical inclusion criteria for this series. Other documented symptoms were short-term memory loss, executive function deficits, cognitive decline, parkinsonism, peripheral neuropathy, lower limb proximal muscle weakness, and autonomic dysfunction. Symmetrical regions of increased T2 signal intensity in the middle cerebellar peduncles and adjacent cerebellar white matter are thought to be highly sensitive for this neurologic condition, and their presence is the radiological inclusion criterion for this series. Molecular findings include elevated mRNA and low-normal or mildly decreased levels of fragile X mental retardation 1 protein. The clinical presentation of these patients, coupled with a specific lesion visible on magnetic resonance imaging and with neuropathological findings, affords a more complete delineation of this fragile X premutation-associated tremor/ataxia syndrome and distinguishes it from other movement disorders.


Ecological Applications | 2004

Alien Fishes in California Watersheds: Characteristics of Successful and Failed Invaders

Michael P. Marchetti; Peter B. Moyle; Richard A. Levine

The literature on alien animal invaders focuses largely on successful invasions over broad geographic scales and rarely examines failed invasions. As a result, it is difficult to make predictions about which species are likely to become successful invaders or which environments are likely to be most susceptible to invasion. To address these issues, we developed a data set on fish invasions in watersheds throughout California (USA) that includes failed introductions. Our data set includes information from three stages of the invasion process (establishment, spread, and integration). We define seven categorical predictor variables (trophic status, size of native range, parental care, maximum adult size, physiological tolerance, distance from nearest native source, and propagule pressure) and one continuous predictor variable (prior invasion success) for all introduced species. Using an information-theoretic approach, we evaluate 45 separate hypotheses derived from the invasion literature over these three stages.
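
As a rough illustration of the information-theoretic approach mentioned above, here is a minimal sketch of ranking candidate hypotheses by AIC and Akaike weights; the hypothesis names, log-likelihoods, and parameter counts are invented placeholders, not values from the study.

```python
# Hypothetical sketch of information-theoretic model comparison: candidate
# models are ranked by AIC and given Akaike weights. All numbers are invented.
import math

def aic(log_lik: float, n_params: int) -> float:
    """Akaike Information Criterion: smaller is better."""
    return 2 * n_params - 2 * log_lik

# (log-likelihood, number of parameters) for each candidate hypothesis
candidates = {
    "propagule_pressure":      (-210.4, 3),
    "physiological_tolerance": (-215.9, 2),
    "prior_invasion_success":  (-208.7, 4),
}

scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
best = min(scores.values())
# Akaike weights: relative support for each hypothesis given the candidate set
raw = {name: math.exp(-0.5 * (s - best)) for name, s in scores.items()}
total = sum(raw.values())
for name in sorted(scores, key=scores.get):
    print(f"{name}: AIC={scores[name]:.1f}, weight={raw[name] / total:.2f}")
```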


Journal of Computational and Graphical Statistics | 2001

Implementations of the Monte Carlo EM Algorithm

Richard A. Levine; George Casella

The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm where the expectation in the E-step is computed numerically through Monte Carlo simulations. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost in obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation. We obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As a part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
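
A minimal sketch of the MCEM structure described above, under simplifying assumptions: the toy model (right-censored normal data), the rejection sampler standing in for an MCMC routine, and the constants in the sample-size rule are all illustrative, not the paper's implementation.

```python
# A minimal, runnable MCEM sketch: right-censored N(mu, 1) observations, with
# the E-step approximated by simulating the censored values (rejection
# sampling here stands in for a full MCMC sampler). The adaptive rule -- grow
# the Monte Carlo sample size when Monte Carlo error overwhelms the EM step --
# mirrors the idea in the abstract, but the constants are illustrative only.
import random
import statistics

random.seed(1)
C = 2.5                                      # censoring point
data = [min(random.gauss(2.0, 1.0), C) for _ in range(200)]
obs = [y for y in data if y < C]
n_cens = len(data) - len(obs)

def draw_truncated(mu: float) -> float:
    """Draw from N(mu, 1) truncated to (C, inf) by rejection."""
    while True:
        x = random.gauss(mu, 1.0)
        if x > C:
            return x

mu, m = 0.0, 50                              # starting estimate, MC sample size
for _ in range(20):
    # Monte Carlo E-step: m complete-data estimates via imputation
    mu_hats = []
    for _ in range(m):
        imputed = [draw_truncated(mu) for _ in range(n_cens)]
        mu_hats.append((sum(obs) + sum(imputed)) / len(data))
    new_mu = statistics.mean(mu_hats)        # M-step: complete-data MLE
    mc_se = statistics.stdev(mu_hats) / m ** 0.5
    if mc_se > abs(new_mu - mu) / 3:         # MC error swamps the EM step:
        m = min(int(m * 1.5), 500)           # increase the Monte Carlo size
    mu = new_mu
print(f"estimated mu = {mu:.3f} (final Monte Carlo sample size {m})")
```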


Quality of Life Research | 2005

Comparing SF-36 scores across three groups of women with different health profiles

Kathleen J. Yost; Mary N. Haan; Richard A. Levine; Ellen B. Gold

Background: The widespread use of the Medical Outcomes Study (MOS) 36-item Short-Form Health Survey (SF-36) facilitates the comparison of health-related quality of life (HRQL) across independent studies. Objectives: To compare the scores of eight scales and two summary scales of the SF-36 across participants in the Women’s Healthy Eating and Living (WHEL) trial, the Women’s Health Initiative-Dietary Modification trial (WHI-DM), and the MOS, and to illustrate the use of effect sizes for interpreting the importance of group differences. Methods: WHEL and WHI-DM are both multi-center dietary interventions; only data from the UC Davis sites were used in our study. WHEL participants had a recent history of breast cancer, WHI-DM participants were healthy, postmenopausal women, and women in the MOS had a history of hypertension, diabetes, heart disease, or depression. General linear models were used to identify statistically significant differences in scale scores. Meaningful differences were determined by effect sizes computed using a common within-group standard deviation (SD) and SDs from normative data. Results: After adjusting for age and marital status, SF-36 scores for the WHI-DM and WHEL samples were similar and both had statistically significantly higher scores than the MOS sample. Relative to the WHEL or WHI-DM studies, MOS scores for scales related to the physical domain were clearly meaningfully lower whereas scale scores related to the mental health domain were potentially meaningfully lower. Conclusions: The HRQL of breast cancer survivors is comparable to that of healthy women and better than that of women with chronic health conditions, particularly with respect to physical health. This study illustrated the use of ranges of effect sizes for aiding the interpretation of SF-36 score differences across independent studies.
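
A minimal sketch of the effect-size calculation described above: group mean differences scaled by a common within-group (pooled) standard deviation. The scores below are invented placeholders rather than WHEL, WHI-DM, or MOS data.

```python
# Effect size as mean difference over a common within-group SD. All scores
# are invented placeholders for illustration.
import statistics

def pooled_sd(a: list[float], b: list[float]) -> float:
    """Common within-group SD pooled across two groups."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5

group1 = [72.0, 80.0, 65.0, 78.0, 70.0]   # e.g., physical functioning scores
group2 = [55.0, 60.0, 58.0, 49.0, 62.0]

d = (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd(group1, group2)
print(f"effect size d = {d:.2f}")         # ~0.2 small, ~0.5 medium, ~0.8 large
```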


Journal of Glaucoma | 2007

Visual field quality control in the Ocular Hypertension Treatment Study (OHTS).

John L. Keltner; Chris A. Johnson; Kimberly E. Cello; Shannan E. Bandermann; Juanjuan Fan; Richard A. Levine; Michael A. Kass; Mae O. Gordon

Objective: To report the impact of visual field quality control (QC) procedures on the rates of visual field unreliability, test parameter errors, and visual field defects attributed to testing artifacts in the Ocular Hypertension Treatment Study (OHTS). Methods: OHTS technicians were certified for perimetry and were required to submit 2 sets of visual fields that met study criteria before testing study participants. The OHTS Visual Field Reading Center (VFRC) evaluated 46,777 visual fields completed by 1618 OHTS participants between February 1994 and December 2003. Visual field QC errors, rates of unreliability, and defects attributed to testing artifacts were assessed. The OHTS QC system addressed 3 areas of clinic performance: (1) test parameter errors, (2) patient data errors, and (3) shipment errors. A visual field was classified as unreliable if any of the reliability indices exceeded the 33% limit. Clinical sites were immediately contacted by the VFRC via fax, e-mail, and/or phone and instructed on how to prevent further testing errors on fields with defects attributed to testing artifacts. Main Outcome Measures: QC errors (test parameter errors) and unreliability rates. Results: A total of 2.4% (1136/46,777) of the visual fields were unreliable and 0.23% (107/46,777) had incorrect test parameters. Visual field defects attributed to testing artifacts occurred in approximately 1% (483/46,777) of the visual fields. Conclusions: Prompt transmission of visual fields to the VFRC for ongoing and intensive QC monitoring and rapid feedback to technicians helps to reduce the frequency of unreliable visual fields and incorrect testing parameters. Visual field defects attributed to testing artifacts were infrequent in the OHTS.
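
A small sketch of the 33% reliability rule stated above, assuming the standard perimetry reliability indices (fixation losses, false positives, false negatives); the function and its inputs are hypothetical, not the OHTS software.

```python
# Hypothetical check of the rule: a field is unreliable if any reliability
# index exceeds the 33% limit. Index names are assumptions, not OHTS code.
def is_unreliable(fixation_losses: float,
                  false_positives: float,
                  false_negatives: float,
                  limit: float = 0.33) -> bool:
    return any(rate > limit
               for rate in (fixation_losses, false_positives, false_negatives))

print(is_unreliable(0.10, 0.40, 0.05))  # True: false positives exceed 33%
print(is_unreliable(0.20, 0.15, 0.30))  # False: all indices within the limit
```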


Journal of Statistical Computation and Simulation | 2004

An automated (Markov chain) Monte Carlo EM algorithm

Richard A. Levine; Juanjuan Fan

We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent Monte Carlo, particularly Markov chain Monte Carlo, E-step samples and chooses an appropriate Monte Carlo sample size to minimize this Monte Carlo error with respect to progressive EM step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size each iteration of the MCEM algorithm. We illustrate our automated routine and compare the performance with competing MCEM algorithms in an analysis of a data set fit by a generalized linear mixed model.
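
A hedged sketch of the renewal-period idea: per-tour statistics from a regenerative MCMC chain are approximately i.i.d., so a CLT-based Monte Carlo standard error is available. The chain and regeneration times below are simulated placeholders (identifying true regenerations is sampler-specific), and the ratio-estimator form shown is one standard regenerative variance estimate.

```python
# Regenerative standard error of a chain average from renewal tours. The
# "chain" and "regen" below are simulated stand-ins, not real MCMC output.
import random

random.seed(2)
chain = [random.gauss(0.0, 1.0) for _ in range(5000)]   # stand-in MCMC draws
regen = sorted(random.sample(range(1, 5000), 49))       # stand-in renewal times
bounds = [0] + regen + [len(chain)]
tours = [chain[a:b] for a, b in zip(bounds, bounds[1:])]

Y = [sum(t) for t in tours]          # per-tour sums
N = [len(t) for t in tours]          # per-tour lengths
mean = sum(Y) / sum(N)
# Ratio-estimator (regenerative) standard error of the chain average
se = (sum((y - mean * n) ** 2 for y, n in zip(Y, N)) ** 0.5) / sum(N)
print(f"chain mean = {mean:.4f}, Monte Carlo SE over {len(tours)} tours = {se:.4f}")
# In the MCEM rule, the Monte Carlo sample size grows whenever this SE is
# large relative to the progress made by the current EM step.
```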


European Journal of Operational Research | 2007

Bayesian demand updating in the lost sales newsvendor problem: A two-moment approximation

Emre Berk; Ülkü Gürler; Richard A. Levine

We consider Bayesian updating of demand in a lost sales newsvendor model with censored observations. In a lost sales environment, where the arrival process is not recorded, the exact demand is not observed if it exceeds the beginning stock level, resulting in censored observations. Adopting a Bayesian approach for updating the demand distribution, we develop expressions for the exact posteriors starting with conjugate priors, for negative binomial, gamma, Poisson and normal distributions. Having shown that non-informative priors result in degenerate predictive densities except for negative binomial demand, we propose an approximation within the conjugate family by matching the first two moments of the posterior distribution. The conjugacy property of the priors also ensures analytical tractability and ease of computation in successive updates. In our numerical study, we show that the posteriors and the predictive demand distributions obtained exactly and with the approximation are very close to each other, and that the approximation works very well from both probabilistic and operational perspectives in a sequential updating setting as well.
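
A hedged sketch of the two-moment approximation for one of the cases covered above, Poisson demand with a gamma prior: a censored observation (demand reached the stock level) breaks conjugacy, so the posterior's first two moments are computed numerically and a gamma distribution is matched to them. The prior values and the grid integration are illustrative choices, not the paper's derivations.

```python
# Two-moment gamma approximation to the posterior of a Poisson demand rate
# after a censored observation (demand >= stock level Q). Prior parameters
# and the integration grid are illustrative assumptions.
import math

a, b, Q = 4.0, 1.0, 6          # gamma(shape=a, rate=b) prior; stock level Q

def poisson_tail(lam: float, q: int) -> float:
    """P(Poisson(lam) >= q)."""
    return 1.0 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                     for k in range(q))

def gamma_pdf(lam: float, shape: float, rate: float) -> float:
    return rate ** shape * lam ** (shape - 1) * math.exp(-rate * lam) / math.gamma(shape)

# Unnormalized posterior on a grid: prior density times censoring likelihood
grid = [i * 0.01 for i in range(1, 3000)]
w = [gamma_pdf(l, a, b) * poisson_tail(l, Q) for l in grid]
z = sum(w)
m1 = sum(l * wi for l, wi in zip(grid, w)) / z          # posterior mean
m2 = sum(l * l * wi for l, wi in zip(grid, w)) / z      # posterior 2nd moment
var = m2 - m1 ** 2
# Match a gamma(a', b') to the first two posterior moments
a_new, b_new = m1 ** 2 / var, m1 / var
print(f"approximating gamma: shape={a_new:.2f}, rate={b_new:.2f}")
```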


Journal of the American Statistical Association | 2006

Trees for Correlated Survival Data by Goodness of Split, With Applications to Tooth Prognosis

Juanjuan Fan; Xiaogang Su; Richard A. Levine; Martha E. Nunn; Michael LeBlanc

In this article the regression tree method is extended to correlated survival data and applied to the problem of developing objective prognostic classification rules in periodontal research. The robust logrank statistic is used as the splitting statistic to measure the between-node difference in survival, while adjusting for correlation among failure times from the same patient. The partition-based survival function estimator is shown to converge to the true conditional survival function. Tooth loss data from 100 periodontal patients (2,509 teeth) were analyzed using the proposed method. The goal is to assign each tooth to one of the five prognosis categories (good, fair, poor, questionable, or hopeless). After the best-sized tree was identified, an amalgamation procedure was used to form five prognostic groups. The prognostic rules established here may be used by periodontists, general dentists, and insurance companies in devising appropriate treatment plans for periodontal patients.
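
A hedged sketch of the goodness-of-split idea: each candidate cutpoint is scored by a logrank statistic comparing the two child nodes, and the best cutpoint is kept. The ordinary (independence) logrank statistic is shown here; the paper's robust version additionally adjusts the variance for correlated failure times within a patient. The toy tooth data and covariate are invented.

```python
# Choosing a survival-tree split by logrank statistic. Data are invented.
def logrank_z(times, events, group):
    """Ordinary logrank Z statistic comparing group==True vs group==False."""
    obs_minus_exp, var = 0.0, 0.0
    for t in sorted({tt for tt, e in zip(times, events) if e}):
        at_risk = [i for i, tt in enumerate(times) if tt >= t]
        n = len(at_risk)
        n1 = sum(1 for i in at_risk if group[i])
        d = sum(1 for i in at_risk if times[i] == t and events[i])
        d1 = sum(1 for i in at_risk if times[i] == t and events[i] and group[i])
        obs_minus_exp += d1 - d * n1 / n          # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp / var ** 0.5

# Toy tooth-survival data: months to loss, loss indicator, and a covariate
times  = [12, 30, 45, 60, 18, 75, 24, 90]
events = [1, 1, 0, 1, 1, 0, 1, 0]
depth  = [7, 6, 3, 2, 8, 3, 6, 2]            # probing depth (mm), hypothetical

best = max(sorted(set(depth))[:-1],
           key=lambda c: abs(logrank_z(times, events, [d > c for d in depth])))
print(f"best split: probing depth > {best} mm")
```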


Archive | 2005

Bayesian Approaches to Modeling Stated Preference Data

David F. Layton; Richard A. Levine

Bayesian econometric approaches to modeling non-market valuation data have not often been applied, but they offer a number of potential advantages. Bayesian models incorporate prior information often available in the form of past studies or pre-tests in Stated Preference (SP) based valuation studies; model computations are easily and efficiently performed within an intuitively constructed Markov chain Monte Carlo framework; and asymptotic approximations, unreasonable for the relatively small sample sizes seen in some SP data sets, need not be invoked to draw (posterior) inferences. With these issues in mind, we illustrate computationally feasible approaches for fitting a series of surveys in a sequential manner, and for comparing a variety of models within the Bayesian paradigm. We apply these approaches to a series of SP surveys that examined policies to conserve old growth forests, northern spotted owls, and salmon in the U.S. Pacific Northwest.
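
A hedged illustration of the sequential fitting idea mentioned above, reduced to the simplest conjugate case: the posterior from one survey becomes the prior for the next, shown for a normal mean with known variance. The survey summaries are invented numbers, not estimates from the Pacific Northwest studies.

```python
# Sequential Bayesian updating: each survey's posterior is the next prior.
# Normal-normal conjugate case with known variance; numbers are invented.
def update(prior_mean, prior_var, data_mean, data_var, n):
    """Conjugate normal update of a mean given a sample of size n."""
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)
    return post_mean, post_var

mean, var = 0.0, 100.0                     # vague initial prior
surveys = [(3.1, 4.0, 50), (2.7, 4.0, 80), (3.4, 4.0, 65)]  # (mean, var, n)
for m, v, n in surveys:
    mean, var = update(mean, var, m, v, n)
    print(f"posterior mean = {mean:.2f}, variance = {var:.4f}")
```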


Periodontology 2000 | 2012

Development of prognostic indicators using classification and regression trees for survival

Martha E. Nunn; Juanjuan Fan; Xiaogang Su; Richard A. Levine; Hyo Jung Lee; Michael K. McGuire

The development of an accurate prognosis is an integral component of treatment planning in the practice of periodontics. Prior work has evaluated the validity of using various clinical measured parameters for assigning periodontal prognosis as well as for predicting tooth survival and change in clinical conditions over time. We critically review the application of multivariate Classification And Regression Trees (CART) for survival in developing evidence-based periodontal prognostic indicators. We focus attention on two distinct methods of multivariate CART for survival: the marginal goodness-of-fit approach, and the multivariate exponential approach. A number of common clinical measures have been found to be significantly associated with tooth loss from periodontal disease, including furcation involvement, probing depth, mobility, crown-to-root ratio, and oral hygiene. However, the inter-relationships among these measures, as well as the relevance of other clinical measures to tooth loss from periodontal disease (such as bruxism, family history of periodontal disease, and overall bone loss), remain less clear. While inferences drawn from any single current study are necessarily limited, the application of new approaches in epidemiologic analyses to periodontal prognosis, such as CART for survival, should yield important insights into our understanding, and treatment, of periodontal diseases.

Collaboration


Dive into Richard A. Levine's collaborations.

Top Co-Authors

Juanjuan Fan (San Diego State University)

Xiaogang Su (University of Central Florida)

Joshua Beemer (San Diego State University)

Andrew J. Bohonak (San Diego State University)

Jeanne Stronach (San Diego State University)