Daniel T. Larose
Central Connecticut State University
Publications
Featured research published by Daniel T. Larose.
Statistics in Medicine | 1997
Daniel T. Larose; Dipak K. Dey
Meta-analysis refers to quantitative methods for combining results from independent studies so as to draw overall conclusions. Frequently, results from dissimilar studies are inappropriately combined, resulting in suspect inferential synthesis. We present a straightforward method to identify and address this problem through the development of grouped random-effects models for meta-analysis. We examine 15 comparative studies that investigate the efficacy of a new anti-epileptic drug, progabide. The flexibility of this modelling scheme is exemplified by the result that the open studies support the efficacy of progabide while the closed studies support the reverse hypothesis. Bayesian approaches to meta-analysis are preferable because of the small number of studies typically available. We specify diffuse proper prior and hyperprior distributions to ensure posterior propriety, and we investigate the sensitivity of the posterior to the choice of prior. We use Gibbs sampling and the Metropolis algorithm to generate samples from the relevant posteriors, and we analyse posterior summaries and plots of the model parameters to suggest solutions to questions of interest.
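The hierarchical machinery this abstract describes can be sketched with a small Gibbs sampler for a normal–normal random-effects meta-analysis. This is a minimal illustration on simulated effect sizes, not the progabide data, and it omits the paper's grouping of open versus closed studies; the specific priors shown (a diffuse normal on the overall effect, an inverse-gamma on the between-study variance) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated effect sizes from k studies (NOT the progabide data).
k = 15
true_mu, true_tau = 0.5, 0.3
theta_true = rng.normal(true_mu, true_tau, k)
s = rng.uniform(0.1, 0.3, k)              # within-study standard errors
y = rng.normal(theta_true, s)             # observed study effects

# Diffuse proper priors: mu ~ N(0, 10^2), tau^2 ~ Inv-Gamma(0.01, 0.01)
m0, v0 = 0.0, 100.0
a0, b0 = 0.01, 0.01

n_iter, burn = 5000, 1000
mu, tau2 = 0.0, 1.0
mu_draws = []

for it in range(n_iter):
    # theta_i | rest: conjugate normal update per study
    prec = 1.0 / s**2 + 1.0 / tau2
    mean = (y / s**2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | rest: conjugate normal update for the overall effect
    prec_mu = k / tau2 + 1.0 / v0
    mean_mu = (theta.sum() / tau2 + m0 / v0) / prec_mu
    mu = rng.normal(mean_mu, np.sqrt(1.0 / prec_mu))
    # tau^2 | rest: conjugate inverse-gamma update
    a_n = a0 + k / 2.0
    b_n = b0 + 0.5 * np.sum((theta - mu) ** 2)
    tau2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    if it >= burn:
        mu_draws.append(mu)

print(np.mean(mu_draws))   # posterior mean of the overall effect
```

Grouped random effects would replace the single (mu, tau2) pair with one pair per study group, updated the same way from each group's studies.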
Computational Statistics & Data Analysis | 1998
Daniel T. Larose; Dipak K. Dey
Meta-analysis refers to the quantitative synthesis of evidence from a set of related studies. Inference based on naive use of meta-analysis may be erroneous, however, due to publication bias: the tendency of investigators or editors to base decisions regarding the submission or acceptance of manuscripts on the strength of the study's findings. Weighted distributions are ideally suited to modelling this phenomenon, since the weight function is proportional to the probability that a measurement (in this case, the result of a study) gets observed (published). Models induced by several competing weight functions are compared, including one model that does not account for publication bias, using the education data of Hedges and Olkin (1985). This allows us to investigate the sensitivity of the overall effect estimates across a range of models. The models are fit hierarchically from a Bayesian perspective using non-informative priors, in order to let the data drive the inference. Bayesian calculations are carried out using Markov chain Monte Carlo methods such as Gibbs sampling, the Metropolis algorithm, and Monte Carlo estimation. Several questions of interest are posed, and possible solutions suggested.
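Weighted distributions make the selection mechanism explicit: if w(y) is the probability that a result y gets published, published results follow the density w(y)f(y|μ)/E_μ[w]. A small simulation (hypothetical weight function and numbers, not the Hedges–Olkin education data, and maximum likelihood on a grid rather than the paper's hierarchical Bayesian fit) shows the naive estimate biased upward and the weighted-likelihood estimate recovering the truth:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

true_effect, se = 0.2, 0.15
n_studies = 20000
y = rng.normal(true_effect, se, n_studies)     # all conducted studies

# Hypothetical weight function: significant results (z > 1.96) are
# always published; non-significant results only 20% of the time.
w = np.where(y / se > 1.96, 1.0, 0.2)
published = rng.random(n_studies) < w

naive = float(y[published].mean())             # biased upward by selection

# Weighted likelihood: f_w(y|mu) = w(y) f(y|mu) / E_mu[w],
# with E_mu[w] = P(sig) * 1 + P(not sig) * 0.2 under N(mu, se^2).
def neg_loglik(mu):
    p_sig = 1.0 - norm.cdf(1.96 - mu / se)
    e_w = p_sig + 0.2 * (1.0 - p_sig)          # normalizing constant
    yp = y[published]
    ll = np.sum(norm.logpdf(yp, mu, se)
                + np.log(np.where(yp / se > 1.96, 1.0, 0.2)))
    return -(ll - published.sum() * np.log(e_w))

grid = np.linspace(-0.2, 0.6, 401)
mu_hat = float(grid[np.argmin([neg_loglik(m) for m in grid])])

print(naive, mu_hat)   # naive overshoots 0.2; mu_hat is close to it
```

Comparing this fit against one with w ≡ 1 (no selection) is the model-comparison question the paper addresses with Bayes factors.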
Test | 1996
Daniel T. Larose; Dipak K. Dey
Closed form expressions and Monte Carlo estimates for the Bayes factor are obtained for selection among weighted and unweighted models. Weighted distributions occur naturally in contexts where the probability that a particular observation enters the sample gets multiplied by some non-negative weight function. Suppose a realization y of Y under the generalized density f(y|θ) enters the investigator's record with probability proportional to w(y, τ). Clearly, the recorded y is not an observation on Y, but on the random variable Y^w, say, having pdf

f^w(y | θ, τ) = w(y, τ) f(y | θ) / E_{y|θ}[w(y, τ)],

which is called a weighted distribution. Closed form expressions for the Bayes factor are obtained for models arising from the exponential family for commonly used weight functions, and the behavior of these expressions is analyzed. Unknown weight functions are also considered. A convenient form for Monte Carlo estimation of the Bayes factor is presented, and a computational example is discussed, which uses this method to compare weighted mixture models of some aircraft data from Proschan (1963).
Archives of Physical Medicine and Rehabilitation | 2016
Timothy Belliveau; Alan M. Jette; Subramani Seetharama; Jeffrey Axt; David Rosenblum; Daniel T. Larose; Bethlyn Houlihan; Mary Slavin; Chantal D. Larose
OBJECTIVE To develop mathematical models for predicting level of independence with specific functional outcomes 1 year after discharge from inpatient rehabilitation for spinal cord injury. DESIGN Statistical analyses using artificial neural networks and logistic regression. SETTING Retrospective analysis of data from the national, multicenter Spinal Cord Injury Model Systems (SCIMS) Database. PARTICIPANTS Subjects (N=3142; mean age, 41.5y) with traumatic spinal cord injury who contributed data for the National SCIMS Database longitudinal outcomes studies. INTERVENTIONS Not applicable. MAIN OUTCOME MEASURES Self-reported ambulation ability and FIM-derived indices of level of assistance required for self-care activities (ie, bed-chair transfers, bladder and bowel management, eating, toileting). RESULTS Models for predicting ambulation status were highly accurate (>85% case classification accuracy; areas under the receiver operating characteristic curve between .86 and .90). Models for predicting nonambulation outcomes were moderately accurate (76%-86% case classification accuracy; areas under the receiver operating characteristic curve between .70 and .82). The performance of models generated by artificial neural networks closely paralleled the performance of models analyzed using logistic regression constrained by the same independent variables. CONCLUSIONS After further prospective validation, such predictive models may allow clinicians to use data available at the time of admission to inpatient spinal cord injury rehabilitation to accurately predict longer-term ambulation status, and whether individual patients are likely to perform various self-care activities with or without assistance from another person.
Journal of Vascular Access | 2014
Brenda A. Nurse; Rita Bonczek; Randall W. Barton; Daniel T. Larose
Purpose Patients at long-term acute care hospitals (LTACs) are medically complex with multiple comorbidities and high rates of antibiotic and device use. The objective of the study was to analyze the incidence and rate of central line-associated bloodstream infections (CLABSI) and the critical factors for patient care, management, placement and maintenance of the implanted central venous access device at this LTAC. Methods A 13-year retrospective chart review was performed comprising 191 medically complex patients with multiple comorbidities who had an implanted central line port. Information analyzed included (1) number of catheters; (2) number of patients; (3) number of catheter line days; (4) patient demographics; (5) port location; (6) admission diagnoses; (7) type, incidence and rate of catheter-related complications. Results The total number of catheter days was 183,183, with a mean of 959 catheter days per patient. The mean rate of CLABSI was 0.087 per 1,000 days; the incidence was less than 8% of patients with catheters. Conclusions The study found a markedly lower rate of CLABSI than reported for other LTACs as well as intensive care units, 14- to 100-fold lower than other LTACs. The authors propose that standardized catheter placement, combined with the implementation of rigorous, prospective catheter care plans and a team approach to management, was responsible for the extremely low complication rates. These results can be extrapolated to different settings across the healthcare continuum.
Archive | 1995
Dipak K. Dey; Fengchun Peng; Daniel T. Larose
Archive | 2005
Daniel T. Larose
Archive | 2006
Daniel T. Larose
Journal of Statistical Software | 2007
Zdravko Markov; Daniel T. Larose
Journal of Intelligent Systems | 2009
Elijah Gaioni; Dipak K. Dey; Daniel T. Larose
This paper suggests a modeling framework for constrained decision making in a constantly evolving environment. The goal is to enable a user or system to anticipate impending points of interest and prepare accordingly. The methodology presented leads to a decision process that is based on a single score, which has simple and desirable statistical properties. This approach allows one to easily compute point and confidence interval estimates for future scores. The crucial advantage gained by employing this approach is the simplification of the decision process from a complex set of decision rules (as might be generated by a decision tree) down to a single number. Storage constraints imposed by the need to model phenomena in real time are also addressed. Scan statistics are used in this context to deal with the random number of observations encountered in some situations. The impact of randomly occurring observations on the amount of memory necessary is contrasted with that of regularly occurring observations by means of an example. We illustrate this process using a real-world weather data set. The ultimate goal in this case is to identify future points in time when the weather is predicted to be unsafe for the operation of outdoor machinery.
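The single-score idea can be sketched as follows. Everything here is a hypothetical stand-in: the score series, the linear trend model, and the unsafe threshold are invented for illustration, and the paper's scan-statistic treatment of randomly timed observations is not reproduced. The point is that one fitted score yields both point forecasts and interval estimates, so the decision reduces to comparing an interval against a cutoff rather than evaluating a complex rule set:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical hourly "operating-safety" scores (higher = less safe);
# how raw weather variables are combined into one score is assumed.
t = np.arange(48)
score = 2.0 + 0.05 * t + rng.normal(0.0, 0.4, t.size)

# Fit a simple linear trend and estimate the residual spread.
slope, intercept = np.polyfit(t, score, 1)
resid_sd = np.std(score - (slope * t + intercept), ddof=2)

# Point and approximate 95% interval estimates for future scores.
t_future = np.arange(48, 60)
point = slope * t_future + intercept
lower = point - 1.96 * resid_sd
upper = point + 1.96 * resid_sd

THRESHOLD = 5.0                          # assumed "unsafe" cutoff
unsafe_soon = t_future[upper > THRESHOLD]  # hours worth preparing for

print(point.round(2))
print(unsafe_soon)
```

With irregularly timed observations, the fixed window of recent points used to fit the trend would itself be random in size, which is where the paper's scan statistics and memory analysis enter.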