
Publication


Featured research published by Srinivas Reddy Geedipally.


Accident Analysis & Prevention | 2008

Application of the Conway–Maxwell–Poisson generalized linear model for analyzing motor vehicle crashes

Dominique Lord; Seth D. Guikema; Srinivas Reddy Geedipally

This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of goodness-of-fit (GOF) statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data, which have sometimes been observed in crash databases (while the NB distribution cannot, or has difficulty converging), the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
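As a rough illustration of the dispersion behavior described above, the COM-Poisson pmf can be computed directly from its definition; the sketch below uses arbitrary illustrative parameter values, not the fitted models from the paper:

```python
import math

def com_poisson_pmf(y, lam, nu, max_terms=300):
    """P(Y = y) = lam**y / (y!)**nu / Z(lam, nu), computed in log space.
    nu = 1 recovers the Poisson; nu > 1 gives underdispersion; nu < 1 overdispersion."""
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(max_terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

def moments(lam, nu, ymax=100):
    """Numerical mean and variance of the truncated distribution."""
    mean = sum(y * com_poisson_pmf(y, lam, nu) for y in range(ymax))
    var = sum(y * y * com_poisson_pmf(y, lam, nu) for y in range(ymax)) - mean ** 2
    return mean, var

for nu in (0.5, 1.0, 2.0):
    mean, var = moments(3.0, nu)
    print(f"nu={nu}: mean={mean:.3f} var={var:.3f}")
```

At nu = 1 the mean and variance coincide (Poisson); at nu = 2 the variance falls below the mean, which is the underdispersed case the NB cannot represent.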


Accident Analysis & Prevention | 2010

Investigating the effect of modeling single-vehicle and multi-vehicle crashes separately on confidence intervals of Poisson-gamma models

Srinivas Reddy Geedipally; Dominique Lord

Crash prediction models still constitute one of the primary tools for estimating traffic safety. These statistical models play a vital role in various types of safety studies. With a few exceptions, they have often been employed to estimate the number of crashes per unit of time for an entire highway segment or intersection, without distinguishing the influence different sub-groups have on crash risk. The two most important sub-groups identified in the literature are single- and multi-vehicle crashes. Recently, some researchers have noted that developing two distinct models for these two categories of crashes provides better predictive performance than developing a single model combining both categories. Thus, there is a need to determine whether a significant difference exists in the computation of confidence intervals when a single model is applied rather than two distinct models for single- and multi-vehicle crashes. Building confidence intervals has many important applications in highway safety. This paper investigates the effect of modeling single- and multi-vehicle (head-on and rear-end only) crashes separately versus modeling them together on the prediction of confidence intervals of Poisson-gamma models. Confidence intervals were calculated for total (all severities) crash models and for fatal and severe injury crash models. The data used for the comparison analysis were collected on Texas multilane undivided highways for the years 1997-2001. This study shows that modeling single- and multi-vehicle crashes separately predicts larger confidence intervals than modeling them together as a single model. This difference is much larger for fatal and injury crash models than for models covering all severity levels. Furthermore, it is found that single- and multi-vehicle crashes are not independent. Thus, a joint (bivariate) model which accounts for the correlation between single- and multi-vehicle crashes is developed; it predicts wider confidence intervals than a univariate model for all severities. Finally, the simulation results show that separate models predict values that are closer to the true confidence intervals, and thus this research supports previous studies that recommended modeling single- and multi-vehicle crashes separately when analyzing highway segments.
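For context on the quantity being compared, a prediction interval for the crash count at a site under a Poisson-gamma (NB) model can be read off the fitted mean and dispersion by accumulating the pmf. A minimal sketch with illustrative values of the mean mu and inverse dispersion phi (not the Texas estimates from the paper):

```python
import math

def nb_pmf(y, mu, phi):
    """NB2 pmf with mean mu and inverse dispersion phi (Var = mu + mu**2 / phi)."""
    return math.exp(math.lgamma(y + phi) - math.lgamma(phi) - math.lgamma(y + 1)
                    + phi * math.log(phi / (phi + mu)) + y * math.log(mu / (phi + mu)))

def nb_prediction_interval(mu, phi, level=0.95):
    """Smallest [lo, hi] covering the central `level` mass of the count distribution."""
    lo_tail, hi_tail = (1 - level) / 2, 1 - (1 - level) / 2
    cdf, y, lo = 0.0, 0, None
    while cdf < hi_tail:
        cdf += nb_pmf(y, mu, phi)
        if lo is None and cdf >= lo_tail:
            lo = y
        y += 1
    return lo, y - 1

print(nb_prediction_interval(4.0, 1.5))
```

The interval widens as phi shrinks (more overdispersion), which is the mechanism behind the width differences the paper measures between separate and combined models.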


Accident Analysis & Prevention | 2012

The negative binomial-Lindley generalized linear model: Characteristics and application using crash data

Srinivas Reddy Geedipally; Dominique Lord; Soma Sekhar Dhavala

There has been a considerable amount of work devoted by transportation safety analysts to the development and application of new and innovative models for analyzing crash data. One important characteristic of crash data documented in the literature is datasets that contain a large number of zeros and a long or heavy tail (which creates highly dispersed data). For such datasets, the number of sites where no crash is observed is so large that traditional distributions and regression models, such as the Poisson and Poisson-gamma or negative binomial (NB) models, cannot be used efficiently. To overcome this problem, the NB-Lindley (NB-L) distribution has recently been introduced for analyzing count data that are characterized by excess zeros. The objective of this paper is to document the application of an NB generalized linear model with Lindley mixed effects (NB-L GLM) for analyzing traffic crash data. The study objective was accomplished using simulated and observed datasets. The simulated dataset was used to show the general performance of the model. The model was then applied to two datasets based on observed data, one of which was characterized by a large number of zeros. The NB-L GLM was compared with the NB and zero-inflated models. Overall, the research study shows that the NB-L GLM offers superior performance over the NB and zero-inflated models not only when datasets are characterized by a large number of zeros and a long tail, but also when the crash dataset is highly dispersed.
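One way to see why the NB-L accommodates excess zeros is to simulate it via its mixing representation: an NB whose mean is scaled by a Lindley-distributed multiplier (itself a two-component gamma mixture). All parameter values below are illustrative, not estimates from the paper:

```python
import math
import random

def rlindley(theta, rng):
    """Lindley(theta) draw: Gamma(1, 1/theta) with prob theta/(theta+1),
    else Gamma(2, 1/theta)."""
    shape = 1 if rng.random() < theta / (theta + 1) else 2
    return rng.gammavariate(shape, 1.0 / theta)

def rpoisson(lam, rng):
    """Knuth's multiplication method; adequate for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def rnb_lindley(mu, phi, theta, rng):
    """NB-Lindley draw: Poisson-gamma (= NB) with mean scaled by a Lindley draw."""
    eps = rlindley(theta, rng)
    g = rng.gammavariate(phi, mu * eps / phi)
    return rpoisson(g, rng)

rng = random.Random(42)
n, mu, phi, theta = 20000, 2.0, 1.0, 1.0
nbl = [rnb_lindley(mu, phi, theta, rng) for _ in range(n)]
# A plain NB sample with the same mean: E[Lindley(1)] = (theta+2)/(theta*(theta+1)) = 1.5
nb = [rpoisson(rng.gammavariate(phi, mu * 1.5 / phi), rng) for _ in range(n)]
print("zero share NB-L:", nbl.count(0) / n, " plain NB:", nb.count(0) / n)
```

At the same mean, the Lindley mixing inflates both the zero count and the right tail relative to the plain NB.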


Risk Analysis | 2010

Extension of the Application of Conway-Maxwell-Poisson Models: Analyzing Traffic Crash Data Exhibiting Underdispersion

Dominique Lord; Srinivas Reddy Geedipally; Seth D. Guikema

The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previous published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.


Transportation Research Record | 2008

Effects of Varying Dispersion Parameter of Poisson-Gamma Models on Estimation of Confidence Intervals of Crash Prediction Models

Srinivas Reddy Geedipally; Dominique Lord

In estimating safety performance, the most common probabilistic structures of the popular statistical models used by transportation safety analysts for modeling motor vehicle crashes are the traditional Poisson and Poisson–gamma (or negative binomial) distributions. Because crash data often exhibit overdispersion, Poisson–gamma models are usually the preferred model. The dispersion parameter of Poisson–gamma models had been assumed to be fixed, but recent research in highway safety has shown that the parameter can potentially be dependent on the covariates, especially for flow-only models. Given that the dispersion parameter is a key variable for computing confidence intervals, there is reason to believe that a varying dispersion parameter could affect the computation of confidence intervals compared with confidence intervals produced from Poisson–gamma models with a fixed dispersion parameter. This study evaluates whether the varying dispersion parameter affects the computation of the confidence intervals for the gamma mean (m) and predicted response (y) on sites that have not been used for estimating the predictive model. To accomplish that objective, predictive models with fixed and varying dispersion parameters were estimated by using data collected in California at 537 three-leg rural unsignalized intersections. The study shows that models developed with a varying dispersion parameter greatly influence the confidence intervals of the gamma mean and predicted response. More specifically, models with a varying dispersion parameter usually produce smaller confidence intervals, and hence more precise estimates, than models with a fixed dispersion parameter, both for the gamma mean and for the predicted response. Therefore, it is recommended to develop models with a varying dispersion parameter whenever possible, especially if they are used for screening purposes.
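A small sketch of the mechanism: letting the dispersion parameter depend on flow changes the implied NB variance, and hence the interval width, site by site. The coefficients below are invented for illustration, not the California estimates:

```python
import math

def nb_variance(mu, alpha):
    # NB2 variance with dispersion parameter alpha: Var(Y) = mu + alpha * mu**2
    return mu + alpha * mu ** 2

# Hypothetical flow-only mean function mu(F) and a flow-dependent dispersion
# alpha(F) = exp(a0 + a1 * ln F) that shrinks as flow grows.
a0, a1 = 0.5, -0.35
fixed_alpha = 1.0
for F in (500, 5000, 20000):
    mu = 0.001 * F ** 0.8
    varying_alpha = math.exp(a0) * F ** a1
    print(F, round(nb_variance(mu, fixed_alpha), 2), round(nb_variance(mu, varying_alpha), 2))
```

With a1 negative, high-flow sites get a smaller dispersion parameter and therefore a smaller variance than the fixed-alpha model would assign, consistent with the narrower intervals reported above.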


Accident Analysis & Prevention | 2012

Analysis of crash severities using nested logit model: accounting for the underreporting of crashes

Sunil Patil; Srinivas Reddy Geedipally; Dominique Lord

Recent studies in the area of highway safety have demonstrated the usefulness of logit models for modeling crash injury severities. Use of these models enables one to identify and quantify the effects of factors that contribute to certain levels of severity. Most often, these models are estimated assuming that each injury severity level is equally likely to be observed in the data. However, traffic crash data are generally characterized by underreporting, especially when crashes result in lower injury severity. Thus, the sample used for an analysis is often outcome-based, which can result in biased estimation of model parameters. This is more of a problem when a nested logit model specification is used instead of a multinomial logit model and when the true shares of the outcomes (injury severity levels) in the population are not known (which is almost always the case). This study demonstrates an application of a recently proposed weighted conditional maximum likelihood estimator to the problem of underreporting of crashes when using a nested logit model for crash severity analyses.
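The core idea of such weighted estimators is to reweight each observation by the ratio of its outcome's population share to its sample share, so overrepresented severe outcomes count for less. The shares, outcomes, and model probabilities below are entirely hypothetical:

```python
import math

# Hypothetical shares: assumed true population shares of severity outcomes
# versus the shares observed in an underreported crash database.
pop_share  = {"fatal": 0.01, "injury": 0.24, "no_injury": 0.75}
samp_share = {"fatal": 0.03, "injury": 0.42, "no_injury": 0.55}

# Weight for each outcome: population share / sample share.
weights = {k: pop_share[k] / samp_share[k] for k in pop_share}

def weighted_loglik(observations, probs):
    """Weighted conditional log-likelihood: each observation's log-probability
    is scaled by the weight of its observed outcome."""
    return sum(weights[y] * math.log(probs[y]) for y in observations)

obs = ["no_injury", "injury", "no_injury", "fatal"]
probs = {"fatal": 0.05, "injury": 0.30, "no_injury": 0.65}  # illustrative model probs
print({k: round(w, 3) for k, w in weights.items()})
print(round(weighted_loglik(obs, probs), 3))
```

Overreported fatal crashes get a weight below 1 and underreported no-injury crashes a weight above 1, pulling the estimates back toward the population composition.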


Accident Analysis & Prevention | 2011

The negative binomial–Lindley distribution as a tool for analyzing crash data characterized by a large amount of zeros

Dominique Lord; Srinivas Reddy Geedipally

The modeling of crash count data is a very important topic in highway safety. As documented in the literature, given the characteristics associated with crash data, transportation safety analysts have proposed a significant number of analysis tools, statistical methods, and models for analyzing such data. Among these data issues is crash data that have a large number of zeros and a long or heavy tail. It has been found that using this kind of dataset can lead to erroneous results or conclusions if the wrong statistical tools or methods are used. Thus, the purpose of this paper is to introduce a new distribution, known as the negative binomial-Lindley (NB-L), which has very recently been proposed for analyzing data characterized by a large number of zeros. The NB-L offers the advantage of being able to handle this kind of dataset while still maintaining characteristics similar to the traditional negative binomial (NB): the NB-L is a two-parameter distribution and its long-term mean is never equal to zero. To examine this distribution, simulated and observed data were used. The results show that the NB-L can provide a better statistical fit than the traditional NB for datasets that contain a large number of zeros.


Risk Analysis | 2012

Characterizing the Performance of the Conway-Maxwell Poisson Generalized Linear Model

Royce A. Francis; Srinivas Reddy Geedipally; Seth D. Guikema; Soma Sekhar Dhavala; Dominique Lord; Sarah LaRocca

Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
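In the same spirit as (but far simpler than) the simulation experiments described above, maximum likelihood estimation of the COM-Poisson parameters can be sketched with a crude grid search on simulated data; the true parameters, grid, and sample size below are arbitrary:

```python
import math
import random
from collections import Counter

def cmp_log_z(lam, nu, max_terms=200):
    """Log normalizing constant of the COM-Poisson, summed in log space."""
    terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(max_terms)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def cmp_loglik(counts, lam, nu):
    """Log-likelihood over a Counter of observed counts."""
    log_z = cmp_log_z(lam, nu)
    return sum(c * (y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)
               for y, c in counts.items())

def cmp_sample(lam, nu, n, rng, ymax=60):
    """Inverse-CDF sampling from the tabulated COM-Poisson pmf."""
    log_z = cmp_log_z(lam, nu)
    pmf = [math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)
           for y in range(ymax)]
    cdf = [sum(pmf[:y + 1]) for y in range(ymax)]
    return [next(y for y, c in enumerate(cdf) if c >= rng.random()) for _ in range(n)]

rng = random.Random(7)
counts = Counter(cmp_sample(3.0, 1.5, 1000, rng))          # truth: lam=3.0, nu=1.5
grid = [(l / 4, v / 4) for l in range(8, 21) for v in range(2, 11)]
lam_hat, nu_hat = max(grid, key=lambda p: cmp_loglik(counts, *p))
print("grid MLE:", lam_hat, nu_hat)
```

A real implementation would use a continuous optimizer and report estimation error across many replications, which is what the article characterizes.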


Transportation Research Record | 2011

Analysis of Motorcycle Crashes in Texas with Multinomial Logit Model

Srinivas Reddy Geedipally; Patricia Turner; Sunil Patil

Motorcyclists accounted for 15% of all traffic-related deaths in Texas in 2008. This proportion increased threefold during the past decade. Knowledge of the associated causes of motorcycle crashes and the factors that contributed to the severity of injuries to motorcyclists involved in crashes is useful in suggesting approaches for reducing their frequency and severity. In this study, crash data from police-reported motorcycle crashes in Texas were used to estimate multinomial logit models to identify differences in factors likely to affect the severity of crash injuries of motorcyclists. In addition, probabilistic models of the injury severity of motorcyclists in urban and rural crashes were estimated. Average direct and cross-pseudoelasticity results supported the development of probabilistic models for identifying factors that significantly influenced injury severity in urban and rural motorcycle crashes. Key findings showed that alcohol, gender, lighting, and presence of both horizontal and vertical curves played significant roles in injury outcomes of motorcyclist crashes in urban areas. Similar factors were found to have significantly affected the injury severity of motorcyclists in rural areas, but older riders (older than 55), single-vehicle crashes, angular crashes, and divided highways also affected injury severity outcomes in rural motorcycle crashes. From the study findings, recommendations to reduce the severity of motorcyclists’ crash injuries are presented.
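The direct and cross pseudo-elasticities mentioned above measure how switching a dummy variable (e.g. alcohol involvement) shifts each severity level's MNL probability. A minimal sketch; the utilities and coefficient are invented for illustration, not the Texas estimates:

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities (softmax of utilities)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative utilities for three severity levels (none, minor, incapacitating);
# beta_alcohol is a hypothetical coefficient on an alcohol dummy in the last level.
base_utils = [0.0, 0.4, -0.6]
beta_alcohol = 0.9
alc_utils = [base_utils[0], base_utils[1], base_utils[2] + beta_alcohol]

p_base, p_alc = mnl_probs(base_utils), mnl_probs(alc_utils)
direct = (p_alc[2] - p_base[2]) / p_base[2]   # direct pseudo-elasticity, level 2
cross = (p_alc[0] - p_base[0]) / p_base[0]    # cross pseudo-elasticity, level 0
print(round(direct, 3), round(cross, 3))
```

A positive direct pseudo-elasticity with negative cross values means the factor shifts probability mass toward the severer outcome at the expense of the others.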


Transportation Research Record | 2010

Examination of Methods to Estimate Crash Counts by Collision Type

Srinivas Reddy Geedipally; Sunil Patil; Dominique Lord

Multinomial logit (MNL) models have been applied extensively in transportation engineering, marketing, and recreational demand modeling. Thus far, this type of model has not been used to estimate the proportion of crashes by collision type. This study investigated the applicability of MNL models to predict the proportion of crashes by collision type and to estimate crash counts by collision type. MNL models were compared with two other methods described in recent publications to estimate crash counts by collision type: (a) fixed proportions of crash counts for all collision types and (b) collision type models. This study employed data collected between 2002 and 2006 on crashes that occurred on rural, two-lane, undivided highway segments in Minnesota. The study results showed that the MNL model could be used to predict the proportion of crashes by collision type, at least for the data set used. Furthermore, the method based on the MNL model was found useful to estimate crash counts by collision type, and it performed better than the method based on the use of fixed proportions. The use of collision type models, however, was still found to be the best way to estimate crash counts by specific collision type. In cases where collision type models are affected by the small sample size and a low sample-mean problem, the method based on the MNL model is recommended.
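The MNL-based method described above amounts to apportioning a segment's predicted total crash count across collision types by MNL-predicted shares. A minimal sketch with hypothetical utilities and an illustrative total (not the Minnesota models):

```python
import math

def mnl_shares(utilities):
    """MNL proportions of crashes by collision type (softmax of utilities)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical: a segment's predicted total crash count from a count model,
# split into collision types by MNL-predicted proportions.
total_crashes = 12.4   # e.g. from an NB count model (illustrative value)
type_utils = {"rear-end": 0.8, "angle": 0.1, "run-off-road": 0.5, "head-on": -1.2}
shares = mnl_shares(list(type_utils.values()))
by_type = {t: total_crashes * s for t, s in zip(type_utils, shares)}
for t, c in by_type.items():
    print(f"{t}: {c:.2f}")
```

The shares sum to one, so the type-level estimates always reconcile with the total model's prediction, which separate collision type models do not guarantee.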
