Raghu Nandan Sengupta
Indian Institute of Technology Kanpur
Publications
Featured research published by Raghu Nandan Sengupta.
European Journal of Operational Research | 2009
Sunil Agrawal; Raghu Nandan Sengupta; Kripa Shanker
This work analyzes a two-echelon (warehouse-retailer) serial supply chain to study the impact of information sharing (IS) and lead time on the bullwhip effect and on-hand inventory. The customer demand at the retailer is assumed to follow an autoregressive (AR(1)) process. Both echelons use a minimum mean squared error (MMSE) model for forecasting lead time demand (LTD) and follow an adaptive base-stock inventory policy to determine their respective order quantities. Expressions for the bullwhip effect and the warehouse's on-hand inventory are obtained under deterministic lead time, both without IS and with inter- as well as intra-echelon IS. The results are compared with previous research, and the various bullwhip effect expressions are analyzed under different scenarios to understand the impact of IS on the bullwhip effect phenomenon. It is shown that some part of the bullwhip effect always remains, even after sharing both inter- and intra-echelon information. Further, a numerical example shows that lead time reduction is more beneficial than information sharing in terms of reducing the bullwhip effect.
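A minimal simulation sketch of the setting described above (not the paper's analytical derivation): demand follows an AR(1) process, the retailer forecasts lead time demand with the MMSE predictor and follows an order-up-to (adaptive base-stock) policy, and the bullwhip effect is measured as Var(orders)/Var(demand). All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper)
mu, rho, sigma = 20.0, 0.7, 2.0   # AR(1) demand: d_t = mu + rho*(d_{t-1} - mu) + eps_t
L = 2                              # deterministic replenishment lead time
T = 100_000

# Simulate AR(1) customer demand at the retailer
d = np.empty(T)
d[0] = mu
for t in range(1, T):
    d[t] = mu + rho * (d[t - 1] - mu) + rng.normal(0.0, sigma)

# MMSE forecast of lead time demand over the next L periods, given d_t:
# E[sum_{i=1..L} d_{t+i} | d_t] = L*mu + rho*(1 - rho**L)/(1 - rho) * (d_t - mu)
coef = rho * (1 - rho**L) / (1 - rho)
ltd_forecast = L * mu + coef * (d - mu)

# Order-up-to (base-stock) policy: the order placed in period t replaces the demand
# just observed and adjusts for the change in the lead-time-demand forecast,
# q_t = d_t + (forecast_t - forecast_{t-1}).
orders = d[1:] + (ltd_forecast[1:] - ltd_forecast[:-1])

print(f"Bullwhip ratio Var(orders)/Var(demand) ≈ {orders.var() / d.var():.3f}")
```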
Sequential Analysis | 2005
Saibal Chattopadhyay; Sujay Datta; Raghu Nandan Sengupta
Prediction using a multiple-regression model is addressed when the penalties for overpredicting and underpredicting the true future value are not equal. Such asymmetric penalty functions are appropriate in many practical situations. If one imposes some preassigned precision on the prediction procedure, it is shown that in the presence of nuisance parameters in the model, the sample size needed to achieve the fixed precision is unknown. Some adaptive multistage sampling techniques are discussed that offer solutions to this problem. A prediction procedure based on a purely sequential sampling scheme is introduced, followed by a batch sequential scheme. Finally, a real-life example is provided to illustrate the use of these procedures, and computational evidence is supplied to demonstrate the efficiency of the latter procedure compared to the former one.
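The penalty itself is not reproduced in the abstract; one standard way to formalize unequal penalties for over- and under-prediction (shown purely for illustration, not necessarily the paper's choice) is a piecewise-linear asymmetric loss on the prediction error:

```latex
% Illustrative asymmetric penalty on the prediction error e = \hat{Y} - Y,
% with unequal weights a > 0 (overprediction) and b > 0 (underprediction), a \neq b:
L(e) \;=\;
\begin{cases}
  a\,e,  & e \ge 0 \quad (\text{overprediction}),\\
  -b\,e, & e < 0  \quad (\text{underprediction}).
\end{cases}
```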
Quantitative Finance | 2013
Raghu Nandan Sengupta; Siddharth Sahoo
This paper builds on the work of Roman et al. [Quant. Finance, 2007, 7, 443–458] by incorporating the reliability-based design optimization (RBDO) technique. We reformulate Roman et al.’s model by including both non-deterministic design variables and probabilistic parameter values of asset returns, and solve it with a relevant probabilistic constraint. Apart from a set of conclusions similar to those derived by Roman et al., we deduce a few other interesting observations, among them: (i) reliability forces diversification and hence reduces portfolio risk; (ii) an increase in the level of reliability aids better portfolio management, as it promotes diversification; and (iii) a decrease in the investor’s confidence in the reliability of the input data has an adverse effect on the optimal value of the portfolio risk/variance.
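The RBDO reformulation itself is not reproduced in the abstract; as a rough illustration of a reliability-type probabilistic constraint on a portfolio, the sketch below requires the portfolio return to meet a target with probability at least β, which under a Gaussian assumption becomes a second-order cone constraint. The data, target, and reliability level are all illustrative, and the model is not the authors' formulation.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

# Illustrative data for 4 assets (not from the paper)
mu = np.array([0.08, 0.10, 0.12, 0.07])               # expected returns
Sigma = np.array([[0.0010, 0.0002, 0.0001, 0.0000],
                  [0.0002, 0.0012, 0.0003, 0.0001],
                  [0.0001, 0.0003, 0.0015, 0.0002],
                  [0.0000, 0.0001, 0.0002, 0.0008]])  # return covariance
R_target = 0.05                                        # required portfolio return
beta = 0.95                                            # reliability level: P(return >= R_target) >= beta

w = cp.Variable(4)
Sigma_half = np.linalg.cholesky(Sigma)
risk = cp.sum_squares(Sigma_half.T @ w)                # portfolio variance w' Sigma w

# With returns r ~ N(mu, Sigma), P(r'w >= R_target) >= beta is equivalent to the cone constraint
#   mu'w - Phi^{-1}(beta) * || Sigma^{1/2} w ||_2 >= R_target.
z = norm.ppf(beta)
constraints = [cp.sum(w) == 1,
               w >= 0,
               mu @ w - z * cp.norm(Sigma_half.T @ w, 2) >= R_target]

prob = cp.Problem(cp.Minimize(risk), constraints)
prob.solve()
print("weights:", np.round(w.value, 3), " variance:", round(prob.value, 6))
```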
Statistics | 2006
Saibal Chattopadhyay; Raghu Nandan Sengupta
Consider a normal population with unknown mean μ and unknown variance σ². We estimate μ under an asymmetric LINEX loss function such that the associated risk is bounded from above by a known quantity w. This necessitates the use of a random number (N) of observations. Under a fairly broad set of assumptions on N, we derive the asymptotic second-order expansion of the associated risk function. Examples involving accelerated sequential and three-stage sampling techniques are included. The performance of these procedures is compared using a Monte Carlo study.
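For context, the following are standard LINEX facts (a sketch with σ² treated as known for the display, not quoted from the paper): the loss, the shifted sample mean that minimizes its risk, and the resulting risk; bounding the risk by w then fixes the required sample size, and because σ² is actually unknown, N must be chosen adaptively, which is where accelerated sequential and three-stage rules enter.

```latex
% LINEX loss with shape a \neq 0, a minimum-risk estimate of a N(\mu,\sigma^2) mean
% from n observations (\sigma^2 known for this display), and its risk:
L_a(\delta,\mu) = e^{a(\delta-\mu)} - a(\delta-\mu) - 1,
\qquad
\delta_n = \bar{X}_n - \frac{a\sigma^2}{2n},
\qquad
\mathbb{E}\,L_a(\delta_n,\mu) = \frac{a^2\sigma^2}{2n}.
% Requiring the risk to be at most w gives n \ge a^2\sigma^2/(2w); with \sigma^2 unknown,
% the sample size N must instead be determined by an adaptive (multistage) sampling rule.
```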
Journal of Applied Statistics | 2008
Raghu Nandan Sengupta
When estimating in a practical situation, asymmetric loss functions are often preferred over squared error loss functions, as the former are more appropriate in many estimation problems. We consider here the problem of fixed precision point estimation of a linear parametric function of the regression coefficients β in the multiple linear regression model, using asymmetric loss functions. Due to the presence of nuisance parameters, the sample size for the estimation problem is not known beforehand, and hence we take recourse to adaptive multistage sampling methodologies. We discuss some multistage sampling techniques and compare their performance using simulation runs. The codes for our proposed models are implemented in MATLAB 7.0.1 and run on a Pentium IV machine. Finally, we highlight the significance of such asymmetric loss functions with a few practical examples.
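The paper's procedures target a linear function of the regression coefficients; as a simpler, self-contained illustration of the multistage idea they build on, the sketch below implements the classical Stein-type two-stage rule for estimating a normal mean to a preassigned precision when σ² is unknown. The numbers and function names are illustrative, and this is not the paper's procedure.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)

def two_stage_total_size(pilot, d=0.5, alpha=0.05):
    """Stein-type two-stage rule: from a pilot sample, determine the total sample
    size N so that a 100(1 - alpha)% confidence interval for the mean has
    half-width at most d when sigma^2 is unknown."""
    m = len(pilot)
    s2 = pilot.var(ddof=1)
    t_val = t_dist.ppf(1 - alpha / 2, df=m - 1)
    return max(m, int(np.ceil(t_val**2 * s2 / d**2)))

# Illustrative run on simulated N(10, 2^2) data
mu_true, sigma_true, m = 10.0, 2.0, 15
pilot = rng.normal(mu_true, sigma_true, size=m)
N = two_stage_total_size(pilot, d=0.5, alpha=0.05)
second_stage = rng.normal(mu_true, sigma_true, size=N - m)   # take the remaining N - m observations
estimate = np.concatenate([pilot, second_stage]).mean()
print(f"pilot size m = {m}, total N = {N}, estimate = {estimate:.3f}")
```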
international conference on artificial immune systems | 2007
Rohit Singh; Raghu Nandan Sengupta
In this paper we articulate the idea of utilizing an Artificial Immune System (AIS) for predicting the bankruptcy of companies. Our proposed AIS model takes financial ratios as input parameters. The novelty of our algorithms lies in their hybrid nature: we use modified Negative Selection, Positive Selection, and Clonal Selection algorithms adapted from the human immune system. Finally, we compare our proposed models with a few existing statistical and mathematical sickness-prediction methods.
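A minimal sketch of the negative-selection component only (hypothetical names, thresholds, and data): candidate detectors are generated at random and retained only if they do not match any 'self' vector of financial ratios from healthy companies; a firm whose ratio vector is matched by a retained detector is flagged as potentially sick. The paper's hybrid of modified negative, positive, and clonal selection is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_detectors(self_samples, n_detectors=200, radius=0.3, max_tries=20_000):
    """Negative selection: keep randomly generated detectors that do NOT lie
    within `radius` of any 'self' (healthy-company) financial-ratio vector."""
    dim = self_samples.shape[1]
    detectors, tries = [], 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        candidate = rng.uniform(0.0, 1.0, size=dim)
        if np.linalg.norm(self_samples - candidate, axis=1).min() > radius:
            detectors.append(candidate)          # does not match "self" -> keep it
    return np.array(detectors)

def flag_as_sick(x, detectors, radius=0.3):
    """Flag a firm as potentially bankrupt ('non-self') if any detector matches it."""
    return len(detectors) > 0 and bool((np.linalg.norm(detectors - x, axis=1) <= radius).any())

# Illustrative data: two normalized financial ratios per firm, scaled to [0, 1]
healthy = rng.uniform(0.4, 0.9, size=(100, 2))   # the "self" region of healthy firms
detectors = train_detectors(healthy)
test_firm = np.array([0.05, 0.10])               # ratios far outside the healthy region
print("flagged as potentially sick:", flag_as_sick(test_firm, detectors))
```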
Communications in Statistics - Simulation and Computation | 2012
Raghu Nandan Sengupta; Sachin Srivastava
Consider the estimation problem for the multiple linear regression (MLR) setup, under the balanced loss function (BLF), where goodness of fit and precision of estimation are modeled using either squared error loss (SEL) or linear exponential (LINEX) loss functions. The authors derive the minimum risk estimates for two different variants of the BLF and prove, for both cases, that the ubiquitous SEL and LINEX estimates arise at the boundary conditions. Conclusions drawn from exhaustive simulation runs demonstrate the general nature of the proposed theorems.
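For reference, one common way to write such a balanced loss (a generic sketch in the spirit of Zellner's balanced loss, not quoted from the paper) is a convex combination of a goodness-of-fit component and a precision-of-estimation component, each measured by SEL or LINEX; the boundary weights recover the pure criteria, which is the sense in which the SEL and LINEX estimates appear at the boundary conditions.

```latex
% Generic balanced loss for the MLR model y = X\beta + \epsilon (sketch):
L_\omega(\hat\beta,\beta)
  \;=\; \omega \,\underbrace{\rho\bigl(y - X\hat\beta\bigr)}_{\text{goodness of fit}}
  \;+\; (1-\omega)\,\underbrace{\rho\bigl(\hat\beta - \beta\bigr)}_{\text{precision of estimation}},
\qquad 0 \le \omega \le 1,\quad \rho \in \{\text{SEL},\ \text{LINEX}\}.
```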
Computational Statistics & Data Analysis | 2011
Raghu Nandan Sengupta; Angana Sengupta
Sequential analysis as a sampling technique facilitates efficient statistical inference by requiring fewer observations than fixed-sample methods. The optimal stopping rule dictates the sample size and hence the statistical inference deduced thereafter. In this research we propose three variants of existing multistage sampling procedures, namely the (i) Jump and Crawl (JC), (ii) Batch Crawl and Jump (BCJ), and (iii) Batch Jump and Crawl (BJC) sequential sampling methods. We use the (i) normal, (ii) exponential, (iii) gamma, and (iv) extreme value distributions for point estimation problems under bounded risk conditions. We highlight the efficacy of choosing the right adaptive sampling plan for the bounded risk problems under these four distributions, considering two different loss functions, namely the (i) squared error loss (SEL) and (ii) linear exponential (LINEX) loss functions. We compare and analyze our proposed methods against existing sequential sampling techniques, and highlight the importance of this study using extensive simulation runs.
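The abstract does not state the stopping rules themselves; the sketch below only illustrates the generic bounded-risk sequential idea such variants build on, for a normal mean under SEL (risk σ²/n ≤ w), sampling in batches ('jump') while far from the estimated stopping boundary and one observation at a time ('crawl') near it. The thresholds and names are illustrative, not the paper's JC/BCJ/BJC rules.

```python
import numpy as np

rng = np.random.default_rng(7)

def jump_and_crawl(draw, w=0.05, m=10, batch=20):
    """Bounded-risk estimation of a normal mean under squared error loss:
    the risk sigma^2 / n is at most w once n >= sigma^2 / w.  With sigma^2
    unknown, keep sampling until n >= s_n^2 / w, taking a batch while the
    estimated boundary is far away ('jump') and single observations near it
    ('crawl').  Illustrative rule only, not the paper's JC/BCJ/BJC procedures."""
    x = list(draw(m))                              # pilot sample of size m
    while True:
        n, s2 = len(x), np.var(x, ddof=1)
        boundary = s2 / w
        if n >= boundary:
            return n, float(np.mean(x))
        step = batch if boundary - n > batch else 1
        x.extend(draw(step))

# Illustrative run: N(5, 2^2) data, so roughly sigma^2 / w = 80 observations are needed
N, est = jump_and_crawl(lambda k: rng.normal(5.0, 2.0, size=k))
print(f"stopped at N = {N}, estimated mean = {est:.3f}")
```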
Foundations of Computing and Decision Sciences | 2017
Raghu Nandan Sengupta; Rakesh Kumar
We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial scrip/asset loss return distributions are considered to be extreme valued. The objective function is a convex combination of the portfolio’s CVaR and the expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision-making process, (ii) incorporate different risk appetite scenarios to find optimal solutions for the financial portfolio allocation problem, and (iii) compare the risk and return profiles of investments made in deterministic as well as in uncertain and highly volatile financial markets.
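For reference, the CVaR piece of such an objective is usually handled through the sample-based Rockafellar–Uryasev representation, which is what makes the convex-combination objective tractable (a standard form shown for context, not the paper's robust extreme-value counterpart):

```latex
% Sample-based Rockafellar--Uryasev representation of CVaR at level \alpha for loss scenarios
% \ell_s(w), s = 1,\dots,S, and the convex-combination objective described in the abstract:
\mathrm{CVaR}_\alpha\bigl(\ell(w)\bigr)
  \;=\; \min_{\zeta}\Bigl\{\, \zeta + \tfrac{1}{(1-\alpha)S}\sum_{s=1}^{S}\bigl[\ell_s(w)-\zeta\bigr]_{+} \Bigr\},
\qquad
\min_{w,\,\zeta}\;\; \lambda\,\mathrm{CVaR}_\alpha\bigl(\ell(w)\bigr) + (1-\lambda)\,\mathbb{E}\bigl[\ell(w)\bigr],
\quad 0 \le \lambda \le 1 .
```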
Communications in Statistics - Simulation and Computation | 2012
Raghu Nandan Sengupta; Sachin Srivastava
We derive the minimum risk estimates of the scalar means for the Normal, Exponential, and Gamma distributions under a convex combination of the SEL and LINEX loss functions. The functional forms of the proposed estimates for the three examples are general in nature and, at the boundary conditions, reduce to the corresponding estimates under SEL and LINEX loss, respectively. We verify our proposed models using different iterative as well as meta-heuristic techniques, and validate the efficacy of our results through extensive simulation and application to live data sets.
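A sketch of the loss structure the abstract describes, in illustrative notation (λ is the convex-combination weight, a the LINEX shape parameter); the boundary values λ = 1 and λ = 0 recover pure SEL and pure LINEX, and between them the minimum risk estimate generally has no closed form, which is where the iterative and meta-heuristic searches enter.

```latex
% Convex combination of SEL and LINEX for estimating a scalar mean \theta (illustrative notation):
L_\lambda(\delta,\theta) \;=\; \lambda\,(\delta-\theta)^2
  \;+\; (1-\lambda)\,\bigl[e^{a(\delta-\theta)} - a(\delta-\theta) - 1\bigr],
\qquad 0 \le \lambda \le 1,\; a \neq 0 .
```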