Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tapas K. Das is active.

Publication


Featured research published by Tapas K. Das.


IIE Transactions | 2004

Wavelet-based multiscale statistical process monitoring: A literature review

Rajesh Ganesan; Tapas K. Das; Vivekanand Venkataraman

Data that represent complex and multivariate processes are well known to be multiscale due to the variety of changes that can occur in a process with different localizations in time and frequency. Examples of such changes include mean shifts, spikes, drifts, and variance shifts, all of which can occur in a process at different times and at different frequencies. Acoustic emission signals arising from machining, images representing MRI scans, and musical audio signals are some examples that contain these changes and are not suited for single-scale analysis. The recent literature contains several wavelet-decomposition-based multiscale process monitoring approaches, including many real-life process monitoring applications. These approaches are shown to be effective in handling different data types and, in concept, are likely to perform better than existing single-scale approaches. There also exists a vast literature on the theory of wavelet decomposition and on other statistical elements of multiscale monitoring methods, such as principal components analysis, denoising, and charting. To our knowledge, no comprehensive review of the work relevant to multiscale monitoring of both univariate and multivariate processes has been presented in the literature. In this paper, over 150 published and unpublished papers on this subject are cited, and some extensions of the current research are also discussed.
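
As a rough illustration of the multiscale idea surveyed in this review, the sketch below decomposes a signal with a Haar wavelet and flags detail coefficients that exceed simple per-scale limits. It is not any particular method from the reviewed literature; the wavelet, the 3-sigma-style limits, and the test signal are all assumptions made for illustration.

```python
import numpy as np

def haar_decompose(x, levels):
    """Orthonormal Haar decomposition: returns the detail coefficients at each
    scale plus the final approximation. The signal is truncated to a power of 2."""
    n = 2 ** int(np.log2(len(x)))
    approx = np.asarray(x[:n], dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # high-pass (detail) part
        approx = (even + odd) / np.sqrt(2.0)          # low-pass (approximation) part
    return details, approx

def flag_coefficients(details, sigma, k=3.0):
    """Per-scale monitoring: flag detail coefficients outside +/- k*sigma limits.
    For an orthonormal transform of i.i.d. noise, the detail coefficients at
    every scale share the noise standard deviation sigma."""
    return {level: np.where(np.abs(d) > k * sigma)[0]
            for level, d in enumerate(details, start=1)}

# In-control Gaussian noise with two injected changes of different character.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 1024)
signal[500] += 6.0      # spike: shows up in the fine-scale details
signal[600:] += 2.0     # sustained mean shift: its onset shows up at coarser scales
details, _ = haar_decompose(signal, levels=4)
print(flag_coefficients(details, sigma=1.0))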


IEEE Transactions on Power Systems | 2007

A Reinforcement Learning Model to Assess Market Power Under Auction-Based Energy Pricing

Vishnuteja Nanduri; Tapas K. Das

Auctions serve as a primary pricing mechanism in various market segments of a deregulated power industry. In day-ahead (DA) energy markets, strategies such as uniform-price, discriminatory, and second-price uniform auctions result in different price settlements and thus offer different levels of market power. In this paper, we present a nonzero-sum stochastic game-theoretic model and a reinforcement learning (RL)-based solution framework that allow assessment of market power in DA markets. Since there are no available methods to obtain exact analytical solutions of stochastic games, an RL-based approach is utilized, which offers a computationally viable tool to obtain approximate solutions. These solutions provide effective bidding strategies for the DA market participants. The market power associated with the bidding strategies is calculated using well-known indices such as the Herfindahl-Hirschman index and the Lerner index, as well as two new indices developed in this paper, the quantity-modulated price index (QMPI) and the revenue-based market power index (RMPI). The proposed RL-based methodology is tested on a sample network.
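
The paper's QMPI and RMPI definitions are not reproduced here; as a small illustration, the snippet below computes only the two standard indices the abstract mentions, from hypothetical market outcomes chosen purely for the example.

```python
def herfindahl_hirschman_index(quantities):
    """HHI: sum of squared market shares, on the conventional 0-10,000 scale
    (shares expressed in percent)."""
    total = sum(quantities)
    shares = [100.0 * q / total for q in quantities]
    return sum(s ** 2 for s in shares)

def lerner_index(price, marginal_cost):
    """Lerner index: markup of the clearing price over marginal cost."""
    return (price - marginal_cost) / price

# Hypothetical day-ahead market outcome for three generators (MWh cleared).
cleared = [450.0, 300.0, 250.0]
print(herfindahl_hirschman_index(cleared))            # ~3550 -> highly concentrated
print(lerner_index(price=62.0, marginal_cost=40.0))   # ~0.35
```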


IIE Transactions | 1999

Optimal preventive maintenance in a production inventory system

Tapas K. Das; Sudeep Sarkar

We consider a production inventory system that produces a single product type and maintains inventory according to an (S, s) policy. Exogenous demand for the product arrives according to a random process. Unsatisfied demands are not backordered. Such make-to-stock production inventory policies are found very commonly in discrete-parts manufacturing industries, e.g., automotive spare parts manufacturing. It is assumed that the demand arrival process is Poisson. Also, the unit production time, the time between failures, and the repair and maintenance times are assumed to have general probability distributions. We conjecture that, for any such system, the downtime due to failures can be reduced through preventive maintenance, resulting in a possible increase in system performance. We develop a mathematical model of the system and derive expressions for several performance measures. One such measure (cost benefit) is used as the basis for optimal determination of the maintenance parameters. The model application is explained via a detailed study of 21 variants of a numerical example problem. The optimal maintenance policies (obtained using a numerical search technique) vary widely depending on the problem parameters. Plots of the cost benefit versus the system characteristic parameters (such as demand arrival rate, failure rate, and production rate) reveal the parameter sensitivities. The results show that the actual values of the failure and maintenance costs, and their ratio, are significant in determining the sensitivities of the system parameters.
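
A minimal sketch of the (S, s) make-to-stock logic described above is given below, with Poisson demand and lost sales. Failures, repairs, and the preventive maintenance decision that are central to the paper are deliberately omitted, the unit production time is taken as deterministic rather than general, and all parameter values are illustrative.

```python
import random

def simulate_ss_policy(S, s, demand_rate, unit_prod_time, horizon, seed=1):
    """Event-driven sketch of an (S, s) make-to-stock policy with Poisson demand
    and lost sales: production starts when inventory drops to s and continues
    one unit at a time until inventory reaches S."""
    rng = random.Random(seed)
    t, inventory, producing = 0.0, S, False
    next_demand = rng.expovariate(demand_rate)
    next_completion = float("inf")
    served = lost = 0
    while t < horizon:
        if next_demand <= next_completion:          # a demand arrives
            t = next_demand
            if inventory > 0:
                inventory -= 1
                served += 1
            else:
                lost += 1                           # unsatisfied demand is lost
            next_demand = t + rng.expovariate(demand_rate)
            if not producing and inventory <= s:    # trigger production at s
                producing = True
                next_completion = t + unit_prod_time
        else:                                       # a unit finishes production
            t = next_completion
            inventory += 1
            if inventory >= S:                      # stop producing at S
                producing = False
                next_completion = float("inf")
            else:
                next_completion = t + unit_prod_time
    return served / (served + lost)                 # fill rate

# Illustrative numbers only: fill rate for S = 20, s = 5, demand rate 0.8 per unit time.
print(simulate_ss_policy(S=20, s=5, demand_rate=0.8, unit_prod_time=1.0, horizon=10_000))
```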


IIE Transactions | 2002

A reinforcement learning approach to a single leg airline revenue management problem with multiple fare classes and overbooking

Abhijit Gosavi; Naveen Bandla; Tapas K. Das

The airline industry strives to maximize the revenue obtained from the sale of tickets on every flight. This is referred to as revenue management, and it forms a crucial aspect of airline logistics. Ticket pricing, seat or discount allocation, and overbooking are some of the important aspects of a revenue management problem. Though ticket pricing is usually heavily influenced by factors beyond the control of an airline company, a significant amount of control can be exercised over the seat allocation and overbooking aspects. A realistic model for a single leg of a flight should consider multiple fare classes, overbooking of the flight, concurrent demand arrivals of passengers from the different fare classes, and class-dependent, random cancellations. Accommodating all these factors in one optimization model is a challenging task because it results in a very large-scale stochastic optimization problem. Almost all papers in the existing literature either accommodate only a subset of these factors or use a discrete approximation in order to make the model tractable. We consider all these factors and cast the single-leg problem as a semi-Markov decision problem (SMDP) under the average-reward optimizing criterion over an infinite time horizon. We solve it using a stochastic optimization technique called Reinforcement Learning. Not only is Reinforcement Learning able to scale up to a huge state space, but, because it is simulation-based, it can also handle complex modeling assumptions such as the ones mentioned above. The state space of the numerical test problem scenarios considered here is non-denumerable, its countable part being of the order of 10^9. Our solution procedure involves a multi-step extension of the SMART algorithm, which is based on the one-step Bellman equation. Numerical results presented here show that our approach is able to outperform a heuristic, namely the nested version of the EMSR heuristic, which is widely used in the airline industry. We also present a detailed study of the sensitivity of some modeling parameters via a full factorial experiment.
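
The paper's multi-step extension of SMART is not reproduced here. The sketch below only conveys the flavor of a one-step, tabular average-reward update applied to an accept/reject decision on a toy single-leg simulator; the state encoding, fare classes, departure mechanism, and all numbers are assumptions, and cancellations and overbooking are omitted.

```python
import random
from collections import defaultdict

# Tabular average-reward (SMART-style) learning for an accept/reject decision.
# State: (seats sold, fare class of the arriving request); action: 0 = reject, 1 = accept.
fares = {0: 100.0, 1: 250.0}          # two hypothetical fare classes
capacity = 10

def step(state, action, rng):
    """Toy simulator: accepting earns the fare if a seat remains; the flight
    departs (seats reset) with a small probability each period."""
    sold, fare_class = state
    accepted = action == 1 and sold < capacity
    reward = fares[fare_class] if accepted else 0.0
    sold = sold + 1 if accepted else sold
    if rng.random() < 0.02:           # departure: a new booking horizon starts
        sold = 0
    return (sold, rng.randint(0, 1)), reward, 1.0   # next state, reward, sojourn time

rng = random.Random(0)
Q = defaultdict(float)
rho = total_reward = total_time = 0.0
alpha, epsilon = 0.1, 0.1
state = (0, rng.randint(0, 1))
for _ in range(200_000):
    if rng.random() < epsilon:
        action = rng.choice([0, 1])
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    next_state, r, tau = step(state, action, rng)
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    # Relative-value update against the running reward-rate estimate rho.
    Q[(state, action)] += alpha * (r - rho * tau + best_next - Q[(state, action)])
    total_reward += r
    total_time += tau
    rho = total_reward / total_time   # SMART updates rho only on greedy steps; simplified here
    state = next_state
print("estimated reward per period:", round(rho, 1))
```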


IIE Transactions | 2008

A large-scale simulation model of pandemic influenza outbreaks for development of dynamic mitigation strategies

Tapas K. Das; Alex Savachkin; Yiliang Zhu

Limited stockpiles of vaccine, antiviral drugs, and other resources pose a formidable healthcare delivery challenge for an impending human-to-human transmittable influenza pandemic. The existing preparedness plans of the Centers for Disease Control and Prevention and the Department of Health and Human Services strongly underscore the need for efficient mitigation strategies. Such a strategy entails decisions for early response, vaccination, prophylaxis, hospitalization, and quarantine enforcement. This paper presents a large-scale simulation model that mimics the stochastic propagation of an influenza pandemic controlled by mitigation strategies. The impact of a pandemic is assessed via measures including the total numbers of people infected, dead, denied hospital admission, and denied vaccine/antiviral drugs, and also through an aggregate cost measure incorporating healthcare cost and lost wages. The model considers numerous demographic and community features, daily human activities, vaccination, prophylaxis, hospitalization, social distancing, and hourly accounting of infection spread. The simulation model can serve as the foundation for developing dynamic mitigation strategies. It is tested on a hypothetical community with over 1,100,000 people. A designed experiment is conducted to examine the statistical significance of a number of model parameters. The experimental outcomes can be used in developing guidelines for strategic use of limited resources by healthcare decision makers. Finally, a Markov decision process model and its simulation-based reinforcement learning framework for developing mitigation strategies are presented. The simulation-based framework is comprehensive and general, and can be particularized to other types of infectious disease outbreaks.
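
The paper's model is a detailed agent-based simulation with demographics, daily activities, and hourly accounting; none of that is reproduced here. The compact stochastic compartmental stand-in below only shows how two mitigation levers of the kind discussed (a limited antiviral stockpile and triggered social distancing) could enter such a simulation; every rate and threshold is an assumption made for illustration.

```python
import numpy as np

def simulate_outbreak(pop=100_000, beta=0.35, gamma=0.2, stockpile=5_000,
                      distancing_threshold=0.01, distancing_factor=0.5,
                      days=200, seed=7):
    """Stochastic SIR-style sketch with two mitigation levers: social distancing
    (triggered once prevalence crosses a threshold, scaling the contact rate)
    and a limited antiviral stockpile (treated cases recover twice as fast)."""
    rng = np.random.default_rng(seed)
    S, I_u, I_t, R = pop - 10, 10, 0, 0        # untreated and treated infectious
    doses_left = stockpile
    history = []
    for day in range(days):
        I = I_u + I_t
        contact = distancing_factor if I / pop > distancing_threshold else 1.0
        p_inf = 1.0 - np.exp(-contact * beta * I / pop)
        new_inf = rng.binomial(S, p_inf)       # new infections today
        treat = min(new_inf, doses_left)       # antivirals while the stockpile lasts
        doses_left -= treat
        rec_u = rng.binomial(I_u, 1.0 - np.exp(-gamma))
        rec_t = rng.binomial(I_t, 1.0 - np.exp(-2 * gamma))
        S -= new_inf
        I_u += (new_inf - treat) - rec_u
        I_t += treat - rec_t
        R += rec_u + rec_t
        history.append((day, S, I_u + I_t, R))
    return history, pop - S                    # trajectory and total ever infected

trajectory, total_infected = simulate_outbreak()
print(total_infected, "people infected in this run")
```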


International Journal of Production Research | 2002

Global supply chain management: A reinforcement learning approach

P. Pontrandolfo; Abhijit Gosavi; O.G. Okogbaa; Tapas K. Das

In recent years, researchers and practitioners alike have devoted a great deal of attention to supply chain management (SCM). The main focus of SCM is the need to integrate operations along the supply chain as part of an overall logistics support function. At the same time, the need for globalization requires that SCM problems be solved in an international context, as part of what we refer to as Global Supply Chain Management (GSCM). This paper proposes an approach to studying GSCM problems using an artificial intelligence framework called reinforcement learning (RL). The RL framework allows the management of global supply chains from an integration perspective. The RL approach has remarkable similarities to that of an autonomous agent network (AAN), a similarity that we discuss. The RL approach is applied to a case example, namely a networked production system that spans several geographic areas and logistics stages. We discuss the results and provide guidelines and implications for practical applications.


IIE Transactions | 2001

Intelligent dynamic control policies for serial production lines

Carlos D. Paternina-Arboleda; Tapas K. Das

Heuristic production control policies such as CONWIP, kanban, and other hybrid policies have been in use for years as better alternatives to MRP-based push control policies. These policies, although efficient, are far from optimal. Our goal is to develop a methodology that, for a given system, finds a dynamic control policy via intelligent agents. Such a policy, while achieving the productivity (i.e., demand service rate) goal of the system, will optimize a cost/reward function based on the WIP inventory. To achieve this goal we applied a simulation-based optimization technique called Reinforcement Learning (RL) to a four-station serial line. The control policy attained by the application of an RL algorithm was compared with the other existing policies on the basis of total average WIP and average cost of WIP. We also develop a heuristic control policy in light of the experience gained from a close examination of the policies obtained by the RL algorithm. This heuristic policy, named Behavior-Based Control (BBC), although placed second to the RL policy, proved to be a more efficient and leaner control policy than most of the existing policies in the literature. The performance of the BBC policy was found to be comparable to that of the Extended Kanban Control System (EKCS), which, as per our experimentation, turned out to be the best of the existing policies. The numerical results used for comparison were obtained from a four-station serial line with two different (constant and Poisson) demand arrival processes.
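
Neither the RL policy nor the BBC heuristic is reproduced here. The sketch below only illustrates the kind of baseline policy they are compared against: a CONWIP-style card limit on a four-station serial line, evaluated with a crude time-stepped simulation. Station times, the card count, and the demand probability are all illustrative assumptions.

```python
import random

def simulate_conwip(wip_cap, service_means, demand_prob, horizon=50_000, seed=3):
    """Time-stepped sketch of a CONWIP-controlled serial line: a new job is
    released to station 1 only while the total card count (line WIP plus
    finished goods) is below the cap; finished goods serve a Bernoulli demand
    stream. Returns average line WIP and fill rate."""
    rng = random.Random(seed)
    stations = [0] * len(service_means)   # jobs at each station
    fgi = served = lost = 0               # finished-goods inventory, demand tallies
    wip_sum = 0
    for _ in range(horizon):
        if sum(stations) + fgi < wip_cap:     # CONWIP release rule
            stations[0] += 1
        # Stations complete work geometrically (prob 1/mean per period), scanned
        # last-to-first so a job moves at most one station per period.
        for i in range(len(stations) - 1, -1, -1):
            if stations[i] > 0 and rng.random() < 1.0 / service_means[i]:
                stations[i] -= 1
                if i + 1 < len(stations):
                    stations[i + 1] += 1
                else:
                    fgi += 1
        if rng.random() < demand_prob:        # demand arrival this period
            if fgi > 0:
                fgi -= 1
                served += 1
            else:
                lost += 1
        wip_sum += sum(stations)
    return wip_sum / horizon, served / max(served + lost, 1)

# Four-station line, echoing the paper's test case; all numbers illustrative.
print(simulate_conwip(wip_cap=8, service_means=[1.3, 1.3, 1.3, 1.3], demand_prob=0.6))
```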


IIE Transactions | 1997

Economic design of dual-sampling-interval policies for X̄ charts with and without run rules

Tapas K. Das; Vikas Jain; Abhijit Gosavi

Recent studies show that dual-sampling-interval (DSI) policies for the X̄ control chart yield a smaller average time to signal (ATS) than Shewhart's classical fixed-sampling-interval (FSI) policy for off-target processes. An economic design approach for DSI policies has not been addressed in the literature. In this paper we develop a comprehensive cost model for DSI policies, with and without run rules, under steady-state performance. The expression for the unit cost of quality is used as the objective function in the optimal design of the DSI policy parameters. The design process and the sensitivities of some of the model input parameters are demonstrated through numerical examples.
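
For orientation, the snippet below sketches the basic DSI mechanics that the paper designs economically: after each sample, the chart either signals or chooses the next sampling interval (short when the standardized mean lands in a warning zone, long otherwise). The warning/control limits and interval lengths are illustrative, and the paper's cost model and run rules are not reproduced.

```python
import math

def xbar_dsi_step(sample, mu0, sigma, n, warning=1.0, control=3.0,
                  short_interval=0.5, long_interval=2.0):
    """One monitoring step of a dual-sampling-interval (DSI) X-bar chart:
    returns (signal, next_interval)."""
    xbar = sum(sample) / len(sample)
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    if abs(z) > control:
        return True, short_interval          # out-of-control signal
    next_interval = short_interval if abs(z) > warning else long_interval
    return False, next_interval

# Example: a sample of n = 4 drawn while the process mean has drifted upward.
signal, wait = xbar_dsi_step([10.4, 10.7, 10.6, 10.9], mu0=10.0, sigma=0.5, n=4)
print(signal, wait)   # no signal yet, but the short interval is selected
```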


IEEE Transactions on Semiconductor Manufacturing | 2003

Wavelet-based identification of delamination defect in CMP (Cu-low k) using nonstationary acoustic emission signal

Rajesh Ganesan; Tapas K. Das; Arun K. Sikder; Ashok Kumar

Wavelet-based multiscale analysis approaches have revolutionized signal processing tasks such as image and data compression. However, the use of wavelet-based methods in statistical applications, such as process monitoring, density estimation, and defect identification, is still in its early stages of evolution. The recent literature contains some applications of wavelet-based methods in monitoring, such as tool-life monitoring, bearing defect monitoring, and monitoring of ultra-precision processes. This paper presents a novel application of a wavelet-based multiscale method in a nanomachining process [chemical mechanical planarization (CMP)] of wafer fabrication. The application involves identification of the delamination defect of low-k dielectric layers by analyzing the nonstationary acoustic emission (AE) signal and the coefficient of friction (CoF) signal collected during the copper damascene (Cu-low k) CMP process. An offline strategy and a moving-window-based strategy for online implementation of the wavelet monitoring approach are developed. Both the offline and moving-window-based strategies are implemented on data collected from two different sources. The results show that the wavelet-based approach using the AE signal offers an efficient means for real-time detection of delamination defects in CMP processes. Such an online strategy, in contrast to the existing offline approaches, offers a viable tool for CMP process control. The results also indicate that the CoF signal is insensitive to the delamination defect.
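
The authors' detection statistic is not reproduced here. The sketch below only conveys the moving-window idea: within each window of a (synthetic) AE-like trace, the energy of the finest-scale Haar detail coefficients is compared against a limit estimated from defect-free data. The window length, wavelet, threshold rule, and synthetic signal are all assumptions.

```python
import numpy as np

def detail_energy(window):
    """Energy of the finest-scale Haar detail coefficients of one signal window."""
    even, odd = window[0::2], window[1::2]
    d1 = (even - odd) / np.sqrt(2.0)
    return float(np.sum(d1 ** 2))

def detect_delamination(signal, baseline, window_len=256):
    """Moving-window sketch: slide a window over the signal, compute the
    finest-scale detail energy, and flag windows whose energy exceeds a limit
    estimated from defect-free (baseline) windows."""
    base = [detail_energy(baseline[i:i + window_len])
            for i in range(0, len(baseline) - window_len + 1, window_len)]
    threshold = np.mean(base) + 3 * np.std(base)
    return [start for start in range(0, len(signal) - window_len + 1, window_len)
            if detail_energy(signal[start:start + window_len]) > threshold]

# Synthetic stand-in for an AE trace: noise with a burst of high-frequency energy.
rng = np.random.default_rng(2)
baseline = rng.normal(0, 1, 4096)
signal = rng.normal(0, 1, 4096)
signal[2000:2300] += rng.normal(0, 4, 300)      # simulated delamination burst
print(detect_delamination(signal, baseline))    # window start indices that are flagged
```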


IEEE Transactions on Power Systems | 2009

Generation Capacity Expansion in Energy Markets Using a Two-Level Game-Theoretic Model

Vishnu Nanduri; Tapas K. Das; Patricio Rocha

With a significant number of states in the U.S. and countries around the world trading electricity in restructured markets, a sizeable proportion of future capacity expansion will have to take place in market-based environments. However, since the majority of the literature on capacity expansion focuses on regulated market structures, there is a critical need for comprehensive capacity expansion models targeting restructured markets. In this research, we develop a two-tier matrix game model and a novel solution algorithm that incorporates the risk due to volatility in profits (via conditional value-at-risk, CVaR), intended for use by generators in making multi-period, multi-player generation capacity expansion decisions. We demonstrate the applicability of the model using a sample network from the PowerWorld software and analyze the results.
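
The two-tier game and its solution algorithm are not reproduced here. As a small illustration of the risk measure the model incorporates, the snippet below computes CVaR from a sample of simulated profit outcomes for one hypothetical expansion option; the profit distribution and confidence level are assumptions.

```python
import numpy as np

def cvar(profits, alpha=0.95):
    """Conditional value-at-risk of a profit sample: the expected profit over
    the worst (1 - alpha) fraction of outcomes. Lower values indicate a
    riskier expansion decision."""
    profits = np.sort(np.asarray(profits, dtype=float))
    tail = profits[: max(1, int(np.ceil((1 - alpha) * len(profits))))]
    return tail.mean()

# Hypothetical simulated annual profits ($M) for one capacity-expansion option.
rng = np.random.default_rng(5)
profits = rng.normal(loc=120.0, scale=40.0, size=10_000)
print("Expected profit:", round(profits.mean(), 1))
print("CVaR (95%):", round(cvar(profits, alpha=0.95), 1))
```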

Collaboration


Dive into Tapas K. Das's collaborations.

Top Co-Authors

Abhijit Gosavi, Missouri University of Science and Technology
Ashok Kumar, University of South Florida
Patricio Rocha, University of South Florida
Arun K. Sikder, University of South Florida
Vishnu Nanduri, University of South Florida
Alex Savachkin, University of South Florida
Diana Prieto, University of South Florida
Louis Martin-Vega, University of South Florida