
Publications


Featured research published by T. Agami Reddy.


HVAC&R Research | 2007

Calibrating Detailed Building Energy Simulation Programs with Measured Data—Part I: General Methodology (RP-1051)

T. Agami Reddy; Itzhak Maor; Chanin Panjapornpon

Calibrated simulation is the process of using a building simulation program for an existing building and “tuning” or calibrating the various inputs to the program so that predictions match closely with observed energy use. Historically, the calibration process has been an art form that inevitably relies on user knowledge, past experience, statistical expertise, engineering judgment, and an abundance of trial and error. Unfortunately, despite widespread interest in the professional community, no consensus guidelines have been published on how to perform a calibration using detailed simulation programs. This research project was initiated with the intention to cull the best tools, techniques, approaches, and procedures from the existing body of research and develop a coherent and systematic calibration methodology that includes both parameter estimation and the determination of the uncertainty in the calibrated simulation. A general methodology of calibrating detailed simulation programs to performance data is proposed, which we deem to be methodical, rational, robust, and computationally efficient while being flexible enough to satisfy different users with different personal preferences and biases. The methodology involves various concepts and approaches borrowed from allied scientific disciplines that are also reviewed in this paper. The methodology essentially consists of five parts: (1) identify a building energy program that has the ability to simulate the types of building elements and systems present and set up the simulation input file to be as realistic as possible; (2) depending on the building type, heuristically define a set of influential parameters and schedules that have simple and clear correspondence to specific and easy-to-identify inputs to the simulation program, along with their best-guess estimates and their range of variation; (3) perform a coarse grid search wherein the heuristically defined influential parameters are subject to a Monte Carlo simulation involving thousands of simulation trials from which a small set of promising parameter vector solutions can be identified by filtering, the strong and weak parameters can be identified, and narrower bounds of variability of the strong parameters can be defined; (4) perform a guided grid search to further refine the promising parameter vector solutions; and (5) use this small set of solutions (as opposed to a single calibrated solution) to make predictions about intended changes to the building and its systems, and determine the prediction uncertainty of the entire calibration process. A companion paper (Reddy et al. 2007) will present the results of applying this calibration methodology to two synthetic office buildings and one actual office building.
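
The coarse grid search of step (3) is easy to picture in code. Below is a minimal, self-contained sketch of the Monte Carlo filtering idea, using a toy surrogate model in place of a detailed simulation engine; the parameter names, bounds, and surrogate are illustrative assumptions, not the RP-1051 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_simulation(p):
    # Toy surrogate for a detailed simulation engine (a real application
    # would wrap DOE-2/EnergyPlus here): parameter vector -> 12 months of
    # predicted energy use, in illustrative MWh/month.
    lpd, ach, cop = p
    months = np.arange(12)
    cooling = 40.0 + 30.0 * np.sin(np.pi * months / 11.0)  # seasonal load shape
    return lpd * 3.0 + ach * 25.0 + cooling / cop

# Step (2): heuristically chosen influential parameters with best-guess
# ranges (names and bounds are illustrative only).
names = ["lighting_power_density", "infiltration_ach", "chiller_cop"]
lo = np.array([8.0, 0.1, 3.0])
hi = np.array([16.0, 1.0, 6.0])

# Synthetic "measured" data generated from a known truth plus noise, so
# the filtering step can be demonstrated end to end.
truth = np.array([12.0, 0.4, 4.5])
measured = run_simulation(truth) + rng.normal(0.0, 0.5, 12)

# Step (3): coarse grid search -- thousands of Monte Carlo trials,
# filtered on goodness of fit (CV-RMSE) to retain promising vectors.
n_trials = 5000
samples = rng.uniform(lo, hi, size=(n_trials, 3))
cv = np.array([np.sqrt(np.mean((run_simulation(p) - measured) ** 2))
               / measured.mean() for p in samples])

# Keep the best ~1%: the spread of each parameter within this set
# separates strong (narrowed) from weak (still diffuse) parameters and
# gives tighter bounds for the guided search of step (4).
top = samples[np.argsort(cv)[: n_trials // 100]]
for j, name in enumerate(names):
    print(f"{name}: refined range [{top[:, j].min():.2f}, {top[:, j].max():.2f}]")
```

Note that the retained set of parameter vectors, not a single best fit, is what carries into steps (4) and (5), which is what allows a prediction uncertainty to be attached to the calibration.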


HVAC&R Research | 2007

Calibrating detailed building energy simulation programs with measured data - Part II: Application to three case study office buildings (RP-1051)

T. Agami Reddy; Itzhak Maor; Chanin Panjapornpon

The companion paper proposed a general methodology of calibrating detailed building energy simulation programs to performance data that also allowed the determination of the prediction uncertainty of intended energy conservation measures. The methodology strived to provide a measure of scientific rigor to the process of calibration as a whole, which has remained an art form with no clear consensus guidelines despite being followed by numerous professionals for several decades. The proposed methodology, while providing a clear structure consistent with that adopted in more mature scientific fields, also uses expert domain knowledge and is flexible enough to satisfy different users with different personal preferences and biases. This paper attests to the overall validity of the methodology by presenting the results of applying it to three case study office buildings—two synthetic and one actual. Conclusions on various variants of the overall calibration methodology are presented, along with guidelines and a summary of lessons learned on how to implement such a calibration methodology. Future research needed prior to implementation in commercial hourly detailed simulation programs is also identified.


HVAC&R Research | 2006

Calibration of Building Energy Simulation Programs Using the Analytic Optimization Approach (RP-1051)

Jian Sun; T. Agami Reddy

Reconciling results from detailed building energy simulation programs to measured data has always been recognized as essential in substantiating how well the simulation model represents the real building and its system. If the simulation results do not match actual monitored data, the programmer will typically “adjust” inputs and operating parameters on a trial-and-error basis until the program output matches the known data. This “fudging” process often results in the manipulation of a large number of variables, which may significantly decrease the credibility of the entire simulation. A major drawback to the widespread acceptance and credibility of the calibrated simulation approach is that it is highly dependent on the personal judgment of the analyst doing the calibration. The lack of a proper mathematical foundation for the general calibration problem has greatly contributed to the current state of affairs. This paper proposes a general analytic framework for calibrating building energy system simulation software/programs that has a firm mathematical and statistical basis. The approach is based on the recognition that although calibration can be cast as an optimization problem, the basic issue is that the calibration problem is underdetermined or overparametrized, i.e., there are many more parameters to tune than can be supported by the monitored data. Further, detailed simulation programs are made up of nonlinear, implicit, and computationally demanding models. The proposed methodology involves several distinct concepts, namely, sensitivity analysis (to identify a subset of strong influential variables), identifiability analysis (to determine how many parameters of this subset can be tuned mathematically and which specific ones are the best candidates), numerical optimization (to determine the numerical values of this best subset of parameters), and uncertainty analysis (to deduce the range of variation of these parameters). A synthetic example involving an office building is used to illustrate the methodology with the DOE-2 simulation program. The proposed methodology is recommended for use as the second step of a two-stage process with the first being a coarse-grid search that has reduced the number of simulation input parameters to a manageable few and also narrowed their individual range of variability.
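
The sensitivity and identifiability steps can be illustrated compactly. The sketch below uses a toy quadratic model standing in for a detailed simulation program such as DOE-2, and the scaling and cutoff are assumptions; it builds a finite-difference Jacobian and inspects its singular values to judge how many parameters the data can actually support.

```python
import numpy as np

# Toy model standing in for a detailed simulation; y = f(p) evaluated
# over a set of operating conditions x.
x = np.linspace(0.2, 1.0, 40)          # e.g., part-load ratios
def model(p):
    a, b, c = p
    return a + b * x + c * x ** 2

p0 = np.array([1.0, 0.5, 0.2])         # best-guess parameter vector
eps = 1e-6

# Sensitivity analysis: finite-difference Jacobian of the outputs with
# respect to each parameter, scaled by the parameter's nominal value so
# the columns are directly comparable.
J = np.column_stack([
    (model(p0 + eps * np.eye(3)[j]) - model(p0)) / eps * p0[j]
    for j in range(3)
])

# Identifiability analysis: the singular-value spectrum of J reveals how
# many parameter directions the data can support; a steep drop flags an
# underdetermined (over-parametrized) calibration problem.
s = np.linalg.svd(J, compute_uv=False)
print("scaled singular values:", np.round(s / s[0], 4))
n_ident = int(np.sum(s / s[0] > 1e-3))  # illustrative cutoff (assumption)
print(f"~{n_ident} parameter combination(s) are mathematically tunable")
```

The well-identified subset would then go to a numerical optimizer, with the remaining parameters frozen at their best-guess values, mirroring the two-stage process recommended in the paper.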


Journal of Solar Energy Engineering: Transactions of the ASME | 2005

Modeling and Experimental Evaluation of Passive Heat Sinks for Miniature High-Flux Photovoltaic Concentrators

Jian Sun; Tomer Israeli; T. Agami Reddy; Kevin Scoles; Jeffrey M. Gordon; Daniel Feuermann

An important consideration in the practical realization of high-concentration photovoltaic devices is the rejection of heat at high power densities to the environment. Recently, optical designs for generating solar flux in excess of 1000 suns on advanced solar cells, while respecting flux homogeneity and system compactness, were suggested with the introduction of solar fiber-optic mini-dish concentrators tailored specifically to high-flux photovoltaic devices [1]. At the core of the design is the miniaturization of the smallest building block in the system (the concentrator and the cell), permitting low-cost mass production and reliance on passive rejection of the solar energy that is not converted to electricity. First, this paper proposes a relatively simple 1-D axisymmetric model for predicting the thermal and electrical performance of such mini-dish high-flux concentrators. Experimental measurements were performed with a real-sun solar simulator, indoors under controllable conditions, at flux levels up to 5,000 suns. A computational fluid dynamics (CFD) model was also developed for model validation. Both modeling approaches predict heat sink temperatures within the experimental uncertainty of a couple of degrees. Next, the 1-D axisymmetric model is used to evaluate the sensitivity of the predictions to different solar cell model assumptions, environmental effects (such as outdoor temperature and wind speed), heat sink size and geometry, thermal contact resistance, etc. It was confirmed that the miniaturization of the solar cell module permits passive heat rejection, such that solar cell temperatures should not exceed 80°C at peak insolation and stagnation conditions. Though the rated cell efficiency degrades by only 1-2% in absolute terms, higher cell temperatures may compromise the integrity of the cell circuitry and of the encapsulation. The 1-D axisymmetric model also allows optimization of the heat sink's geometric dimensions for a given volume. Hour-by-hour performance simulations for such an optimized design configuration were performed for one month in summer and one month in winter for two locations, namely Philadelphia, PA, and Phoenix, AZ. The insight gained from this study is important for the proper design of the various components and materials to be used in PV mini-dishes. Equally important, it allows similar analyses to be performed, and well-informed design choices to be made, for mini-dishes that must operate under different climatic conditions with cells of different performance and concentration ratios.
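
To make the passive-heat-rejection argument concrete, here is a back-of-the-envelope 1-D resistance-network estimate. This is not the paper's axisymmetric model, and every numeric value is an assumed, illustrative figure; it merely shows that a ~1 cm² cell under 1000 suns can plausibly stay below roughly 80°C with a modest finned sink.

```python
# Back-of-the-envelope 1-D thermal resistance network for passive heat
# rejection from a high-flux cell; all numbers are assumptions.
concentration = 1000.0            # suns
one_sun = 1000.0                  # W/m^2
cell_area = 0.01 ** 2             # a 1 cm x 1 cm cell, in m^2
eta = 0.25                        # fraction of flux converted to electricity
q = concentration * one_sun * cell_area * (1.0 - eta)  # heat to reject, W

R_contact = 0.05                  # K/W, thermal contact resistance (assumed)
R_spread = 0.10                   # K/W, conduction/spreading in the sink (assumed)
h = 10.0                          # W/(m^2 K), free convection + radiation (assumed)
fin_area = 0.25                   # m^2 of effective finned surface (assumed)
R_conv = 1.0 / (h * fin_area)     # K/W

T_amb = 35.0                      # deg C, a hot ambient
T_cell = T_amb + q * (R_contact + R_spread + R_conv)
print(f"heat load {q:.0f} W -> cell temperature ~{T_cell:.0f} deg C")
# -> 75 W and roughly 76 deg C with these assumed values, consistent
#    with the finding that miniaturized cells can rely on a passive sink.
```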


HVAC&R Research | 2002

An evaluation of classical steady-state off-line linear parameter estimation methods applied to chiller performance data

T. Agami Reddy; Klaus K. Andersen

The objective of this paper is to evaluate different inverse methods with application to off-line model parameter estimation using data from a field-operated chiller. In HVAC&R data analysis, there is sometimes a need to evaluate and use estimation techniques that are more subtle than the ordinary least squares (OLS) method. One example is in fault detection and diagnosis of HVAC&R equipment and systems using performance data obtained from field monitoring. By identifying a better performance model, the fault detection process is more likely to be refined and accurate. In this paper, a number of exploratory, diagnostic, and classical estimation methods are reviewed to determine the circumstances in which they are likely to be superior to the OLS method. These methods are then evaluated using monitored data from a field-operated chiller. This study provides a reference on parameter estimation methods for the HVAC&R community.
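
As a concrete illustration of why alternatives to OLS matter for field-monitored chiller data, the sketch below fits a linear-in-parameters power model by OLS and by ridge regression, one classical remedy for the collinear regressors that field operation tends to produce. The model form, variable names, and data are synthetic assumptions, not the paper's data set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic chiller data set (illustrative): in field operation the
# condenser inlet temperature tracks the cooling load, so the regressors
# are collinear -- exactly where plain OLS becomes ill-conditioned.
n = 200
Qe = rng.uniform(300, 800, n)                    # kW cooling load
Tcdi = 25 + 0.01 * Qe + rng.normal(0, 0.5, n)    # condenser inlet temp, deg C
Tevo = rng.uniform(6, 9, n)                      # evaporator outlet temp, deg C
X = np.column_stack([np.ones(n), Qe, Tcdi, Tevo])
beta_true = np.array([50.0, 0.18, 2.0, -3.0])
P = X @ beta_true + rng.normal(0, 5, n)          # measured electric power, kW

# Ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, P, rcond=None)

# Ridge regression: trades a little bias for much lower variance when
# the regressors are collinear (intercept column left unpenalized).
lam = 10.0
Pen = lam * np.eye(4)
Pen[0, 0] = 0.0
beta_ridge = np.linalg.solve(X.T @ X + Pen, X.T @ P)

print("OLS coefficients:  ", np.round(beta_ols, 3))
print("Ridge coefficients:", np.round(beta_ridge, 3))
```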


Journal of Solar Energy Engineering: Transactions of the ASME | 2003

Characteristic Physical Parameter Approach to Modeling Chillers Suitable for Fault Detection, Diagnosis, and Evaluation

Yongzhong Jia; T. Agami Reddy

Model-based fault detection and diagnosis approaches based on statistical models for fault-free performance concurrently require a fault classifier database for diagnosis. On the other hand, a model with physical parameters would directly provide such diagnostic ability. In this paper, we propose a generic model development approach, called the characteristic parameter approach, which is suitable for large engineering systems that usually come equipped with numerous sensors. Such an approach is applied to large centrifugal chillers, which are generally the single most expensive piece of equipment in heating, ventilating, air-conditioning, and refrigeration systems. The basis of the characteristic parameter approach is to quantify the performance of every primary component of the chiller (the electrical motor, the compressor, the condenser heat exchanger, the evaporator heat exchanger, and the expansion device) by one or two performance parameters, the variation in magnitude of which is indicative of the health of that component. A hybrid inverse model is set up based on the theoretical standard refrigeration cycle in conjunction with statistically identified component models that correct for non-standard behavior of the characteristic parameters of the particular chiller. Such an approach has the advantage of using few physically meaningful parameters (as opposed to using the numerous sensor data directly), which simplifies the detection phase while directly providing the needed diagnostic ability. Another advantage of this generic approach is that the identification of the correction models is simple and robust, since it requires regression rather than calibration. The entire methodology has been illustrated with actual monitored data from two centrifugal chillers (one a laboratory chiller and the other a field-operated chiller). The sensitivity of this approach to sensor noise has also been investigated.
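
A minimal example of extracting one such characteristic parameter: estimating the condenser heat-exchanger UA from routine sensor data via a log-mean temperature difference, so that a drift in the tracked UA flags fouling of that specific component. The function and numbers below are illustrative assumptions, not the paper's hybrid model.

```python
import numpy as np

def condenser_ua(Q_cd, T_ref_cond, T_w_in, T_w_out):
    """Condenser UA in kW/K from heat duty (kW), refrigerant condensing
    temperature, and water inlet/outlet temperatures (deg C)."""
    dT1 = T_ref_cond - T_w_in           # temperature difference at water inlet
    dT2 = T_ref_cond - T_w_out          # temperature difference at water outlet
    lmtd = (dT1 - dT2) / np.log(dT1 / dT2)
    return Q_cd / lmtd

# One monitored operating point (synthetic numbers); in use, UA would be
# tracked over many points and its trend monitored for degradation.
ua = condenser_ua(Q_cd=600.0, T_ref_cond=38.0, T_w_in=29.0, T_w_out=35.0)
print(f"condenser UA ~ {ua:.0f} kW/K")
```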


HVAC&R Research | 2000

Uncertainty of 'measured' energy savings from statistical baseline models

T. Agami Reddy; D. E. Claridge

Baseline models are a crucial element in determining savings from energy conservation measures. The baseline model is obtained by regressing the energy consumption data for the period prior to the implementation of the energy conservation measures. The widely used criterion for determining the adequacy of a particular baseline model is a statistical cutoff that gives the user no knowledge of the error inherent in the savings determination. An absolute cutoff criterion may not be appropriate, since baseline model development is not the desired end in itself. It is proposed that models instead be evaluated in terms of the ratio of the expected uncertainty in the savings to the total savings (ΔEsave/Esave). This physically and financially intuitive measure permits the user to vary the criterion according to the factors most relevant for a particular energy conservation project. Simplified expressions for (ΔEsave/Esave), suitable for use by practitioners with uncorrelated data and with correlated time-series data, are developed and discussed in the context of case-study data. The use of this concept to logically select the most appropriate measurement and verification protocol for verifying savings is also described.
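
For reference, a widely quoted simplified form of this ratio for a regression baseline with uncorrelated residuals (the form in which the expression later appeared in ASHRAE Guideline 14; stated here from that source, not verbatim from the paper) is:

```latex
\frac{\Delta E_{\mathrm{save}}}{E_{\mathrm{save}}}
  \approx \frac{1.26\, t \, \mathrm{CV(RMSE)}}{F}
  \sqrt{\left(1 + \frac{2}{n}\right)\frac{1}{m}}
```

where $t$ is the t-statistic at the chosen confidence level, CV(RMSE) is the coefficient of variation of the baseline model, $F$ is the fraction of baseline energy saved, $n$ is the number of baseline observations, and $m$ is the number of post-retrofit observations. For correlated time-series data (daily readings, for example), $n$ is replaced by an effective number of independent observations, $n' = n(1-\rho)/(1+\rho)$, with $\rho$ the lag-one autocorrelation coefficient of the model residuals.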


HVAC&R Research | 2003

Evaluation of the Suitability of Different Chiller Performance Models for On-Line Training Applied to Automated Fault Detection and Diagnosis (RP-1139)

T. Agami Reddy; Dagmar Niebur; Klaus Kaae Andersen; Paolo P. Pericolo; Gaspar Cabrera

This paper presents the research results of comparing the suitability of four different chiller performance models for use in on-line automated fault detection and diagnosis (FDD) of vapor-compression chillers. The models were limited to steady-state performance and included (a) black-box multivariate polynomial (MP) models; (b) artificial neural network (ANN) models, specifically radial basis function (RBF) and multilayer perceptron (MLP); (c) the generic physical component (PC) model approach; and (d) the lumped physical Gordon-Ng (GN) model. All models except (b) are linear in the parameters. A review of the engineering literature identified the following three on-line training schemes as suitable for evaluation: ordinary recursive least squares (ORLS) under an incremental window scheme, a sliding window scheme, and a weighted recursive least squares (WRLS) scheme in which more weight is given to newer data. The evaluation was based on five months of data from a 220-ton field-operated chiller in Toronto (a data set of 810 points) and fourteen days of data from a 450-ton field-operated chiller located on the Drexel University campus (a set of about 1,120 points). The evaluation included a preliminary off-line or batch analysis to gain a first understanding of the suitability of the various models and their particular drawbacks, and then to investigate whether the different chiller models exhibit any time-variant or seasonal behavior. The subsequent on-line evaluation assessed the various models in terms of their suitability for model parameter tracking as well as model prediction accuracy (which would provide the necessary thresholds for flagging the occurrence of faults). The former assessment suggested that parameter tracking using the GN model parameters could be a viable option for fault detection (FD) implementation, while the black-box models were not at all suitable given their high standard errors. The assessment of the models in terms of their internal prediction accuracy revealed that the MLP model was best, followed by the MP and GN models. However, the more important test of external predictive accuracy suggests that all models are equally accurate (CV of about 2% to 4%) and, hence, comparable within the experimental uncertainty of the data. ORLS with the incremental window scheme was found to be the most robust of the computational schemes. The chiller models do not exhibit any time-variant behavior, since WRLS was found to be the poorest. Finally, in terms of the initial length of training data, it was determined (at least with the data sets used, which exhibited high autocorrelation) that about 320 and 400 data points would be necessary for the MP and GN model parameter estimates, respectively, to stabilize at their long-term values. This paper also provides a detailed discussion of the potential advantages that on-line model training can offer and identifies areas of follow-up research.
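
The on-line training schemes compared here all build on recursive least squares. A minimal sketch (synthetic data; the model form and noise levels are assumptions) shows the core update, with the forgetting factor lam = 1.0 reproducing the incremental-window ORLS scheme and lam < 1 giving the WRLS scheme that down-weights older data:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step for a linear-in-parameters model:
    x is the regressor vector, y the new measurement; lam is the
    forgetting factor (1.0 = incremental window, <1 = WRLS)."""
    Px = P @ x
    k = Px / (lam + x @ Px)             # gain vector
    theta = theta + k * (y - x @ theta) # parameter update from residual
    P = (P - np.outer(k, Px)) / lam     # covariance update
    return theta, P

rng = np.random.default_rng(2)
true_theta = np.array([50.0, 0.18, 2.0])   # intercept, load, and temp terms
theta = np.zeros(3)
P = 1e4 * np.eye(3)                        # large initial covariance

for _ in range(500):                       # streaming operating points
    x = np.array([1.0, rng.uniform(300, 800), rng.uniform(25, 35)])
    y = x @ true_theta + rng.normal(0, 5)  # measured chiller power, kW
    theta, P = rls_update(theta, P, x, y, lam=1.0)

print("tracked parameters:", np.round(theta, 3))
```

Parameter tracking for fault detection then amounts to monitoring theta over time and flagging statistically significant drifts from the fault-free values.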


HVAC&R Research | 2007

Application of a Generic Evaluation Methodology to Assess Four Different Chiller FDD Methods (RP-1275)

T. Agami Reddy

A previous paper (Reddy 2007) suggested a generic approach for evaluating the performance of fault detection and diagnosis (FDD) methods and proposed general expressions normalized to an ideal FDD method. These expressions were then tailored to large chillers, and specific numerical values of several of the quantities appearing in these expressions were suggested based on discussions with a chiller manufacturer and service companies as well as analysis of chiller performance data from a laboratory chiller. This paper first describes four promising chiller FDD methods (two of which are modified versions of those proposed for rooftop units) and then illustrates their customization using steady-state chiller performance data gathered from a laboratory chiller as part of a previous research project. Subsequently, results of evaluating these four FDD methods in the framework of the generic assessment methodology are presented and their implications discussed. This paper illustrates the application of the FDD methodology and highlights the benefit of the FDD evaluation tool in identifying the most promising FDD method suitable for later field evaluation.


HVAC&R Research | 2007

General Methodology Combining Engineering Optimization of Primary HVAC&R Plants with Decision Analysis Methods—Part II: Uncertainty and Decision Analysis

Wei Jiang; T. Agami Reddy; Patrick L. Gurian

A companion paper (Jiang and Reddy 2007) presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach. The objective of this paper is to complement that work by proposing a methodology by which the robustness of the optimal deterministic strategy to various sources of uncertainty can be evaluated against non-optimal but risk-averse alternatives within a formal decision analysis framework. This specifically involves performing a sensitivity analysis on the effect of the various stochastic factors that impact primary HVAC&R plant optimization, such as the uncertainty in load prediction and the uncertainties associated with the various component models of the equipment. This is achieved through Monte Carlo simulations on the deterministic outcome, which allow additional attributes, such as the variability of the operating cost and the probability of insufficient cooling, to be determined along with the minimum operating cost. The entire analysis is then repeated for a specific non-optimal but risk-averse operating strategy. Finally, a formal decision analysis model using linear multi-attribute utility functions is suggested for comparing the two strategies in a framework that explicitly models the risk perception of the plant operator in terms of the three attributes. The methodology is demonstrated using the same illustrative case study as the companion paper.
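
The structure of the decision step can be sketched in a few lines: Monte Carlo propagation of load-prediction uncertainty through a stylized plant model, followed by a linear multi-attribute utility over mean cost, cost variability, and probability of insufficient cooling. Everything below (the strategies, costs, scaling constants, and weights) is an illustrative assumption, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
load = rng.normal(1000.0, 80.0, n)                 # uncertain cooling load, kW

def simulate(strategy_capacity, unit_cost):
    # Stylized plant: cost scales with delivered cooling; demand beyond
    # capacity goes unmet, giving the probability of insufficient cooling.
    cost = unit_cost * np.minimum(load, strategy_capacity)
    shortfall = np.mean(load > strategy_capacity)
    return cost.mean(), cost.std(), shortfall

# Deterministic-optimal strategy sized tightly to the forecast, versus a
# risk-averse alternative that buys margin at a higher operating cost.
attrs_opt = simulate(strategy_capacity=1050.0, unit_cost=0.060)
attrs_safe = simulate(strategy_capacity=1200.0, unit_cost=0.065)

def utility(mean_cost, cost_std, p_short, w=(0.5, 0.2, 0.3)):
    # Linear multi-attribute utility: each attribute scaled to roughly
    # [0, 1] (lower is better) and weighted by the operator's risk
    # perception; weights and scales are assumptions.
    scaled = (mean_cost / 80.0, cost_std / 10.0, p_short / 0.3)
    return -sum(wi * si for wi, si in zip(w, scaled))

print("optimal strategy utility:    ", round(utility(*attrs_opt), 3))
print("risk-averse strategy utility:", round(utility(*attrs_safe), 3))
```

With these assumed weights the risk-averse strategy scores higher, which is exactly the kind of trade-off the formal framework is meant to expose rather than decide by default.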

Collaboration


T. Agami Reddy's top co-authors and their affiliations.

Leslie K. Norford, Massachusetts Institute of Technology

William P. Bahnfleth, Pennsylvania State University

Steven Snyder, Arizona State University

Wei Jiang, Pacific Northwest National Laboratory

Klaus Kaae Andersen, Technical University of Denmark