Scott J. Duncan
Georgia Institute of Technology
Publications
Featured research published by Scott J. Duncan.
Procedia Computer Science | 2013
Joosung Kang; Scott J. Duncan; Dimitri N. Mavris
Abstract Apart from the potential to reduce emissions and reliance on petroleum, large-scale adoption of electric vehicles (EVs) presents an opportunity to provide electric energy storage (EES)-based ancillary services, e.g., smoothing intermittency due to renewable energy sources (RESs) and supporting grid-wide frequency stability. However, the potential benefits of EVs are accompanied by a number of challenges. In particular, the charging of EVs can impact the distribution grid because they consume a large amount of electrical energy and can exacerbate undesirable peaks in consumption. To mitigate such issues, in this paper, we present a concept of real-time scheduling (RTS) techniques for EV charging that minimizes impacts to the power grid and guarantees the satisfaction of individual consumers' charging requirements. Simulations using a model of the RTS charging concept show its advantages compared to existing "valley-filling" techniques from the literature. For this initial proof of principle, the presented model assumes a centralized control scheme; the simulation environment for this scheme is the precursor to an agent-based concept for a decentralized scheme. The implications of this work for systems engineering are discussed.
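The baseline "valley-filling" idea that the paper's RTS concept is compared against can be illustrated with a minimal sketch: pour the EV charging energy into the hours where total load is lowest, subject to a per-hour charging-rate cap. The function name, increment size, and rate cap here are illustrative assumptions, not the paper's model.

```python
def valley_fill(base_load, ev_energy, max_rate, step=0.1):
    """Greedy valley-filling: allocate EV charging energy to the hours
    with the lowest total load, capped at max_rate energy per hour.
    Returns the new hourly load curve."""
    load = list(base_load)
    added = [0.0] * len(load)  # energy already scheduled per hour
    remaining = ev_energy
    while remaining > 1e-9:
        open_hours = [i for i in range(len(load)) if added[i] < max_rate]
        if not open_hours:  # every hour is at its rate cap
            break
        h = min(open_hours, key=lambda i: load[i])  # deepest valley
        alloc = min(step, max_rate - added[h], remaining)
        load[h] += alloc
        added[h] += alloc
        remaining -= alloc
    return load
```

Filling the two valley hours of a load curve like `[4, 1, 1, 4]` with 2 units of EV energy levels them toward the shoulders without raising the existing peak.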
Design Automation Conference | 2006
Jason Matthew Aughenbaugh; Scott J. Duncan; Christiaan J.J. Paredis; Bert Bras
There is growing acceptance in the design community that two types of uncertainty exist: inherent variability and uncertainty that results from a lack of knowledge, which is variously referred to as imprecision, incertitude, irreducible uncertainty, and epistemic uncertainty. There is much less agreement on the appropriate means for representing and computing with these types of uncertainty. Probability bounds analysis (PBA) is a method that represents uncertainty using upper and lower cumulative probability distributions. These structures, called probability boxes or just p-boxes, capture both variability and imprecision. PBA includes algorithms for efficiently computing with these structures under certain conditions. This paper explores the advantages and limitations of PBA in comparison to traditional decision analysis with sensitivity analysis in the context of environmentally benign design and manufacture. The example of the selection of an oil filter involves multiple objectives and multiple uncertain parameters. These parameters are known with varying levels of uncertainty, and different assumptions about the dependencies between variables are made. As such, the example problem provides a rich context for exploring the applicability of PBA and sensitivity analysis to making engineering decisions under uncertainty. The results reveal specific advantages and limitations of both methods. The appropriate choice of an analysis depends on the exact decision scenario.
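The p-box structure can be sketched for the simple case of interval-valued data: the lower CDF bound at a point x counts only intervals that lie entirely at or below x, while the upper bound counts every interval that could contain a value at or below x. This is a sketch of the empirical p-box idea, not the PBA algorithms the paper evaluates; names are illustrative.

```python
def pbox_cdf_bounds(intervals, x):
    """Empirical p-box from interval-valued data: returns (lower, upper)
    bounds on P(X <= x). The lower bound counts intervals whose upper
    endpoint is <= x; the upper bound counts intervals whose lower
    endpoint is <= x."""
    n = len(intervals)
    lower = sum(1 for lo, hi in intervals if hi <= x) / n
    upper = sum(1 for lo, hi in intervals if lo <= x) / n
    return lower, upper
```

The gap between the two bounds reflects imprecision (lack of knowledge), while the shape of each bound reflects variability across the data, which is exactly the separation the p-box is designed to preserve.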
ASME International Conference on Fuel Cell Science, Engineering and Technology (FuelCell), collocated with the ASME International Conference on Energy Sustainability | 2014
Scott J. Duncan; Michael Balchanos; Woongje Sung; Juhyun Kim; Yongchang Li; Yanal Issac; Dimitri N. Mavris; Adam Coulon
Researchers at Georgia Tech (GT) have recently begun the GT Smart Energy Campus initiative, which combines campus energy metering data with physics-based modeling and simulation to create an integrated analysis environment for campus energy. The environment consists of a digital representation of campus, which supports situational awareness, as well as a virtual test bed for analyzing emerging energy technologies and future scenarios. The first year of the initiative has focused on evaluating campus energy metering data using visual analytics and statistical analysis techniques. Data analysis is presented as having value for two main uses: (1) as attention-directing information to help system operators diagnose anomalies and (2) as a precursor to modeling and simulation (M&S) in future phases of the Smart Energy Campus initiative. The environment is explained through an initial study scoped to the campus thermal energy generation and distribution systems. Furthermore, a modeling and simulation approach leveraging the Modelica M&S language is described, and preliminary results in using it to represent the campus chilled water system are presented.
Procedia Computer Science | 2013
Satya S. Pogaru; Michael Z. Miller; Scott J. Duncan; Dimitri N. Mavris
When modeling and simulating a novel system to be designed, a modeler defines design variables, i.e., those parameters pertaining to the system to be realized, as well as modeling & simulation variables (M/SV), i.e., parameters regarding how the system (as an abstraction of reality) should be modeled and simulated. In this paper, the authors examine the influence of M/SV for a specific case of the conceptual design of a demand response (DR) program. DR is a proposed Smart Grid capability that can be implemented by a utility into an electricity distribution grid. M/SV considered include simulation time-step, number of electricity consumers, and seed variables used in modeling stochastic behavior. The influence of these variables on the ability of the DR simulation environment to produce accurate load curves and peaks is analyzed. For some M/SV, it is shown that increased fidelity offers diminishing returns for the greater computation time required. Quantification of the influence of M/SV is used to support discussion and to identify important considerations when modeling large scale DR past the conceptual design stage.
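The time-step effect can be illustrated with a toy example: averaging a fine-grained load series over coarser simulation steps smears out short peaks, so a too-coarse time-step underestimates the peak the DR program is meant to manage. This is an illustrative sketch, not the paper's DR simulation environment.

```python
def peak_at_resolution(load, step):
    """Coarsen a fine-grained load series by averaging over windows of
    `step` samples, then return the peak of the coarsened curve."""
    coarse = [sum(load[i:i + step]) / len(load[i:i + step])
              for i in range(0, len(load), step)]
    return max(coarse)
```

For a series with a single one-sample spike, the apparent peak drops monotonically as the window widens, which is the kind of fidelity-versus-accuracy tradeoff the abstract quantifies.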
International Conference on System of Systems Engineering | 2011
Scott J. Duncan; Kelly Griendling; Dimitri N. Mavris
To meet increasing demand worldwide for electricity, efforts are underway to augment today's electrical grid infrastructure with enhanced information technology (IT), creating a Smart Grid capable of advanced monitoring, information exchange, analytics, and control. Because of the nature of the electrical grid, the stakeholders that control and use it, and the interconnectedness of proposed IT solutions, Smart Grid is a system-of-systems (SoS) that warrants new approaches to design. In this paper, the Relational-Oriented Systems Engineering and Technology Tradeoff Analysis (ROSETTA) Environment is assessed in light of Smart Grid SoS design. The specific example considered is a demand response (DR) problem of monitoring and leveling peak electricity demand. DR is first presented as an SoS design problem, comprising multiple, managerially independent systems (both human and technical) and subject to a variety of scenarios. Through discussion of the DR example, the intent and structure of ROSETTA is elucidated and initially considered for relevance to Smart Grid SoS design decisions.
ASME International Conference on Fuel Cell Science, Engineering and Technology (FuelCell), collocated with the ASME International Conference on Energy Sustainability | 2014
Linyu Zhang; Yongchang Li; Scott J. Duncan; Juhyun Kim; Dimitri N. Mavris
An accurate building energy technology portfolio evaluation approach is needed that integrates physics-based models and business case analysis. Open source, parametric building modeling tools have recently matured to enable system-level building energy analysis at high fidelities. It is observed that these modeling tools usually only analyze energy savings and are not concerned with other criteria often factored into the choice of an energy technology portfolio. This paper presents an approach to constructing a parametric, physics-based, building-specific, business case analysis tool for quantifying multi-criteria performance of building energy technology portfolios. The resulting environment, which is used to build up a portfolio step-by-step and analyze performance trades, is explained through a case study. The application presented is for a building energy retrofit, comparing building energy consumption before and after application of technologies from a set of contenders, but it can be extended to the design of new buildings.
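A minimal example of the business-case layer such a tool adds on top of energy-savings analysis is net present value: discount each year's energy-cost savings and subtract the retrofit's upfront capital cost. The parameter names are illustrative; the paper's multi-criteria analysis is broader than this single metric.

```python
def npv(annual_savings, capex, rate, years):
    """Net present value of an energy retrofit: discounted annual
    energy-cost savings minus the upfront capital expenditure."""
    return sum(annual_savings / (1 + rate) ** t
               for t in range(1, years + 1)) - capex
```

A positive NPV means the discounted savings over the analysis horizon exceed the capital cost, one of several criteria that could sit alongside energy savings when comparing portfolio candidates.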
48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit | 2012
Nicholas A. Molino; Jonathan S. Sands; Scott J. Duncan; Eriks Osvalds; Dimitri N. Mavris
A new quality indicator for Pareto efficient frontiers has been developed in order to map desired Pareto optimal set attributes to a single metric. This new Pareto quality metric was tested to see if it could be effectively utilized along with Response Surface Methodology (RSM) techniques to tune the parameter settings of multiobjective optimization schemes. As a working example, the parameter tuning was applied to a multiobjective genetic algorithm optimizing the engine cycle design of a geared turbofan with N+2 level technology on a 300 passenger commercial aircraft. The Environmental Design Space (EDS) tool, which analyzes aircraft performance, source noise, and exhaust emissions, was used for the cycle design. The goal of the multiobjective optimization was to simultaneously minimize total fuel burn and cumulative noise for a specified mission profile. This research has concluded that tuning the parameter settings of an optimizer can provide significant benefits in generating Pareto optimal solutions, especially when applied to an engine cycle design. The Pareto quality indicator developed has shown promise in assisting the parameter tuning effort, but must be refined through further research and development.
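The abstract does not specify the indicator's form; one standard way to collapse a Pareto front into a single number is the two-objective hypervolume, sketched here under the assumption that both objectives (e.g., fuel burn and noise) are minimized and a reference point dominates no front member.

```python
def hypervolume_2d(front, ref):
    """2D hypervolume: area dominated by the front and bounded by the
    reference point `ref`, with both objectives minimized.
    `front` is a list of (f1, f2) points."""
    pts = sorted(front)  # ascending in f1
    nondom = []
    for p in pts:
        if not nondom or p[1] < nondom[-1][1]:  # drop dominated points
            nondom.append(p)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nondom:  # sum the rectangular slices under ref
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

A larger hypervolume indicates a front that is closer to the ideal point and better spread, which makes it usable as a single response in an RSM-style parameter-tuning study.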
ASME 2009 3rd International Conference on Energy Sustainability collocated with the Heat Transfer and InterPACK09 Conferences | 2009
Matthew A. Prior; Ian C. Stults; Matthew J. Daskilewicz; Scott J. Duncan; Brian J. German; Dimitri N. Mavris
The demand for greater efficiency, lower emissions, and higher reliability in combined cycle power plants has driven industry to use higher-fidelity plant component models in conceptual design. Normally used later in preliminary component design, physics-based models can also be used in conceptual design as the building blocks of a plant-level modeling and simulation (M&S) environment. Although better designs can be discovered using such environments, the linking of multiple high-fidelity models can create intractably large design variable sets, long overall execution times, and model convergence limitations. As a result, an M&S environment comprising multiple linked high-fidelity models can be prohibitively large and/or slow to evaluate, discouraging design optimization and design space exploration. This paper describes a design space exploration methodology that addresses the aforementioned challenges. Specifically, the proposed methodology includes techniques for the reduction of total model run-time, reduction of design space dimensionality, effect visualization, and identification of Pareto-optimal power plant designs. An overview of the methodology’s main steps is given, leading to a description of the benefit and implementation of each step. Major steps in the process include design variable screening, efficient design space sampling, and surrogate modeling, all of which can be used as precursors to traditional optimization techniques. As an alternative to optimization, a Monte Carlo based method for design space exploration is explained conceptually. Selected steps from the methodology are applied to a fictional but representative example problem of combined cycle power plant design. The objective is to minimize cost of electricity (COE), subject to constraints on base load power and acquisition cost. 
This example problem is used to show relative run-time savings from using the methodology's techniques compared to the alternative of performing optimization without them. The example additionally provides a context for explaining design space visualization techniques that are part of the methodology.
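The variable-screening step can be sketched as a simple one-at-a-time sensitivity ranking: perturb each design variable from a baseline plant design and rank variables by the magnitude of the response change, keeping only the influential ones for the expensive downstream analysis. This is an illustrative sketch; the abstract does not specify the screening technique used.

```python
def screen_variables(model, baseline, deltas):
    """One-at-a-time screening: perturb each design variable by its
    delta from the baseline and rank variables by the magnitude of
    the resulting change in the model response."""
    y0 = model(baseline)
    effects = {}
    for name, delta in deltas.items():
        x = dict(baseline)       # copy so perturbations don't accumulate
        x[name] += delta
        effects[name] = abs(model(x) - y0)
    return sorted(effects, key=effects.get, reverse=True)
```

Dropping the variables at the tail of the ranking is what shrinks the design space dimensionality before sampling and surrogate modeling.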
ASME 2007 International Mechanical Engineering Congress and Exposition | 2007
Christiaan J.J. Paredis; Bert Bras; Scott J. Duncan
In this article, Information-Gap Decision Theory (IGDT), an approach to robust decision making under severe uncertainty, is applied to decisions about a remanufacturing process. IGDT is useful when only a nominal estimate is available for an uncertain quantity; the amount that estimate differs from the quantity's actual value is not known. The decision strategy in IGDT involves maximizing robustness to uncertainty of unknown size, while still guaranteeing no worse than some "good enough" critical level of performance, rather than optimal performance. The design scenario presented involves selecting the types of technologies and number of stations to be used in a remanufacturing process. The profitability of the process is affected by severe uncertainty in the demand for remanufactured parts. Because nothing is known about demand except an estimate based on a different product from a previous year, info-gap theory is used to determine an appropriate tradeoff between performance and robustness to severe uncertainty. The most preferred design is seen to switch depending on the choice of critical performance level. Implications of findings, as well as future research directions, are discussed.
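The robustness tradeoff in IGDT can be sketched with a hypothetical profit model: the robustness of a design is the largest uncertainty horizon h such that even the worst-case demand within a fraction h of the nominal estimate still yields at least the critical profit. The price, cost, and demand model below are illustrative assumptions, not the paper's remanufacturing model.

```python
def robustness(q, d_nom, r_crit, price=10.0, cost=4.0, h_max=1.0, steps=1000):
    """Info-gap robustness: largest horizon h in [0, h_max] such that
    the worst-case profit over demands within +/- h*d_nom of the
    nominal estimate still meets the critical level r_crit."""
    def profit(capacity, demand):
        # hypothetical model: revenue on units sold, cost on capacity built
        return price * min(capacity, demand) - cost * capacity

    best = 0.0
    for i in range(steps + 1):
        h = h_max * i / steps
        worst = profit(q, d_nom * (1 - h))  # low demand is the worst case here
        if worst >= r_crit:
            best = h
        else:
            break
    return best
```

Lowering the critical level r_crit buys more robustness, and because different designs trade performance for robustness at different rates, the preferred design can switch as r_crit changes, which is the behavior the abstract reports.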
SAE 2006 World Congress & Exhibition | 2006
Scott J. Duncan; Chris Paredis; Berdinus A. Bras
This paper was presented at the Reliability and Robust Design in Automotive Engineering Forum of the SAE 2006 World Congress. Reprinted with permission from SAE paper 2006-01-0273.