Roland R. Mielke
Old Dominion University
Publications
Featured research published by Roland R. Mielke.
Winter Simulation Conference | 1999
Roland R. Mielke
The purpose of this paper is to describe several recent applications of enterprise simulation. An enterprise simulation is a simulation constructed with a top-down view of a business enterprise and intended to serve as a decision support tool for decision makers. Examples are taken from the domains of transportation, urban operations, supply chain management, entertainment, and manufacturing. The objective is to help clarify the meaning of the term enterprise simulation and to promote its use as an important management tool.
IEEE Transactions on Parallel and Distributed Systems | 1993
Sukhamoy Som; Roland R. Mielke; John W. Stoughton
Presents a new data flow graph model for describing the real-time execution of iterative control and signal processing algorithms on multiprocessor data flow architectures. Identified by the acronym ATAMM, for Algorithm to Architecture Mapping Model, the model is important because it specifies criteria for a multiprocessor operating system to achieve predictable and reliable performance. Algorithm performance is characterized by execution time and iteration period. For a given data flow graph representation, the model facilitates calculation of greatest lower bounds for these performance measures. When sufficient processors are available, the system executes algorithms with minimum execution time and minimum iteration period, and the number of processors required is calculated. When only limited processors are available or when processors fail, performance is made to degrade gracefully and predictably; the user is able to specify, off-line, tradeoffs between increased execution time and increased iteration period. The approach to achieving predictable performance is to control the injection rate of input data and to modify the data flow graph precedence relations so that a processor is always available to execute an enabled graph node. An implementation of the ATAMM model in a four-processor architecture based on Westinghouse's VHSIC 1750A Instruction Set Processor is described.
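For timed marked graphs of this kind, the lower bound on the iteration period mentioned above is commonly expressed as a maximum cycle ratio: the period can be no smaller than the largest ratio of total node latency to token count around any directed circuit. The sketch below illustrates that standard bound; it is not the authors' implementation, and the graph, latencies, and initial markings are invented for illustration.

```python
# Minimal sketch (not the authors' code): lower bound on the iteration period of
# a timed marked graph, computed as the maximum cycle ratio
#   iteration period >= max over directed circuits C of
#       (sum of node latencies in C) / (number of tokens on the edges of C).
# The graph, latencies, and initial markings below are invented for illustration.
import networkx as nx

def iteration_period_lower_bound(g: nx.DiGraph) -> float:
    """Nodes carry 'latency' (execution time); edges carry 'tokens' (initial marking)."""
    bound = 0.0
    for cycle in nx.simple_cycles(g):
        latency = sum(g.nodes[v]["latency"] for v in cycle)
        edges = zip(cycle, cycle[1:] + cycle[:1])       # consecutive pairs, wrapping around
        tokens = sum(g.edges[u, v]["tokens"] for u, v in edges)
        if tokens == 0:
            raise ValueError(f"token-free circuit {cycle}: the graph would deadlock")
        bound = max(bound, latency / tokens)
    return bound

if __name__ == "__main__":
    g = nx.DiGraph()
    g.add_nodes_from([("A", {"latency": 3}), ("B", {"latency": 2}), ("C", {"latency": 4})])
    g.add_edge("A", "B", tokens=0)
    g.add_edge("B", "C", tokens=0)
    g.add_edge("C", "A", tokens=2)   # two tokens let two iterations overlap in the loop
    print("iteration period >=", iteration_period_lower_bound(g))   # (3 + 2 + 4) / 2 = 4.5
```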
Winter Simulation Conference | 1998
Roland R. Mielke; Adham Zahralddin; Damanjit Padam; Thomas W. Mastaglio
This paper describes the application of computer simulation to a new and interesting problem area, the management of major theme parks. The operation and management of theme parks is becoming continually more difficult and competitive. The level of customer expectation for excitement and quality of experience is increasing at a much greater rate than the public's willingness to accept admission price increases. As a consequence, theme park management is asked to deliver more services, at a faster pace and with higher quality, with fewer and fewer seasonal employees. VMASC has begun to work with two local theme parks, Water Country USA, located in Williamsburg, Virginia and operated by Anheuser-Busch, and Kings Dominion, located in Richmond, Virginia and operated by Paramount Parks. The objective is to identify management issues and operational problems where simulation may serve as an important tool to assist in the decision-making process.
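As a rough, hypothetical illustration of the kind of question such a study answers, the sketch below estimates the average guest wait at a single attraction from an assumed arrival rate and ride cycle time using a simple single-server queueing simulation. The parks' actual models and parameters are not reflected here; all values are invented.

```python
# Hypothetical illustration: average guest wait at a single-loader attraction,
# estimated with a FIFO single-server queueing simulation (Lindley recursion).
# Arrival rate and ride cycle time are invented for illustration.
import random

def average_wait(arrival_rate, mean_service, guests=100_000, seed=1):
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(guests):
        interarrival = rng.expovariate(arrival_rate)          # time since previous guest
        service = rng.expovariate(1.0 / mean_service)         # ride boarding/cycle time
        wait = max(0.0, wait + service - interarrival)        # Lindley recursion
        total += wait
    return total / guests

# e.g. 0.8 guests/min arriving, 1 min average boarding time -> heavy queueing
print(f"average wait: {average_wait(0.8, 1.0):.1f} min")
```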
Real-Time Systems Symposium | 1990
Sukhamoy Som; Roland R. Mielke; John W. Stoughton
Consideration is given to the development of strategies for predictable performance in homogeneous multicomputer data-flow architectures operating in real time. Algorithms are restricted to the class of large-grained, decision-free algorithms. The mapping of such algorithms onto the specified class of data-flow architectures is realized by a new marked graph model called ATAMM (algorithm to architecture mapping model). Algorithm performance and resource needs are determined for predictable periodic execution of algorithms, which is achieved by algorithm modification and input data injection control. Performance is gracefully degraded to adapt to decreasing numbers of resources. The realization of the ATAMM model on a VHSIC four-processor testbed is described. A software design tool for prediction of performance and resource requirements is described and is used to evaluate the performance of a space surveillance algorithm.
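A hedged sketch of the resource-versus-period tradeoff described above: if the total computing effort per iteration is the sum of node execution times, then sustaining a given iteration period needs at least that effort divided by the period (rounded up) processors, and with fewer processors the period stretches accordingly. This is the standard utilization argument, not necessarily the paper's exact formulation; all names and numbers are illustrative.

```python
# Hedged sketch of the resource bound (a standard utilization argument, not
# necessarily the paper's exact formulation). All values are illustrative.
import math

def min_processors(node_latencies, iteration_period: float) -> int:
    """Fewest processors that can sustain the given iteration period."""
    total_effort = sum(node_latencies)            # computing effort per iteration
    return math.ceil(total_effort / iteration_period)

def degraded_period(node_latencies, processors: int, graph_bound: float) -> float:
    """Smallest iteration period achievable with a reduced processor set."""
    total_effort = sum(node_latencies)
    return max(graph_bound, total_effort / processors)

latencies = [3, 2, 4, 1]                          # illustrative node execution times
print(min_processors(latencies, iteration_period=4.5))              # ceil(10 / 4.5) = 3
print(degraded_period(latencies, processors=2, graph_bound=4.5))    # period stretches to 5.0
```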
International Conference on Distributed Computing Systems | 1988
Roland R. Mielke; John W. Stoughton; Sukhamoy Som
A novel graph-theoretic model for describing the relation between a decomposed algorithm and its execution in a multiprocessor environment is developed. Called ATAMM, the model consists of a set of Petri-net marked graphs that incorporates the general specifications of a data-flow architecture. The model is useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance measures of computing speed and throughput capacity are defined. The ATAMM model is used to develop analytical lower bounds for these parameters.
Mathematical and Computer Modelling | 2004
James F. Leathrum; Roland R. Mielke; Saurav Mazumdar; Reejo Mathew; Y. Manepalli; V. Pillai; R. N. Malladi; J. Joines
A new architecture for simulating intratheater sealift operations is presented. Intratheater sealift operations refer to new strategies proposed for quickly deploying a military force to a theater of war when major seaports are not available. In this strategy, a self-deployable force is transported to a sea-based intermediate staging base (SISB) by conventional cargo transport ships. The SISB is a world-class seaport generally located within 800 miles of the theater of war. At the SISB, cargo is transferred to a new ship platform called the theater support vessel (TSV). TSVs are to be designed to access very small ports located at or near the theater of war. Simulation provides an efficient and cost-effective method for testing these strategies and for evaluating the required new logistics technologies. Should intratheater sealift operations prove viable, the simulation also provides a means to plan and rehearse an exercise. The new simulation architecture is described and example simulation case studies are conducted to demonstrate the capabilities and potential benefits of the approach.
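As a back-of-the-envelope companion to this description, the hypothetical sketch below models a fleet of TSVs shuttling cargo loads from the SISB to a small theater port through a single loading berth, and reports the time to deliver a given cargo demand. The vessel counts, capacities, and times are invented; the paper's simulation architecture is far more detailed.

```python
# Hypothetical back-of-the-envelope model of the sealift concept: a TSV fleet
# shuttles cargo loads from the SISB to a small theater port, sharing one
# loading berth at the SISB. All parameters are invented for illustration.
import heapq

def time_to_deliver(loads, fleet, capacity, load_hrs, round_trip_hrs):
    """Hours until all cargo loads have reached the theater port."""
    ready = [0.0] * fleet                        # times each vessel is back at the SISB
    heapq.heapify(ready)
    berth_free = 0.0
    delivered, finish = 0, 0.0
    while delivered < loads:
        arrive = heapq.heappop(ready)            # next vessel ready to load
        start = max(arrive, berth_free)          # wait for the single loading berth
        depart = start + load_hrs
        berth_free = depart
        delivered += min(capacity, loads - delivered)
        finish = depart + round_trip_hrs / 2     # one-way transit to the theater port
        heapq.heappush(ready, depart + round_trip_hrs)
    return finish

# e.g. 240 loads, 4 TSVs carrying 10 loads each, 6 h to load, 40 h round trip
print(f"{time_to_deliver(240, 4, 10, 6, 40):.0f} hours to deliver the force")
```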
SoutheastCon | 1991
R.L. Jones; John W. Stoughton; Roland R. Mielke
Diagnostic software for analyzing ATAMM (algorithm-to-architecture mapping) based concurrent processing is presented. ATAMM is a Petri-net-based model capable of modeling the execution of computationally complex algorithms on distributed data-flow architectures. The ATAMM multicomputer operating system (AMOS), which enforces the ATAMM rules for predictable multiprocessing, is presented. The software presented, referred to as the analysis tool, evaluates the behavior and performance of an ATAMM-based system by examining the time-tagged AMOS communication events collected in a file during execution. The tool provides automatic and user-interactive measurements of throughput, concurrency, resource utilization, and system overhead. The analysis tool's capabilities are demonstrated by evaluating the simulated execution of a specific algorithm graph for a given set of operating system parameters. Measurements of throughput and overhead are used to assess the effect of the operating system on ideal performance.
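The sketch below gives a hedged impression of the kind of post-processing such an analysis tool performs: from time-tagged execution records it recovers throughput and per-processor utilization. The record format here is invented; AMOS's actual event log format is not reproduced.

```python
# Hedged sketch: recover throughput and processor utilization from time-tagged
# execution records. The (processor, node, start, end) format is invented for
# illustration and is not AMOS's actual event format.
from collections import defaultdict

trace = [
    (0, "A", 0.0, 3.0), (1, "B", 3.0, 5.0), (0, "C", 5.0, 9.0),
    (1, "A", 4.5, 7.5), (0, "B", 9.0, 11.0), (1, "C", 11.0, 15.0),
]
output_node = "C"                         # node whose completion marks one output

span = max(end for *_, end in trace) - min(start for _, _, start, _ in trace)
outputs = [end for _, node, _, end in trace if node == output_node]
throughput = len(outputs) / span          # completed outputs per unit time

busy = defaultdict(float)                 # total busy time per processor
for proc, _, start, end in trace:
    busy[proc] += end - start
utilization = {proc: busy_time / span for proc, busy_time in busy.items()}

print(f"throughput: {throughput:.3f} outputs/unit time")
print("utilization:", {p: round(u, 2) for p, u in utilization.items()})
```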
International Phoenix Conference on Computers and Communications | 1990
Sukhamoy Som; John W. Stoughton; Roland R. Mielke
The algorithm-to-architecture mapping model (ATAMM) is a new marked graph (a class of Petri net) model from which the rules for data and control flow in a homogeneous, multicomputer, data-flow architecture may be defined. This study is concerned with performance modeling for periodic execution of large-grain, decision-free algorithms in such an ATAMM-defined architecture. Major applications are expected to be real-time implementations of control and signal processing algorithms where performance is required to be highly predictable. The computing environment, problem domain, and algorithm execution pattern are described. Performance measures of computing speed and throughput capacity are defined. Performance bounds are established. Resource (computing element) needs are determined for periodic execution of algorithms.
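For the computing-speed measure, the corresponding bound for a decision-free graph is the node-weighted longest path from input to output: the time from accepting an input to producing the matching output cannot be shorter than the critical path. The sketch below computes that bound on an assumed example graph; node names and latencies are illustrative, not taken from the paper.

```python
# Minimal sketch of the computing-speed bound: for a decision-free algorithm
# graph, input-to-output latency is bounded below by the longest node-weighted
# path from source to sink. Graph, names, and latencies are illustrative.
from functools import lru_cache

latency = {"in": 1, "f1": 3, "f2": 2, "f3": 4, "out": 1}
succ = {"in": ["f1", "f2"], "f1": ["f3"], "f2": ["f3"], "f3": ["out"], "out": []}

@lru_cache(maxsize=None)
def longest_to_sink(node: str) -> float:
    """Latency of the longest path from `node` to the output node, inclusive."""
    tails = [longest_to_sink(n) for n in succ[node]]
    return latency[node] + (max(tails) if tails else 0.0)

print("input-to-output time >=", longest_to_sink("in"))   # 1 + 3 + 4 + 1 = 9
```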
International Journal of Simulation and Process Modelling | 2009
Roland R. Mielke; Mark W. Scerbo; Kurt Taylor Gaubatz; Ginger S. Watson
Modelling and Simulation (M&S) is increasingly important in numerous disciplines. M&S also has gained acceptance as a discipline in its own right. There is growing demand for at least two different approaches to M&S graduate education, one path for users and another for developers of M&S. Thus, the traditional department-focused approach is no longer adequate for M&S education. This paper outlines the development of a multidisciplinary approach to M&S graduate education at Old Dominion University. The approach encourages development of a number of M&S programmes, coordinated by university-level oversight, in which all academic colleges participate.
SoutheastCon | 1990
Sukhamoy Som; B. Mandala; Roland R. Mielke; John W. Stoughton
A design tool for performance prediction in homogeneous, multicomputer dataflow architectures operating in real time is discussed. Algorithms are restricted to the class of large-grain, decision-free algorithms. Major applications are expected to be real-time implementation of control and signal processing algorithms, where performance is required to be highly predictable. The mapping of such algorithms onto the specified class of dataflow architectures is realized by a marked graph model called the algorithm to architecture mapping model (ATAMM). Performance measures which determine computing speed and throughput capacity are defined, and the lower bounds for these performance measures are stated. Computing resource needs are determined for predictable periodic execution of algorithms. A software design tool is presented to aid the designer in predicting performance and resource requirements.