Rodion M. Podorozhny
Texas State University
Publications
Featured research published by Rodion M. Podorozhny.
Distributed and Parallel Databases | 2008
Liangzhao Zeng; Anne H. H. Ngu; Boualem Benatallah; Rodion M. Podorozhny; Hui Lei
Process-based composition of Web services has recently gained significant momentum for the implementation of inter-organizational business collaborations. In this approach, individual Web services are choreographed into composite Web services whose integration logic is expressed as a composition schema. In this paper, we present a goal-directed composition framework to support on-demand business processes. Composition schemas are generated incrementally by a rule inference mechanism based on a set of domain-specific business rules enriched with contextual information. In situations where multiple composition schemas can achieve the same goal, the best schema is selected based on the combination of its estimated execution quality and schema quality. By coupling dynamic schema creation and a quality-driven selection strategy in a single framework, we ensure that the generated composite service complies with business rules while being adapted and optimized.
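To make the rule-driven generation and quality-driven selection concrete, here is a small Python sketch; the Rule and Schema structures, the quality estimates, and the weights are invented for illustration and are not the framework described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """Hypothetical domain-specific business rule: when the goal and context match,
    the rule contributes a list of service invocations to the composition schema."""
    goal: str
    context: dict
    services: list

@dataclass
class Schema:
    services: list = field(default_factory=list)
    exec_quality: float = 0.0    # estimated execution quality (e.g., latency, cost)
    schema_quality: float = 0.0  # structural quality of the schema itself

def estimate_exec_quality(services):
    # Placeholder: a real framework would aggregate per-service QoS estimates.
    return 1.0 / (1 + len(services))

def infer_schemas(goal, context, rules):
    """Build candidate schemas from every rule whose goal and context conditions hold
    (a toy stand-in for the incremental rule inference mechanism)."""
    candidates = []
    for rule in rules:
        if rule.goal == goal and all(context.get(k) == v for k, v in rule.context.items()):
            candidates.append(Schema(services=list(rule.services),
                                     exec_quality=estimate_exec_quality(rule.services),
                                     schema_quality=1.0 / len(rule.services)))
    return candidates

def select_best(candidates, w_exec=0.6, w_schema=0.4):
    """Pick the schema with the best weighted combination of the two quality measures."""
    return max(candidates, key=lambda s: w_exec * s.exec_quality + w_schema * s.schema_quality)

# Example use with a made-up travel-booking rule.
rules = [Rule(goal="book_trip", context={"region": "EU"},
              services=["searchFlights", "bookFlight", "bookHotel"])]
best = select_best(infer_schemas("book_trip", {"region": "EU"}, rules))
print(best.services)
```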
International Conference on Service Oriented Computing | 2008
Michael Pierre Carlson; Anne H. H. Ngu; Rodion M. Podorozhny; Liangzhao Zeng
Integrating client and server applications that were not initially designed to interoperate is an increasingly common need. One reason for its popularity is the ability to quickly reconfigure a composite application for the task at hand, both by changing the set of components and by changing the way they are interconnected. Service Oriented Architecture (SOA) has recently become a popular platform in the IT industry for building such composite applications, with the integrated components provided as web services. A key limitation of this approach is that it requires extra programming effort when integrating non-web-service components, which is not cost-effective. Moreover, with the emergence of new standards such as OSGi, the components used in composite applications have grown to include more than just web services. Our work enables progressive composition of non-web-service components such as portlets, web applications, native widgets, legacy systems, and Java Beans. Further, we propose a novel application of semantic annotation, together with a standard semantic web matching algorithm, for finding sets of functionally equivalent components out of a large set of available non-web-service components. Once such a set is identified, the user can drag and drop the most suitable component into an Eclipse-based composition canvas. After a set of components has been selected in this way, they can be connected by data-flow arcs, thus forming an integrated, composite application without any low-level programming or integration effort. We implemented the above progressive composition framework and conducted an experimental study on IBM's Lotus Expeditor, an extension of a service-oriented platform, the Eclipse Rich Client Platform (RCP), that complies with the OSGi standard.
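As a rough illustration of the semantic-matching step, the following Python sketch groups annotated components into functionally equivalent sets; the annotation format, concept URIs, and component names are assumptions, not the paper's actual ontology or matcher.

```python
def equivalent(comp_a, comp_b):
    """Treat two components as functionally equivalent when the ontology concepts
    annotating their inputs and outputs coincide (a toy matching rule)."""
    return (set(comp_a["inputs"]) == set(comp_b["inputs"])
            and set(comp_a["outputs"]) == set(comp_b["outputs"]))

def equivalence_classes(components):
    """Group a catalog of annotated components (portlets, widgets, Java Beans, ...)
    into sets of interchangeable candidates for the composition canvas."""
    classes = []
    for comp in components:
        for cls in classes:
            if equivalent(comp, cls[0]):
                cls.append(comp)
                break
        else:
            classes.append([comp])
    return classes

# Example annotated catalog entries (concept names are made up for illustration).
catalog = [
    {"name": "MapPortlet",  "inputs": {"geo:Address"}, "outputs": {"geo:Map"}},
    {"name": "MapWidget",   "inputs": {"geo:Address"}, "outputs": {"geo:Map"}},
    {"name": "WeatherBean", "inputs": {"geo:Address"}, "outputs": {"wx:Forecast"}},
]
print([[c["name"] for c in cls] for cls in equivalence_classes(catalog)])
```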
Ecological Informatics | 2007
Emery R. Boose; Aaron M. Ellison; Leon J. Osterweil; Lori A. Clarke; Rodion M. Podorozhny; Julian L. Hadley; Alexander E. Wise; David R. Foster
At the dawn of the 21st century, environmental scientists are collecting more data more rapidly than at any time in the past. Nowhere is this change more evident than in the advent of sensor networks able to collect and process (in real time) simultaneous measurements over broad areas and at high sampling rates. At the same time there has been great progress in the development of standards, methods, and tools for data analysis and synthesis, including a new standard for descriptive metadata for ecological datasets (Ecological Metadata Language) and new workflow tools that help scientists to assemble datasets and to diagram, record, and execute analyses. However, these developments, important as they are, are not yet sufficient to guarantee the reliability of datasets created by a scientific process, the complex activity that scientists carry out in order to create a dataset. We define a dataset to be reliable when the scientific process used to create it is (1) reproducible and (2) analyzable for potential defects. To address this problem we propose the use of an analytic web, a formal representation of a scientific process that consists of three coordinated graphs (a data-flow graph, a dataset-derivation graph, and a process-derivation graph) originally developed for use in software engineering. An analytic web meets the two key requirements for ensuring dataset reliability: (1) a complete audit trail of all artifacts (e.g., datasets, code, models) used or created in the execution of the scientific process that created the dataset, and (2) detailed process metadata that precisely describe all sub-processes of the scientific process. Construction of such metadata requires the semantic features of a high-level process definition language. In this paper we illustrate the use of an analytic web to represent the scientific process of constructing estimates of ecosystem water flux from data gathered by a complex, real-time multi-sensor network. We use Little-JIL, a high-level process definition language, to precisely and accurately capture the analytical processes involved. We believe that incorporation of this approach into existing tools and evolving metadata specifications (such as EML) will yield significant benefits to science. These benefits include: complete and accurate representations of scientific processes; support for rigorous evaluation of such processes for logical and statistical errors and for propagation of measurement error; and assurance of dataset reliability for developing sound models and forecasts of environmental change.
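The following Python sketch illustrates, in toy form, the audit-trail idea behind an analytic web: every derived dataset records the process step and input artifacts (with content hashes) that produced it. The classes and file names are hypothetical; the paper uses Little-JIL and its associated tooling, not this code.

```python
import hashlib
import json
from datetime import datetime, timezone

class AnalyticWeb:
    """Toy provenance store: records which process step consumed which artifacts
    to produce each derived dataset, so the derivation can be audited and replayed."""
    def __init__(self):
        self.derivations = []   # dataset-derivation edges
        self.artifacts = {}     # artifact id -> content hash

    def register(self, artifact_id, content):
        self.artifacts[artifact_id] = hashlib.sha256(content).hexdigest()

    def record_step(self, step_name, inputs, output_id, output_content):
        self.register(output_id, output_content)
        self.derivations.append({
            "step": step_name,
            "inputs": {i: self.artifacts[i] for i in inputs},
            "output": output_id,
            "output_hash": self.artifacts[output_id],
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

# Hypothetical fragment of a water-flux workflow.
web = AnalyticWeb()
web.register("raw_sap_flow.csv", b"...raw sensor readings...")
web.record_step("gap_fill", ["raw_sap_flow.csv"], "sap_flow_filled.csv", b"...cleaned data...")
print(json.dumps(web.derivations, indent=2))
```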
Foundations of Software Engineering | 2008
Leon J. Osterweil; Lori A. Clarke; Aaron M. Ellison; Rodion M. Podorozhny; Alexander E. Wise; Emery R. Boose; Julian L. Hadley
This paper describes our experiences in exploring the applicability of software engineering approaches to scientific data management problems. Specifically, this paper describes how process definition languages can be used to expedite production of scientific datasets as well as to generate documentation of their provenance. Our approach uses a process definition language that incorporates powerful semantics to encode scientific processes in the form of a Process Definition Graph (PDG). The paper describes how execution of the PDG-defined process can generate Dataset Derivation Graphs (DDGs), metadata that document how the scientific process developed each of its product datasets. The paper uses an example to show that scientific processes may be complex and to illustrate why some of the more powerful semantic features of the process definition language are useful in supporting clarity and conciseness in representing such processes. This work is similar in goals to work generally referred to as Scientific Workflow. The paper demonstrates the contribution that software engineering can make to this domain.
International Conference on Coordination Models and Languages | 1999
Rodion M. Podorozhny; Barbara Staudt Lerner; Leon J. Osterweil
Precise specification of resources is important in activity and agent coordination, as the scarcity or abundance of resources can make a considerable difference in how best to coordinate tasks and actions. That being the case, we propose the use of a resource model. We observe that past work on resource modeling does not meet our needs, as the models tend to be either too informal (as in management resource modeling) to support definitive analysis, or too narrow in scope (as in operating system resource modeling) to support specification of the diverse tasks we have in mind. In this paper we introduce a general approach and some key concepts in a resource modeling and management system that we have developed. We also describe two experiences in applying our resource system: in one case we added resource specifications to a process program; in the other we used resource specifications to augment a multiagent scheduling system. In both cases, the result was far greater clarity and precision in the process and agent coordination specifications, and validation of the effectiveness of our resource modeling and management approach.
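A minimal Python sketch of the general idea, with hypothetical Resource and Task classes rather than the authors' resource modeling and management system: resource specifications declare capabilities and capacity, and allocation succeeds only when a task's requirements are covered by a resource with spare capacity.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capabilities: set    # e.g., {"compile", "test"}
    capacity: int        # how many concurrent tasks it can serve
    in_use: int = 0

@dataclass
class Task:
    name: str
    requires: set        # capabilities the task needs

def allocate(task, resources):
    """Assign the first resource whose declared capabilities cover the task's
    requirements and that still has spare capacity; return None if resources are scarce."""
    for res in resources:
        if task.requires <= res.capabilities and res.in_use < res.capacity:
            res.in_use += 1
            return res
    return None

# Example pool and request (names are made up for illustration).
pool = [Resource("build-server", {"compile", "test"}, capacity=2),
        Resource("reviewer",     {"review"},          capacity=1)]
print(allocate(Task("run unit tests", {"test"}), pool))
```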
IEEE Transactions on Automation Science and Engineering | 2010
Leon J. Osterweil; Lori A. Clarke; Aaron M. Ellison; Emery R. Boose; Rodion M. Podorozhny; Alexander E. Wise
With the availability of powerful computational and communication systems, scientists now readily access large, complicated derived datasets and build on those results to produce, through further processing, yet other derived datasets of interest. The scientific processes used to create such datasets must be clearly documented so that scientists can evaluate their soundness, reproduce the results, and build upon them in responsible and appropriate ways. Here, we present the concept of an analytic web, which defines the scientific processes employed and details the exact application of those processes in creating derived datasets. The work described here is similar to work often referred to as "scientific workflow," but emphasizes the need for a semantically rich, rigorously defined process definition language. We illustrate the information that comprises an analytic web for a scientific process that measures and analyzes the flux of water through a forested watershed. This is a complex and demanding scientific process that illustrates the benefits of using a semantically rich, executable language for defining processes and for supporting automatic creation of process provenance metadata.
Integrated Formal Methods | 2007
Rodion M. Podorozhny; Sarfraz Khurshid; Dewayne E. Perry; Xiaoqin Zhang
Multi-agent systems provide an increasingly popular solution in problem domains that require management of uncertainty and a high degree of adaptability. Robustness is a key design criterion in building multi-agent systems. We present a novel approach for the design of robust multi-agent systems. Our approach constructs a model of the design of a multi-agent system in Alloy, a declarative language based on relations, and checks the properties of the model using the Alloy Analyzer, a fully automatic analysis tool for Alloy models. While several prior techniques exist for checking properties of multi-agent systems, the novelty of our work is that we can check properties of coordination and interaction, as well as properties of complex data structures that the agents may internally be manipulating or even sharing. This is the first application of Alloy to checking properties of multi-agent systems. Such unified analysis has not been possible before. We also introduce the use of a formal method as an integral part of testing and validation.
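The paper's models are written in Alloy and analyzed with the Alloy Analyzer; as a loose analogue only, this Python sketch exhaustively enumerates a small bounded state space of an invented two-agent token protocol and checks a coordination invariant, which conveys the flavor of bounded, fully automatic analysis (it is not the authors' Alloy model).

```python
from itertools import product

AGENTS = ("A", "B")
MODES = ("idle", "working")
HOLDERS = (None, "A", "B")   # who, if anyone, holds the shared token

def states():
    """Enumerate every assignment of modes and token holder within the bound."""
    for m_a, m_b, holder in product(MODES, MODES, HOLDERS):
        yield {"A": m_a, "B": m_b, "holder": holder}

def reachable(state):
    # Toy protocol rule: an agent may only be 'working' while it holds the token.
    return all(state[a] == "idle" or state["holder"] == a for a in AGENTS)

def invariant(state):
    # Coordination property: the two agents never work on the shared item at once.
    return not (state["A"] == "working" and state["B"] == "working")

violations = [s for s in states() if reachable(s) and not invariant(s)]
print("counterexamples:", violations)   # expect [] if the protocol rule is sound
```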
International Conference on Software Engineering | 1997
Rodion M. Podorozhny; Leon J. Osterweil
This paper describes experimentation aimed at making the comparison of software design methodologies (SDMs) more of an exact science. Our aim is to lay the foundations for this more exact science by establishing fixed methods and conceptual frameworks that are able to assure that comparison efforts will yield predictable, reproducible results. Earlier papers have proposed the use of a systematic process to compare SDMs. This process assumes that the comparison will be done relative to a fixed standard SDM feature classification schema and with the use of a fixed formalism for modeling the SDMs. Early experiments with this approach have yielded interesting SDM comparisons, but have raised questions about how sensitive these results might be to the choice of modeling formalism. In this paper we study this sensitivity by varying the choice of modeling formalism. We describe an experiment in which we fix a pair of SDMs and then use two different formalisms to obtain two different comparisons of that pair of SDMs. We then compare the comparisons. Our results suggest that comparison results may be relatively insensitive to differences in modeling formalisms. This paper also suggests an approach to further experimentation.
IEEE Systems Journal | 2018
Xi Zheng; Christine Julien; Rodion M. Podorozhny; Franck Cassez; Thierry Rakotoarivelo
Our reliance on cyber–physical systems (CPSs) is increasingly widespread, but scalable methods for the analysis of such systems remain a significant challenge. Runtime verification of CPSs provides a reasonable middle ground between formal verification and simulation approaches, but it comes with its own challenges. A runtime verification system must run directly on the deployed application. In the CPS domain, it is therefore critical that a runtime verification system exhibits low overhead and good scalability so that the verification does not interfere with the analyzed CPS application. In this paper, we introduce Brace, a runtime verification system whose focus is on ensuring these performance qualities for applications in the CPS domain. Brace strives to bound the computation overhead for CPS runtime verification while preserving a high level of monitoring accuracy in terms of the number of false positive and false negative reports. Brace is particularly suitable to systems in which scheduling is distributed across networked CPS components. We evaluate Brace to determine how effectively and efficiently it can detect injected errors in two existing real-life CPS applications with distributed scheduling. Our results demonstrate that Brace efficiently detects those errors and a few true bugs and is able to bound both the memory and computation overhead even in systems with large numbers of observed events.
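As an illustration of bounding monitoring overhead (not Brace itself; the event format, window size, and property are assumptions), this Python sketch keeps only a fixed-size window of recent events and checks a predicate over that window after each observation, trading a constant memory footprint for possible false positives or negatives when evidence falls out of the window.

```python
from collections import deque

class BoundedMonitor:
    """Keeps at most `window` recent events, so memory overhead stays constant;
    checks a user-supplied predicate over the window after each new event."""
    def __init__(self, window, check):
        self.events = deque(maxlen=window)
        self.check = check
        self.alarms = 0

    def observe(self, event):
        self.events.append(event)
        if not self.check(self.events):
            # Possible violation; may be spurious if the relevant evidence
            # has already been evicted from the bounded window.
            self.alarms += 1

# Example property: a 'brake' event must appear within 5 events of any 'obstacle' event.
def brake_follows_obstacle(events):
    evs = list(events)
    for i, e in enumerate(evs):
        if e == "obstacle" and "brake" not in evs[i:i + 5] and len(evs) - i >= 5:
            return False
    return True

mon = BoundedMonitor(window=50, check=brake_follows_obstacle)
for e in ["cruise", "obstacle", "cruise", "brake", "cruise"]:
    mon.observe(e)
print("alarms:", mon.alarms)
```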
ACM Transactions on Embedded Computing Systems | 2017
Xi Zheng; Christine Julien; Hongxu Chen; Rodion M. Podorozhny; Franck Cassez
In Cyber-Physical Systems (CPS), cyber and physical components must work seamlessly in tandem. Runtime verification of CPS is essential yet very difficult, due to deployment environments that are expensive, dangerous, or simply impossible to use for verification tasks. A key enabling factor of runtime verification of CPS is the ability to integrate real-time simulations of portions of the CPS into live running systems. We propose a verification approach that allows CPS application developers to opportunistically leverage real-time simulation to support runtime verification. Our approach, termed BraceBind, allows selecting, at runtime, between actual physical processes or simulations of them to support a running CPS application. To build BraceBind, we create a real-time simulation architecture to generate and manage multiple real-time simulation environments based on existing simulation models in a manner that ensures sufficient accuracy for verifying a CPS application. Specifically, BraceBind aims to both improve simulation speed and minimize latency, thereby making it feasible to integrate simulations of physical processes into the running CPS application. BraceBind then integrates this real-time simulation architecture with an existing runtime verification approach that has low computational overhead and high accuracy. This integration uses an aspect-oriented adapter architecture that connects the variables in the cyber portion of the CPS application with either sensors and actuators in the physical world or the automatically generated real-time simulation. Our experimental results show that, with a negligible performance penalty, our approach is both efficient and effective in detecting program errors that are otherwise only detectable in a physical deployment.
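A minimal Python sketch of the binding idea, with hypothetical sensor and adapter classes rather than BraceBind's aspect-oriented adapters: the application reads a variable through an adapter that can be bound, at runtime, either to a physical sensor stub or to a simple real-time simulation of the physical process.

```python
import random
import time

class RealSensor:
    def read(self):
        # In a deployment this would query hardware; here it is a stub.
        return 20.0 + random.random()

class SimulatedSensor:
    """Stand-in for a real-time simulation of the physical process."""
    def __init__(self, start=20.0, drift=0.01):
        self.value, self.drift, self.t = start, drift, time.monotonic()

    def read(self):
        now = time.monotonic()
        self.value += self.drift * (now - self.t)   # advance the simple model in real time
        self.t = now
        return self.value

class SensorAdapter:
    """The CPS application reads through the adapter and never knows whether the
    value came from the physical world or from the simulation."""
    def __init__(self, use_simulation=False):
        self.source = SimulatedSensor() if use_simulation else RealSensor()

    def read_temperature(self):
        return self.source.read()

app_sensor = SensorAdapter(use_simulation=True)   # switch the binding at runtime
print(app_sensor.read_temperature())
```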