Patrick O. Bobbie
University of West Florida
Publication
Featured research published by Patrick O. Bobbie.
Multimedia Tools and Applications | 1998
Cyril U. Orji; Donald A. Adjeroh; Patrick O. Bobbie; Kingsley C. Nwosu
Recent advances in computing technology have brought multimedia information processing to prominence. The ability to digitize, store, retrieve, process, and transport analog information in digital form has changed the dimensions of information handling. Several architectural and network configurations have been proposed for efficient and reliable digital video delivery systems. However, these proposals succeed only in addressing subsets of the whole problem. In this paper, we discuss the characteristics of video services. These include Cable Television, Pay-Per-View, and Video Repository Centers. We also discuss requirements for “Video On Demand” services. With respect to these video services, we analyze two important video properties: image quality and response time. We discuss and present configurations of a Digital Video Delivery System (DVDS) built from three general system components: servers, clients, and connectivities. Pertinent issues in developing each component are also analyzed. We also present an architecture of a DVDS that can support the functionalities that exist in the various video services. Lastly, we discuss data allocation strategies that impact the performance of interactive video on demand (IVOD). We present preliminary results from a study using a limited form of mirroring to support high-performance IVOD.
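The "limited form of mirroring" mentioned above can be pictured with a small sketch. The function below is purely illustrative (the paper's actual allocation scheme is not reproduced here): it gives every video one primary server and replicates only the most popular fraction of videos onto a second server, so that read load for frequently requested titles can be spread out.

```python
# Hypothetical sketch of limited mirroring for IVOD data allocation.
# Not the paper's actual scheme; parameter names are assumptions.

def allocate(videos, num_servers, mirror_fraction=0.2):
    """Assign each video to one primary server (round-robin over a
    popularity ranking); the top mirror_fraction of videos also get
    one mirror copy on a neighboring server.

    videos: list of (name, popularity) pairs.
    Returns a dict mapping name -> list of server indices."""
    ranked = sorted(videos, key=lambda v: v[1], reverse=True)
    num_mirrored = int(len(ranked) * mirror_fraction)
    placement = {}
    for i, (name, _pop) in enumerate(ranked):
        primary = i % num_servers
        copies = [primary]
        if i < num_mirrored and num_servers > 1:
            copies.append((primary + 1) % num_servers)  # mirror copy
        placement[name] = copies
    return placement
```

With full mirroring every video would be duplicated; restricting replication to the popular tail keeps the extra storage cost bounded while still relieving the hot spots that dominate interactive-request traffic.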
Multimedia Systems | 1997
Cyril U. Orji; Patrick O. Bobbie; Kingsley C. Nwosu
As the number of video streams to be supported by a digital video delivery system (DVDS) increases, it becomes clear that reliable and cost-efficient support for a considerable number of video streams (on the order of tens of thousands) depends largely on software capabilities. Even with an optimal hardware configuration, or model, and its associated costs, using software to exploit the underlying hardware capabilities is of paramount importance. Although a number of DVDSs have become operational, their ability to deliver the required services mainly depends on the small number of streams supported and on hardware trade-offs. It is imperative that current software developments account for the eventual scalability of the number of video streams without a commensurate increase in hardware. In this paper, we present strategies for the management of video streams in order to maintain and satisfy their space and time requirements. We use a DVDS architectural model with functionally dichotomized nodes: a single-node partition is responsible for data retrieval, while the remaining partition of nodes accepts user requests, determines object locations, and routes requests through the network that connects both partitions. We present a detailed analysis of the issues related to queuing I/O requests and data buffering. The discussion includes the requirements for arranging and scheduling I/O requests and data buffers, with the objective of guaranteeing the required data availability rates for continuous media display.
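One common way to arrange I/O requests so that continuous-media deadlines are met is to serve them in order of earliest display deadline. The sketch below is an assumed illustration of that idea, not the paper's exact scheduling policy:

```python
import heapq

# Illustrative deadline-ordered I/O request queue (assumed policy, not the
# paper's exact algorithm): the most urgent block is always retrieved first,
# which helps guarantee data-availability rates for continuous media display.

class IOQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so requests with equal deadlines stay FIFO

    def submit(self, deadline, stream_id, block):
        """Queue a request to fetch `block` for `stream_id` by `deadline`."""
        heapq.heappush(self._heap, (deadline, self._seq, stream_id, block))
        self._seq += 1

    def next_request(self):
        """Pop and return (stream_id, block) of the most urgent request."""
        _deadline, _seq, stream_id, block = heapq.heappop(self._heap)
        return stream_id, block
```

A real DVDS retrieval node would combine such deadline ordering with disk-geometry considerations (e.g., batching requests per cylinder), but the heap captures the core timing constraint.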
Journal of Systems and Software | 1991
Patrick O. Bobbie; Mike P. Papazoglou
A knowledge base (KB) is a collection of factual information pertaining to the objects of specialized domains or application areas. KB information may be acquired and represented using language paradigms which are based on formalisms of predicate calculus. Usually, the domains are not necessarily distinct, due to the interrelatedness of the components of the problem or the interdependency of the objects. This interdependency can generate long search paths or reference chains among the KB objects, particularly in large KBs. However, KB data can be reorganized into groups or clusters using common relational information of the data objects. The reorganization process isolates the data into clusters and localizes the interdependency within the clusters. The clusters therefore offer opportunities for mapping the data into distributed or parallel processing environments to facilitate computational efficiency. This article focuses on methods for structuring, partitioning, and clustering logic-based KB data (rules and facts) for distributed computations.
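A hedged sketch of one way such clustering can work: treat each rule as the set of predicate symbols it mentions, and merge rules that share a symbol. This is an illustration of the general idea only; the article's actual partitioning method differs in detail.

```python
# Cluster logic rules by shared predicate symbols using union-find.
# Illustrative only; rule names and predicate names are assumptions.

def cluster_rules(rules):
    """rules: dict mapping rule name -> set of predicate symbols it uses.
    Returns a list of clusters (sets of rule names)."""
    parent = {r: r for r in rules}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Union any two rules that mention the same predicate symbol.
    owner = {}  # predicate symbol -> first rule seen using it
    for r, preds in rules.items():
        for p in preds:
            if p in owner:
                union(r, owner[p])
            else:
                owner[p] = r

    clusters = {}
    for r in rules:
        clusters.setdefault(find(r), set()).add(r)
    return list(clusters.values())
```

Rules that end up in the same cluster are exactly those connected through chains of shared predicates, so inter-cluster linkages are weak and each cluster can be placed on a separate processing node.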
International Journal of Software Engineering and Knowledge Engineering | 1992
Joseph E. Urban; Patrick O. Bobbie
The fundamental rationale for the increased use of CASE tools by both large and medium enterprises is the belief that CASE tools improve productivity and system quality. The development of CASE environments has evolved over several years. Users are demanding high-level, domain-specific interfaces to applications, easy-to-use systems, systems that offer increases in productivity/cost ratios, flexibility in multiparadigm tool usage, and systems which are modular, portable, and robust. To meet such far-ranging needs, software engineering research has become a large-scale endeavor. Thus, CASE development has become the concerted effort of academia, government, and industry. In this paper, the academic research effort on CASE development is discussed. Specifically, the perspective of the paper is on the effect that Undergraduate Software Engineering (USE) education has had, and can have, on the ability to develop timely and quality software tools. The focus of the paper is dichotomized as follows: (1) the impact of USE education on current techniques for developing CASE tools and a measurement of current CASE technology transfer and (2) the qualitative component(s) of USE education which will help in advancing tool development in the next decade.
IEEE Transactions on Applications and Industry | 1989
Patrick O. Bobbie; Joseph E. Urban
The authors discuss a set of solutions to the problem of modeling the relationships between the entities of large-scale software processes and the development environments using matrix models. Formal techniques for developing project management specifications from descriptions of the attributes and relationships between software process entities are addressed. The attributes and the relationships of the entities are specified at the abstract, conceptual level, stored in a knowledge base, and further modeled as matrices. The discussion includes an algorithmic method for decomposing the models into subprocesses. The goal is to provide a definition and an integration tool for understanding the relationships between the notions of large software processes and a supporting environment early in the life cycle.
International Journal of Software Engineering and Knowledge Engineering | 1991
Patrick O. Bobbie
This article presents a methodology for eliciting and verifying the correctness of domain knowledge using propositional logic. The domain knowledge is further structured and organized into subdomains of objects to facilitate automatic and rapid development of design prototypes. The philosophy of the design paradigm is threefold: 1) focus the development of software requirements specifications on a set of objects and a set of relationships; 2) establish theorems about the interrelationships of the objects, prove the correctness of the theorems, construct software architectural models based on the transitivity of the theorems, and decompose the models into clusters of objects; and 3) employ object-oriented design techniques to generate prototypes of object classes from the resultant clusters. A prototype generator has been implemented to realize these goals. The significant contributions of this paper are: 1) limiting the contents of the specifications to objects and relationships and mapping this dual-basis approach into formalisms of logic to derive and verify the abstract interdependencies of the objects; and 2) modeling, decomposing, and clustering the objects into common classes to facilitate a modular design.
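The role that transitivity plays above can be made concrete with a minimal example (an assumed illustration, not the paper's prototype generator): given a transitive relation between objects, new interdependencies are derived by closing the relation, i.e., if (a, b) and (b, c) hold, then (a, c) must also hold.

```python
# Compute the transitive closure of a binary relation over objects.
# Illustrative sketch; object names below are hypothetical.

def transitive_closure(pairs):
    """pairs: set of (a, b) tuples meaning 'a depends on b'.
    Returns the closure: every dependency derivable by transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # derive (a, d) from (a, b), (b, d)
                    changed = True
    return closure
```

The derived pairs are exactly the abstract interdependencies that a clustering step can then group into common object classes.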
International Conference on Tools with Artificial Intelligence | 1999
Clement S. Allen; Sara Stoecklin; Patrick O. Bobbie; Qian Chen
A spoken dialogue interface allows a user to interact with a computer application using speech. The user engages in a conversation with the application to achieve some goal, for example to obtain travel information or to book theatre tickets. In this paper, we describe an architecture and development environment for designing distributed spoken dialogue interfaces. A distributed spoken dialogue interface allows multiple users, distributed throughout a computer network, to interact with an application using speech. With a distributed spoken dialogue interface, several users engage in a conversation with the application, at different times, to achieve some goal. Our approach to distributed spoken dialogue interfaces is based on the idea of an intelligent agent that coordinates the activities of the multiple users interacting with the application. To support our model of distributed spoken dialogues, we have created a software development environment, JSBB, that can be used to design both distributed and non-distributed spoken dialogue interfaces.
Knowledge Based Systems | 1990
Patrick O. Bobbie
Knowledge-base (KB) data are constructed from domains and subdomains of specific problem or application areas. Usually, the domains are not necessarily distinct, because the original problem may have many interrelated components. As a result, the processing of the data becomes lengthy and unwieldy. However, KB data can be reorganized into groups or clusters using common relational information of the data objects. The reorganization process isolates the data and localizes the interdependency within the clusters, leaving weak linkages between clusters. The clusters therefore offer opportunities for mapping the data into distributed or parallel processing environments to facilitate computational efficiency. The focus of this paper is on methods for structuring, partitioning, and clustering KB data for distributed computations.
International Journal of Software Engineering and Knowledge Engineering | 1998
Owusu-Ansah Agyapong; Patrick O. Bobbie
In this paper, we discuss a tool for eliciting domain knowledge (specification) of a decision support system. In particular, we focus on a decision support software system (DSS) which employs domain knowledge of recidivism in the juvenile justice system. Using the elicited domain knowledge, the DSS tool uses deductive reasoning techniques to make inferences and provide suggestive courses of action to support the investigatory functions of police, attorneys, or probation officials. The motivation for developing the system is manifold: (1) the activities of the officials are repetitive and their procedures mostly manual; (2) investigations usually result in large volumes of biographical data; (3) several related case files often need to be linked; and (4) officials seldom have concurrent access to case files, causing delays in resolving cases in the court system; among others. Developing a software system to support the investigation and decision making of criminal cases is in itself a daunting task, which makes the system specification a critical input to the development process. Hence, the correctness of the resultant domain knowledge base and the underlying deductive/support system depends on logically consistent and sound methods. In the paper, we describe the rationale for developing the DSS system, why we focus on the criminal (juvenile) justice system, the methodology for eliciting DSS domain knowledge, and a scenario of what we are implementing as a proof-of-concept system. A series of elicitation sessions which epitomize the DSS system are discussed in the article.
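The deductive reasoning such a DSS applies can be sketched as forward chaining over elicited rules. The rules and facts below are entirely hypothetical, chosen only to show the mechanism, and do not come from the paper:

```python
# Toy forward-chaining engine: fire rules whose antecedents are all known
# until no new conclusions appear. Rules/facts here are invented examples.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (antecedents, consequent).
    Returns the full set of derivable atoms (a fixpoint)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in known and all(a in known for a in antecedents):
                known.add(consequent)
                changed = True
    return known

# Hypothetical recidivism-screening rules, for illustration only.
rules = [
    ({"prior_offense", "repeat_pattern"}, "flag_recidivism_risk"),
    ({"flag_recidivism_risk"}, "suggest_case_review"),
]
facts = {"prior_offense", "repeat_pattern"}
```

Chaining the second rule off the conclusion of the first is what lets the system turn raw case data into suggestive courses of action for an official to review.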
International Symposium on Autonomous Decentralized Systems | 1995
Patrick O. Bobbie
The paper discusses a methodology for capturing, representing, and analyzing software requirements specifications of real-time, autonomous systems. Temporal requirements and constraints often make it difficult to fully capture and analyze the behavior of entities of real-time, autonomous systems at the abstract level and early in the software development process. If high-level domain knowledge and the expected functionalities of the system are specified using first-order logic (and/or temporal logic), the resultant specification provides a sound, logical basis for further analysis and verification early in the life cycle. Formal methods and tools (e.g., propositional and predicate calculus), together with theorem-proving procedures, provide a viable methodological solution to the problem.
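At its simplest, logic-based verification of a specification can check that the stated requirements are mutually consistent, i.e., that some model satisfies them all. The brute-force truth-table check below is a toy illustration (not the paper's tooling), with invented requirement names:

```python
import itertools

# Check propositional consistency of a requirement set by exhaustive
# truth-table search. Requirements and variable names are hypothetical.

def consistent(requirements, variables):
    """requirements: list of callables taking an assignment dict -> bool.
    Returns a satisfying assignment, or None if the set is inconsistent."""
    for values in itertools.product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(req(assignment) for req in requirements):
            return assignment  # a model exists: requirements are consistent
    return None

# Example (invented): a sensor failure must force safe mode, and safe mode
# must exclude full throttle.
reqs = [
    lambda a: (not a["sensor_fail"]) or a["enter_safe_mode"],
    lambda a: not (a["enter_safe_mode"] and a["full_throttle"]),
]
```

Exhaustive search is exponential in the number of variables, so practical tools use theorem provers or SAT solvers, but the abstract question they answer is the same.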