Publications


Featured research published by Z. Kovacs.


International Database Engineering and Applications Symposium | 1998

The integration of product data and workflow management systems in a large scale engineering database application

R. McClatchey; Z. Kovacs; F. Estrella; J.M. Le Goff; G. Chevenier; Nigel Baker; S. Lieunard; S. Murray; T. Le Flour; A. Bazan

At a time when many companies are embracing business process re-engineering and are under pressure to reduce time-to-market, the management of product information from creative design through to manufacture has become increasingly important. Traditionally, design engineers have employed product data management systems to coordinate and control access to documented versions of product designs. However, these systems provide control only at the collaborative design level and are seldom used beyond design. Workflow management systems, on the other hand, are employed to coordinate and support the more complex and repeatable work processes of the production environment. Most commercial workflow products cannot support the highly dynamic activities found both in the design stages of product development and in rapidly evolving workflow definitions. The integration of product data management with workflow management could provide support for product development from initial CAD/CAM collaborative design through to the support and optimisation of production workflow activities. This paper investigates such an integration and proposes a philosophy for the support of product data throughout the full development and production lifecycle.
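To make the proposed integration concrete, here is a minimal Java sketch of the linkage the abstract argues for: workflow activities that operate on controlled design versions, so process history and product history stay connected. All names (DesignVersion, Activity, execute) are invented for illustration and are not taken from the paper.

```java
// A minimal sketch, assuming invented names: workflow activities operate on
// controlled design versions, linking process history to product history.
import java.util.ArrayList;
import java.util.List;

class DesignVersion {
    final String productId;
    final int version;
    DesignVersion(String productId, int version) {
        this.productId = productId;
        this.version = version;
    }
}

class Activity {
    final String name;
    Activity(String name) { this.name = name; }

    // Executing an activity against a design yields the next controlled
    // version and records which process step produced it.
    DesignVersion execute(DesignVersion input, List<String> auditTrail) {
        DesignVersion output = new DesignVersion(input.productId, input.version + 1);
        auditTrail.add(name + ": " + input.productId + " v" + input.version
                + " -> v" + output.version);
        return output;
    }
}

public class PdmWfmDemo {
    public static void main(String[] args) {
        List<String> auditTrail = new ArrayList<>();
        DesignVersion design = new DesignVersion("calorimeter-module", 1);
        design = new Activity("thermal-analysis").execute(design, auditTrail);
        design = new Activity("design-review").execute(design, auditTrail);
        auditTrail.forEach(System.out::println);
    }
}
```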


Nuclear Science Symposium and Medical Imaging Conference | 1998

The use of production management techniques in the construction of large scale physics detectors

A. Bazan; G. Chevenier; Florida Estrella; Z. Kovacs; T. Le Flour; J.M. Le Goff; S. Lieunard; R. McClatchey; S. Murray; L. Varga; J.-P. Vialle; M. Zsenei

The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. Facing similar problems, engineers in industry employ so-called Product Data Management (PDM) systems to control access to documented versions of designs, and managers employ so-called Workflow Management Systems (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS, which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction.
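As a rough illustration of the tracking problem described here, the following hypothetical sketch records, for each physical part, the version of its definition at production time, so parts built before a design change remain identifiable. The types (PartDefinition, Part) are invented and are not CRISTAL's.

```java
// Hypothetical sketch: each physical part records the definition version it
// was built to, so parts that predate a design change stay identifiable.
import java.util.ArrayList;
import java.util.List;

class PartDefinition {
    final String type;
    int version = 1;
    PartDefinition(String type) { this.type = type; }
    void revise() { version++; }             // a design change mid-production
}

class Part {
    final String serial;
    final int builtToVersion;                // definition version when produced
    Part(String serial, PartDefinition def) {
        this.serial = serial;
        this.builtToVersion = def.version;
    }
}

public class ProductionTracking {
    public static void main(String[] args) {
        PartDefinition crystal = new PartDefinition("ECAL-crystal");
        List<Part> produced = new ArrayList<>();
        produced.add(new Part("C-0001", crystal));
        crystal.revise();                     // design evolves
        produced.add(new Part("C-0002", crystal));
        // Parts built before the revision are immediately identifiable:
        produced.stream()
                .filter(p -> p.builtToVersion < crystal.version)
                .forEach(p -> System.out.println(
                        p.serial + " predates " + crystal.type + " v" + crystal.version));
    }
}
```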


Database and Expert Systems Applications | 1998

An object model for product and workflow data management

Nigel Baker; A. Bazan; G. Chevenier; F. Estrella; Z. Kovacs; T. Le Flour; J.M. Le Goff; S. Lieunard; R. McClatchey; S. Murray; J.-P. Vialle

In industry, design engineers have traditionally employed Product Data Management systems to coordinate and control access to documented versions of product designs. However, these systems provide control only at the collaborative design level and are seldom used beyond design. Workflow management systems, on the other hand, are employed in industry to coordinate and support the more complex and repeatable work processes of the production environment. The integration of Product Data Management with Workflow Management can provide support for product development from initial CAD/CAM collaborative design through to the support and optimisation of production workflow activities. This paper investigates this integration, proposes a data model for the support of product data throughout the full development and production lifecycle, and demonstrates its usefulness in the construction of large scale high energy physics detectors at the European Particle Physics Laboratory, CERN.
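One way to read "an object model for product and workflow data" is a common base class from which both product and process elements derive, so a single set of versioning and query operations covers both. The sketch below is a speculative toy along those lines; ModelItem, ProductItem and ActivityItem are invented names.

```java
// Speculative toy: product and process elements share one base class, so a
// single set of version/query operations covers both sides of the model.
import java.util.List;

abstract class ModelItem {
    final String name;
    int version = 1;
    ModelItem(String name) { this.name = name; }
}

class ProductItem extends ModelItem {
    ProductItem(String name) { super(name); }
}

class ActivityItem extends ModelItem {
    final List<ProductItem> actsOn;          // products this activity handles
    ActivityItem(String name, List<ProductItem> actsOn) {
        super(name);
        this.actsOn = actsOn;
    }
}

public class UnifiedModelDemo {
    public static void main(String[] args) {
        ProductItem barrel = new ProductItem("barrel-supermodule");
        ActivityItem glue = new ActivityItem("glue-crystals", List.of(barrel));
        // Both kinds of item can be processed uniformly:
        for (ModelItem item : List.of(barrel, glue)) {
            System.out.println(item.name + " v" + item.version);
        }
        System.out.println(glue.name + " acts on " + glue.actsOn.get(0).name);
    }
}
```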


Database and Expert Systems Applications | 1997

Version management in a distributed workflow application

Richard McClatchey; Nigel Baker; W. Harris; J.M. Le Goff; Z. Kovacs; F. Estrella; A. Bazan; T. Le Flour

Most applications of workflow management in business are based on well-defined, repetitively executed processes. Recently, workflow management principles have been applied to the scientific and engineering communities, where activities may dynamically change as the workflows are executed and may often evolve through multiple versions. These domains present new problems, including tracking the progress of the parts on which those activities are executed, particularly if the production process itself is distributed across multiple sites. A major requirement of the system is that the full production history of every part must be permanently stored. This paper reports on the activities of a large-scale distributed scientific workflow management project, entitled CRISTAL (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles), in which a product data management (PDM) system is used to store and maintain versions of workflow meta-objects and in which a lightweight workflow enactment component is implemented for execution. These versions must support the permanent recording of a constantly adapting production process. The workflow enactment component is based on Petri nets, which are designed to embody all aspects of scientific workflow management.
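Since the abstract names Petri nets as the enactment model, a minimal token-game loop may help fix ideas: a transition fires when all of its input places hold tokens, and firing moves tokens to its output places. This is a toy reconstruction of the general technique, not CRISTAL's enactment engine; all names are invented.

```java
// Minimal Petri-net enactment loop (toy reconstruction, invented names):
// fire any enabled transition until the net is quiescent.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Transition {
    final String name;
    final List<String> inputs, outputs;      // place names
    Transition(String name, List<String> inputs, List<String> outputs) {
        this.name = name; this.inputs = inputs; this.outputs = outputs;
    }
    boolean enabled(Map<String, Integer> marking) {
        return inputs.stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
    }
    void fire(Map<String, Integer> marking) {
        inputs.forEach(p -> marking.merge(p, -1, Integer::sum));
        outputs.forEach(p -> marking.merge(p, 1, Integer::sum));
        System.out.println("fired " + name + ", marking = " + marking);
    }
}

public class PetriNetDemo {
    public static void main(String[] args) {
        Map<String, Integer> marking = new HashMap<>(Map.of("received", 1));
        List<Transition> net = List.of(
            new Transition("measure", List.of("received"), List.of("measured")),
            new Transition("assemble", List.of("measured"), List.of("assembled")));
        boolean progressed = true;
        while (progressed) {                 // run until no transition is enabled
            progressed = false;
            for (Transition t : net) {
                if (t.enabled(marking)) { t.fire(marking); progressed = true; }
            }
        }
    }
}
```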


International Database Engineering and Applications Symposium | 1999

The role of meta-objects and self-description in an engineering data warehouse

R. McClatchey; Z. Kovacs; F. Estrella; J.M. Le Goff; L. Varga; M. Zsenei

As enterprises and their data and functions become increasingly complex and distributed, the need for information systems to be both customisable and interoperable also increases. Large scale engineering and scientific projects demand flexibility in order to evolve over time and to interact with external systems (both newly designed and legacy in nature) while retaining a degree of conceptual simplicity. The design of such systems is heavily dependent on the flexibility and accessibility of the data model describing the enterprise's repository. The model must provide interoperability and reusability so that a range of applications can access the enterprise data. Making the repository self-describing, based on meta-object structures, ensures that knowledge about the repository structure is available for applications to interrogate and to navigate around for the extraction of application-specific data. In this paper, a large application is described which uses a meta-object based repository to capture engineering data in a large data warehouse. It shows that adopting a meta-modelling approach to repository design provides support for interoperability and a sufficiently flexible environment in which system evolution and reusability can be handled.
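The self-description idea can be sketched as a repository that stores item structure as interrogable meta-objects, so a generic client can navigate data it was never compiled against. Everything below (MetaObject, Repository, describe) is an invented illustration of the technique, not the system described in the paper.

```java
// Sketch of a self-describing repository: item structure lives in queryable
// meta-objects, so generic clients need no hard-coded types. Invented names.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class MetaObject {                           // describes the shape of stored items
    final String typeName;
    final List<String> attributes;
    MetaObject(String typeName, List<String> attributes) {
        this.typeName = typeName; this.attributes = attributes;
    }
}

class Repository {
    final Map<String, MetaObject> schema = new LinkedHashMap<>();
    final Map<String, Map<String, String>> items = new LinkedHashMap<>();

    void store(String id, MetaObject meta, Map<String, String> values) {
        schema.put(id, meta);
        items.put(id, values);
    }

    // A generic dump that uses only the meta layer -- no hard-coded types.
    void describe(String id) {
        MetaObject meta = schema.get(id);
        System.out.println(id + " is a " + meta.typeName);
        for (String attr : meta.attributes) {
            System.out.println("  " + attr + " = " + items.get(id).get(attr));
        }
    }
}

public class SelfDescribingDemo {
    public static void main(String[] args) {
        MetaObject crystalMeta = new MetaObject("Crystal", List.of("length", "lightYield"));
        Repository repo = new Repository();
        repo.store("C-0001", crystalMeta, Map.of("length", "230mm", "lightYield", "9.8"));
        repo.describe("C-0001");
    }
}
```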


Enterprise Distributed Object Computing | 1999

Patterns for integrating manufacturing product and process models

Z. Kovacs; R. McClatchey; J.-M. Le Goff; Nigel Baker

In building models for manufacturing, product information has most often been handled separately from process information. The integration of product and process models in a unified data model could provide the means by which information could be shared across a manufacturing enterprise throughout the system lifecycle, from design to production. Recently, attempts have been made to integrate these two separate views of systems through identifying common data models. This paper relates description-driven systems to multi-layer architectures and reveals where existing design patterns facilitate the integration of product and process models, where patterns are missing, and where existing patterns require enrichment for this integration. It reports on the construction of a so-called description-driven system which integrates product data management (PDM) and workflow management (WfM) data models through a common meta-model.
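The description-driven approach is close to the classic Type Object design pattern: item descriptions are ordinary, versionable data objects rather than compile-time classes, so new product or process types can be added without code changes. A hypothetical sketch, with invented names:

```java
// Type Object-style sketch of the description-driven idea: the "type" of an
// item lives in data (ItemDescription), not in a compile-time class.
import java.util.List;

class ItemDescription {                      // the "type" lives in data
    final String typeName;
    final List<String> steps;                // process steps this type requires
    ItemDescription(String typeName, List<String> steps) {
        this.typeName = typeName; this.steps = steps;
    }
}

class Item {
    final String id;
    final ItemDescription description;       // each item points to its description
    Item(String id, ItemDescription description) {
        this.id = id; this.description = description;
    }
    void runLifecycle() {
        for (String step : description.steps) {
            System.out.println(id + ": executing " + step);
        }
    }
}

public class DescriptionDrivenDemo {
    public static void main(String[] args) {
        // A new detector component type is introduced purely as data:
        ItemDescription apd = new ItemDescription("APD",
                List.of("gain-test", "irradiation", "mounting"));
        new Item("APD-042", apd).runLifecycle();
    }
}
```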


Computer-Based Medical Systems | 2015

Traceability and Provenance in Big Data Medical Systems

Richard McClatchey; Jetendr Shamdasani; Andrew Branson; Kamran Munir; Z. Kovacs; Giovanni B. Frisoni

Providing an appropriate level of accessibility to, and tracking of, data or process elements in large volumes of medical data is an essential requirement in the Big Data era. Researchers require systems that provide traceability of information through provenance data capture and management to support their clinical analyses. We present an approach that has been adopted in the neuGRID and N4U projects, which aimed to provide detailed traceability to support research analysis processes in the study of biomarkers for Alzheimer's disease, but which is generically applicable across medical systems. To facilitate the orchestration of complex, large-scale analyses in these projects we have adapted CRISTAL, a workflow and provenance tracking solution. The use of CRISTAL has provided a rich environment for neuroscientists to track and manage the evolution of data and workflow usage over time in neuGRID and N4U.
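A bare-bones version of the provenance capture described here is an append-only log of events recording what ran, on which inputs, producing which outputs, from which any result can be traced backwards. The sketch below is illustrative only; ProvenanceEvent and ProvenanceLog are invented names, not the neuGRID/N4U or CRISTAL APIs.

```java
// Minimal provenance-capture sketch (invented names): an append-only event
// log from which any artifact can be traced back to its origins.
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

record ProvenanceEvent(Instant when, String activity,
                       List<String> inputs, List<String> outputs) { }

class ProvenanceLog {
    private final List<ProvenanceEvent> events = new ArrayList<>();

    void record(String activity, List<String> inputs, List<String> outputs) {
        events.add(new ProvenanceEvent(Instant.now(), activity, inputs, outputs));
    }

    // Walk backwards from a result to everything that contributed to it.
    void trace(String artifact) {
        for (int i = events.size() - 1; i >= 0; i--) {
            ProvenanceEvent e = events.get(i);
            if (e.outputs().contains(artifact)) {
                System.out.println(artifact + " produced by " + e.activity()
                        + " from " + e.inputs());
                e.inputs().forEach(this::trace);
            }
        }
    }
}

public class ProvenanceDemo {
    public static void main(String[] args) {
        ProvenanceLog log = new ProvenanceLog();
        log.record("segmentation", List.of("scan-001.nii"), List.of("mask-001"));
        log.record("volumetry", List.of("mask-001"), List.of("hippocampus-volume"));
        log.trace("hippocampus-volume");
    }
}
```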


Computer Physics Communications | 2001

Design patterns for description-driven systems in high energy physics

Nigel Baker; A. Bazan; Guy Chevenier; Florida Estrella; Z. Kovacs; Jean-Marie Le Goff; Richard McClatchey; Peter Martin

In data modelling, product information has most often been handled separately from process information. The integration of product and process models in a unified data model could provide the means by which information could be shared between High Energy Physics (HEP) groups throughout the system lifecycle, from design through to production. Recently, attempts have been made to integrate these two separate views of systems through identifying common data models. This paper relates description-driven systems to multi-layer architectures through the CRISTAL project and reveals where existing design patterns facilitate the integration of product and process models, and where patterns are missing or require enrichment for this integration. It reports on the construction of a so-called description-driven system which integrates Product Data Management (PDM) and Workflow Management (WfM) models through a common meta-model.
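The multi-layer structure the paper refers to can be pictured as three layers: instances conform to a model, and the model itself conforms to a meta-model, with each layer able to evolve independently. The following Java records are an invented illustration of that layering, not the CRISTAL schema.

```java
// Invented illustration of a three-layer (meta-model / model / instance)
// structure in which each layer describes the one below it.
import java.util.List;
import java.util.Map;

// Meta-model layer: describes what a model element looks like.
record MetaClass(String name, List<String> propertyNames) { }

// Model layer: a concrete element conforming to a MetaClass.
record ModelElement(String name, MetaClass meta, Map<String, String> properties) { }

// Instance layer: runtime objects conforming to a ModelElement.
record Instance(String id, ModelElement element) { }

public class MultiLayerDemo {
    public static void main(String[] args) {
        MetaClass productType = new MetaClass("ProductType", List.of("material"));
        ModelElement crystal = new ModelElement("Crystal", productType,
                Map.of("material", "PbWO4"));
        Instance c1 = new Instance("C-0001", crystal);
        System.out.println(c1.id() + " : " + c1.element().name()
                + " (a " + c1.element().meta().name() + ")");
    }
}
```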


Computer Physics Communications | 1998

Workflow management in the assembly of CMS ECAL

Nigel Baker; A. Bazan; Florida Estrella; Z. Kovacs; T. Le Flour; J.M. Le Goff; E. Leonardi; S. Lieunard; Richard McClatchey; J.-P. Vialle

As with all experiments in the LHC era, the Compact Muon Solenoid (CMS) detectors will be made up of a very large number of constituent parts. Typically, each major detector may be constructed out of over a million precision parts and will be produced and assembled during the next decade by specialised centres distributed world-wide. Each constituent part of each detector must be accurately measured and tested locally prior to its ultimate assembly and integration in the experimental area at CERN. Much of the information collected during this phase will be needed not only to construct the detector, but also for its calibration, to facilitate accurate simulation of its performance, and to assist in its lifetime maintenance. The CRISTAL system is a prototype being developed to monitor and control the production and assembly process of the CMS Electromagnetic Calorimeter (ECAL). The software will be generic in design and hence reusable for other CMS detector groups. This paper discusses the distributed computing problems and design issues posed by this project. The overall software design architecture is described, together with the main technology aspects of linking distributed object oriented databases via CORBA with WWW/Java-based query processing. The paper then concentrates on the design of the workflow management system of CRISTAL.
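The paper links distributed object databases through CORBA; as an in-process stand-in for that idea, the sketch below shows the kind of remote interface a production centre might expose to a coordinator, implemented locally for simplicity. The interface and types (ProductionCentre, LocalCentre) are invented for illustration, not the CRISTAL IDL.

```java
// In-process stand-in for a distributed query: the interface plays the role
// a CORBA/RMI stub would, with a local implementation for the demo.
import java.util.List;

interface ProductionCentre {
    String location();
    List<String> partsAwaitingAssembly();
}

class LocalCentre implements ProductionCentre {
    private final String location;
    private final List<String> queue;
    LocalCentre(String location, List<String> queue) {
        this.location = location; this.queue = queue;
    }
    public String location() { return location; }
    public List<String> partsAwaitingAssembly() { return queue; }
}

public class DistributedQueryDemo {
    public static void main(String[] args) {
        // A coordinator at CERN would hold references to remote centres:
        List<ProductionCentre> centres = List.of(
                new LocalCentre("Lyon", List.of("C-0101", "C-0102")),
                new LocalCentre("Rome", List.of("C-0201")));
        for (ProductionCentre c : centres) {
            System.out.println(c.location() + ": " + c.partsAwaitingAssembly());
        }
    }
}
```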


Computer-Based Medical Systems | 2016

Towards a Biomedical Virtual Research Environment

Richard McClatchey; Jetendr Shamdasani; Andrew Branson; Kamran Munir; Z. Kovacs

Providing an appropriate level of accessibility to and tracking of data and process elements in large volumes of medical data is essential. Researchers require systems that provide traceability of information through provenance data capture and management to support clinical analyses. A Virtual Research Environment (VRE) in which data sets and workflows are described, captured, managed and instantiated across biomedical applications would provide an ideal platform for supporting collaborative analyses acting as a knowledge repository for research. We propose a solution that aims to provide detailed data and process traceability to support research analysis that is generically applicable across medical systems.

Collaboration


Z. Kovacs's most frequent co-authors.

Top Co-Authors

Nigel Baker, University of the West of England
Richard McClatchey, University of the West of England
Florida Estrella, University of the West of England
Andrew Branson, University of the West of England