
Publications

Featured research published by Wilfried Grossmann.


International Conference on Software Engineering | 2001

Evaluating the accuracy of defect estimation models based on inspection data from two inspection cycles

Stefan Biffl; Wilfried Grossmann

Defect content estimation techniques (DCETs), based on defect data from inspection, estimate the total number of defects in a document in order to evaluate the development process. For inspections that yield few data points, DCETs reportedly underestimate the number of defects. If there is a second inspection cycle, the additional defect data is expected to increase estimation accuracy. In this paper we consider three scenarios for combining data sets from the inspection-reinspection process. We evaluate these approaches with data from an experiment in a university environment in which 31 teams inspected and reinspected a software requirements document. The main finding of the experiment was that reinspection data improved estimation accuracy: with the best combination approach, all examined estimators yielded estimates on average within 20% of the true value, and all estimates stayed within 40% of the true value.
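The abstract does not name the individual estimators; a widely used DCET family is capture-recapture, borrowed from population biology. As a hedged illustration (the function and defect IDs below are hypothetical, not from the paper), the simplest variant, the Lincoln-Petersen estimator, treats two inspections as two independent "captures" and infers the total from their overlap:

```python
def lincoln_petersen(found_a, found_b):
    """Capture-recapture estimate of the total defect count from two
    independent inspections, each given as a set of defect IDs."""
    n1, n2 = len(found_a), len(found_b)
    m = len(found_a & found_b)           # defects found in both inspections
    if m == 0:
        raise ValueError("no overlap: estimate undefined")
    return n1 * n2 / m                   # estimated total number of defects

# Hypothetical inspection data: defect IDs found in two cycles.
cycle1 = {1, 2, 3, 4, 5, 6}
cycle2 = {4, 5, 6, 7, 8}
print(lincoln_petersen(cycle1, cycle2))  # 6 * 5 / 3 = 10.0
```

The estimate degrades exactly in the situation the paper studies: with few data points the overlap `m` is small and noisy, which is why additional reinspection data can stabilize it.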


Conference on Advanced Information Systems Engineering | 2012

On analyzing process compliance in skin cancer treatment: an experience report from the evidence-based medical compliance cluster (EBMC²)

Michael Binder; Wolfgang Dorda; Georg Duftschmid; Reinhold Dunkl; Karl Anton Fröschl; Walter Gall; Wilfried Grossmann; Kaan Harmankaya; Milan Hronsky; Stefanie Rinderle-Ma; Christoph Rinner; Stefanie Weber

Process mining has proven itself a promising analysis technique for processes in the health care domain. The goal of the EBMC² project is to analyze skin cancer treatment processes regarding their compliance with relevant guidelines. For this, first of all, the actual treatment processes have to be discovered from the available data sources. In general, the L* life cycle model has been suggested as a structured methodology for process mining projects. In this experience paper, we describe the challenges and lessons learned when realizing the L* life cycle model in the EBMC² context. Specifically, we provide and discuss different approaches to empower data of low maturity levels, i.e., data that is not already available in temporally ordered event logs, including a prototype for structured data acquisition. Further, first results on how process mining techniques can be utilized for data screening are presented.


Archive | 2015

Fundamentals of Business Intelligence

Wilfried Grossmann; Stefanie Rinderle-Ma

This book presents a comprehensive and systematic introduction to transforming process-oriented data into information about the underlying business process, which is essential for all kinds of decision-making. To that end, the authors develop step-by-step models and analytical tools for obtaining high-quality data structured in such a way that complex analytical tools can be applied. The main emphasis is on process mining and data mining techniques and the combination of these methods for process-oriented data. After a general introduction to the business intelligence (BI) process and its constituent tasks in Chapter 1, Chapter 2 discusses different approaches to modeling in BI applications. Chapter 3 provides an overview and details of data provisioning, including a section on big data. Chapter 4 tackles data description, visualization, and reporting. Chapter 5 introduces data mining techniques for cross-sectional data. Different techniques for the analysis of temporal data are then detailed in Chapter 6. Subsequently, Chapter 7 explains techniques for the analysis of process data, followed by the introduction of analysis techniques for multiple BI perspectives in Chapter 8. The book closes with a summary and discussion in Chapter 9. Throughout the book, (mostly open source) tools are recommended, described, and applied; a more detailed survey of tools can be found in the appendix, and detailed code for the solutions, together with instructions on how to install the software used, can be found on the accompanying website. All concepts presented are illustrated, and selected examples and exercises are provided. The book is suitable for graduate students in computer science, and the dedicated website with examples and solutions makes it ideal as a textbook for a first course in business intelligence in computer science or business information systems. Additionally, practitioners and industrial developers who are interested in the concepts behind business intelligence will benefit from the clear explanations and many examples.


USAB'11 Proceedings of the 7th Conference on Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society: Information Quality in e-Health | 2011

Assessing medical treatment compliance based on formal process modeling

Reinhold Dunkl; Karl Anton Fröschl; Wilfried Grossmann; Stefanie Rinderle-Ma

The formalization and analysis of medical guidelines play an essential role in clinical practice nowadays. Due to their inexorably generic nature, such guidelines leave room for different interpretations and implementations. Hence, it is desirable to understand this variability and its implications for patient treatment in practice. In this paper we propose an approach for comparing guideline-based treatment processes with empirical treatment processes. The methodology combines ideas from workflow modeling, process simulation, process mining, and statistical methods of evidence-based medicine. The applicability of the approach is illustrated with the cutaneous melanoma use case.


Statistical and Scientific Database Management | 2002

Statistical composites: a transformation-bound representation of statistical datasets

Michaela Denk; Karl Anton Froeschl; Wilfried Grossmann

Statistical data processing makes use of data matrices and tables as primary structures for data representation. Embedding these structures into processing-relevant context information gives rise to enhanced data structures linking data and metadata. The paper describes a framework for statistical data processing utilising metadata computationally.


BioMed Research International | 2015

Effects of Shared Electronic Health Record Systems on Drug-Drug Interaction and Duplication Warning Detection

Christoph Rinner; Wilfried Grossmann; Simone Katja Sauter; Michael Wolzt; Walter Gall

Shared electronic health record (EHR) systems can offer a complete overview of the medications prescribed by different health care providers. We use health claims data of more than 1 million Austrians in 2006 and 2007, covering 27 million prescriptions, to estimate the effect of shared EHR systems on the detection and prevention of drug-drug interaction (DDI) and duplication warnings. The Austria Codex and the ATC/DDD information were used as a knowledge base to detect possible DDIs. DDIs are categorized as severe, moderate, and minor interactions. In comparison to the current situation, where only DDIs between drugs issued by a single health care provider can be checked, the number of warnings increases significantly if all drugs of a patient are checked: severe DDI warnings would be detected for 20% more persons, and the number of severe DDI warnings and duplication warnings would increase by 17%. We show that shared EHR systems not only help to detect more patients with warnings but also detect DDIs more frequently. Patient safety can be increased using shared EHR systems.
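The paper's core comparison, checking each provider's medication list separately versus checking the pooled list from a shared EHR, can be sketched in a few lines. The knowledge base and drug names below are illustrative stand-ins, not the Austria Codex data used in the study:

```python
from itertools import combinations

# Hypothetical interaction knowledge base: unordered drug pairs -> severity.
DDI_KB = {
    frozenset({"warfarin", "aspirin"}): "severe",
    frozenset({"simvastatin", "amiodarone"}): "moderate",
}

def ddi_warnings(drugs):
    """Return severity warnings for every interacting pair in a drug list."""
    return [DDI_KB[frozenset(pair)]
            for pair in combinations(set(drugs), 2)
            if frozenset(pair) in DDI_KB]

# Prescriptions split across two providers: each list alone raises no warning...
provider_a = ["warfarin", "metformin"]
provider_b = ["aspirin"]
print(ddi_warnings(provider_a), ddi_warnings(provider_b))  # [] []
# ...but a shared EHR checks the merged medication list:
print(ddi_warnings(provider_a + provider_b))  # ['severe']
```

The merged check is what produces the additional warnings the study quantifies: interactions between drugs issued by different providers are invisible to any single-provider check.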


International Journal of Software Engineering and Knowledge Engineering | 2014

Overcoming Heterogeneity in Business Process Modeling with Rule-Based Semantic Mappings

Christoph Prackwieser; Robert Andrei Buchmann; Wilfried Grossmann; Dimitris Karagiannis

The paper tackles the problem of notational heterogeneity in business process modeling. Heterogeneity is overcome with an approach that induces semantic homogeneity independent of notation, driven by commonalities and recurring semantics in various control flow-oriented modeling languages, with the goal of enabling process simulation on a generic level. Thus, hybrid process models (for end-to-end or decomposed processes) whose parts or subprocesses are modeled with different languages become simulatable, making it possible to derive quantitative measures (lead time, costs, or resource capacity) across notational heterogeneity. The result also contributes to a better understanding of the process structure, as it helps with identifying interface problems and process execution requirements, and can support a multitude of areas that benefit from step-by-step process simulation (e.g. process-oriented requirement analysis, user interface design, generation of business-related test cases, and compilation of handbooks and training material derived from processes). A use case is presented in the context of the ComVantage EU research project, where notational heterogeneity is induced by: (a) the specificity and hybrid character of a process-centric modeling method designed for the project application domain, and (b) the collaborative nature of the modeling effort, with different modelers working with different notations for different layers of abstraction in a shared on-line tool and model repository.


Knowledge Science, Engineering and Management | 2013

Towards a Generic Hybrid Simulation Algorithm Based on a Semantic Mapping and Rule Evaluation Approach

Christoph Prackwieser; Robert Andrei Buchmann; Wilfried Grossmann; Dimitris Karagiannis

In this paper we present a semantic lifting methodology for heterogeneous process models, depicted with various control flow-oriented notations, aimed at enabling their simulation on a generic level. This allows for an integrated simulation of hybrid process models, such as end-to-end models or multi-layer models, in which different parts or subprocesses are modeled with different notations. Process simulation outcome is not limited to determining quantitative process measures such as lead time, costs, or resource capacity; it can also contribute greatly to a better understanding of the process structure, help with identifying interface problems and process execution requirements, and support a multitude of areas that benefit from step-by-step process simulation: process-oriented requirement analysis, user interface design, generation of business-related test cases, and compilation of handbooks and training material derived from processes.


Archive | 1990

WAMASTEX — Heuristic Guidance for Statistical Analysis

W. Dorda; Karl Anton Froeschl; Wilfried Grossmann

The current state and the direction of further development of the WAMASTEX system are described. The main portion of the paper discusses the empirical assessment of several decision heuristics upon which WAMASTEX's internal workings are based.


Conference on Advanced Information Systems Engineering | 2014

A Method for Analyzing Time Series Data in Process Mining: Application and Extension of Decision Point Analysis

Reinhold Dunkl; Stefanie Rinderle-Ma; Wilfried Grossmann; Karl Anton Fröschl

The majority of process mining techniques focuses on control flow. Decision Point Analysis (DPA) exploits additional data attachments within log files to determine the attributes decisive for the branching of process paths within discovered process models. DPA considers only single attribute values. However, in many applications the process environment provides additional data in the form of consecutive measurement values such as blood pressure or container temperature. We introduce the DPATS method, an iterative process for exploiting time series data by combining process and data mining techniques; the latter range from visual mining to temporal data mining techniques such as dynamic time warping and response feature analysis. The method also offers different approaches for incorporating time series data into log files so that existing process mining techniques can be applied. Finally, we provide the simulation environment DPATSSim to produce log files and time series data. The DPATS method is evaluated on application scenarios from the logistics and medical domains.
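Dynamic time warping, one of the temporal data mining techniques the abstract mentions, aligns two measurement series of different lengths by warping the time axis. A minimal textbook implementation (a sketch, not the DPATS code) looks like:

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two numeric sequences of possibly different lengths."""
    n, m = len(a), len(b)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # a[i-1] matched to earlier b
                                 d[i][j - 1],      # b[j-1] matched to earlier a
                                 d[i - 1][j - 1])  # both advance together
    return d[n][m]

# Two hypothetical blood-pressure series sampled at different rates:
print(dtw_distance([120, 122, 130, 128], [120, 130, 128]))  # 2.0
```

Unlike Euclidean distance, which requires equal lengths and punishes small timing shifts, the warping makes series with the same shape but different sampling rates compare as similar, which is what matters when such measurements feed a decision point analysis.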

Collaboration

Top co-authors of Wilfried Grossmann:

Walter Gall (Medical University of Vienna)
Christoph Rinner (Medical University of Vienna)
Michael Wolzt (Medical University of Vienna)
Georg Duftschmid (Medical University of Vienna)
Simone Katja Sauter (Medical University of Vienna)