Network


Latest external collaborations at the country level.

Hotspot


Research topics where Sandra Geisler is active.

Publications


Featured research published by Sandra Geisler.


BMC Medical Informatics and Decision Making | 2013

Decision support for hospital bed management using adaptable individual length of stay estimations and shared resources

Robert Schmidt; Sandra Geisler; Cord Spreckelsen

Background: Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients' condition, the necessity of the treatment, and the patients' preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and thus do not take into account the uncertainty of the patient's recovery. Furthermore, the effect of aggregated bed capacities has not been investigated in this context. Computer-supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities), has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one baseline approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio.

Methods: The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli-distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP, and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these optimization strategies with a simple model of the status quo. All the approaches are evaluated by a realistic discrete event simulation: the outcomes are the ratio of successful assignments and dismissals, the computation time, and the model's cost factors.

Results: A discrete event simulation of 226,000 cases shows a reduction of the dismissal rate compared to the baseline by more than 30 percentage points (from a mean dismissal ratio of 74.7% to 40.06%, comparing the status quo with the optimization strategies). Each of the optimization strategies leads to an improved assignment. The exact approach has only a marginal advantage over the heuristic strategies in the model's cost factors (≤3%). Moreover, this marginal advantage was only achieved at the price of a computation time fifty times that of the heuristic models (an average computing time of 141 s using the exact method vs. 2.6 s for the heuristic strategies).

Conclusions: In terms of its performance and the quality of its solution, the heuristic strategy RAND is the preferred method for bed assignment in the case of shared resources. Future research is needed to investigate whether an equally marked improvement can be achieved in a large-scale clinical application study, ideally one comprising all the departments involved in admission and assignment planning.
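The core idea, expected free capacity from Bernoulli-modeled ward occupancy feeding a greedy admission heuristic, can be illustrated with a toy sketch. All names, the 10-day normalization, and the data are hypothetical; this is not the paper's binary integer program or simulation, just a minimal illustration of the capacity estimate and a heuristic assignment:

```python
def expected_free_capacity(capacity, stay_probs):
    """Expected free beds: each occupied bed stays occupied in the next
    period with probability p (Bernoulli occupancy model)."""
    return capacity - sum(stay_probs)

def assign(patients, wards):
    """Greedy heuristic: place each patient on a feasible ward with the
    most expected free capacity; dismiss if all wards are full."""
    assignments, dismissed = {}, []
    for pid, elos in patients:  # elos = expected length of stay (days)
        feasible = [w for w in wards if len(w["occupied"]) < w["capacity"]]
        if not feasible:
            dismissed.append(pid)
            continue
        best = max(feasible,
                   key=lambda w: expected_free_capacity(w["capacity"], w["occupied"]))
        best["occupied"].append(min(1.0, elos / 10))  # crude stay probability
        assignments[pid] = best["name"]
    return assignments, dismissed

wards = [
    {"name": "A", "capacity": 3, "occupied": [0.9, 0.5]},
    {"name": "B", "capacity": 2, "occupied": [0.2]},
]
patients = [("p1", 4), ("p2", 7), ("p3", 2)]
done, dismissed = assign(patients, wards)
```

With these toy numbers, p1 and p2 are placed and p3 is dismissed once both wards reach their hard capacity, mirroring the dismissal-ratio outcome measured in the paper's simulation.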


international workshop on geostreaming | 2010

A data stream-based evaluation framework for traffic information systems

Sandra Geisler; Christoph Quix; Stefan Schiffer

Traffic information systems based on mobile, in-car sensor technology are a challenge for data management systems, as a huge amount of data has to be processed in real time. Data mining methods must be adapted to cope with these challenges in handling streaming data. Although several data stream mining methods have been proposed, an evaluation of such methods in the context of traffic applications is still missing. In this paper, we present an evaluation framework for data stream mining for traffic applications. We apply traffic simulation software to emulate the generation of traffic data by mobile probes. The framework is evaluated in a first case study, namely queue-end detection. We show first results of the evaluation of a data stream mining method, varying multiple parameters of the traffic simulation. The goal of our work is to identify parameter settings for which the data stream mining methods produce useful results for the traffic application at hand.
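The queue-end detection case study can be sketched with a minimal sliding-window detector over simulated probe readings. The window size, speed threshold, and data are invented for illustration and are not the paper's mining method:

```python
from collections import deque

def detect_queue(readings, window=5, threshold=20.0):
    """Flag a potential queue end when the mean speed over a count-based
    sliding window of probe readings drops below `threshold` km/h."""
    buf = deque(maxlen=window)
    alerts = []
    for pos, speed in readings:
        buf.append(speed)
        if len(buf) == window and sum(buf) / window < threshold:
            alerts.append(pos)
    return alerts

# Simulated probe data: free flow at 90 km/h, then a jam from position 8 on.
readings = [(i, 90.0) for i in range(8)] + [(i, 5.0) for i in range(8, 13)]
alerts = detect_queue(readings)
```

Varying `window` and `threshold` here plays the role of the parameter variation the framework evaluates: a larger window delays detection but suppresses false alarms from single slow probes.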


conference on information and knowledge management | 2010

Automatic schema merging using mapping constraints among incomplete sources

Xiang Li; Christoph Quix; David Kensche; Sandra Geisler

Schema merging is the process of consolidating multiple schemas into a unified view. The task becomes particularly challenging when the schemas are highly heterogeneous and autonomous. Classical data integration systems rely on a mediated schema created by human experts through an intensive design process. In this paper, we present a novel approach for merging multiple relational data sources related by a collection of mapping constraints in the form of P2P-style tuple-generating dependencies (tgds). In the scenario of data integration, we opt for minimal mediated schemas that are complete regarding certain answers of conjunctive queries. Under the Open World Assumption (OWA), we characterize the semantics of schema merging by properties of the output mapping system between the source schemas and the mediated schema. We propose a merging algorithm following a redundancy reduction paradigm and prove that the output satisfies the desired logical properties. Recognizing the fact that multiple plausible mediated schemas may co-exist, a variant of the Apriori algorithm is employed to enumerate alternative mediated schemas. Output mappings in the form of data dependencies are generated to support the mediated schemas, which enables query processing. We have evaluated our merging approach over a collection of real-world data sets, which demonstrates the applicability and effectiveness of our approach in practice.
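The redundancy-reduction idea, collapsing attributes that mapping constraints declare equivalent into one representative in the merged schema, can be shown with a toy sketch. The schemas, correspondences, and representative-picking rule are invented for illustration; the actual approach works on tgds and proves logical properties, none of which this sketch attempts:

```python
def merge_schemas(schemas, correspondences):
    """Toy schema merge: union all (relation, attribute) pairs, then
    collapse attributes declared equivalent by a correspondence group,
    keeping one representative (a crude redundancy reduction)."""
    canon = {}
    for group in correspondences:  # e.g. {("S1", "name"), ("S2", "full_name")}
        rep = sorted(group)[0]     # deterministic representative
        for attr in group:
            canon[attr] = rep
    merged = set()
    for rel, attrs in schemas.items():
        for a in attrs:
            merged.add(canon.get((rel, a), (rel, a)))
    return sorted(merged)

schemas = {"S1": ["id", "name"], "S2": ["pid", "full_name", "email"]}
correspondences = [
    {("S1", "id"), ("S2", "pid")},
    {("S1", "name"), ("S2", "full_name")},
]
merged = merge_schemas(schemas, correspondences)
```

The five source attributes collapse to three in the merged view, since `pid`/`id` and `full_name`/`name` are each treated as one concept.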


Data Exchange, Information, and Streams | 2013

Data Stream Management Systems

Sandra Geisler

In many application fields, such as production lines or stock analysis, large amounts of data are created and must be processed at high rates. Such continuous data flows of unknown size and end are also called data streams. The processing and analysis of data streams are a challenge for common data management systems, as they have to operate and deliver results in real time. Data Stream Management Systems (DSMS), as an advancement of database management systems, have been implemented to deal with these issues. DSMS have to adapt to the notion of data streams on various levels, such as query languages, processing, and optimization. In this chapter we give an overview of the basics of data streams, architecture principles of DSMS, and the query languages used. Furthermore, we specifically detail data quality aspects in DSMS, as these play an important role for various applications based on data streams. Finally, the chapter also includes a list of research and commercial DSMS and their key properties.
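The defining difference from a database query, a continuous query that emits results as tuples arrive rather than once over a finite table, can be sketched with a generator over a count-based sliding window. This is a minimal illustration, not the query model of any particular DSMS:

```python
from collections import deque

def continuous_avg(stream, window=3):
    """Minimal continuous query: for every arriving value, emit the
    average over the last `window` values (count-based sliding window),
    never materializing the unbounded stream."""
    buf = deque(maxlen=window)
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

results = list(continuous_avg([10, 20, 30, 40], window=3))
```

Because the generator holds only the window, it works equally well on an infinite source; the list here exists only to show the per-tuple outputs.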


international conference on data engineering | 2011

Automatic generation of mediated schemas through reasoning over data dependencies

Xiang Li; Christoph Quix; David Kensche; Sandra Geisler; Lisong Guo

Mediated schemas lie at the center of the well-recognized data integration architecture. Classical data integration systems rely on a mediated schema created by human experts through an intensive design process. Automatic generation of mediated schemas is still a goal to be achieved. We generate mediated schemas by merging multiple source schemas interrelated by tuple-generating dependencies (tgds). Schema merging is the process of consolidating multiple schemas into a unified view. The task becomes particularly challenging when the schemas are highly heterogeneous and autonomous. Existing approaches fall short in various aspects, such as restricted expressiveness of input mappings, lack of a data-level interpretation, output mappings that are not in a logical language (or not given at all), and confinement to binary merging. We present here a novel system which is able to perform native n-ary schema merging using P2P-style tgds as input. Suited to the scenario of generating mediated schemas for data integration, the system opts for a minimal schema signature retaining all certain answers of conjunctive queries. Logical output mappings are generated to support the mediated schemas, which enable query answering and, in some cases, query rewriting.


Journal of Data and Information Quality | 2016

Ontology-Based Data Quality Management for Data Streams

Sandra Geisler; Christoph Quix; Sven Weber; Matthias Jarke

Data Stream Management Systems (DSMS) provide real-time data processing in an effective way, but there is always a tradeoff between data quality (DQ) and performance. We propose an ontology-based data quality framework for relational DSMS that includes DQ measurement and monitoring in a transparent, modular, and flexible way. We follow a threefold approach that takes the characteristics of relational data stream management into account for DQ metrics: (1) Query Metrics reflect changes in data quality caused by query operations, (2) Content Metrics allow the semantic evaluation of data in the streams, and (3) Application Metrics allow easy user-defined computation of data quality values to account for application specifics. Additionally, a quality monitor allows us to observe data quality values and take counteractions to balance data quality and performance. The framework has been designed along a DQ management methodology suited for data streams. It has been evaluated in the domains of transportation systems and health monitoring.
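As a rough illustration of the measure-then-monitor idea, here is a toy completeness metric computed per window with a monitor that flags windows below a threshold. The metric choice, threshold, and data are invented; the actual framework is ontology-based and far richer:

```python
def completeness(window):
    """Completeness of a window of tuples: fraction of non-None fields."""
    total = sum(len(t) for t in window)
    filled = sum(v is not None for t in window for v in t)
    return filled / total if total else 1.0

def monitor(windows, threshold=0.8):
    """Quality monitor: flag the indexes of windows whose completeness
    drops below the threshold so counteractions can be triggered."""
    return [i for i, w in enumerate(windows) if completeness(w) < threshold]

windows = [
    [(1, "a"), (2, "b")],        # completeness 1.0
    [(1, None), (None, None)],   # completeness 0.25
]
flags = monitor(windows)
```

A counteraction on a flagged window might be to widen the window or fall back to a cheaper query plan, which is exactly the quality/performance balance the abstract describes.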


Archive | 2014

Evaluation of Real-Time Traffic Applications Based on Data Stream Mining

Sandra Geisler; Christoph Quix

Traffic management today requires the analysis of a huge amount of data in real time in order to provide current information about the traffic state or hazards to road users and traffic control authorities. Modern cars are equipped with several sensors which can produce useful data for the analysis of traffic situations. Using mobile communication technologies, such data can be integrated and aggregated from several cars, which enables intelligent transportation systems (ITS) to monitor the traffic state in a large area at relatively low cost. However, processing and analyzing the data poses numerous challenges for data management solutions in such systems. Real-time analysis with high accuracy and confidence is one important requirement in this context. We present a summary of our work on a comprehensive evaluation framework for data stream-based ITS. The goal of the framework is to identify appropriate configurations for ITS and to evaluate different mining methods for data analysis. The framework consists of traffic simulation software and a data stream management system, utilizes data stream mining algorithms, and provides a flexible ontology-based component for data quality monitoring during data stream processing. The work has been done in the context of a project on Car-To-X communication using mobile communication networks. The results give some interesting insights for the setup and configuration of traffic information systems that use Car-To-X messages as the primary source for deriving traffic information, and also point out challenges for data stream management and data stream mining.


international conference on objects and databases | 2010

Solving ORM by MAGIC: MApping generatIon and composition

David Kensche; Christoph Quix; Xiang Li; Sandra Geisler

Object-relational mapping (ORM) technologies have been proposed as a solution for the impedance mismatch problem between object-oriented applications and relational databases. Existing approaches use special-purpose mapping languages or are tightly integrated with the programming language. In this paper, we present MAGIC, an approach using bidirectional query and update views, based on a generic metamodel and a generic mapping language. The mapping language is based on second-order tuple-generating dependencies and allows arbitrary restructuring between the application model and the database schema. Due to the genericity of our approach, the core part, including mapping generation and mapping composition, is independent of the modeling languages being employed. We show the formal basis of MAGIC and how queries including aggregation can be defined using an easy-to-use query API. The scalability of our approach is shown in an evaluation using the TPC benchmark.
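The basic impedance-mismatch problem, object attributes on one side, table columns on the other, with queries generated from a declared mapping, can be illustrated with a toy mapping class. The class, names, and SQL shape are invented for illustration and bear no relation to MAGIC's second-order tgd mappings:

```python
class Mapping:
    """Toy object-relational mapping: pairs a class with a table and
    attribute names with column names, and generates a query view."""
    def __init__(self, cls, table, fields):
        self.cls, self.table, self.fields = cls, table, fields

    def query_view(self):
        """Build a SELECT that renames columns to attribute names, so the
        application sees its object model rather than the schema."""
        cols = ", ".join(f"{col} AS {attr}" for attr, col in self.fields.items())
        return f"SELECT {cols} FROM {self.table}"

m = Mapping("Person", "T_PERSON", {"name": "P_NAME", "age": "P_AGE"})
sql = m.query_view()
```

A real bidirectional approach additionally derives update views, i.e. how a change to a `Person` object propagates back into `T_PERSON`, which is where the mapping-composition machinery comes in.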


ieee embs international conference on biomedical and health informatics | 2017

A presentation semantic for the Operational Data Model (ODM)

Thomas Martin Deserno; Daniel Haak; Markus Harmsen; Sandra Geisler; Matthias Jarke

The Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) is a standard for the structural interchange and archiving of clinical trial data. However, ODM insufficiently supports visualization (e.g., dropdown vs. checkbox), layout (e.g., horizontal vs. vertical), and functionality (e.g., hiding of fields). In this work, the Operational Data Representation (ODR) is defined as a semi-formal specification language. ODR extends ODM by a presentation layer, whereby ODM is still used for data binding. As proof of concept, a software framework of ODR rendering engines is implemented for various targets (web, mobile applications, paper-based). A converter for OpenClinica eCRFs is developed to simplify ODR specification. Using ODR, a public repository of more than 4,000 ODM-based eCRFs is rendered correctly on all runtime systems. Therefore, ODR allows eCRF interchange across heterogeneous devices and systems on the visual and the functional level. It will simplify electronic patient-reported outcomes (ePRO).
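The separation ODR makes, ODM element for data binding, a layered structure for presentation, can be sketched as follows. The XML fragment is a simplified stand-in for an ODM `ItemDef`, and the presentation dictionary is entirely hypothetical (ODR's actual specification language is not shown here):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an ODM item definition (data binding).
odm = '<ItemDef OID="IT.SEX" Name="Sex" DataType="text"/>'

# Hypothetical presentation layer keyed by item OID, in the spirit of ODR:
# the ODM item stays untouched; rendering hints live alongside it.
presentation = {"IT.SEX": {"widget": "dropdown", "layout": "horizontal"}}

item = ET.fromstring(odm)
hints = presentation.get(item.get("OID"), {})
rendered = f'{item.get("Name")}: {hints.get("widget", "textbox")}'
```

A rendering engine for each target (web, mobile, paper) would interpret the same hints differently, which is what makes the eCRF portable across heterogeneous devices.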


Distributed and Parallel Databases | 2016

Guest Editorial: Large-scale Data Management for Mobile Applications

Thierry Delot; Sandra Geisler; Sergio Ilarri; Christoph Quix

The increasing functionality and capacity of mobile devices have enabled new mobile applications which require new approaches for data management. Users want to have a seamless integration of the data on their mobile with other devices, which can be either classical devices such as a desktop PC or other mobile devices. On the one hand, the growing computing power of mobile devices and the availability of "Big Data" to mobile users facilitate the development of powerful mobile applications. On the other hand, the limitations of mobile devices with respect to energy, storage, display size, communication bandwidth, and real-time capabilities have to be considered. Due to the growing volume of the data that has to be managed, the availability of huge datasets, the emergence of non-traditional techniques for data management (e.g., NoSQL systems), and the spreading of cloud computing, new efforts are expected in this area. Information management in mobile applications is a complex problem space which requires the consideration of the aforementioned constraints. Under the umbrella of …

Collaboration


Dive into Sandra Geisler's collaborations.

Top Co-Authors
Xiang Li

RWTH Aachen University

Rihan Hai

RWTH Aachen University
