Publication


Featured research published by Volker Hartmann.


Scientific Reports | 2015

An ensemble-averaged, cell density-based digital model of zebrafish embryo development derived from light-sheet microscopy data with single-cell resolution

Andrei Yu. Kobitski; Jens C. Otte; Masanari Takamiya; Benjamin Schäfer; Jonas Mertes; Johannes Stegmaier; Sepand Rastegar; Francesca Rindone; Volker Hartmann; Rainer Stotzka; Ariel Garcia; Jos van Wezel; Ralf Mikut; Uwe Strähle; G. Ulrich Nienhaus

A new era in developmental biology has been ushered in by recent advances in the quantitative imaging of all-cell morphogenesis in living organisms. Here we have developed a light-sheet fluorescence microscopy-based framework with single-cell resolution for identification and characterization of subtle phenotypical changes of millimeter-sized organisms. Such a comparative study requires analyses of entire ensembles to be able to distinguish sample-to-sample variations from definitive phenotypical changes. We present a kinetic digital model of zebrafish embryos up to 16 h of development. The model is based on the precise overlay and averaging of data taken on multiple individuals and describes the cell density and its migration direction at every point in time. Quantitative metrics for multi-sample comparative studies have been introduced to analyze developmental variations within the ensemble. The digital model may serve as a canvas on which the behavior of cellular subpopulations can be studied. As an example, we have investigated cellular rearrangements during germ layer formation at the onset of gastrulation. A comparison of the one-eyed pinhead (oep) mutant with the digital model of the wild-type embryo reveals its abnormal development at the onset of gastrulation, many hours before changes are obvious to the eye.
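The paper's central idea, registering cell positions from multiple embryos onto a common frame and averaging them into a cell-density field, can be illustrated with a toy sketch. All names, grid sizes and data here are illustrative, not taken from the paper's actual pipeline:

```python
import numpy as np

def cell_density(cells, bins=8, bounds=(0.0, 1.0)):
    """Bin the 3D cell coordinates of one embryo into a density grid."""
    hist, _ = np.histogramdd(cells, bins=(bins,) * 3, range=[bounds] * 3)
    return hist

def ensemble_average(embryos, bins=8):
    """Average per-embryo density grids into one ensemble model."""
    grids = [cell_density(c, bins) for c in embryos]
    return np.mean(grids, axis=0)

rng = np.random.default_rng(0)
# Two toy "embryos": random 3D cell positions in the unit cube,
# standing in for registered single-cell detections.
embryos = [rng.random((500, 3)) for _ in range(2)]
model = ensemble_average(embryos)
print(model.shape)  # (8, 8, 8)
print(model.sum())  # 500.0 -- total cell count is preserved by averaging
```

The same averaging idea extends to time-resolved data by building one such grid per time point, which is what turns the ensemble into a kinetic model.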


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2011

The Large Scale Data Facility: Data Intensive Computing for Scientific Experiments

Ariel Garcia; S. Bourov; Ahmad Hammad; Jos van Wezel; Bernhard Neumair; Achim Streit; Volker Hartmann; Thomas Jejkal; Patrick Neuberger; Rainer Stotzka

The Large Scale Data Facility (LSDF) at the Karlsruhe Institute of Technology was started at the end of 2009 with the aim of supporting the growing requirements of data intensive experiments. In close cooperation with the involved scientific communities, the LSDF provides them not only with adequate storage space but also with a directly attached analysis farm and -- more importantly -- with value-added services for their big scientific data-sets. Analysis workflows are supported through the mixed Hadoop and OpenNebula cloud environments directly attached to the storage, and enable the efficient processing of the experimental data. Metadata handling is a central part of the LSDF, where a metadata repository, community-specific metadata schemes, graphical tools, and APIs were developed for accessing and efficiently organizing the stored data-sets.


Parallel, Distributed and Network-Based Processing | 2011

Perspective of the Large Scale Data Facility (LSDF) Supporting Nuclear Fusion Applications

Rainer Stotzka; Volker Hartmann; Thomas Jejkal; Michael Sutter; Jos van Wezel; Marcus Hardt; Ariel Garcia; Rainer Kupsch; S. Bourov

To cope with the growing requirements of data intensive scientific experiments, models and simulations, the Large Scale Data Facility (LSDF) at KIT aims to support many scientific disciplines. The LSDF is a distributed storage facility at exabyte scale providing storage, archives, databases and metadata repositories. Open interfaces and APIs support a variety of access methods to the highly available services for high-throughput data applications. Tools for easy and transparent access allow scientists to use the LSDF without concerning themselves with its internal structures and technologies. In close cooperation with the scientific communities, the LSDF provides assistance to efficiently organize data and metadata structures, and develops and deploys community-specific software on the directly connected computing infrastructure.


Parallel, Distributed and Network-Based Processing | 2012

LAMBDA -- The LSDF Execution Framework for Data Intensive Applications

Thomas Jejkal; Volker Hartmann; Rainer Stotzka; Jens C. Otte; Ariel Garcia; Jos van Wezel; Achim Streit

To cope with the growing requirements of data intensive scientific experiments, models and simulations, the Large Scale Data Facility (LSDF) at KIT aims to support many scientific disciplines. The LSDF is a distributed storage facility at exabyte scale providing storage, archives, databases and metadata repositories. Apart from data storage, many scientific communities need to perform data processing operations as well. For this purpose the LSDF Execution Framework for Data Intensive Applications (LAMBDA) was developed to allow asynchronous high-performance data processing next to the LSDF. However, it is not restricted to the LSDF or to any special feature only available there. The main goal of LAMBDA is to simplify large-scale data processing for scientific users by reducing complexity, responsibility and error-proneness. The description of an execution is realized in the background, as part of LAMBDA administration, via metadata that can be obtained from arbitrary sources. Thus, scientific users only have to select which applications they want to apply to their data.
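The abstract's key point is that an execution is described entirely through metadata, so users only pick an application for their data while the framework resolves the rest in the background. A rough sketch of that pattern, with all names hypothetical rather than the actual LAMBDA API:

```python
# Hypothetical sketch: a registry of application metadata, as could be
# obtained from arbitrary sources (databases, config files, ...).
REGISTRY = {
    "segment-cells": {
        "command": "segment --in {input} --out {output}",
        "cpus": 16,
    },
}

def build_execution(app, input_path, output_path):
    """Resolve a selected application into a concrete execution description.

    The user supplies only the application name and the data locations;
    command template and resource needs come from metadata.
    """
    meta = REGISTRY[app]
    return {
        "command": meta["command"].format(input=input_path, output=output_path),
        "cpus": meta["cpus"],
    }

job = build_execution("segment-cells", "/data/embryo1", "/results/embryo1")
print(job["command"])  # segment --in /data/embryo1 --out /results/embryo1
```

The design choice this illustrates is the separation of concerns: error-prone execution details live in curated metadata, not in each user's hands.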


IEEE Symposium on Large Data Analysis and Visualization | 2011

Data-intensive analysis for scientific experiments at the Large Scale Data Facility

Ariel Garcia; S. Bourov; Ahmad Hammad; Volker Hartmann; Thomas Jejkal; Jens C. Otte; S. Pfeiffer; T. Schenker; C. Schmidt; P. Neuberger; Rainer Stotzka; J. van Wezel; Bernhard Neumair; Achim Streit

The Large Scale Data Facility (LSDF) was conceived and launched at the Karlsruhe Institute of Technology (KIT) at the end of 2009 to address the growing need for value-added storage services for data intensive experiments. The LSDF's main focus is to support scientific experiments producing large data sets reaching into the petabyte range with adequate storage, support and value-added services for data management, processing and preservation. In this work we describe the approach taken to perform data analysis in the LSDF, as well as the data management of the scientific datasets.


International Conference on Big Data | 2015

An Optimized Generic Client Service API for Managing Large Datasets within a Data Repository

Ajinkya Prabhune; Rainer Stotzka; Thomas Jejkal; Volker Hartmann; Margund Bach; Eberhard Schmitt; Michael Hausmann; Juergen Hesser

Exponential growth in scientific research data demands novel measures for managing extremely large datasets. In particular, due to advancements in high-resolution microscopy, the nanoscopy research community is producing datasets up to the range of multiple terabytes (TB). Systematically acquired datasets of biological specimens are composed of multiple high-resolution images, in the range of 150-200 TB. The management of these extremely large datasets requires an optimized Generic Client Service (GCS) API integrated into a data repository system. The novel API proposed in this paper provides an abstract interface that connects various disparate systems. The API is optimized to provide efficient and automated ingest and download of the data, together with the management of its metadata. The ingest and download processes are based on well-defined workflows described in this paper, as is the base metadata model for a comprehensive description of the datasets. The API is seamlessly integrated with a digital data repository system, namely KIT Data Manager, to make it adaptable for a wide range of communities. Finally, a simple and easy-to-use command line tool based on the GCS API is realized to manage large datasets of the nanoscopy research community.
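The core idea of an abstract client interface in front of disparate repository backends might look roughly like the sketch below. Class and method names are illustrative assumptions, not the actual GCS API:

```python
from abc import ABC, abstractmethod

class RepositoryClient(ABC):
    """Hypothetical generic client: backends differ, the interface does not."""

    @abstractmethod
    def ingest(self, dataset_path: str, metadata: dict) -> str:
        """Register a dataset with its base metadata; return a dataset id."""

    @abstractmethod
    def download(self, dataset_id: str, target_path: str) -> None:
        """Fetch a dataset by id into a local directory."""

class InMemoryClient(RepositoryClient):
    """Toy backend, standing in for a real repository such as KIT Data Manager."""

    def __init__(self):
        self._store = {}

    def ingest(self, dataset_path, metadata):
        dataset_id = f"ds-{len(self._store) + 1}"
        self._store[dataset_id] = (dataset_path, metadata)
        return dataset_id

    def download(self, dataset_id, target_path):
        path, _ = self._store[dataset_id]
        print(f"copy {path} -> {target_path}")

client = InMemoryClient()
ds = client.ingest("/scans/specimen-42", {"instrument": "nanoscope"})
print(ds)  # ds-1
```

A command line tool like the one the paper mentions would then be a thin wrapper over such an interface, swapping backends without changing user-facing commands.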


Parallel, Distributed and Network-Based Processing | 2014

Device-Driven Metadata Management Solutions for Scientific Big Data Use Cases

Richard Grunzke; Ralph Müller-Pfefferkorn; René Jäkel; Jürgen Hesser; Nick Kepper; Michael Hausmann; Jurgen Starek; Sandra Gesing; Marcus Hardt; Volker Hartmann; Jan Potthoff; Stephan Kindermann

Big Data applications in science are producing huge amounts of data, which require advanced processing, handling, and analysis capabilities. For the organization of large scale data sets it is essential to annotate these with metadata, index them, and make them easily findable. In this paper we investigate two scientific use cases, from biology and photon science, which entail complex situations with regard to data volume, data rates and analysis requirements. The LSDMA project provides an ideal context for this research, combining both innovative R&D on the processing, handling, and analysis level and a wide range of research communities in need of scalable solutions. To facilitate the advancement of data life cycles we present preferred metadata management strategies: for biology the Open Microscopy Environment (OME), and for photon science NeXus/ICAT. We show that these are well suited for the respective data life cycles. To facilitate searching across communities we discuss solutions involving the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and Apache Lucene/Solr.
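Cross-community search via OAI-PMH works by issuing simple HTTP requests against a repository's OAI endpoint; the verb and parameter names below follow the OAI-PMH specification, while the endpoint URL is a placeholder:

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **params):
    """Build an OAI-PMH request URL from a verb and its query parameters."""
    query = urlencode({"verb": verb, **params})
    return f"{base_url}?{query}"

# ListRecords with oai_dc, the Dublin Core format every
# OAI-PMH repository is required to support.
url = oai_request("https://repo.example.org/oai", "ListRecords",
                  metadataPrefix="oai_dc")
print(url)
# https://repo.example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

A harvester would fetch this URL, parse the XML response, and feed the extracted metadata records into an index such as Apache Lucene/Solr for searching.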


International Conference on e-Science | 2006

Grid Services Toolkit for Process Data Processing

Tim O. Müller; Thomas Jejkal; Rainer Stotzka; Michael Sutter; Volker Hartmann; Hartmut Gemmeke

Grid is a rapidly growing technology that will provide easy access to vast amounts of computing resources, both hardware and software. As these resources become available, more and more scientific users are interested in benefiting from them. At present, the main problem in accessing the Grid is that scientific users usually need to know a lot about Grid methods and technologies besides their own field of application. This paper describes a toolkit based on Grid Services designed especially for the field of process data processing, providing database access and management, common methods of statistical data analysis, and project-specific methods. The toolkit fills, to some extent, the gap between high-level scientific Grid users and low-level functions in Grid environments, thus simplifying and accelerating the development of scientific Grid applications.


Proceedings of IWSG 2016: 8th International Workshop on Science Gateways, Rome, Italy, 8-10 June 2016. Ed.: S. Gesing | 2016

Towards a Metadata-driven Multi-community Research Data Management Service

Richard Grunzke; Volker Hartmann; Thomas Jejkal; Ajinkya Prabhune; Hendrik Herold; Aline Deicke; Alexander Hoffmann; Torsten Schrade; Gotthard Meinel; Sonja Herres-Pawlis; Rainer Stotzka; Wolfgang E. Nagel

Nowadays, the daily work of many research communities is characterized by an increasing amount and complexity of data. This makes the data increasingly difficult to manage, access and utilize, and ultimately to gain scientific insights from. At the same time, domain scientists want to focus on their science instead of IT. The solution is research data management: storing data in a structured way to enable easy discovery for future reference. An integral part is the use of metadata; with it, data becomes accessible by its content instead of only its name and location. The use of metadata shall be as automatic and seamless as possible in order to foster high usability. Here we present the architecture and initial steps of the MASi project, which aims to build a comprehensive research data management service. First, it extends the existing KIT Data Manager framework by a generic programming interface and a generic graphical web interface. Advanced additional features include the integration of provenance metadata and persistent identifiers. The MASi service aims at being easily adaptable for arbitrary communities with limited effort. The requirements for the initial use cases within geography, chemistry and the digital humanities are elucidated. The MASi research data management service is currently being built up to satisfy these complex and varying requirements in an efficient way.

Keywords: Metadata, Communities, Research Data Management


Automatisierungstechnik | 2016

Automation strategies for large-scale 3D image analysis

Johannes Stegmaier; Benjamin Schott; Eduard Hübner; Manuel Traub; Maryam Shahid; Masanari Takamiya; Andrei Yu. Kobitski; Volker Hartmann; Rainer Stotzka; Jos van Wezel; Achim Streit; G. Ulrich Nienhaus; Uwe Strähle; Markus Reischl; Ralf Mikut

New imaging techniques enable visualizing and analyzing a multitude of previously unknown phenomena in many areas of science at high spatio-temporal resolution. The rapidly growing amount of image data, however, can hardly be analyzed manually and, thus, future research has to focus on automated image analysis methods that allow one to reliably extract the desired information from large-scale multidimensional image data. Starting with infrastructural challenges, we present new software tools, validation benchmarks and processing strategies that help in coping with large-scale image data. The presented methods are illustrated on typical problems observed in developmental biology that can be answered, e.g., by using time-resolved 3D microscopy images.

Collaboration

Top Co-Authors

Rainer Stotzka, Karlsruhe Institute of Technology
Thomas Jejkal, Karlsruhe Institute of Technology
Ariel Garcia, Karlsruhe Institute of Technology
Achim Streit, Karlsruhe Institute of Technology
Michael Sutter, Karlsruhe Institute of Technology
Richard Grunzke, Dresden University of Technology
Jens C. Otte, Karlsruhe Institute of Technology
Jos van Wezel, Karlsruhe Institute of Technology
Wolfgang E. Nagel, Dresden University of Technology
Ajinkya Prabhune, Karlsruhe Institute of Technology