Publications
Featured research published by Lars Schneidenbach.
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2005
Christine Böckmann; Irina Mironova; Detlef Müller; Lars Schneidenbach; Remo Nessler
The hybrid regularization technique developed at the Institute of Mathematics of Potsdam University (IMP) is used to derive microphysical properties such as effective radius, surface-area concentration, and volume concentration, as well as the single-scattering albedo and a mean complex refractive index, from multiwavelength lidar measurements. We present the continuation of investigations of the IMP method. Theoretical studies of the degree of ill-posedness of the underlying model, simulation results with respect to the analysis of the retrieval error of microphysical particle properties from multiwavelength lidar data, and a comparison of results for different numbers of backscatter and extinction coefficients are presented. Our analysis shows that the backscatter operator has a smaller degree of ill-posedness than the operator for extinction. This fact underlines the importance of backscatter data. Moreover, the degree of ill-posedness increases with increasing particle absorption, i.e., depends on the imaginary part of the refractive index and does not depend significantly on the real part. Furthermore, an extensive simulation study was carried out for logarithmic-normal size distributions with different median radii, mode widths, and real and imaginary parts of refractive indices. The errors of the retrieved particle properties obtained from the inversion of three backscatter (355, 532, and 1064 nm) and two extinction (355 and 532 nm) coefficients were compared with the uncertainties for the case of six backscatter (400, 710, 800 nm, additionally) and the same two extinction coefficients. For known complex refractive index and up to 20% normally distributed noise, we found that the retrieval errors for effective radius, surface-area concentration, and volume concentration stay below approximately 15% in both cases. Simulations were also made with unknown complex refractive index. 
In that case the integrated parameters stay below approximately 30%, and the imaginary part of the refractive index stays below 35% for input noise up to 10% in both cases. In general, the quality of the retrieved aerosol parameters depends strongly on the imaginary part owing to the degree of ill-posedness. It is shown that under certain constraints a minimum data set of three backscatter coefficients and two extinction coefficients is sufficient for a successful inversion. The IMP algorithm was finally tested for a measurement case.
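The logarithmic-normal size distributions used in these simulations have closed-form moments from which effective radius, surface-area concentration, and volume concentration follow directly. The sketch below illustrates these standard relations; it is a generic illustration with arbitrary numeric values, not the IMP inversion code.

```python
import math

def lognormal_moment(k, r_median, sigma_g):
    """k-th raw moment <r^k> of a lognormal size distribution with
    median radius r_median and geometric standard deviation sigma_g."""
    ln_s = math.log(sigma_g)
    return r_median**k * math.exp(0.5 * k**2 * ln_s**2)

def effective_radius(r_median, sigma_g):
    # r_eff = <r^3> / <r^2> = r_median * exp(2.5 * ln(sigma_g)^2)
    return lognormal_moment(3, r_median, sigma_g) / lognormal_moment(2, r_median, sigma_g)

def surface_area_concentration(n_total, r_median, sigma_g):
    # s = 4 * pi * N * <r^2>
    return 4.0 * math.pi * n_total * lognormal_moment(2, r_median, sigma_g)

def volume_concentration(n_total, r_median, sigma_g):
    # v = (4/3) * pi * N * <r^3>; note v / s = r_eff / 3
    return 4.0 / 3.0 * math.pi * n_total * lognormal_moment(3, r_median, sigma_g)

# Example with an arbitrary median radius (0.1 um) and mode width (1.8)
r_eff = effective_radius(0.1, 1.8)
```

A useful sanity check is the identity v / s = r_eff / 3, which holds for any size distribution.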
Atmospheric Measurement Techniques | 2015
Detlef Müller; Christine Böckmann; Alexei Kolgotin; Lars Schneidenbach; Eduard Chemyakin; Julia Rosemann; Pavel Znak; Anton Romanov
We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005–0.1, or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types.
We also tested aerosol scenarios that are considered highly unlikely, e.g. the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms towards their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of aerosol physics. We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7–10 μm in our simulations, where the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15 % in the simulation studies. We target 50 % uncertainty as a reasonable threshold for our data products, though we attempt to obtain data products with less uncertainty in future work.
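The Ångström exponent and extinction-to-backscatter (lidar) ratio mentioned above are defined by simple relations, sketched here with illustrative numbers rather than measurement data:

```python
import math

def angstrom_exponent(alpha_1, alpha_2, lambda_1, lambda_2):
    """Extinction-related Angstrom exponent from coefficients at two
    wavelengths, assuming a power-law spectral dependence alpha ~ lambda^-a."""
    return -math.log(alpha_1 / alpha_2) / math.log(lambda_1 / lambda_2)

def lidar_ratio(alpha, beta):
    """Extinction-to-backscatter (lidar) ratio S = alpha / beta."""
    return alpha / beta

# Illustrative extinction coefficients at 355 and 532 nm (arbitrary units)
a355, a532 = 100.0, 60.0
ae = angstrom_exponent(a355, a532, 355.0, 532.0)
```

By construction, scaling the 355 nm extinction by (532/355) to the power of the negative exponent recovers the 532 nm value, which makes the definition easy to verify.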
Computer Physics Communications | 2009
Lukas Osterloh; Carlos Perez; David Böhme; José María Baldasano; Christine Böckmann; Lars Schneidenbach; David Vicente
We present new software for the retrieval of the volume distribution – and thus other relevant microphysical properties such as the effective radius – of stratospheric and tropospheric aerosols from multiwavelength LIDAR data. We consider the basic equation as a linear ill-posed problem and solve the linear system derived from spline collocation. We also consider the technical implications of the algorithm implementation. To reduce the runtime incurred by the vast theoretical search space, experiments were made on the MareNostrum supercomputer to understand the significance of the different search-space dimensions for the quality of the solution, with the goal of restricting or entirely eliminating certain dimensions and thus massively reducing calculation time for later production runs. Results show that the search space can be reduced according to the available computation power and still yield reasonable results. The scalability of the parallel software also proved to be good.
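Linear ill-posed problems of this kind are commonly stabilized by regularization. The sketch below shows generic Tikhonov regularization via the normal equations; it illustrates the technique only and is not the spline-collocation software itself.

```python
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / A[r][r]
    return x

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the regularized
    normal equations (A^T A + lam * I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve_linear(AtA, Atb)
```

Increasing the regularization parameter `lam` trades fidelity to the data for a smaller-norm (more stable) solution, which is the essential lever in any such retrieval.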
International Conference on Supercomputing | 2014
Felix Schürmann; Fabien Delalondre; Pramod S. Kumbhar; John Biddiscombe; Miguel Gila; Davide Tacchella; Alessandro Curioni; Bernard Metzler; Peter Morjan; Joachim Fenkes; Michele M. Franceschini; Robert S. Germain; Lars Schneidenbach; T. J. C. Ward; Blake G. Fitch
Storage class memory is receiving increasing attention for use in HPC systems to accelerate intensive IO operations. We report a particular instance using SLC Flash memory integrated at scale with an IBM Blue Gene/Q supercomputer (Blue Gene Active Storage, BGAS). We describe two principal modes of operation of the non-volatile memory: (1) block device; (2) direct storage access (DSA). The block device layer, built on the DSA layer, provides compatibility with IO layers common to existing HPC IO systems (POSIX, MPI-IO, HDF5) and is expected to provide high performance in bandwidth-critical use cases. The novel DSA strategy enables a low-overhead, byte-addressable, asynchronous, kernel-bypass access method for very high user-space IOPS in multithreaded application environments. Here, we expose DSA through HDF5 using a custom file driver. Benchmark results for the different modes are presented, and scale-out to full system size showcases the capabilities of this technology.
European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2009
Lars Schneidenbach; Bettina Schnor; Martin Gebser; Roland Kaminski; Benjamin Kaufmann; Torsten Schaub
This paper presents the concept of parallelisation of a solver for Answer Set Programming (ASP). While there already exist some approaches to parallel ASP solving, there was a lack of a parallel version of the powerful clasp solver. We implemented a parallel version of clasp based on message-passing. Experimental results on Blue Gene P/L indicate the potential of such an approach.
Local Computer Networks | 2003
Lars Schneidenbach; Bettina Schnor; Stefan Petri
GAMMA (the Genoa Active Message MAchine) is a lightweight messaging system for Fast and Gigabit Ethernet. It is based on an active-message-like paradigm and provides a well-performing, cost-effective alternative to proprietary high-speed networks, e.g. Myrinet, combining low end-to-end latency with high throughput. GAMMA supports the important class of MPI-based parallel applications via the MPI/GAMMA interface [G. Ciaccio], but up to now support for the equally important class of socket-based cluster applications has been missing. This paper describes two different approaches to adapting the socket interface to GAMMA: the first is transparent for both the application and the GAMMA layer, the second only for the application. First performance results with the so-called GAMMA sockets are given. They show that the second approach performs almost as well as native GAMMA communication.
International Parallel and Distributed Processing Symposium | 2016
Stefan Eilemann; Fabien Delalondre; Jon Bernard; Judit Planas; Felix Schuermann; John Biddiscombe; Costas Bekas; Alessandro Curioni; Bernard Metzler; Peter Kaltstein; Peter Morjan; Joachim Fenkes; Ralph Bellofatto; Lars Schneidenbach; T. J. Christopher Ward; Blake G. Fitch
Scientific workflows are often composed of compute-intensive simulations and data-intensive analysis and visualization, both equally important for productivity. High-performance computers run the compute-intensive phases efficiently, but data-intensive processing still gets less attention. Dense non-volatile memory integrated into supercomputers can help address this problem. In addition to density, it offers significantly finer-grained I/O than disk-based I/O systems. We present a way to exploit the fundamental capabilities of Storage-Class Memories (SCM), such as Flash, by using scalable key-value (KV) I/O methods instead of traditional file I/O calls commonly used in HPC systems. Our objective is to enable higher performance for on-line and near-line storage for analysis and visualization of very high resolution, but correspondingly transient, simulation results. In this paper, we describe 1) the adaptation of a scalable key-value store to a BlueGene/Q system with integrated Flash memory, 2) a novel key-value aggregation module which implements coalesced, function-shipped calls between the clients and the servers, and 3) the refactoring of a scientific workflow to use application-relevant keys for fine-grained data subsets. The resulting implementation is analogous to function-shipping of POSIX I/O calls but shows an order of magnitude increase in read and a 2.5× increase in write IOPS performance (11 million read IOPS, 2.5 million write IOPS from 4096 compute nodes) when compared to a classical file system on the same system. It represents an innovative approach for the integration of SCM within an HPC system at scale.
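The coalescing idea behind the aggregation module — buffering many small key-value puts and shipping them to the server as one batched call — can be sketched as follows. All class, method, and key names here are invented for illustration; this is not the paper's API.

```python
class KVAggregator:
    """Coalesces small key-value puts into batched, 'function-shipped'
    server calls (hypothetical sketch of the aggregation idea)."""

    def __init__(self, server, batch_size=64):
        self.server = server          # callable taking a list of (key, value)
        self.batch_size = batch_size
        self.buffer = []
        self.calls = 0                # number of shipped calls actually made

    def put(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.server(self.buffer)  # one shipped call for the whole batch
            self.calls += 1
            self.buffer = []

# A dict stands in for the remote key-value store in this sketch.
store = {}
agg = KVAggregator(lambda batch: store.update(batch), batch_size=64)
for i in range(1000):
    agg.put(("rank0", i), i * i)      # application-relevant composite keys
agg.flush()
```

With a batch size of 64, the 1000 puts above result in only 16 shipped calls instead of 1000, which is the kind of round-trip reduction that makes fine-grained I/O viable.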
European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2008
Lars Schneidenbach; David Böhme; Bettina Schnor
The efficient implementation of one-sided communication for cluster environments is a challenging task. Here, we present and discuss performance issues that are inherent in the design of the MPI-2 API specification. Based on our investigations, we propose a one-sided communication API called NEON. The presented measurements with a real application demonstrate the benefits of the NEON approach.
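As background, the essence of one-sided communication is that the origin writes directly into a buffer the target has exposed, with no matching receive on the target side. The toy sketch below mimics that semantics with threads and a shared window; it is a conceptual illustration only, not the NEON or MPI-2 API.

```python
import threading

class Window:
    """Toy 'communication window': a buffer the target exposes once, which
    origins may then write without the target posting any receives."""

    def __init__(self, size):
        self.buf = bytearray(size)
        self.lock = threading.Lock()

    def put(self, offset, data):
        with self.lock:               # crude stand-in for an access epoch
            self.buf[offset:offset + len(data)] = data

win = Window(16)

def origin(rank, payload):
    win.put(rank * 4, payload)        # direct remote write, no matching recv

threads = [threading.Thread(target=origin, args=(r, bytes([r] * 4)))
           for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each "origin" writes a disjoint 4-byte region, so after all threads join the window holds the four payloads in rank order regardless of scheduling.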
International Conference on Networking and Services | 2008
Adrian Knoth; Christian Kauhaus; Dietmar Fey; Lars Schneidenbach; Bettina Schnor
The message passing interface (MPI) [17] is the most widely used message-passing library for parallel applications on compute clusters. Here, we present our experiences in developing an IPv6-enabled MPI version for the two most popular implementations, MPICH2 and Open MPI. Further, we discuss how these IPv6-enabled MPI implementations can be used within multi-cluster and grid topologies.
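The usual route to protocol-independent (IPv4/IPv6) socket code is `getaddrinfo`-style address resolution, which lets one code path handle both address families. The stdlib Python sketch below illustrates the idea generically; it is unrelated to the MPICH2 or Open MPI source itself.

```python
import socket

def resolve(host, port):
    """Protocol-independent lookup: returns candidate endpoints for both
    IPv4 and IPv6, letting the caller try each family in turn."""
    return socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)

# Numeric addresses resolve without any network access
v6 = resolve("::1", 5000)
v4 = resolve("127.0.0.1", 5000)
```

A connector would iterate over the returned tuples and attempt `socket.socket(family, type, proto)` plus `connect(sockaddr)` for each until one succeeds, so the same code works on v4-only, v6-only, and dual-stack hosts.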
International Conference on Networking and Services | 2006
Sven Friedrich; Sebastian Krahmer; Lars Schneidenbach; Bettina Schnor
With the next-generation Internet protocol IPv6 on the horizon, it is time to think about how applications can migrate to IPv6. Web traffic is currently one of the most important applications in the Internet. The increasing popularity of dynamically generated content on the World Wide Web has created the need for fast Web servers. Server clustering together with server load balancing has emerged as a promising technique to build scalable Web servers. The paper gives a short overview of the new features of IPv6 and different server load-balancing technologies. Further, we present and evaluate Loaded, a user-space server load balancer for IPv4 and IPv6 based on Linux.
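Round-robin dispatch is the simplest of the load-balancing strategies such surveys cover and can be stated in a few lines. The sketch below is a generic illustration of the strategy, not Loaded's implementation, and the backend endpoints are made-up examples.

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch of round-robin backend selection: each incoming
    request is assigned to the next server in a fixed rotation."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

# Backends may be IPv4 or IPv6 endpoints alike (example addresses)
lb = RoundRobinBalancer([("10.0.0.1", 80), ("10.0.0.2", 80), ("2001:db8::1", 80)])
picks = [lb.pick() for _ in range(6)]
```

Round-robin ignores server load, which is why practical balancers also offer weighted or least-connections policies; the rotation above merely guarantees an even request count per backend.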