Publication


Featured research published by Paul Zimdars.


Climate Dynamics | 2014

Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors

Jinwon Kim; Duane E. Waliser; Chris A. Mattmann; Cameron Goodale; Andrew F. Hart; Paul Zimdars; Daniel J. Crichton; Colin Jones; Grigory Nikulin; Bruce Hewitson; Chris Jack; Christopher Lennard; Alice Favre

Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate the basic climatological features of these variables reasonably well, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for western Africa than for eastern Africa, and for the tropics than for the northern Sahara. Interannual variation in the wet-season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperature. For all variables, the multi-model ensemble (ENS) generally outperforms the individual models included in it. An overarching conclusion of this study is that some model biases vary systematically across regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analyses and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.
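
The skill and bias metrics discussed above are straightforward to compute. The following is a minimal sketch (not the authors' code) of per-model bias and RMSE against a reference dataset, plus an unweighted multi-model ensemble mean, using synthetic stand-in arrays:

    # Hypothetical monthly-mean fields of shape (time, lat, lon).
    import numpy as np

    def bias(model, ref):
        """Mean difference between model and reference over all points."""
        return np.nanmean(model - ref)

    def rmse(model, ref):
        """Root-mean-square error between model and reference."""
        return np.sqrt(np.nanmean((model - ref) ** 2))

    # Synthetic stand-ins: a reference dataset and three "RCM" outputs.
    rng = np.random.default_rng(0)
    ref = rng.normal(2.0, 1.0, size=(120, 40, 60))
    models = {f"RCM{i}": ref + rng.normal(0.2 * i, 0.5, ref.shape)
              for i in range(1, 4)}

    for name, out in models.items():
        print(f"{name}: bias={bias(out, ref):+.3f}  rmse={rmse(out, ref):.3f}")

    # Unweighted ensemble mean across members; the paper notes ENS
    # generally outperforms the individual models.
    ens = np.mean(np.stack(list(models.values())), axis=0)
    print(f"ENS:  bias={bias(ens, ref):+.3f}  rmse={rmse(ens, ref):.3f}")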


Earth Science Informatics | 2014

Cloud computing and virtualization within the regional climate model and evaluation system

Chris A. Mattmann; Duane E. Waliser; Jinwon Kim; Cameron Goodale; Andrew F. Hart; Paul M. Ramirez; Daniel J. Crichton; Paul Zimdars; Maziyar Boustani; Kyo Lee; Paul C. Loikith; Kim Whitehall; Chris Jack; Bruce Hewitson

The Regional Climate Model Evaluation System (RCMES) facilitates the rapid, flexible inclusion of NASA observations into climate model evaluations. RCMES provides two fundamental components. A database (RCMED) is a scalable, point-oriented cloud database used to elastically store remote sensing observations and to make them available through a space-time query interface. The analysis toolkit (RCMET) is a Python-based toolkit that can be delivered as a cloud virtual machine, or as an installer package deployed using Python Buildout, to allow for temporal and spatial regridding, metrics calculation (RMSE, bias, PDFs, etc.) and end-user visualization. RCMET is available to users in an “offline”, lone-scientist mode based on a virtual machine dynamically constructed with the model outputs and observations to evaluate, or on an institution’s computational cluster situated close to the observations and model outputs. We have leveraged RCMES within the context of the Coordinated Regional Downscaling Experiment (CORDEX) project, working with the University of Cape Town and other institutions to compare model output to NASA remote sensing data; in addition, we are working with the North American Regional Climate Change Assessment Program (NARCCAP). In this paper we explain the contribution of cloud computing to RCMES, specifically describing studies of various cloud databases we evaluated for RCMED and virtualization toolkits for RCMET, and their potential strengths in delivering user-created, dynamic regional climate model evaluation virtual machines for our users.
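
As a rough illustration of how the two components fit together, the sketch below pairs a hypothetical space-time query against RCMED with an RCMET-style metric on the result. The endpoint, parameter names, and dataset identifiers are illustrative assumptions, not the actual RCMED API:

    import json
    import urllib.parse
    import urllib.request
    import numpy as np

    def query_rcmed(base_url, params):
        """Issue a hypothetical space-time query and parse a JSON response."""
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))

    params = {
        "datasetId": 1, "parameterId": 36,       # assumed identifiers
        "latMin": -35, "latMax": 37,              # rough Africa bounding box
        "lonMin": -18, "lonMax": 52,
        "timeStart": "2000-01-01", "timeEnd": "2005-12-31",
    }
    # obs = query_rcmed("http://example.org/rcmed/query", params)  # placeholder URL

    # RCMET-style metric on already-regridded arrays (stand-in data here).
    obs_grid = np.full((72, 20, 20), 300.0)
    model_grid = obs_grid + 0.3
    print("bias:", float(np.mean(model_grid - obs_grid)))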


IEEE International Conference on Cloud Computing Technology and Science | 2011

Evaluating cloud computing in the NASA DESDynI ground data system

John J. Tran; Luca Cinquini; Chris A. Mattmann; Paul Zimdars; David T. Cuddy; K. Leung; Oh-ig Kwoun; Dan Crichton; Dana Freeborn

The proposed NASA Deformation, Ecosystem Structure and Dynamics of Ice (DESDynI) mission would be a first-of-breed endeavor that would fundamentally change the paradigm by which Earth science data systems at NASA are built. DESDynI is evaluating a distributed architecture where expert science nodes around the country all engage in some form of mission processing and data archiving. This contrasts with traditional NASA Earth science missions, where science processing is typically centralized. What's more, DESDynI is poised to profoundly increase the amount of data collection and processing, well into the 5 terabyte/day and tens-of-thousands-of-jobs range, both of which pose a tremendous challenge to DESDynI's proposed distributed data system architecture. In this paper, we report on a set of architectural trade studies and benchmarks meant to inform the DESDynI mission and the broader community of the impacts of these unprecedented requirements. In particular, we evaluate the benefits of cloud computing and its integration with our existing NASA ground data system software, Apache Object Oriented Data Technology (OODT). The preliminary conclusions of our study suggest that the cloud and OODT together synergistically form an effective, efficient and extensible combination that could meet the challenges of NASA science missions requiring DESDynI-like data collection and processing volumes at reduced cost.
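
To make the quoted scale concrete, a quick back-of-the-envelope calculation shows what 5 terabytes/day implies for sustained ingest; the job count used below is an assumed value within the quoted "tens of thousands":

    # Sustained ingest rate and per-job data volume implied by the
    # abstract's figures. Only the 5 TB/day figure comes from the paper;
    # the 20,000 jobs/day value is an illustrative assumption.
    TB = 10 ** 12
    daily_volume_bytes = 5 * TB
    seconds_per_day = 24 * 3600

    sustained_rate = daily_volume_bytes / seconds_per_day
    print(f"sustained ingest: {sustained_rate / 1e6:.1f} MB/s")  # ~57.9 MB/s

    jobs_per_day = 20_000  # assumed, within "tens of thousands"
    print(f"avg data per job: {daily_volume_bytes / jobs_per_day / 1e9:.2f} GB")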


Earth Science Informatics | 2015

Exploring a graph theory based algorithm for automated identification and characterization of large mesoscale convective systems in satellite datasets

Kim Whitehall; Chris A. Mattmann; Gregory S. Jenkins; Mugizi Robert Rwebangira; Belay Demoz; Duane E. Waliser; Jinwon Kim; Cameron Goodale; Andrew F. Hart; Paul M. Ramirez; Michael J. Joyce; Maziyar Boustani; Paul Zimdars; Paul C. Loikith; Huikyo Lee

Mesoscale convective systems are high-impact, convectively driven weather systems that contribute large amounts to daily and monthly precipitation totals at various locations globally. As such, an understanding of the lifecycle, characteristics, frequency and seasonality of these convective features is important for several sectors, including climate, agricultural and hydrological studies, and disaster management. This study explores the applicability of graph theory to creating a fully automated algorithm for identifying mesoscale convective systems and determining their precipitation characteristics from satellite datasets. Our results show that applying graph theory to this problem allows features to be identified in infrared satellite data and seamlessly identified in a precipitation-rate satellite-based dataset, while innately handling the inherent complexity and non-linearity of mesoscale convective systems.
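
A minimal sketch of the graph-based idea, assuming scipy and networkx: cloud elements found in each infrared frame become graph nodes, spatial overlap between consecutive frames becomes edges, and connected subgraphs trace candidate systems through time. The threshold and toy data are illustrative, not the paper's values:

    import numpy as np
    import networkx as nx
    from scipy import ndimage

    def cloud_elements(frame, tb_threshold=233.0):
        """Label contiguous cold-cloud regions (brightness temp below threshold)."""
        mask = frame < tb_threshold
        labels, n = ndimage.label(mask)
        return labels, n

    def build_graph(frames):
        g = nx.DiGraph()
        prev_labels = None
        for t, frame in enumerate(frames):
            labels, n = cloud_elements(frame)
            for k in range(1, n + 1):
                g.add_node((t, k), area=int(np.sum(labels == k)))
            if prev_labels is not None:
                # Edge when a region at t-1 spatially overlaps a region at t.
                for k in range(1, n + 1):
                    overlap = prev_labels[labels == k]
                    for j in np.unique(overlap[overlap > 0]):
                        g.add_edge((t - 1, int(j)), (t, k))
            prev_labels = labels
        return g

    # Synthetic brightness-temperature frames standing in for infrared data.
    frames = np.random.default_rng(1).uniform(200, 300, size=(4, 50, 50))
    g = build_graph(frames)
    systems = list(nx.weakly_connected_components(g))
    print(f"{len(systems)} candidate systems tracked across {len(frames)} frames")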


IEEE International Conference on Cloud Computing Technology and Science | 2011

A cloud-enabled regional climate model evaluation system

Andrew F. Hart; Cameron Goodale; Chris A. Mattmann; Paul Zimdars; Dan Crichton; Peter Lean; Jinwon Kim; Duane E. Waliser

The climate research community is increasingly interested in utilizing direct observational measurements to validate model output in an effort to tune those models to better approximate our planet's dynamic climate. The current emphasis on performing these comparisons at regional, as opposed to global, scales presents challenges both scientific and technical, since regional ecosystems are highly heterogeneous and the available data are not readily consumed on a regional basis. If provided with a common approach for efficiently accessing and utilizing the existing observational datasets, climate researchers have the potential to effect lasting societal, economic and political benefits. A key challenge, however, is that model-to-observation comparison requires massive quantities of data and significant computational capabilities. Further complicating matters is the fact that, currently, observational data and model outputs exist in a variety of data formats, utilize varying degrees of specificity and resolution, and reside in disparate, highly heterogeneous data systems. In this paper we present a software architectural approach that leverages the advantages of cloud computing and modern open-source software technologies to address the regional climate modeling problem. Our system, dubbed RCMES, is highly scalable and elastic, allows for both local and distributed management of satellite observations and generated model outputs, and delivers this information to climate researchers in a way that is easily integrated into existing climate simulations and statistical tools.
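
One core step such a system must support is putting observations and model output on a common grid before point-wise comparison. The sketch below shows generic regridding with scipy on synthetic fields, not RCMES's actual implementation:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Synthetic "observation" field on a fine lat/lon grid.
    obs_lat = np.linspace(-30, 30, 61)
    obs_lon = np.linspace(0, 50, 101)
    obs = np.cos(np.deg2rad(obs_lat))[:, None] * np.ones((61, 101))

    # Coarser "model" grid we want to compare against.
    model_lat = np.linspace(-30, 30, 31)
    model_lon = np.linspace(0, 50, 51)

    # Interpolate the observations onto the model grid.
    interp = RegularGridInterpolator((obs_lat, obs_lon), obs)
    mlat, mlon = np.meshgrid(model_lat, model_lon, indexing="ij")
    pts = np.stack([mlat.ravel(), mlon.ravel()], axis=-1)
    obs_on_model_grid = interp(pts).reshape(mlat.shape)
    print(obs_on_model_grid.shape)  # (31, 51): ready for point-wise metrics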


International Conference on Big Data | 2016

SciSpark: Highly interactive in-memory science data analytics

Brian Wilson; Rahul Palamuttam; Kim Whitehall; Chris A. Mattmann; Alex Goodman; Maziyar Boustani; Sujen Shah; Paul Zimdars; Paul M. Ramirez

We present further work on SciSpark, a Big Data framework that extends Apache Spark's in-memory parallel computing to scale scientific computations. SciSpark's current architecture and design includes: time and space partitioning of high-resolution geo-grids from NetCDF3/4; a sciDataset class providing N-dimensional array operations in Scala/Java and CF-style variable attributes (an update of our prior sciTensor class); parallel computation of time-series statistical metrics; and an interactive front-end using science (code & visualization) Notebooks. We demonstrate how SciSpark achieves parallel ingest and time/space partitioning of Earth science satellite and model datasets. We illustrate the usability, extensibility, and early performance of SciSpark using several Earth science use cases, here presenting benchmarks for sciDataset Readers and parallel time-series analytics. A three-hour SciSpark tutorial was taught at an ESIP Federation meeting using a dozen “live” Notebooks.
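
SciSpark itself exposes a Scala/Java API; the PySpark sketch below (requiring a pyspark installation) only illustrates the underlying pattern of partitioning a time series of grids across a cluster and computing per-time-step statistics in parallel, using synthetic data:

    import numpy as np
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("timeseries-stats-sketch").getOrCreate()
    sc = spark.sparkContext

    # Pretend each element is one time step of a (lat, lon) grid, as if
    # read from NetCDF and partitioned by time.
    rng = np.random.default_rng(2)
    grids = [(t, rng.normal(280, 5, size=(90, 180))) for t in range(48)]

    rdd = sc.parallelize(grids, numSlices=8)
    stats = rdd.mapValues(lambda g: (float(g.mean()), float(g.std()))).collect()
    for t, (mean, std) in sorted(stats)[:3]:
        print(f"t={t}: mean={mean:.2f}K std={std:.2f}")
    spark.stop()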


Earth Science Informatics | 2016

A topical evaluation and discussion of data movement technologies for data-intensive scientific applications

Chris A. Mattmann; Luca Cinquini; Paul Zimdars; Michael J. Joyce; Shakeh E. Khudikyan

Transferring large volumes of information from one location to potentially many others that are geographically distributed and across varying networks is still prevalent in modern scientific data systems. This is despite the movement to push computation to the data and to reduce the data movement needed to compute answers to challenging scientific problems, to disseminate information to the scientific community, and to acquire data for curation and enrichment. Because of this, it is imperative that decisions regarding data movement systems and architectures be backed both by analytical rigor and by empirical evidence and measurement. The purpose of this study is to expand on the work performed by our research team over the last decade and to take a fresh look at the evaluation of multiple topical data transfer technologies in use cases derived from data-intensive scientific systems and applications in Earth science. We report on the evaluation of a set of data movement technologies against a set of empirically derived comparison dimensions. Based on this evaluation, we make recommendations toward the selection of appropriate data movement technologies in scientific applications and scenarios.
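
The kind of empirical measurement the study argues for can be as simple as timing a bulk transfer to derive achieved throughput, one of many possible comparison dimensions. A minimal sketch, with a placeholder URL, follows:

    import time
    import urllib.request

    def measure_throughput(url, chunk_size=1 << 20):
        """Download a resource and report achieved throughput in MB/s."""
        start = time.monotonic()
        total = 0
        with urllib.request.urlopen(url) as resp:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.monotonic() - start
        return total / 1e6 / elapsed

    # Example (placeholder URL):
    # print(f"{measure_throughput('http://example.org/testfile'):.1f} MB/s")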


Computer-Based Medical Systems | 2014

A Laboratory-Targeted, Data Management and Processing System for the Early Detection Research Network

Rishi Verma; Andrew F. Hart; Chris A. Mattmann; Daniel J. Crichton; Heather Kincaid; Sean Kelly; Michael J. Joyce; Paul Zimdars; David L. Tabb; Jay D. Holman; Matthew C. Chambers; Kristen Anton; Maureen Colbert; Christos Patriotis; Sudhir Srivastava

The National Institutes of Health (NIH) National Cancer Institute's Early Detection Research Network (EDRN) is a cross-institutional collaborative initiative seeking to accelerate the clinical application of cancer biomarker research. Over the past decade, it has been our role, as EDRN's Informatics Center (IC), to develop a comprehensive information services infrastructure as well as a set of software tools and services to support this overall initiative. We have recently developed a novel application called the Laboratory Catalog and Archive Service (LabCAS) whose focus is to extend EDRN IC data management and processing capabilities to EDRN laboratories. By leveraging the same technologies used to manage and process NASA Earth and planetary data sets, we offer EDRN researchers an effective way of managing their laboratory data. More specifically, LabCAS enables EDRN researchers to reliably archive their experimental data, to optionally share these data in a controlled manner with other researchers, and to gain insight into these data through highly configurable data analysis pipelines tailored to the broad range of biomarker-related experiments. This particular collaboration leverages expertise from NASA's Jet Propulsion Laboratory, Vanderbilt University Medical Center, and Dartmouth Medical School, and builds upon existing cross-governmental collaboration between NASA and the NIH.
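
Purely as an illustration of a configurable analysis pipeline applied to archived experiment files, the sketch below runs named steps in order and records an integrity checksum at archive time; the step names and structure are assumptions, not LabCAS's actual configuration format:

    import hashlib
    from pathlib import Path

    def checksum(path: Path) -> str:
        """Record an integrity checksum at archive time."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def run_pipeline(data_file: Path, steps):
        """Apply named analysis steps, in order, to one archived file."""
        record = {"file": str(data_file), "sha256": checksum(data_file)}
        for name, step in steps:
            record[name] = step(data_file)
        return record

    steps = [
        ("size_bytes", lambda p: p.stat().st_size),
        ("line_count", lambda p: sum(1 for _ in p.open())),  # e.g. rows in a run
    ]
    # Example usage on any local file:
    # print(run_pipeline(Path("experiment_001.csv"), steps))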


International Geoscience and Remote Sensing Symposium | 2014

24-hour near-real-time processing and computation for the JPL Airborne Snow Observatory

Chris A. Mattmann; Thomas H. Painter; Paul M. Ramirez; Cameron Goodale; Andrew F. Hart; Paul Zimdars; Maziyar Boustani; Shakeh E. Khudikyan; Rishi Verma; Felix Seidel Caprez; Jeff S. Deems; A. Trangsrud; Joseph W. Boardman


Archive | 2013

Scientific Applications of the Regional Climate Model Evaluation System (RCMES)

Paul C. Loikith; Duane E. Waliser; Chris A. Mattmann; Jinwon Kim; Huikyo Lee; Paul Ramirez; Andrew F. Hart; Cameron Goodale; Michael J. Joyce; Shakeh E. Khudikyan; Maziyar Boustani; Kim Whitehall; Alex Goodman; Jesslyn Whittell; Paul Zimdars

Collaboration


Dive into Paul Zimdars's collaborations.

Top Co-Authors

Chris A. Mattmann (California Institute of Technology)
Andrew F. Hart (California Institute of Technology)
Cameron Goodale (California Institute of Technology)
Duane E. Waliser (California Institute of Technology)
Kim Whitehall (California Institute of Technology)
Maziyar Boustani (California Institute of Technology)
Dan Crichton (California Institute of Technology)
Jinwon Kim (University of California)
Michael J. Joyce (California Institute of Technology)
Paul M. Ramirez (California Institute of Technology)