Publication


Featured research published by Chris A. Mattmann.


International Conference on Software Engineering | 2006

A software architecture-based framework for highly distributed and data intensive scientific applications

Chris A. Mattmann; Daniel J. Crichton; Nenad Medvidovic; Steve Hughes

Modern scientific research is increasingly conducted by virtual communities of scientists distributed around the world. The data volumes created by these communities are extremely large, and growing rapidly. The management of the resulting highly distributed, virtual data systems is a complex task, characterized by a number of formidable technical challenges, many of which are of a software engineering nature. In this paper we describe our experience over the past seven years in constructing and deploying OODT, a software framework that supports large, distributed, virtual scientific communities. We outline the key software engineering challenges that we faced, and addressed, along the way. We argue that a major contributor to the success of OODT was its explicit focus on software architecture. We describe several large-scale, real-world deployments of OODT, and the manner in which OODT helped us to address the domain-specific challenges induced by each deployment.


Climate Dynamics | 2014

Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors

Joong Kyun Kim; Duane E. Waliser; Chris A. Mattmann; Cameron Goodale; Andrew F. Hart; Paul Zimdars; Daniel J. Crichton; Colin Jones; Grigory Nikulin; Bruce Hewitson; Chris Jack; Christopher Lennard; Alice Favre

Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate the basic climatological features of these variables reasonably well, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the western part of Africa than for the eastern part, and for the tropics than for the northern Sahara. Interannual variation in the wet-season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperature. For all variables, the multi-model ensemble (ENS) generally outperforms the individual models included in it. An overarching conclusion of this study is that some model biases vary systematically across regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions or sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analyses and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.


Grid Computing Environments | 2011

Apache airavata: a framework for distributed applications and computational workflows

Suresh Marru; Lahiru Gunathilake; Chathura Herath; Patanachai Tangchaisin; Marlon E. Pierce; Chris A. Mattmann; Raminder Singh; Thilina Gunarathne; Eran Chinthaka; Ross Gardler; Aleksander Slominski; Ate Douma; Srinath Perera; Sanjiva Weerawarana

In this paper, we introduce Apache Airavata, a software framework to compose, manage, execute, and monitor distributed applications and workflows on computational resources ranging from local resources to computational grids and clouds. Airavata builds on general concepts of service-oriented computing, distributed messaging, and workflow composition and orchestration. This paper discusses the architecture of Airavata and its modules, and illustrates how the software can be used as individual components or as an integrated solution to build science gateways or general-purpose distributed application and workflow management systems.


Future Generation Computer Systems | 2014

The Earth System Grid Federation: An open infrastructure for access to distributed geospatial data

Luca Cinquini; Daniel J. Crichton; Chris A. Mattmann; John Harney; Galen M. Shipman; Feiyi Wang; Rachana Ananthakrishnan; Neill Miller; Sebastian Denvil; Mark Morgan; Zed Pobre; Gavin M. Bell; Charles Doutriaux; Robert S. Drach; Dean N. Williams; Philip Kershaw; Stephen Pascoe; Estanislao Gonzalez; Sandro Fiore; Roland Schweitzer

The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF’s architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software stack integrates custom components (for data publishing, searching, user interface, security and messaging), developed collaboratively by the team, with popular application engines (Tomcat, Solr) available from the open source community. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire Fifth Coupled Model Intercomparison Project (CMIP5) output used by the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs). This paper presents ESGF as a successful example of integration of disparate open source technologies into a cohesive, widely functional system, and describes our experience in building and operating a distributed and federated infrastructure to serve the needs of the global climate science community.


The Astrophysical Journal | 2015

A Millisecond Interferometric Search for Fast Radio Bursts with the Very Large Array

Casey J. Law; Geoffrey C. Bower; S. Burke-Spolaor; Bryan J. Butler; Earl Lawrence; T. Joseph W. Lazio; Chris A. Mattmann; Michael P. Rupen; Andrew Siemion; Scott VanderWiel

We report on the first millisecond-timescale radio interferometric search for the new class of transient known as fast radio bursts (FRBs). We used the Very Large Array (VLA) for a 166-hour, millisecond imaging campaign to detect and precisely localize an FRB. We observed at 1.4 GHz and produced visibilities with 5 ms time resolution over 256 MHz of bandwidth. Dedispersed images were searched for transients with dispersion measures from 0 to 3000 pc/cm^3. No transients were detected in observations of high Galactic latitude fields taken from September 2013 through October 2014. Observations of a known pulsar show that images typically had a thermal-noise-limited sensitivity of 120 mJy/beam (8 sigma; Stokes I) in 5 ms and could detect and localize transients over a wide field of view. Our nondetection limits the FRB rate to less than 7 × 10^4 per sky per day (95% confidence) above a fluence limit of 1.2 Jy-ms. Assuming a Euclidean flux distribution, the VLA rate limit is inconsistent with the published rate of Thornton et al. We recalculate previously published rates with a homogeneous consideration of the effects of primary beam attenuation, dispersion, pulse width, and sky brightness. This revises the FRB rate downward and shows that the VLA observations had a roughly 60% chance of detecting a typical FRB and that a 95% confidence constraint would require roughly 500 hours of similar VLA observing. Our survey also limits the repetition rate of an FRB to a factor of 2 less than that of any known repeating millisecond radio transient.
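The dedispersed-image search above sweeps trial dispersion measures from 0 to 3000 pc/cm^3. As a rough illustration of why dedispersion matters at 5 ms time resolution, the standard cold-plasma dispersion delay can be computed as follows. This is a minimal sketch: the band edges of 1.272 and 1.528 GHz are assumptions derived from the quoted 1.4 GHz center frequency and 256 MHz bandwidth, not values taken from the paper.

```python
# Cold-plasma dispersion delay of a radio pulse across an observing band.
# Band edges below are assumed from the survey's quoted 1.4 GHz center
# frequency and 256 MHz bandwidth.

K_DM = 4.148808  # dispersion constant in ms, with DM in pc/cm^3 and freq in GHz

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival delay of the low-frequency band edge relative to the high edge."""
    return K_DM * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Sweep time across the band at the survey's maximum trial DM of 3000 pc/cm^3:
delay = dispersion_delay_ms(3000.0, 1.272, 1.528)
print(f"{delay:.0f} ms")
```

At the maximum trial DM the pulse sweeps across the band over roughly two seconds, i.e. hundreds of 5 ms frames, which is why candidates are searched in dedispersed images rather than in single time slices.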


Automated Software Engineering | 2011

Enhancing architectural recovery using concerns

Joshua Garcia; Daniel Popescu; Chris A. Mattmann; Nenad Medvidovic; Yuanfang Cai

Architectures of implemented software systems tend to drift and erode as they are maintained and evolved. To properly understand such systems, their architectures must be recovered from implementation-level artifacts. Many techniques for architectural recovery have been proposed, but their degrees of automation and accuracy remain unsatisfactory. To alleviate these shortcomings, we present a machine learning-based technique for recovering an architectural view containing a system's components and connectors. Our approach differs from other architectural recovery work in that we rely on recovered software concerns to help identify components and connectors. A concern is a software system's role, responsibility, concept, or purpose. We posit that, by recovering concerns, we can improve the correctness of recovered components, increase the automation of connector recovery, and provide more comprehensible representations of architectures.


ACM SIGSOFT Software Engineering Notes | 2005

Leveraging architectural models to inject trust into software systems

Somo Banerjee; Chris A. Mattmann; Nenad Medvidovic; Leana Golubchik

Existing software systems have become increasingly durable and their lifetimes have significantly lengthened. They are increasingly distributed and decentralized. Our dependence on them has grown tremendously. As such, the issues of trustworthiness and security have become prime concerns in designing, constructing, and evolving software systems. However, the exact meanings of these concepts are not universally agreed upon, nor is their role in the different phases of the software development lifecycle. In this paper, we argue that trustworthiness is a more broadly encompassing term than security, and that the two are often interdependent. We then identify a set of dimensions of trustworthiness. Finally, we analyze how the key elements of a software system's architecture can be leveraged in support of those trustworthiness dimensions. Our ultimate goal is to apply these ideas in the context of a concrete software architecture project. The goal of this paper is more modest: to understand the problem area and its relation to software architecture.


IEEE Software | 2008

Scientific Software as Workflows: From Discovery to Distribution

David Woollard; Nenad Medvidovic; Yolanda Gil; Chris A. Mattmann

Scientific workflows, models of computation that capture the orchestration of scientific codes to conduct in silico research, are gaining recognition as an attractive alternative to script-based orchestration. Even so, researchers developing scientific workflow technologies still face fundamental challenges, including developing the underlying science of scientific workflows. You can classify scientific-workflow environments according to three major phases of in silico research: discovery, production, and distribution. On the basis of this classification, scientists can make more informed decisions regarding the adoption of particular workflow environments.


Information Reuse and Integration | 2004

ACE: improving search engines via Automatic Concept Extraction

Paul Ramirez; Chris A. Mattmann

The proliferation of the Internet has made browsing and searching for information extremely cumbersome. While many search engines provide reasonable information, they still fall short by overwhelming users with a multitude of often irrelevant results. This problem has several causes, but the most notable is the user's inability to convey the context of their search. Search engines must assume a general context when looking for matching pages, causing users to visit each page in the result list to ultimately find, or not find, their desired result. We believe that the necessity of visiting each page could be removed if the concepts, i.e., the overarching ideas of the underlying page, could be revealed to the end user. This would require mining the concepts from each referenced page. It is our contention that this could be done automatically, rather than relying on the current convention of mandating that the searcher extract these concepts manually through examination of result links. This ability to mine concepts would be useful not only for finding the appropriate result but also for identifying further relevant pages. We present the Automatic Concept Extraction (ACE) algorithm, which can aid users performing searches using search engines. We discuss ACE both theoretically and in the context of a graphical user interface and implementation, which we have constructed in Java to aid in qualitatively evaluating our algorithm. ACE is found to perform at least as well as, or better than, four other related algorithms, which we survey in the literature.


International Conference on e-Science | 2006

A Distributed Information Services Architecture to Support Biomarker Discovery in Early Detection of Cancer

Daniel J. Crichton; Sean Kelly; Chris A. Mattmann; Qing Xiao; John Hughes; Jane Oh; Mark Thornquist; Donald Johnsey; Sudhir Srivastava; Laura Essermann; William L. Bigbee

Informatics in biomedicine is becoming increasingly interconnected via distributed information services, interdisciplinary correlation, and cross-institutional collaboration. Partnering with NASA, the Early Detection Research Network (EDRN), a program managed by the National Cancer Institute, has been defining and building an informatics architecture to support the discovery of biomarkers in their earliest stages. The architecture established by EDRN serves as a blueprint for constructing a set of services focused on the capture, processing, management and distribution of information through the phases of biomarker discovery and validation.

Collaboration


Dive into Chris A. Mattmann's collaborations.

Top Co-Authors

Daniel J. Crichton, California Institute of Technology
Andrew F. Hart, California Institute of Technology
Paul Ramirez, University of Southern California
J. Steven Hughes, California Institute of Technology
Nenad Medvidovic, University of Southern California
Sean Kelly, California Institute of Technology
Duane E. Waliser, California Institute of Technology
Cameron Goodale, California Institute of Technology
Luca Cinquini, Jet Propulsion Laboratory
Paul Zimdars, California Institute of Technology