Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Norm Beekwilder is active.

Publication


Featured research published by Norm Beekwilder.


grid computing | 2004

An early evaluation of WSRF and WS-Notification via WSRF.NET

Marty Humphrey; Glenn S. Wasson; Mark M. Morgan; Norm Beekwilder

The Web Services Resource Framework (WSRF) and its companion WS-Notification were introduced in January 2004 as a new model on which to build grids. This paper contains early observations made while implementing the full suite of WSRF and WS-Notification specifications on the Microsoft .NET Platform. While the potential of WSRF and WS-Notification remains strong, initial observations are that many challenges remain to be solved, most notably the programming model implied by the specifications, particularly the complexity of service-side and client-side code and the complexity of WS-Notification.


international conference on e-science | 2012

Calibration of watershed models using cloud computing

Marty Humphrey; Norm Beekwilder; Jonathan L. Goodall; Mehmet B. Ercan

Understanding hydrologic systems at the scale of large watersheds and river basins is critically important to society when faced with extreme events, such as floods and droughts, or with concerns about water quality. A critical requirement of watershed modeling is model calibration, in which the computational model's parameters are varied during a search algorithm in order to find the best match against physically-observed phenomena such as streamflow. Because it is generally performed on a laptop computer, this calibration phase can be very time-consuming, significantly limiting the ability of a hydrologist to experiment with different models. In this paper, we describe our system for watershed model calibration using cloud computing, specifically Microsoft Windows Azure. With a representative watershed model whose calibration takes 11.4 hours on a commodity laptop, our cloud-based system calibrates the watershed model in 43.32 minutes using 16 cloud cores (15.78x speedup), 11.76 minutes using 64 cloud cores (58.13x speedup), and 5.03 minutes using 256 cloud cores (135.89x speedup). We believe that such speed-ups offer the potential toward real-time interactive model creation with continuous calibration, ushering in a new paradigm for watershed modeling.
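The reported runtimes imply diminishing parallel efficiency as core counts grow. A minimal sketch (Python, purely illustrative) that recomputes the speedups and efficiencies from the figures quoted in the abstract:

```python
# Recompute speedup and parallel efficiency from the numbers in the abstract.
# Only the 11.4-hour serial baseline and the per-core-count cloud runtimes
# are taken from the paper; the rest is illustrative.
serial_minutes = 11.4 * 60  # 684 minutes on a commodity laptop

cloud_runtimes = {16: 43.32, 64: 11.76, 256: 5.03}  # cores -> minutes

for cores, minutes in cloud_runtimes.items():
    speedup = serial_minutes / minutes
    efficiency = speedup / cores
    print(f"{cores:>3} cores: {speedup:6.2f}x speedup, "
          f"{efficiency:.0%} parallel efficiency")
```

The output reproduces the quoted speedups (about 15.8x, 58.2x, 136x) and shows efficiency dropping from roughly 99% at 16 cores to about 53% at 256 cores.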


Environmental Modelling and Software | 2014

Calibration of SWAT models using the cloud

Mehmet B. Ercan; Jonathan L. Goodall; Anthony M. Castronova; Marty Humphrey; Norm Beekwilder

This paper evaluates a recently created Soil and Water Assessment Tool (SWAT) calibration tool built using the Windows Azure Cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed for three watersheds of increasing size, each for a 2 year and a 10 year simulation duration. Results show significant speedup in calibration time and, for up to 64 cores, minimal losses in speedup for all watershed sizes and simulation durations. An empirical relationship is presented for estimating the time needed to calibrate a SWAT model using the cloud calibration tool as a function of the number of Hydrologic Response Units (HRUs), time steps, and cores used for the calibration. Highlights: a cloud-based, parallel SWAT calibration algorithm is evaluated; 1000 runs are sufficient for flow calibration using up to 256 cores in parallel; speedup of the parallel SWAT calibration tool is linear for up to 64 cores; cost to calibrate a SWAT model can be estimated from HRU and time step counts.
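DDS itself is a simple greedy stochastic search in which the number of perturbed parameters shrinks as the run progresses. The sketch below (Python, following the standard published formulation, not the authors' Azure tool) shows the serial core loop; a parallel variant like the one described above would evaluate several candidate parameter sets per iteration across cloud cores.

```python
import math
import random

def dds(objective, lower, upper, max_evals=1000, r=0.2, seed=0):
    """Minimal Dynamically Dimensioned Search sketch.

    `objective` maps a parameter list to a scalar to minimize (e.g. an error
    metric between simulated and observed streamflow).
    """
    rng = random.Random(seed)
    dim = len(lower)
    best = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
    best_val = objective(best)

    for i in range(1, max_evals):
        # Probability of perturbing each parameter decays with iteration count.
        p = 1.0 - math.log(i) / math.log(max_evals)
        perturb = [d for d in range(dim) if rng.random() < p]
        if not perturb:
            perturb = [rng.randrange(dim)]

        candidate = best[:]
        for d in perturb:
            step = r * (upper[d] - lower[d]) * rng.gauss(0, 1)
            x = candidate[d] + step
            # Reflect a step that leaves the bounds back inside, then clamp.
            if x < lower[d]:
                x = lower[d] + (lower[d] - x)
            if x > upper[d]:
                x = upper[d] - (x - upper[d])
            candidate[d] = min(max(x, lower[d]), upper[d])

        val = objective(candidate)
        if val < best_val:
            best, best_val = candidate, val
    return best, best_val
```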


Lawrence Berkeley National Laboratory | 2008

Fluxnet Synthesis Dataset Collaboration Infrastructure

Deborah A. Agarwal; Marty Humphrey; Catharine van Ingen; Norm Beekwilder; Monte Goode; Keith Jackson; Matt Rodriguez; Robin Weber

The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site years. Since the workshop, several additional site years have been added and the dataset now contains over 920 site years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continues to evolve as well. There are on the order of 120 site contacts, and 60 proposals involving around 120 researchers have been approved to use the data. The size and complexity of the dataset and collaboration have led to a new approach to providing data access and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new website is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.
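The pivot-table style of access mentioned above can be illustrated with a small, purely hypothetical example of site-year records (the column names and values below are invented, not drawn from the actual Fluxnet synthesis dataset):

```python
import pandas as pd

# Hypothetical site-year records; real FLUXNET variable names and values differ.
records = pd.DataFrame({
    "site": ["US-A", "US-A", "IT-B", "IT-B", "DE-C"],
    "year": [2003, 2004, 2003, 2004, 2004],
    "carbon_flux": [1.2, 1.4, 0.9, 1.1, 1.3],
})

# OLAP-cube / pivot-table style view: one row per site, one column per year.
cube = records.pivot_table(index="site", columns="year",
                           values="carbon_flux", aggfunc="mean")
print(cube)
```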


international conference on e-science | 2017

Hunting Data Rogues at Scale: Data Quality Control for Observational Data in Research Infrastructures

Gilberto Pastorello; Dan Gunter; Housen Chu; Danielle Christianson; Carlo Trotta; Eleonora Canfora; Boris Faybishenko; You-Wei Cheah; Norm Beekwilder; Stephen Chan; Sigrid Dengel; Trevor F. Keenan; Fianna O'Brien; Abdelrahman Elbashandy; Cristina Poindexter; Marty Humphrey; Dario Papale; Deborah A. Agarwal

Data quality control is one of the most time consuming activities within Research Infrastructures (RIs), especially when involving observational data and multiple data providers. In this work we report on our ongoing development of data rogues, a scalable approach to manage data quality issues for observational data within RIs. The motivation for this work started with the creation of the FLUXNET2015 dataset, which includes carbon, water, and energy fluxes plus micrometeorological and ancillary data measured at over 200 sites around the world. To create a uniform dataset, including derived data products, extensive work on data quality control was needed. The unpredictable nature of observational data quality issues makes the automation of data quality control inherently difficult. Developed based on this experience, the data rogues methodology allows for increased automation of quality control activities by systematically identifying, cataloging, and documenting implementations of solutions to data issues. We believe this methodology can be extended and applied to other domains and types of data, making the automation of data quality control a more tractable problem.
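A minimal sketch of the cataloging idea, assuming a hypothetical structure in which each known data issue ("rogue") pairs a detection check with a documented fix; this illustrates the general approach, not the project's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rogue:
    """A cataloged data-quality issue: how to detect it and how it was resolved."""
    name: str
    description: str
    detect: Callable[[List[float]], bool]
    resolution: str  # documented fix applied once the issue is confirmed

# Hypothetical catalog entries for a micrometeorological time series.
catalog = [
    Rogue(
        name="out_of_range_temperature",
        description="Air temperature outside a physically plausible range",
        detect=lambda series: any(v < -60 or v > 60 for v in series),
        resolution="Flag affected records and ask the site team to confirm units",
    ),
    Rogue(
        name="flatlined_sensor",
        description="Identical readings repeated over a long window",
        detect=lambda series: len(series) > 48 and len(set(series)) == 1,
        resolution="Mark the window as missing and note the sensor outage",
    ),
]

def screen(series):
    """Return the names of cataloged issues detected in a series."""
    return [rogue.name for rogue in catalog if rogue.detect(series)]
```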


2016 IEEE International Conference on Smart Cloud (SmartCloud) | 2016

Unified, Multi-level Intrusion Detection in Private Cloud Infrastructures

Marty Humphrey; Robert Emerson; Norm Beekwilder

Traditional network firewalls and intrusion detection systems (IDSs) are typically relied upon as the primary defense against unauthorized access in private clouds. While certainly important, we argue traditional approaches to intrusion detection are insufficient for private clouds: by ignoring application- and cloud-specific mechanisms and network traffic, traditional IDSs can notice a potentially costly intrusion too late, or never at all. This paper presents a unified, multi-level IDS architecture that supplements traditional approaches with safeguards and detection mechanisms that leverage knowledge of typical/correct private cloud operations, both at the cloud-application level and at the cloud-control level. We present our general framework and methodology, and describe its operation in detail via a specific case study of an intruder attempting to manipulate VM-level scheduling in an OpenStack-based private cloud. Our results show such a unified, multi-level, cloud-specific IDS can help greatly reduce the security issues prohibiting private cloud deployments.
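A small sketch of the cloud-control-level idea, assuming hypothetical event records rather than any real OpenStack API: scheduling actions observed in the control plane are compared against a whitelist of expected behavior, and anything outside it is flagged for the IDS.

```python
from dataclasses import dataclass

@dataclass
class SchedulingEvent:
    """Hypothetical record of a VM placement decision seen in the control plane."""
    vm_id: str
    host: str
    requested_by: str

# Expected behavior: only the scheduler service places VMs, and only on hosts
# in the approved compute pool (both lists are illustrative assumptions).
APPROVED_SCHEDULERS = {"nova-scheduler"}
APPROVED_HOSTS = {"compute-01", "compute-02", "compute-03"}

def flag_suspicious(events):
    """Return events that deviate from expected scheduling behavior."""
    return [
        e for e in events
        if e.requested_by not in APPROVED_SCHEDULERS
        or e.host not in APPROVED_HOSTS
    ]

if __name__ == "__main__":
    events = [
        SchedulingEvent("vm-17", "compute-02", "nova-scheduler"),
        SchedulingEvent("vm-23", "compute-99", "admin-cli"),  # off-pool, manual
    ]
    for e in flag_suspicious(events):
        print(f"ALERT: unexpected placement of {e.vm_id} "
              f"on {e.host} by {e.requested_by}")
```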


Concurrency and Computation: Practice and Experience | 2010

A data-centered collaboration portal to support global carbon-flux analysis

Deborah A. Agarwal; Marty Humphrey; Norm Beekwilder; Keith Jackson; Monte Goode; Catharine van Ingen


cluster computing and the grid | 2004

OGSI.NET: OGSI-compliance on the .NET framework

Glenn S. Wasson; Norm Beekwilder; Mark M. Morgan; Marty Humphrey


Archive | 2004

OGSI.NET: OGSI-compliance on the .NET framework

Glenn S. Wasson; Norm Beekwilder; M. J. Morgan; Marty Humphrey


conference on high performance computing (supercomputing) | 2005

Alternative Software Stacks for OGSA-based Grids

Marty Humphrey; Glenn S. Wasson; Yuliyan Kiryakov; Sang-Min Park; David Del Vecchio; Norm Beekwilder

Collaboration


Dive into Norm Beekwilder's collaboration network.

Top Co-Authors

Deborah A. Agarwal (Lawrence Berkeley National Laboratory)

Keith Jackson (Lawrence Berkeley National Laboratory)

Monte Goode (University of California)

Jonathan L. Goodall (University of South Carolina)

Mehmet B. Ercan (University of South Carolina)