Publications

Featured research published by Stephen D Miller.


Archive | 2011

Challenges in Data Intensive Analysis at Scientific Experimental User Facilities

Kerstin Kleese van Dam; Dongsheng Li; Stephen D Miller; John W Cobb; Mark L. Green; Catherine L. Ruby

Today’s scientific challenges, such as routes to a sustainable energy future, materials by design, or biological and chemical environmental remediation methods, are complex problems that require the integration of a wide range of complementary expertise to be addressed successfully. Experimental and computational research methods can together offer fundamental insights toward their solution.



grid computing environments | 2010

Orbiter Commander: A flexible application framework for service-based scientific computing environments

Catherine L. Ruby; Mark L. Green; Stephen D Miller

Gateway computing environments face several challenges in providing robust, scalable, and sustainable capabilities to a wide range of users. Principles of encapsulation and cohesion have been applied in emerging trends of application framework development, where modular designs and abstraction layers allow these systems to remain flexible and agile as requirements evolve over time. Orbiter Commander is a modular and extensible application framework that leverages the Orbiter Federation Service Oriented Architecture to deliver fast and secure capabilities in an Eclipse RCP desktop application. Commander provides suites of modules that can be seamlessly delivered to end users on multiple platforms, enabling rapid component development through a flexible design and well-defined extension points. This paper presents our collaborations with the Spallation Neutron Source Neutron Experiment and Theory Hub (NExTHUB) and the Solenoidal Tracker at RHIC (STAR) experiment, for which we built two suites of capabilities tailored to serve the needs of users at Oak Ridge National Laboratory and Brookhaven National Laboratory.
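The extension-point pattern the abstract describes can be illustrated with a minimal sketch. All names here (the registry, the `commander.dashboard.panels` point, the panel classes) are hypothetical illustrations; the real Orbiter Commander uses Eclipse RCP extension points in Java.

```python
# Minimal sketch of an extension-point registry, illustrating how a host
# application can discover contributed modules without compile-time coupling.
# All names are hypothetical, not taken from Orbiter Commander itself.

registry = {}

def extension(point):
    """Register the decorated class as a contribution to an extension point."""
    def wrap(cls):
        registry.setdefault(point, []).append(cls)
        return cls
    return wrap

@extension("commander.dashboard.panels")
class RunStatusPanel:
    title = "Run status"

@extension("commander.dashboard.panels")
class QueuePanel:
    title = "Job queue"

# The host enumerates contributions at a well-defined extension point:
panels = [cls() for cls in registry["commander.dashboard.panels"]]
```

The host application only depends on the extension-point name, so new panel suites can be delivered and picked up without modifying the host.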


Journal of Physics: Conference Series | 2010

Doing Your Science While You're in Orbit

Mark L. Green; Stephen D Miller; Sudharshan S. Vazhkudai; James R Trater

Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick and thin clients and their supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals are (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. Secure Orbiter SOA authentication and authorization are achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDBMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files.
Services are currently available that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) full-text search of data repository NeXus file field/class names within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and Web 2.0-based global positioning, along with additional service capabilities. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best-practice implementations is presented.
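The RBAC-over-VFS idea in this abstract can be sketched in a few lines: a user may read a repository path when one of the user's roles appears on that path's access list. Everything below (class, role names, the example path) is a hypothetical illustration, not Orbiter's actual service API.

```python
# Minimal sketch of Role-Based Access Control over a virtual file system,
# in the spirit of the Orbiter VFS services described above.
# All names and paths are hypothetical illustrations.

class VFS:
    def __init__(self):
        self.acl = {}    # path -> set of roles permitted to read it
        self.roles = {}  # user -> set of roles held

    def grant_role(self, user, role):
        self.roles.setdefault(user, set()).add(role)

    def protect(self, path, *allowed_roles):
        self.acl[path] = set(allowed_roles)

    def can_read(self, user, path):
        # Access is granted when the user holds at least one role on the path's ACL.
        return bool(self.roles.get(user, set()) & self.acl.get(path, set()))

vfs = VFS()
vfs.grant_role("alice", "sns-instrument-team")
vfs.protect("/repository/SNS/IPTS-1234/run-42.nxs", "sns-instrument-team")
```

Because permissions attach to roles rather than individual users, adding a new group member is a single `grant_role` call instead of touching every file's ACL.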


Journal of Physics: Conference Series | 2009

Data management and its role in delivering science at DOE BES user facilities – Past, Present, and Future

Stephen D Miller; Kenneth W. Herwig; Shelly Ren; Sudharshan S. Vazhkudai; Pete R. Jemian; Steffen Luitz; A. A. Salnikov; I. A. Gaponenko; Thomas Proffen; Paul S. Lewis; Mark L. Green

The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging, and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time yet. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby giving users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing on user computer resources.
We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3] – similarly, data fusion across BES facilities will lead to new scientific discoveries.


international performance computing and communications conference | 2011

A distributed workflow management system with case study of real-life scientific applications

Qishi Wu; Mengxia Zhu; Yi Gu; Xukang Lu; Patrick Brown; Michael A. Reuter; Stephen D Miller

Supporting large-scale scientific workflows in distributed network environments and optimizing their performance are crucial to the success of collaborative scientific discovery. We develop a generic scientific workflow platform, referred to as SciFlow, which constitutes a flexible framework to facilitate the distributed execution and management of scientific workflows and incorporates a class of workflow mapping schemes to achieve optimal end-to-end performance. The functionalities of SciFlow are provided, and its interactions with other tools or systems are enabled, through web services for easy access over standard Internet protocols, independent of platform and programming language. The performance superiority of SciFlow over existing workflow mapping schemes and management systems is illustrated by extensive simulations and further verified by large-scale experiments on real-life scientific workflows through effective system implementation and deployment in distributed network environments.
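The kind of workflow mapping such a system performs can be sketched as assigning each task of a dependency DAG to a compute resource in topological order. The greedy least-relative-load heuristic, task names, and resource speeds below are all hypothetical simplifications, not SciFlow's actual mapping algorithm.

```python
# Sketch of mapping a workflow DAG onto distributed resources.
# Task names, resource names, and the greedy heuristic are illustrative only.
from graphlib import TopologicalSorter

# Task -> set of prerequisite tasks (a hypothetical analysis pipeline).
dag = {
    "fetch": set(),
    "reduce": {"fetch"},
    "fit": {"reduce"},
    "plot": {"fit", "reduce"},
}

# Hypothetical per-resource processing speeds; higher is faster.
resources = {"cluster-a": 4.0, "cluster-b": 2.5}

# Walk the DAG in dependency order; assign each task to the resource with
# the lowest load relative to its speed.
load = {r: 0.0 for r in resources}
mapping = {}
for task in TopologicalSorter(dag).static_order():
    best = min(resources, key=lambda r: load[r] / resources[r])
    mapping[task] = best
    load[best] += 1.0
```

Real mapping schemes additionally model data-transfer costs between resources, which is what makes end-to-end (rather than per-task) optimization hard.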


Archive | 2011

Large-Scale User Facility Imaging and Scattering Techniques to Facilitate Basic Medical Research

Stephen D Miller; Jean-Christophe Bilheux; Shaun S. Gleason; Trent L. Nichols; Philip R. Bingham; Mark L. Green

Conceptually, modern medical imaging can be traced back to the late 1960s and early 1970s with the advent of computed tomography [1]. This pioneering work was done by 1979 Nobel Prize winners Godfrey Hounsfield and Allan McLeod Cormack, and it evolved into the first prototype Computed Tomography (CT) scanner in 1971, which became commercially available in 1972. Unique to the CT scanner was the ability to utilize X-ray projections taken at regular angular increments, from which reconstructed three-dimensional (3D) images could be produced. It is interesting to note that the mathematics needed to realize tomographic images was developed in 1917 by the Austrian mathematician Johann Radon, who produced the mathematical relationships to derive 3D images from projections – known today as the Radon Transform [2]. The confluence of newly advancing technologies, particularly in the areas of detectors, X-ray tubes, and computers, combined with the earlier derived mathematical concepts, ushered in a new era in diagnostic medicine via medical imaging (Beckmann, 2006).
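The Radon Transform mentioned above has a compact standard form (textbook notation, not quoted from this chapter): its value at angle \(\theta\) and offset \(s\) is the line integral of the object \(f\) along the corresponding ray,

```latex
(Rf)(\theta, s) = \int_{-\infty}^{\infty}
  f\bigl(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta\bigr)\, dt ,
```

where \(\theta\) is the projection angle and \(s\) the signed distance of the integration line from the origin. Filtered back-projection, the workhorse of CT reconstruction, inverts this map by filtering each projection in \(s\) and smearing the result back across the image plane.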


Journal of Physics: Conference Series | 2010

Neutron Science TeraGrid Gateway

V. E. Lynch; Meili Chen; John W Cobb; James Arthur Kohl; Stephen D Miller; David A Speirs; Sudharshan S. Vazhkudai

The unique contributions of the Neutron Science TeraGrid Gateway (NSTG) are the connection of national user facility instrument data sources to the integrated cyberinfrastructure of the National Science Foundation TeraGrid and the development of a neutron science gateway that allows neutron scientists to use TeraGrid resources to analyze their data, including comparison of experiment with simulation. The NSTG is working in close collaboration with the Spallation Neutron Source (SNS) at Oak Ridge as its principal facility partner. The SNS is a next-generation neutron source.


Concurrency and Computation: Practice and Experience | 2007

The Neutron Science TeraGrid Gateway: a TeraGrid science gateway to support the Spallation Neutron Source

John W Cobb; Al Geist; James Arthur Kohl; Stephen D Miller; Peter F. Peterson; Gregory G. Pike; Michael A. Reuter; Tom Swain; Sudharshan S. Vazhkudai; Nithya N. Vijayakumar

The SNS has completed construction at a cost of $1.4 billion and is ramping up operations. It will provide an order of magnitude greater flux than any previous facility in the world and will be available to all of the nation's scientists, independent of funding source, on a peer-reviewed merit basis. With this new capability, the neutron science community is facing orders of magnitude larger data sets and is at a critical point for data analysis and simulation. There is a recognized need for new ways to manage and analyze data to optimize both beam time and scientific output. The TeraGrid is providing new capabilities in the gateway for simulations using McStas and a fitting service on distributed TeraGrid resources to improve turnaround. NSTG staff are also exploring replicating experimental data in archival storage. As part of the SNS partnership, the NSTG provides access to gateway support, cyberinfrastructure outreach, community development, and user support for the neutron science community. This community includes not only SNS staff and users but extends to all the major worldwide neutron scattering centers.


Archive | 2007

Multitier Portal Architecture for Thin- and Thick-client Neutron Scattering Experiment Support

Mark L. Green; Stephen D Miller

Collaboration


Dive into Stephen D Miller's collaborations.

Top Co-Authors

John W Cobb, Oak Ridge National Laboratory
James Arthur Kohl, Oak Ridge National Laboratory
V. E. Lynch, Oak Ridge National Laboratory
Mark L. Green, State University of New York System
Michael A. Reuter, Oak Ridge National Laboratory
Meili Chen, Oak Ridge National Laboratory
Peter F. Peterson, Oak Ridge National Laboratory
Al Geist, Oak Ridge National Laboratory
Bradford C Smith, Oak Ridge National Laboratory