Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patrick M. Widener is active.

Publication


Featured research published by Patrick M. Widener.


Operating Systems Review | 2005

Efficient end to end data exchange using configurable compression

Yair Wiseman; Karsten Schwan; Patrick M. Widener

We explore the use of compression methods to improve the middleware-based exchange of information in interactive or collaborative distributed applications. In such applications, good compression factors must be accompanied by compression speeds suitable for the data transfer rates sustainable across network links. Our approach combines methods that continuously monitor current network and processor resources and assess compression effectiveness with techniques that automatically choose suitable compression methods. The resulting network- and user-aware compression methods are evaluated experimentally across a range of network links and application data: the former ranging from low-end links to homes, to wide-area Internet links, to high-end links in intranets; the latter including both scientific (binary molecular dynamics data) and commercial (XML) data sets. The results demonstrate substantial improvements of this adaptive technique over non-adaptive approaches: stronger compression methods are used when CPU loads are low and/or network links are slow, while less effective and typically faster compression techniques are used in high-end network infrastructures.
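The selection logic the abstract describes can be sketched in a few lines: measure the link, time each candidate method, and pick whichever minimizes compress-plus-transmit cost. Below is a minimal illustration using standard-library codecs as stand-ins for the paper's compressor set; the function and names are invented for this sketch, not taken from the paper.

```python
import os
import time
import zlib
import lzma

# Candidate compressors, roughly ordered from fast/weak to slow/strong.
# These standard-library codecs stand in for whatever methods the
# middleware actually registers.
COMPRESSORS = {
    "none": lambda data: data,
    "zlib-1": lambda data: zlib.compress(data, 1),
    "zlib-9": lambda data: zlib.compress(data, 9),
    "lzma": lambda data: lzma.compress(data),
}

def pick_compressor(data: bytes, bandwidth_bps: float) -> str:
    """Pick the method minimizing compress time + transmit time.

    bandwidth_bps is the currently measured link bandwidth; a real
    system would also fold in CPU load and decompression cost.
    """
    best_name, best_cost = "none", float("inf")
    for name, compress in COMPRESSORS.items():
        start = time.perf_counter()
        compressed = compress(data)
        compress_time = time.perf_counter() - start
        transmit_time = len(compressed) * 8 / bandwidth_bps
        cost = compress_time + transmit_time
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

sample = os.urandom(1024) * 64  # stand-in for application data
print(pick_compressor(sample, bandwidth_bps=1e6))   # slow link: compress harder
print(pick_compressor(sample, bandwidth_bps=1e10))  # fast link: compress less
```

On a slow link the transmit term dominates, so the heavier codecs win; on a fast link the compress term dominates and weak or no compression is chosen, which is the adaptive behavior described above.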


IEEE Transactions on Biomedical Engineering | 2010

An Integrative Approach for In Silico Glioma Research

Lee A. D. Cooper; Jun Kong; David A. Gutman; Fusheng Wang; Sharath R. Cholleti; Tony Pan; Patrick M. Widener; Ashish Sharma; Tom Mikkelsen; Adam E. Flanders; Daniel L. Rubin; Erwin G. Van Meir; Tahsin M. Kurç; Carlos S. Moreno; Daniel J. Brat; Joel H. Saltz

The integration of imaging and genomic data is critical to forming a better understanding of disease. Large public datasets, such as The Cancer Genome Atlas, present a unique opportunity to integrate these complementary data types for in silico scientific research. In this letter, we focus on the aspect of pathology image analysis and illustrate the challenges associated with analyzing and integrating large-scale image datasets with molecular characterizations. We present an example study of diffuse glioma brain tumors, where the morphometric analysis of 81 million nuclei is integrated with clinically relevant transcriptomic and genomic characterizations of glioblastoma tumors. The preliminary results demonstrate the potential of combining morphometric and molecular characterizations for in silico research.
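The integration step can be pictured as joining per-patient aggregates of nuclear morphometry with per-patient molecular labels. A hypothetical sketch with invented column names follows; the actual study works at far larger scale and with much richer features.

```python
import pandas as pd

# Hypothetical per-nucleus morphometric features extracted from
# pathology images (millions of rows in the real study).
nuclei = pd.DataFrame({
    "patient_id": ["P1", "P1", "P2", "P2"],
    "nucleus_area": [55.2, 61.8, 40.1, 43.7],
    "eccentricity": [0.81, 0.77, 0.62, 0.65],
})

# Hypothetical molecular characterizations per patient, e.g. a
# transcriptomic subtype label from TCGA-style data.
molecular = pd.DataFrame({
    "patient_id": ["P1", "P2"],
    "subtype": ["Mesenchymal", "Proneural"],
})

# Aggregate morphometry to the patient level, then join with the
# molecular labels so the two data types can be analyzed together.
per_patient = nuclei.groupby("patient_id").mean().reset_index()
integrated = per_patient.merge(molecular, on="patient_id")
print(integrated)
```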


International Conference on Autonomic Computing | 2006

Implementing Diverse Messaging Models with Self-Managing Properties using IFLOW

Vibhore Kumar; Zhongtang Cai; Brian F. Cooper; Greg Eisenhauer; Karsten Schwan; Mohamed S. Mansour; Balasubramanian Seshasayee; Patrick M. Widener

Implementing self-management is hard, especially when building large-scale distributed systems. Publish/subscribe middleware, scientific visualization and collaboration tools, and corporate operational information systems are examples of one class of systems, distributed information-flow infrastructures, that could benefit from self-management. This paper presents IFLOW, an autonomic middleware for implementing these different distributed systems in a self-managing way. IFLOW reduces different messaging models to a common information-flow abstraction, creates a self-managing implementation of that abstraction, and then provides a substrate for building diverse information-flow systems. We describe the design and implementation of IFLOW and present case studies of implementing different messaging models as self-managing systems.
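A toy rendering of the common information-flow abstraction: a graph of nodes that transform and forward items, onto which a messaging model such as publish/subscribe can be mapped. All names here are invented for illustration; this is not the IFLOW API.

```python
from typing import Any, Callable

class FlowNode:
    """A node in an information-flow graph: receives items, applies a
    transform, and forwards the result to downstream nodes."""
    def __init__(self, transform: Callable[[Any], Any] = lambda x: x):
        self.transform = transform
        self.downstream: list["FlowNode"] = []

    def connect(self, node: "FlowNode") -> "FlowNode":
        self.downstream.append(node)
        return node

    def push(self, item: Any) -> None:
        out = self.transform(item)
        if out is not None:  # None means "dropped by this node"
            for node in self.downstream:
                node.push(out)

# Publish/subscribe reduced to the flow abstraction: a topic is a
# source node, and a subscription is a filter node plus a sink.
topic = FlowNode()
subscriber = FlowNode(lambda msg: print("got:", msg))
topic.connect(FlowNode(lambda m: m if m["topic"] == "cpu" else None)) \
     .connect(subscriber)

topic.push({"topic": "cpu", "load": 0.9})   # delivered
topic.push({"topic": "disk", "util": 0.2})  # filtered out
```

A self-managing layer could then migrate or replicate such nodes across machines without the messaging model above noticing, which is the substrate role the abstract assigns to IFLOW.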


High Performance Distributed Computing | 2001

Open metadata formats: efficient XML-based communication for high performance computing

Patrick M. Widener; Greg Eisenhauer; Karsten Schwan

High-performance computing faces considerable change as the Internet and the Grid mature. Applications that once were tightly coupled and monolithic are now decentralized, with collaborating components spread across diverse computational elements. Such distributed systems most commonly communicate through the exchange of structured data. Definition and translation of metadata are incorporated in all systems that exchange structured data. We observe that the manipulation of this metadata can be decomposed into three separate steps: discovery, binding of program objects to the metadata, and marshaling of data to and from wire formats. We have designed a method of representing message formats in XML, using datatypes available in the XML Schema specification. We have implemented a tool, XMIT, that uses such metadata and exploits this decomposition in order to provide flexible run-time metadata definition facilities for an efficient binary communication mechanism. We also demonstrate that XMIT provides this flexibility at little performance cost.
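The three-step decomposition (discovery, binding, marshaling) can be illustrated with a toy version, assuming an invented schema and field names; XMIT's real metadata handling and wire format are more sophisticated than this sketch.

```python
import struct
import xml.etree.ElementTree as ET

# Step 1: discovery. A message format described in XML, in the
# spirit of (but much simpler than) an XML Schema definition.
FORMAT_XML = """
<message name="particle">
  <field name="id" type="int32"/>
  <field name="x"  type="float64"/>
  <field name="y"  type="float64"/>
</message>
"""

# Mapping from schema datatypes to native binary struct codes.
TYPE_CODES = {"int32": "i", "float64": "d"}

def bind(format_xml: str):
    """Step 2: binding. Turn the metadata into a concrete native
    layout (a struct format string plus field order)."""
    root = ET.fromstring(format_xml)
    names = [f.get("name") for f in root.findall("field")]
    codes = "".join(TYPE_CODES[f.get("type")] for f in root.findall("field"))
    return names, struct.Struct("<" + codes)

def marshal(record: dict, names, layout: struct.Struct) -> bytes:
    """Step 3: marshaling. Pack program data into the wire format."""
    return layout.pack(*(record[n] for n in names))

names, layout = bind(FORMAT_XML)
wire = marshal({"id": 7, "x": 1.5, "y": -2.0}, names, layout)
print(dict(zip(names, layout.unpack(wire))))  # round-trips the record
```

The key point the paper makes is that the XML only describes formats; the data itself travels in an efficient binary encoding, so the flexibility costs little at run time.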


International Parallel and Distributed Processing Symposium | 2012

Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

George Teodoro; Tahsin M. Kurç; Tony Pan; Lee A. D. Cooper; Jun Kong; Patrick M. Widener; Joel H. Saltz

The past decade has witnessed a major paradigm shift in high-performance computing with the introduction of accelerators as general-purpose processors. These computing devices offer very high parallel computing power at low cost and power consumption, transforming current high-performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either the GPU or the CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance-aware scheduling technique, along with optimizations, to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large-scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches.
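A minimal sketch of performance-aware co-scheduling under invented cost estimates: each task goes to whichever device is predicted to finish it soonest, given what is already queued there. This greedy heuristic is one plausible reading of the idea, not the paper's actual scheduler.

```python
# Estimated cost of each feature-computation task on each device.
# A real scheduler would measure these online; here they are invented.
tasks = [
    {"name": "color_deconv", "cpu_s": 4.0, "gpu_s": 0.5},
    {"name": "segmentation", "cpu_s": 9.0, "gpu_s": 1.0},
    {"name": "texture_glcm", "cpu_s": 2.0, "gpu_s": 1.8},  # little GPU benefit
    {"name": "morphometry",  "cpu_s": 3.0, "gpu_s": 2.5},
]

def co_schedule(tasks):
    """Greedy performance-aware assignment: place each task on the
    device with the earliest predicted completion time."""
    finish = {"cpu": 0.0, "gpu": 0.0}  # when each device frees up
    plan = []
    # Scheduling the longest tasks first tends to balance the queues.
    for t in sorted(tasks, key=lambda t: -max(t["cpu_s"], t["gpu_s"])):
        done_cpu = finish["cpu"] + t["cpu_s"]
        done_gpu = finish["gpu"] + t["gpu_s"]
        device = "gpu" if done_gpu <= done_cpu else "cpu"
        finish[device] = min(done_cpu, done_gpu)
        plan.append((t["name"], device))
    return plan, max(finish.values())

plan, makespan = co_schedule(tasks)
print(plan)
print("makespan:", makespan)  # 3.3s here vs. 18.0s CPU-only or 5.8s GPU-only
```

Note how the GPU-indifferent task (texture_glcm here) can land on the CPU while the GPU works through the tasks it accelerates most, which is the collaborative use the abstract argues for.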


Conference on High Performance Computing (Supercomputing) | 2002

Scalable Directory Services Using Proactivity

Fabián E. Bustamante; Patrick M. Widener; Karsten Schwan

Common to computational grids and pervasive computing is the need for an expressive, efficient, and scalable directory service that provides information about objects in the environment. We argue that a directory interface that ‘pushes’ information to clients about changes to objects can significantly improve scalability. This paper describes the design, implementation, and evaluation of the Proactive Directory Service (PDS). PDS’ interface supports a customizable ‘proactive’ mode through which clients can subscribe to be notified about changes to their objects of interest. Clients can dynamically tune the detail and granularity of these notifications through filter functions instantiated at the server or at the object’s owner, and by remotely tuning the functionality of those filters. We compare PDS’ performance against off-the-shelf implementations of DNS and the Lightweight Directory Access Protocol. Our evaluation results confirm the expected performance advantages of this approach and demonstrate that customized notification through filter functions can reduce bandwidth utilization while improving the performance of both clients and directory servers.
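The proactive interface can be sketched as subscribe-with-filter: a client registers interest in an object together with a filter function, and the directory pushes only matching, appropriately detailed updates. A hypothetical in-process sketch follows; PDS instantiates filters at the server or at the object's owner and supports remote retuning, none of which is modeled here.

```python
from typing import Any, Callable

Filter = Callable[[dict], Any]

class ProactiveDirectory:
    """Toy directory that pushes change notifications through
    per-subscription filter functions."""
    def __init__(self):
        self.objects: dict[str, dict] = {}
        self.subs: list[tuple[str, Filter, Callable[[Any], None]]] = []

    def subscribe(self, name: str, flt: Filter,
                  notify: Callable[[Any], None]) -> None:
        self.subs.append((name, flt, notify))

    def update(self, name: str, attrs: dict) -> None:
        self.objects.setdefault(name, {}).update(attrs)
        for sub_name, flt, notify in self.subs:
            if sub_name == name:
                detail = flt(self.objects[name])  # tune detail/granularity
                if detail is not None:
                    notify(detail)

d = ProactiveDirectory()
# This subscriber only cares about load, and only when it is high.
d.subscribe("node42",
            flt=lambda o: o["load"] if o.get("load", 0) > 0.8 else None,
            notify=lambda v: print("node42 load alert:", v))
d.update("node42", {"load": 0.3})  # filtered: no notification sent
d.update("node42", {"load": 0.9})  # pushed to the client
```

Because the filter runs before anything is sent, uninteresting changes never cross the network, which is where the bandwidth savings reported above come from.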


International Conference on Cluster Computing | 2006

Efficient Data-Movement for Lightweight I/O

Ron A. Oldfield; Patrick M. Widener; Arthur B. Maccabe; Lee Ward; Todd Kordenbrock

Efficient data movement is an important part of any high-performance I/O system, but it is especially critical for the current and next generation of massively parallel processing (MPP) systems. In this paper, we discuss how the scale, architecture, and organization of current and proposed MPP systems impact the design of the data-movement scheme for the I/O system. We also describe and analyze the approach used by the lightweight file systems (LWFS) project, and we compare that approach to more conventional data-movement protocols used by small and mid-range clusters. Our results indicate that the data-movement strategy used by LWFS clearly outperforms conventional data-movement protocols, particularly as data sizes increase.
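One way to picture the design space is peak server-side buffering under eager client push (every payload arrives with its request) versus server-directed pull (clients send small descriptors and the server fetches payloads within a fixed buffer budget, as an RDMA get would on an MPP interconnect). The following is a hypothetical back-of-the-envelope model, not the LWFS protocol itself.

```python
class IOServer:
    """Toy model contrasting two data-movement schemes by the peak
    memory they force the I/O server to dedicate to incoming data."""

    def __init__(self, buffer_bytes: int):
        self.buffer_bytes = buffer_bytes

    def eager_push_peak(self, payloads: list[bytes]) -> int:
        # Clients ship data with their requests, so the server must
        # buffer everything that arrives at once.
        return sum(len(p) for p in payloads)

    def server_pull_peak(self, descriptor_sizes: list[int]) -> int:
        # The server pulls data chunk by chunk, so buffering never
        # exceeds its own budget regardless of client behavior.
        peak = 0
        for size in descriptor_sizes:
            peak = max(peak, min(size, self.buffer_bytes))
        return peak

srv = IOServer(buffer_bytes=64 * 1024)
payloads = [b"x" * (1 << 20)] * 8  # eight 1 MiB client writes
print("eager push peak:", srv.eager_push_peak(payloads))                    # 8 MiB
print("server pull peak:", srv.server_pull_peak([len(p) for p in payloads]))  # 64 KiB
```

Keeping server-side buffering bounded matters most at MPP scale, where thousands of clients can otherwise overwhelm a handful of I/O nodes.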


Workshop on Hot Topics in Operating Systems | 2001

Active Streams - an approach to adaptive distributed systems

Fabián E. Bustamante; Greg Eisenhauer; Patrick M. Widener; Karsten Schwan; Calton Pu

An increasing number of distributed applications aim to provide services to users by interacting with a correspondingly growing set of data-intensive network services. To support such requirements, we believe that new services need to be customizable, applications need to be dynamically extensible, and both applications and services need to be able to adapt to variations in resource availability and demand. A comprehensive approach to building new distributed applications can facilitate this by considering the contents of the information flowing across the application and its services and by adopting a component-based model of application/service programming. It should provide for dynamic adaptation at multiple levels and points in the underlying platform; and, since the mapping of components to resources in dynamic environments is too complicated, it should relieve programmers of this task. We propose Active Streams, a middleware approach and its associated framework for building distributed applications and services that exhibit these characteristics.
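The component-based model can be sketched as small operators attached to a data stream, composable and replaceable at run time. This is an illustrative toy with invented names, not the Active Streams API.

```python
from typing import Any, Callable, Iterable

StreamOp = Callable[[Any], Any]

def apply_ops(stream: Iterable[Any], ops: list[StreamOp]):
    """Push each stream item through a chain of small components.
    Because the chain is just data, it can be extended, reordered,
    or re-mapped to different hosts while the application runs."""
    for item in stream:
        for op in ops:
            item = op(item)
            if item is None:
                break  # dropped by a filter component
        if item is not None:
            yield item

readings = [{"temp": 21.0}, {"temp": 35.5}, {"temp": 22.3}]
chain = [
    lambda r: r if r["temp"] > 30 else None,  # filter under load
    lambda r: {**r, "alert": True},           # enrich the remainder
]
print(list(apply_ops(readings, chain)))  # [{'temp': 35.5, 'alert': True}]
```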


Proceedings of the IEEE | 2012

Digital Pathology: Data-Intensive Frontier in Medical Imaging

Lee A. D. Cooper; Alexis B. Carter; Alton B. Farris; Fusheng Wang; Jun Kong; David A. Gutman; Patrick M. Widener; Tony Pan; Sharath R. Cholleti; Ashish Sharma; Tahsin M. Kurç; Daniel J. Brat; Joel H. Saltz

Pathology is a medical subspecialty that practices the diagnosis of disease. Microscopic examination of tissue reveals information enabling the pathologist to render accurate diagnoses and to guide therapy. The basic process by which anatomic pathologists render diagnoses has remained relatively unchanged over the last century, yet advances in information technology now offer significant opportunities in image-based diagnostic and research applications. Pathology has lagged behind other healthcare practices such as radiology, where digital adoption is widespread. As devices that generate whole slide images become more practical and affordable, practices will increasingly adopt this technology and eventually produce an explosion of data that will quickly eclipse the already vast quantities of radiology imaging data. These advances are accompanied by significant challenges for data management and storage, but they also introduce new opportunities to improve patient care by streamlining and standardizing diagnostic approaches and uncovering disease mechanisms. Computer-based image analysis is already available in commercial diagnostic systems, but further advances in image analysis algorithms are warranted in order to fully realize the benefits of digital pathology in medical discovery and patient care. In coming decades, pathology image analysis will extend beyond the streamlining of diagnostic workflows and minimizing interobserver variability and will begin to provide diagnostic assistance, identify therapeutic targets, and predict patient outcomes and therapeutic responses.


Conference on Multimedia Computing and Networking | 2007

CameraCast: Flexible Access to Remote Video Sensors

Jiantao Kong; Ivan B. Ganev; Karsten Schwan; Patrick M. Widener

New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level code, such as web-based solutions, to provide such access. This requires adherence to the user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third-party service code. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can operate identically on local and remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine-grained control over the information made available to specific code or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.
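The two API ideas can be sketched together: a logical device whose read() hides whether the sensor is local or remote, and capability objects that determine how much of each frame a given consumer may see. All names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    pixels: bytes
    location: str  # metadata a capability may redact

class Capability:
    """Grants a consumer a specific view of the video data; byte
    truncation stands in for fine-grained differential protection."""
    def __init__(self, may_see_location: bool, max_bytes: int):
        self.may_see_location = may_see_location
        self.max_bytes = max_bytes

    def apply(self, frame: Frame) -> Frame:
        return Frame(
            camera_id=frame.camera_id,
            pixels=frame.pixels[: self.max_bytes],  # degrade detail
            location=frame.location if self.may_see_location else "redacted",
        )

class LogicalCamera:
    """Same read() call whether the sensor is local or remote; the
    transport behind fetch() is hidden from the application."""
    def __init__(self, fetch):
        self.fetch = fetch

    def read(self, cap: Capability) -> Frame:
        return cap.apply(self.fetch())

cam = LogicalCamera(lambda: Frame("cam0", b"\x00" * 4096, "Lab 3"))
guest = Capability(may_see_location=False, max_bytes=1024)
frame = cam.read(guest)
print(len(frame.pixels), frame.location)  # 1024 redacted
```

Enforcing the capability in the kernel, as the abstract describes, is what lets protection hold even when the consuming code itself is untrusted.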

Collaboration


Dive into Patrick M. Widener's collaborations.

Top Co-Authors

Karsten Schwan, Georgia Institute of Technology
Scott Levy, University of New Mexico
Kurt Brian Ferreira, Sandia National Laboratories
Greg Eisenhauer, Georgia Institute of Technology
Ron A. Oldfield, Oak Ridge National Laboratory