Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Syam Gadde is active.

Publication


Featured research published by Syam Gadde.


Workshop on Hot Topics in Operating Systems | 1997

Reduce, reuse, recycle: an approach to building large Internet caches

Syam Gadde; Michael Rabinovich; Jeffrey S. Chase

New demands brought by the continuing growth of the Internet will be met in part by more effective use of caching in the Web and other services. We have developed CRISP, a distributed Internet object cache targeted to the needs of the organizations that aggregate the end users of Internet services, particularly the commercial Internet Service Providers (ISPs) where much of the new growth occurs. A CRISP cache consists of a group of cooperating caching servers sharing a central directory of cached objects. This simple and obvious strategy is easily overlooked due to the well-known drawbacks of a centralized structure. However, we show that these drawbacks are easily overcome for well-configured CRISP caches. We outline the rationale behind the CRISP design, and report on early studies of CRISP caches in actual use and under synthetic load. While our experience with CRISP to date is at the scale of hundreds or thousands of clients, CRISP caches could be deployed to maximize capacity at any level of a regional or global cache hierarchy.
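The central-directory idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the actual CRISP implementation: the class names, the in-process directory, and the `origin_fetch` callback are all invented for the example; real CRISP servers communicate over the network.

```python
# Toy sketch of a CRISP-style cooperative cache: nodes share a central
# directory mapping object keys to the node that currently holds them.

class CentralDirectory:
    """Maps object keys to the cache node holding them."""
    def __init__(self):
        self._where = {}

    def lookup(self, key):
        return self._where.get(key)

    def register(self, key, node):
        self._where[key] = node


class CacheNode:
    def __init__(self, name, directory, origin_fetch):
        self.name = name
        self.directory = directory
        self.origin_fetch = origin_fetch  # fallback to the origin server
        self.store = {}

    def get(self, key):
        if key in self.store:              # local hit
            return self.store[key]
        holder = self.directory.lookup(key)
        if holder is not None:             # remote hit, found via directory
            value = holder.store[key]
        else:                              # global miss: go to the origin
            value = self.origin_fetch(key)
        self.store[key] = value
        self.directory.register(key, self)
        return value


# Two cooperating nodes; only the first request reaches the origin.
directory = CentralDirectory()
origin_fetches = []

def origin(key):
    origin_fetches.append(key)
    return "body:" + key

node_a = CacheNode("a", directory, origin)
node_b = CacheNode("b", directory, origin)
node_a.get("/index.html")  # miss everywhere: fetched from the origin
node_b.get("/index.html")  # served from node_a via the central directory
```

The point the paper makes is that this lookup is a single round trip to the directory, regardless of how many nodes cooperate.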


Computer Networks and ISDN Systems | 1998

Not all hits are created equal: cooperative proxy caching over a wide-area network

Michael Rabinovich; Jeffrey S. Chase; Syam Gadde

Given the benefits of sharing a cache among large user populations, Internet service providers will likely enter into peering agreements to share their caches. This position paper describes an approach for inter-proxy cooperation in this environment. While existing cooperation models focus on maximizing global hit ratios, the value of a cache hit in this environment depends on the peering agreements and the access latency of the various proxies. It may well be that obtaining an object directly from the Internet is less expensive and faster than fetching it from a distant cache. Our approach takes advantage of these distinctions to reduce the overhead of locating objects in the global cache.
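The core decision the abstract describes, that a distant cache hit can be worse than a direct origin fetch, reduces to a cost comparison. A minimal sketch, with costs and helper names invented for illustration (the paper does not specify this interface):

```python
# Pick the cheapest source for an object among peer caches and the origin,
# rather than maximizing the global hit ratio. Costs are assumed estimates
# (e.g. round-trip latency in milliseconds), not values from the paper.

def best_source(obj, peers, origin_cost):
    """peers: {name: (has_object_fn, estimated_cost)}. Returns (name, cost)."""
    candidates = [("origin", origin_cost)]
    for name, (has_obj, cost) in peers.items():
        if has_obj(obj):
            candidates.append((name, cost))
    return min(candidates, key=lambda c: c[1])

peers = {
    "nearby-peer":  (lambda o: o == "/a", 20),   # close, holds only /a
    "distant-peer": (lambda o: True,      300),  # holds everything, but far
}

# A hit at the distant peer costs more than going straight to the origin:
assert best_source("/b", peers, origin_cost=120) == ("origin", 120)
# A hit at the nearby peer is worth taking:
assert best_source("/a", peers, origin_cost=120) == ("nearby-peer", 20)
```

This is exactly the "not all hits are created equal" distinction: the second lookup is a useful hit, the first is a hit the paper's approach deliberately ignores.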


Journal of Magnetic Resonance Imaging | 2012

Function biomedical informatics research network recommendations for prospective multicenter functional MRI studies.

Gary H. Glover; Bryon A. Mueller; Jessica A. Turner; Theo G.M. van Erp; Thomas T. Liu; Douglas N. Greve; James T. Voyvodic; Jerod Rasmussen; Gregory G. Brown; David B. Keator; Vince D. Calhoun; Hyo Jong Lee; Judith M. Ford; Daniel H. Mathalon; Michele T. Diaz; Daniel S. O'Leary; Syam Gadde; Adrian Preda; Kelvin O. Lim; Cynthia G. Wible; Hal S. Stern; Aysenil Belger; Gregory McCarthy; Steven G. Potkin

This report provides practical recommendations for the design and execution of multicenter functional MRI (MC‐fMRI) studies based on the collective experience of the Function Biomedical Informatics Research Network (FBIRN). The study was inspired by many requests from the fMRI community to FBIRN group members for advice on how to conduct MC‐fMRI studies. The introduction briefly discusses the advantages and complexities of MC‐fMRI studies. Prerequisites for MC‐fMRI studies are addressed before delving into the practical aspects of carefully and efficiently setting up a MC‐fMRI study. Practical multisite aspects include: (i) establishing and verifying scan parameters including scanner types and magnetic fields, (ii) establishing and monitoring of a scanner quality program, (iii) developing task paradigms and scan session documentation, (iv) establishing clinical and scanner training to ensure consistency over time, (v) developing means for uploading, storing, and monitoring of imaging and other data, (vi) the use of a traveling fMRI expert, and (vii) collectively analyzing imaging data and disseminating results. We conclude that when MC‐fMRI studies are organized well with careful attention to unification of hardware, software and procedural aspects, the process can be a highly effective means for accessing a desired participant demographic while accelerating scientific discovery. J. Magn. Reson. Imaging 2012;36:39–54.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

A National Human Neuroimaging Collaboratory Enabled by the Biomedical Informatics Research Network (BIRN)

David B. Keator; Jeffrey S. Grethe; Daniel S. Marcus; Syam Gadde; Sean Murphy; Steven D. Pieper; Douglas N. Greve; Randy Notestine; Henry J. Bockholt; Philip M. Papadopoulos

The aggregation of imaging, clinical, and behavioral data from multiple independent institutions and researchers presents both a great opportunity for biomedical research and a formidable challenge. Many research groups have well-established data collection and analysis procedures, as well as data and metadata format requirements that are particular to that group. Moreover, the types of data and metadata collected are quite diverse, including image, physiological, and behavioral data, as well as descriptions of experimental design, and preprocessing and analysis methods. Each of these types of data utilizes a variety of software tools for collection, storage, and processing. Furthermore, sites are reluctant to release control over the distribution of, and access to, the data and the tools. To address these needs, the Biomedical Informatics Research Network (BIRN) has developed a federated and distributed infrastructure for the storage, retrieval, analysis, and documentation of biomedical imaging data. The infrastructure consists of distributed data collections hosted on dedicated storage and computational resources located at each participating site, a federated data management system and data integration environment, an extensible markup language (XML) schema for data exchange, and analysis pipelines designed to leverage both the distributed data management environment and the available grid computing resources.


Computer Communications | 2002

The Trickle-Down Effect: Web Caching and Server Request Distribution

Ronald P. Doyle; Jeffrey S. Chase; Syam Gadde; Amin Vahdat

Web proxies and Content Delivery Networks (CDNs) are widely used to accelerate Web content delivery and to conserve Internet bandwidth. These caching agents are highly effective for static content, which is an important component of all Web-based services. This paper explores the effect of ubiquitous Web caching on the request patterns seen by other components of an end-to-end content delivery architecture, including Web server clusters and interior caches. In particular, object popularity distributions in the Web tend to be Zipf-like, but caches disproportionately absorb requests for the most popular objects, changing the reference properties of the filtered request stream in fundamental ways. We call this the trickle-down effect. This paper uses trace-driven simulation and synthetic traffic patterns to illustrate the trickle-down effect and to investigate its impact on other components of a content delivery architecture, focusing on the implications for request distribution strategies in server clusters.
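The trickle-down effect described above can be reproduced with a toy simulation in the spirit of the paper's synthetic workloads. All parameters here (catalog size, Zipf exponent, cache size) are assumed for illustration, not taken from the paper:

```python
# Toy simulation of the trickle-down effect: an edge LRU cache absorbs
# requests for the most popular objects in a Zipf-like stream, so the
# miss stream seen by interior servers is far less skewed.

import random
from collections import Counter, OrderedDict

random.seed(0)
N, ALPHA, REQUESTS, CACHE_SIZE = 1000, 1.0, 50_000, 100

# Zipf-like popularity: P(object i) proportional to 1 / i**ALPHA
weights = [1 / (i + 1) ** ALPHA for i in range(N)]
stream = random.choices(range(N), weights=weights, k=REQUESTS)

cache = OrderedDict()   # LRU cache at the edge
misses = []             # the filtered stream seen behind the cache
for obj in stream:
    if obj in cache:
        cache.move_to_end(obj)          # hit: refresh LRU position
    else:
        misses.append(obj)              # miss: passes through to interior
        cache[obj] = True
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)   # evict least recently used

def top_share(reqs, k=10):
    """Fraction of requests going to the k most popular objects."""
    return sum(c for _, c in Counter(reqs).most_common(k)) / len(reqs)

# Popular objects dominate the incoming stream but not the miss stream.
print(top_share(stream), top_share(misses))
```

Comparing the two shares shows why request-distribution strategies tuned for Zipf-like traffic behave differently behind a cache: the most popular objects have largely been stripped out of the stream the interior components see.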


NeuroImage | 2013

Towards structured sharing of raw and derived neuroimaging data across existing resources

David B. Keator; Karl G. Helmer; Jason Steffener; Jessica A. Turner; Theo G.M. van Erp; Syam Gadde; Naveen Ashish; Gully A. P. C. Burns; B. N. Nichols

Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases, and there is currently neither an integrated access mechanism nor an accepted format for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF) focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data as well as associated meta-data and provenance across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery.


Neuroinformatics | 2012

XCEDE: an extensible schema for biomedical data.

Syam Gadde; Nicole Aucoin; Jeffrey S. Grethe; David B. Keator; Daniel S. Marcus; Steve Pieper; FBIRN, MBIRN, BIRN-CC

The XCEDE (XML-based Clinical and Experimental Data Exchange) XML schema, developed by members of the BIRN (Biomedical Informatics Research Network), provides an extensive metadata hierarchy for storing, describing and documenting the data generated by scientific studies. Currently at version 2.0, the XCEDE schema serves as a specification for the exchange of scientific data between databases, analysis tools, and web services. It provides a structured metadata hierarchy, storing information relevant to various aspects of an experiment (project, subject, protocol, etc.). Each hierarchy level also provides for the storage of data provenance information allowing for a traceable record of processing and/or changes to the underlying data. The schema is extensible to support the needs of various data modalities and to express types of data not originally envisioned by the developers. The latest version of the XCEDE schema and manual are available from http://www.xcede.org/.
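The metadata hierarchy the abstract describes (project, subject, protocol, with provenance at each level) can be illustrated with the standard library. The element and attribute names below are simplified inventions for the example and are not the actual XCEDE 2.0 schema; the real schema is available from the URL above.

```python
# Illustrative only: a simplified project/subject/protocol hierarchy with a
# provenance record, in the spirit of XCEDE, built with xml.etree.
# Element names are invented for this sketch, not taken from the schema.

import xml.etree.ElementTree as ET

project = ET.Element("project", id="demo-project")
subject = ET.SubElement(project, "subject", id="sub-001")
protocol = ET.SubElement(subject, "protocol", id="fmri-task")

# Each level can carry provenance: a traceable record of processing steps.
prov = ET.SubElement(protocol, "provenance")
step = ET.SubElement(prov, "processStep")
ET.SubElement(step, "program").text = "motion-correction"
ET.SubElement(step, "version").text = "1.2"

xml_text = ET.tostring(project, encoding="unicode")
print(xml_text)
```

The nesting mirrors the point made in the abstract: because provenance lives inside each hierarchy level, a consumer can trace how the data at that level was produced without consulting an external log.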


Neuroinformatics | 2006

A general XML schema and SPM toolbox for storage of neuro-imaging results and anatomical labels

David B. Keator; Syam Gadde; Jeffrey S. Grethe; D. Taylor; Steven G. Potkin; First BIRN

With the increased frequency of multisite, large-scale collaborative neuro-imaging studies, the need for a general, self-documenting framework for the storage and retrieval of activation maps and anatomical labels becomes evident. To address this need, we have developed an extensible markup language (XML) schema and associated tools for the storage of neuro-imaging activation maps and anatomical labels. This schema, as part of the XML-based Clinical Experiment Data Exchange (XCEDE) schema, provides storage capabilities for analysis annotations, activation threshold parameters, and cluster- and voxel-level statistics. Activation parameters contain information describing the threshold, degrees of freedom, FWHM smoothness, search volumes, voxel sizes, expected voxels per cluster, and expected number of clusters in the statistical map. Cluster and voxel statistics can be stored along with the coordinates, threshold, and anatomical label information. Multiple threshold types can be documented for a given cluster or voxel along with the uncorrected and corrected probability values. Multiple atlases can be used to generate anatomical labels and stored for each significant voxel or cluster. Additionally, a toolbox for Statistical Parametric Mapping software (http://www.fil.ion.ucl.ac.uk/spm/) was created to capture the results from activation maps using the XML schema that supports both the SPM99 and SPM2 versions (http://www.nbirn.net/Resources/Users/Applications/xcede/SPM_XMLTools.htm). Support for anatomical labeling is available via the Talairach Daemon (http://ric.uthscsa.edu/projects/talairachdaemon.html) and Automated Anatomical Labeling (http://www.cyceron.fr/freeware/).
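The kind of record such a schema captures can be sketched as plain data structures. All field names here are hypothetical, chosen only to mirror the quantities the abstract lists (threshold, degrees of freedom, FWHM smoothness, cluster statistics with labels and corrected/uncorrected p-values); they are not the toolbox's actual format.

```python
# Hypothetical sketch of an activation-map record with per-cluster
# statistics; field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class ClusterStat:
    label: str            # anatomical label, e.g. from an atlas lookup
    peak_coord: tuple     # (x, y, z) peak coordinate in mm
    p_uncorrected: float
    p_corrected: float

@dataclass
class ActivationMap:
    threshold: float            # statistical threshold applied
    degrees_of_freedom: int
    fwhm_mm: tuple              # smoothness estimate per axis
    clusters: list = field(default_factory=list)

amap = ActivationMap(threshold=3.1, degrees_of_freedom=40, fwhm_mm=(8, 8, 8))
amap.clusters.append(
    ClusterStat("hippocampus", (30, -24, -12), 1e-4, 0.01))
```

Serializing such a record to the XCEDE XML hierarchy is what makes the framework self-documenting: the analysis parameters travel with the statistics they produced.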


Computer Communications | 2001

Web caching and content distribution: a view from the interior

Syam Gadde; Jeffrey S. Chase; Michael Rabinovich

Research in Web caching has yielded analytical tools to model the behavior of large-scale Web caches. Recently, Wolman et al. (Proceedings of the 17th ACM Symposium on Operating Systems Principles, December 1999) have proposed an analytical model and used it to evaluate the potential of cooperative Web proxy caching for large populations. This paper shows how to apply the Wolman model to study the behavior of interior cache servers in multi-level caching systems. Focusing on interior caches gives a different perspective on the model's implications, and it allows three new uses of the model. First, we apply the model to large-scale caching systems in which the interior nodes belong to third-party content distribution services. Second, we explore the effectiveness of content distribution services as conventional Web proxy caching becomes more prevalent. Finally, we correlate the model's predictions of interior cache behavior with empirical observations from the root caches of the NLANR cache hierarchy.


Frontiers in Neuroinformatics | 2009

Derived data storage and exchange workflow for large-scale neuroimaging analyses on the BIRN grid

David B. Keator; Dingying Wei; Syam Gadde; H. Jeremy Bockholt; Jeffrey S. Grethe; Daniel S. Marcus; Nicole Aucoin; Ibrahim Burak Ozyurt

Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an ever increasing rate. The need to store and exchange data in meaningful ways in support of data analysis, hypothesis testing and future collaborative use is pervasive. Because trans-disciplinary projects rely on effective use of data from many domains, there is genuine interest in the informatics community in how best to store and combine this data while maintaining a high level of data quality and documentation. The difficulties in sharing and combining raw data become amplified after post-processing and/or data analysis, in which the new dataset of interest is a function of the original data and may have been collected by multiple collaborating sites. Simple meta-data, documenting which subject and version of data were used for a particular analysis, becomes complicated by the heterogeneity of the collecting sites, yet is critically important to the interpretation and reuse of derived results. This manuscript presents a case study of using the XML-Based Clinical Experiment Data Exchange (XCEDE) schema and the Human Imaging Database (HID) in the Biomedical Informatics Research Network's (BIRN) distributed environment to document and exchange derived data. The discussion includes an overview of the data structures used in both the XML and the database representations, insight into the design considerations, and the extensibility of the design to support additional analysis streams.

Collaboration


Dive into Syam Gadde's collaboration.

Top Co-Authors

Aysenil Belger

University of North Carolina at Chapel Hill

Michele T. Diaz

Pennsylvania State University
