
Publication


Featured research published by Chander Sehgal.


Journal of Grid Computing | 2011

A Science Driven Production Cyberinfrastructure – the Open Science Grid

Mine Altunay; P. Avery; K. Blackburn; Brian Bockelman; M. Ernst; Dan Fraser; Robert Quick; Robert Gardner; Sebastien Goasguen; Tanya Levshina; Miron Livny; John McGee; Doug Olson; R. Pordes; Maxim Potekhin; Abhishek Singh Rana; Alain Roy; Chander Sehgal; I. Sfiligoi; Frank Wuerthwein

This article describes the Open Science Grid, a large distributed computational infrastructure in the United States which supports many different high-throughput scientific applications, and partners (federates) with other infrastructures nationally and internationally to form multi-domain integrated distributed systems for science. The Open Science Grid consortium not only provides services and software to an increasingly diverse set of scientific communities, but also fosters a collaborative team of practitioners and researchers who use, support and advance the state of the art in large-scale distributed computing. The scale of the infrastructure can be expressed by the daily throughput of around seven hundred thousand jobs, just under a million hours of computing, a million file transfers, and half a petabyte of data movement. In this paper we introduce and reflect on some of the OSG capabilities, usage and activities.
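
To put the quoted daily throughput in perspective, the short Python sketch below converts the daily totals from the abstract into sustained rates. It is a back-of-the-envelope illustration based only on the numbers stated above, not figures taken from the paper itself.

    # Rough conversion of the daily throughput figures quoted above into
    # sustained rates; the inputs are the numbers from the abstract, the
    # outputs are simple derived estimates.
    jobs_per_day = 700_000            # "around seven hundred thousand jobs"
    cpu_hours_per_day = 1_000_000     # "just under a million hours of computing"
    data_moved_tb_per_day = 500       # "half a petabyte of data movement"

    core_equivalents = cpu_hours_per_day / 24           # ~41,700 continuously busy cores
    avg_job_hours = cpu_hours_per_day / jobs_per_day     # ~1.4 CPU hours per job
    sustained_gbit_s = data_moved_tb_per_day * 8e12 / 86_400 / 1e9   # ~46 Gbit/s

    print(f"{core_equivalents:,.0f} core-equivalents, "
          f"{avg_job_hours:.1f} h/job, {sustained_gbit_s:.0f} Gbit/s")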


arXiv: High Energy Physics - Experiment | 2013

Snowmass Energy Frontier Simulations using the Open Science Grid: A Snowmass 2013 whitepaper

A. Avetisyan; Saptaparna Bhattacharya; M. Narain; S. Padhi; Jim Hirschauer; Tanya Levshina; Patricia McBride; Chander Sehgal; Marko Slyz; Mats Rynge; Sudhir Malik; J. Stupak

Snowmass is a US long-term planning study for the high-energy physics community by the American Physical Society’s Division of Particles and Fields. For its simulation studies, opportunistic resources are harnessed using the Open Science Grid infrastructure. Late-binding grid technology, GlideinWMS, was used for distributed scheduling of the simulation jobs across many sites, mainly in the US. The pilot infrastructure also uses the Parrot mechanism to dynamically access CVMFS in order to ensure a homogeneous environment across the nodes. This report presents the resource usage and the storage model used for simulating the large-statistics Standard Model backgrounds needed for Snowmass Energy Frontier studies.
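
The Parrot-plus-CVMFS mechanism described above can be illustrated with a minimal wrapper. The sketch below is an assumption-laden illustration, not code from the whitepaper: it assumes the cctools parrot_run utility is available on the worker node, and the repository path and payload command are hypothetical examples.

    # Conceptual illustration of the mechanism described above: run the payload
    # directly if /cvmfs is mounted on the worker node, otherwise interpose
    # Parrot (parrot_run from cctools) so the payload can still read /cvmfs.
    # The repository path and the payload command are hypothetical examples.
    import os
    import subprocess

    def run_with_cvmfs(payload, repo="/cvmfs/cms.cern.ch"):
        """Run `payload` (an argv list), falling back to Parrot if CVMFS is absent."""
        if os.path.isdir(repo):
            # Native CVMFS mount available: run the payload as-is.
            return subprocess.run(payload, check=True)
        # No native mount: let Parrot serve /cvmfs in user space.
        # (Depending on the site, parrot_run may need extra repository
        # configuration; this sketch shows only the basic interposition.)
        return subprocess.run(["parrot_run"] + payload, check=True)

    run_with_cvmfs(["ls", "/cvmfs/cms.cern.ch"])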


arXiv: Computational Physics | 2008

New science on the Open Science Grid

R. Pordes; Mine Altunay; P. Avery; Alina Bejan; K. Blackburn; Alan Blatecky; Robert Gardner; Bill Kramer; Miron Livny; John McGee; Maxim Potekhin; Rob Quick; Doug Olson; Alain Roy; Chander Sehgal; Torre Wenaus; Michael Wilde; F. Würthwein

The Open Science Grid (OSG) includes work to enable new science, new scientists, and new modalities in support of computationally based research. Significant sociological and organizational changes are frequently required in the transformation from existing approaches to new ones. OSG leverages its deliverables to the large-scale physics experiment member communities to benefit new communities at all scales through activities in education, engagement, and the distributed facility. This paper gives both a brief general description and specific examples of new science enabled on the OSG. More information is available at the OSG web site: www.opensciencegrid.org.


Journal of Physics: Conference Series | 2015

The OSG Open Facility: A Sharing Ecosystem

B Jayatilaka; Tanya Levshina; Mats Rynge; Chander Sehgal; M Slyz

The Open Science Grid (OSG) ties together individual experiments’ computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and has increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is accomplished primarily by harvesting and organizing the temporarily unused capacity (i.e. opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues on expanding this service.
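
As a concrete picture of how an individual researcher might hand work to the Open Facility, the sketch below describes a batch of opportunistic jobs with the HTCondor Python bindings. It is a minimal sketch under stated assumptions: the executable and project name are placeholders, and the exact submit attributes expected by a given OSG access point may differ.

    # Minimal sketch: describe and queue a batch of single-core jobs with the
    # HTCondor Python bindings, the submission interface commonly used at OSG
    # access points. Names below (analyze.sh, MyCampusProject) are placeholders.
    import htcondor

    job = htcondor.Submit({
        "executable": "analyze.sh",
        "arguments": "$(Process)",
        "output": "logs/job.$(Process).out",
        "error": "logs/job.$(Process).err",
        "log": "logs/batch.log",
        "request_cpus": "1",
        "request_memory": "2GB",
        "request_disk": "4GB",
        # Project label used to attribute opportunistic usage in accounting.
        "+ProjectName": '"MyCampusProject"',
    })

    schedd = htcondor.Schedd()                 # local access-point scheduler
    result = schedd.submit(job, count=100)     # queue 100 independent jobs
    print("submitted cluster", result.cluster())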


Journal of Physics: Conference Series | 2012

The Open Science Grid – Support for Multi-Disciplinary Team Science – the Adolescent Years

L. A. T. Bauerdick; M. Ernst; Dan Fraser; Miron Livny; R. Pordes; Chander Sehgal; F. Würthwein

As it enters adolescence, the Open Science Grid (OSG) is bringing a maturing fabric of Distributed High Throughput Computing (DHTC) services that supports an expanding HEP community to an increasingly diverse spectrum of domain scientists. Working closely with researchers on campuses throughout the US and in collaboration with national cyberinfrastructure initiatives, we transform their computing environment through new concepts, advanced tools and deep experience. We discuss examples of these, including: the pilot-job overlay concepts and technologies now in use throughout OSG and delivering 1.4 million CPU hours/day; the role of campus infrastructures, built out from concepts of sharing across multiple local faculty clusters (already made good use of by many of the HEP Tier-2 sites in the US); the work towards the use of clouds and access to high-throughput parallel (multi-core and GPU) compute resources; and the progress we are making towards meeting the data management and access needs of non-HEP communities with general tools derived from the experience of the parochial tools in HEP (integration of Globus Online, prototyping with iRODS, investigations into Wide Area Lustre). We also review our activities and experiences as HTC Service Provider to the recently awarded NSF XD XSEDE project, the evolution of the US NSF TeraGrid project, and how we are extending the reach of HTC through this activity to the increasingly broad national cyberinfrastructure. We believe that a coordinated view of the HPC and HTC resources in the US will further expand their impact on scientific discovery.
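
The pilot-job overlay mentioned above (late binding, as in GlideinWMS) can be summarized in a few lines. The sketch below is a conceptual illustration only, not GlideinWMS code: a pilot lands on a borrowed batch slot and repeatedly pulls user payloads from a central queue until none remain, so payloads are bound to resources only after a slot has proven usable. The queue endpoint and payload format are hypothetical.

    # Conceptual sketch of a late-binding pilot: the pilot occupies a batch slot
    # and binds payloads to it at the last moment by pulling work from a central
    # queue. The queue URL and payload format are hypothetical placeholders.
    import json
    import subprocess
    import urllib.request

    QUEUE_URL = "https://example.org/payload-queue"   # placeholder endpoint

    def fetch_payload():
        """Ask the central queue for the next payload; None when drained."""
        with urllib.request.urlopen(QUEUE_URL) as resp:
            return json.load(resp)                    # e.g. {"argv": [...]} or null

    def pilot_main():
        while True:
            task = fetch_payload()
            if not task:                              # queue drained: release the slot
                break
            subprocess.run(task["argv"], check=False) # run the user payload

    if __name__ == "__main__":
        pilot_main()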


Journal of Physics: Conference Series | 2014

Grid accounting service: state and future development

Tanya Levshina; Chander Sehgal; Brian Bockelman; D Weitzel; A Guru

During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyberinfrastructure is an accounting service that collects data on resource utilization and the identities of the users consuming those resources. The accounting service is important for verifying pledged resource allocations for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center (HCC) at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. Current development activities include expanding virtual machine provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify directions for future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.
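
As a simplified picture of what such an accounting service aggregates, the sketch below rolls per-job usage records up into wall hours per VO and user. The record fields and sample values are invented for illustration; Gratia's real records follow a richer usage-record schema.

    # Toy aggregation in the spirit of an accounting service: roll individual
    # job usage records up into wall hours per (VO, user). The record layout
    # and sample values are invented for illustration.
    from collections import defaultdict

    records = [
        {"vo": "cms", "user": "alice", "wall_seconds": 7200},
        {"vo": "cms", "user": "bob",   "wall_seconds": 3600},
        {"vo": "osg", "user": "carol", "wall_seconds": 5400},
    ]

    hours = defaultdict(float)
    for rec in records:
        hours[(rec["vo"], rec["user"])] += rec["wall_seconds"] / 3600.0

    for (vo, user), h in sorted(hours.items()):
        print(f"{vo:5s} {user:8s} {h:6.2f} wall hours")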


Journal of Physics: Conference Series | 2012

Supporting Shared Resource Usage for a Diverse User Community: the OSG Experience and Lessons Learned

G. Garzoglio; Tanya Levshina; Mats Rynge; Chander Sehgal; Marko Slyz


Journal of Physics: Conference Series | 2017

The OSG Open Facility: an on-ramp for opportunistic scientific computing

B Jayatilaka; Tanya Levshina; Chander Sehgal; R Gardner; Mats Rynge; Frank Würthwein


Proceedings of Science | 2014

OSG PKI transition: Experiences and lessons learned

Von Welch; Alain Deximo; Soichi Hayashi; Viplav D. Khadke; Rohan Mathure; Robert Quick; Mine Altunay; Chander Sehgal; Anthony Tiradani; Jim Basney


Archive | 2008

The Open Science Grid Executive Board on behalf of the OSG Consortium

R. Pordes; Mine Altunay; P. Avery; Alina Bejan; K. Blackburn; Robert Gardner; Bill Kramer; Miron Livny; John McGee; Rob Quick; Doug Olson; Alain Roy; Chander Sehgal; Michael Wilde; F. Würthwein

Collaboration


Dive into Chander Sehgal's collaborations.

Top Co-Authors

Mats Rynge – University of Southern California
Miron Livny – University of Wisconsin-Madison
Alain Roy – University of Wisconsin-Madison
Doug Olson – Lawrence Berkeley National Laboratory
F. Würthwein – University of California
John McGee – Renaissance Computing Institute
K. Blackburn – California Institute of Technology