
Publication


Featured research published by Azher Mughal.


Journal of Physics: Conference Series | 2011

Powering physics data transfers with FDT

Zdenek Maxa; Badar Ahmed; D. Kcira; I. Legrand; Azher Mughal; M. Thomas; Ramiro Voicu

We present a data transfer system for the grid environment built on top of the open source FDT tool (Fast Data Transfer) developed by Caltech in collaboration with the National University of Science and Technology (Pakistan). The enhancement layer above FDT consists of a client program, fdtcp (FDT copy), and an fdtd service (FDT daemon). This pair of components allows for GSI-authenticated data transfers and offers the user (or a data movement production service) an interface analogous to grid middleware data transfer services such as SRM (i.e. srmcp) or GridFTP (i.e. globus-url-copy). fdtcp/fdtd enables third-party, batched file transfers. An important aspect is monitoring by means of ApMon, the MonALISA active monitoring lightweight library, providing real-time monitoring and arrival time estimates as well as a powerful troubleshooting mechanism. The actual transfer is carried out by the FDT application, an efficient application capable of reading and writing at disk speed over wide area networks. FDT's excellent performance was demonstrated, for example, during the SuperComputing 2009 Bandwidth Challenge. We also discuss the storage technology interface layer, specifically focusing on the open source Hadoop distributed file system (HDFS), presenting the recently developed FDT-HDFS sequential write adapter. The integration with CMS PhEDEx is described as well. The PhEDEx project (Physics Experiment Data Export) is responsible for facilitating large-scale CMS data transfers across the grid. Ongoing and future development involves interfacing with next generation network services developed by the OGF NSI-WG, GLIF and DICE groups, allowing for network resource reservation and scheduling.
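
The client/daemon split and the arrival-time estimates mentioned above can be pictured with a short sketch. This is an illustrative Python model only: the TransferJob class, the estimate_arrival_seconds function, the fdt:// URL form and the endpoint names are all hypothetical and are not the actual fdtcp/fdtd interfaces.

    # Illustrative sketch: a batched, third-party copy request plus the kind of
    # arrival-time estimate a monitoring layer such as ApMon could report.
    # All names below are hypothetical, not the real fdtcp/fdtd API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TransferJob:
        source: str            # hypothetical URL form, e.g. "fdt://se1.example.org:8444/store/a.root"
        destination: str
        size_bytes: int
        transferred_bytes: int = 0

    def estimate_arrival_seconds(job: TransferJob, observed_rate_bps: float) -> float:
        """Estimate time to completion from bytes remaining and the observed rate."""
        remaining_bits = (job.size_bytes - job.transferred_bytes) * 8
        return remaining_bits / observed_rate_bps

    # Third-party, batched request: the client only coordinates, while the data
    # flows directly between the two storage endpoints.
    batch: List[TransferJob] = [
        TransferJob("fdt://se1.example.org:8444/store/a.root",
                    "fdt://se2.example.org:8444/store/a.root", 2 * 10**9),
    ]
    for job in batch:
        eta = estimate_arrival_seconds(job, observed_rate_bps=5e9)
        print(f"{job.source} -> {job.destination}: ~{eta:.0f} s at 5 Gb/s")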


Journal of Physics: Conference Series | 2012

The DYNES Instrument: A Description and Overview

Jason Zurawski; Robert Ball; Artur Barczyk; Mathew Binkley; Jeff W. Boote; Eric L. Boyd; Aaron Brown; Robert Brown; Tom Lehman; Shawn Patrick McKee; Benjeman Meekhof; Azher Mughal; Harvey B Newman; Sandor Rozsa; Paul Sheldon; Alan J. Tackett; Ramiro Voicu; Stephen Wolff; Xi Yang

Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) focused networks. Capacity and traffic management are key concerns of these network operators; a delicate balance is required to serve both long-lived, high capacity network flows, as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end sites located on university campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
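
To make the notion of "variable duration, guaranteed bandwidth networking channels" concrete, the sketch below models a minimal admission check over a shared link. It is a hypothetical Python model, not the DYNES, OSCARS, or NSI interface; the class names, site names, and numbers are invented for illustration.

    # Minimal model of a guaranteed-bandwidth circuit reservation and the
    # admission check a scheduler could apply on a shared link. Hypothetical.
    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class CircuitReservation:
        src_site: str
        dst_site: str
        bandwidth_gbps: float   # guaranteed for the lifetime of the circuit
        start_s: float          # requested start time (seconds, arbitrary epoch)
        end_s: float            # requested teardown time

    def fits(link_capacity_gbps: float,
             existing: List[CircuitReservation],
             request: CircuitReservation) -> bool:
        """Accept only if overlapping guaranteed reservations never exceed capacity."""
        overlapping = [r for r in existing
                       if r.start_s < request.end_s and request.start_s < r.end_s]
        total = sum(r.bandwidth_gbps for r in overlapping) + request.bandwidth_gbps
        return total <= link_capacity_gbps

    existing = [CircuitReservation("SiteA", "SiteB", 40.0, 0, 3600)]
    request = CircuitReservation("SiteA", "SiteC", 50.0, 1800, 5400)
    print(fits(100.0, existing, request))   # True: 40 + 50 <= 100 during the overlap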


Network-Aware Data Management | 2011

Scientific data movement enabled by the DYNES instrument

Jason Zurawski; Eric L. Boyd; Tom Lehman; Shawn Patrick McKee; Azher Mughal; Harvey B Newman; Paul Sheldon; Steve Wolff; Xing Yang

Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, thus complicating the problem of end-to-end data management. Networking solutions, provided by R&E focused organizations, often serve as a vital link between these distributed components. Capacity and traffic management are key concerns of these network operators; a delicate balance is required to serve both long-lived, high capacity network flows, as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, has afforded operations staff greater control over traffic demands and has increased the overall quality of service for scientific users. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services. This combination of hardware and software innovation is being deployed across R&E networks in the United States, at end sites located on university campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
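
A back-of-the-envelope calculation shows why a guaranteed circuit gives operators and users a predictable time to completion; the dataset size and rates below are arbitrary examples, not figures from the paper.

    # Completion time for a bulk transfer at a given sustained rate. With a
    # dedicated circuit the rate, and therefore the finish time, is known in
    # advance; on a shared best-effort link it can only be bounded.
    def transfer_hours(size_tb: float, rate_gbps: float) -> float:
        bits = size_tb * 1e12 * 8
        return bits / (rate_gbps * 1e9) / 3600

    print(f"100 TB over a dedicated 10 Gb/s circuit: {transfer_hours(100, 10):.1f} h")
    print(f"100 TB over a shared link at 2 Gb/s    : {transfer_hours(100, 2):.1f} h")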


Proceedings of International Symposium on Grids and Clouds 2015 — PoS(ISGC2015) | 2016

Integrating Network-Awareness and Network-Management into PhEDEx

Vlad Lapadatescu; Andrew Melo; Azher Mughal; Harvey Newman; Artur Barczyk; Paul Sheldon; Ramiro Voicu; T. Wildish; K. De; I. Legrand; Artem Petrosyan; Bob Ball; Jorge Batista; Shawn Patrick McKee

ANSE (Advanced Network Services for Experiments) is an NSF-funded project, which aims to incorporate advanced network-aware tools in the mainstream production workflows of LHC's two largest experiments: ATLAS and CMS. For CMS, this translates into the integration of bandwidth provisioning capabilities in PhEDEx, its data-transfer management tool. PhEDEx controls the large-scale data flows on the WAN across the experiment, typically handling 1 PB of data per week, spread over 70 sites. This is only set to increase once the LHC resumes operations in 2015. The goal of ANSE is to improve the overall working efficiency of the experiments, by allowing for more deterministic times to completion for a designated set of data transfers, through the use of end-to-end dynamic virtual circuits with guaranteed bandwidth. Through our work in ANSE, we have enhanced PhEDEx, allowing it to control a circuit's lifecycle based on its own needs. By checking its current workload and past transfer history on normal links, PhEDEx is now able to make smart use of dynamic circuits, only creating one when it is worth doing so. Different circuit management infrastructures can be used via a plug-in system, making it highly adaptable. In this paper, we present the progress made by ANSE with regards to PhEDEx. We show how our system has evolved since the prototype phase we presented last year, and how it is now able to make use of dynamic circuits as a production-quality service. We describe its updated software architecture and how this mechanism can be refactored and used as a stand-alone system in other software domains (such as ATLAS's PanDA). We conclude by describing the remaining work to be done in ANSE (for PhEDEx) and discuss future directions for continued development.
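
The circuit-worthiness decision and the plug-in seam described above can be illustrated with a short sketch. PhEDEx itself is written in Perl; the Python below is an assumption-laden illustration only, and the class names, thresholds, and site names are hypothetical rather than the ANSE/PhEDEx code.

    # Hypothetical sketch: decide whether a dynamic circuit is worth creating,
    # given the queued volume and the historical rate on the normal link, and
    # hand the request to a pluggable circuit-management backend.
    from abc import ABC, abstractmethod

    class CircuitBackend(ABC):
        """Plug-in interface: any circuit management infrastructure can implement it."""
        @abstractmethod
        def request_circuit(self, src: str, dst: str, gbps: float, seconds: int) -> str: ...
        @abstractmethod
        def teardown(self, circuit_id: str) -> None: ...

    class LoggingBackend(CircuitBackend):
        def request_circuit(self, src, dst, gbps, seconds):
            print(f"requesting {gbps} Gb/s {src} -> {dst} for {seconds} s")
            return "circuit-0001"
        def teardown(self, circuit_id):
            print(f"tearing down {circuit_id}")

    def circuit_worthwhile(queued_tb: float, past_rate_gbps: float,
                           circuit_rate_gbps: float, setup_overhead_s: float = 300) -> bool:
        """Create a circuit only if the queue drains much faster than on the
        normal link, even after paying the circuit setup overhead."""
        bits = queued_tb * 1e12 * 8
        normal_s = bits / (past_rate_gbps * 1e9)
        circuit_s = setup_overhead_s + bits / (circuit_rate_gbps * 1e9)
        return circuit_s < 0.5 * normal_s

    backend = LoggingBackend()
    if circuit_worthwhile(queued_tb=50, past_rate_gbps=3, circuit_rate_gbps=20):
        cid = backend.request_circuit("SiteA", "SiteB", 20, 8 * 3600)
        backend.teardown(cid)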


Proceedings of the Second Workshop on Innovating the Network for Data-Intensive Science | 2015

High speed scientific data transfers using software defined networking

Harvey B Newman; Azher Mughal; D. Kcira; I. Legrand; Ramiro Voicu; J. Bunn

The massive data volumes acquired, simulated, processed and analyzed by globally distributed scientific collaborations continue to grow exponentially. One leading example is the LHC program, now at the start of its second three-year data-taking cycle, searching for new particles and interactions in a previously inaccessible range of energies, which has experienced a 70% growth in peak data transfer rates over the last 12 months alone. Other major science programs such as LSST and SKA, and other disciplines ranging from earth observation to genomics, are expected to have similar or greater needs than the LHC program within the next decade. The development of new methods for fast, efficient and reliable data transfers over national and global distances, and a new generation of intelligent, software-driven networks capable of supporting multiple science programs with diverse needs for high volume and/or real-time data delivery, are essential if these programs are to continue to progress and meet their goals. In this paper we describe activities of the Caltech High Energy Physics team and collaborators related to the use of Software Defined Networking to help achieve fast and efficient data distribution and access. Results from Supercomputing 2014 are presented together with our work on the Advanced Network Services for Experiments project, and a new project developing a Next Generation Integrated SDN Architecture, as well as our plans for Supercomputing 2015.
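
The steering step that an intelligent, software-driven network performs for a large science flow can be reduced to a small path-selection function; the candidate paths, capacities, and loads below are invented, and no real SDN controller API is used.

    # Hypothetical sketch: choose, among candidate wide-area paths, the one with
    # the most spare capacity that can still absorb a new large flow.
    from typing import Dict, Tuple

    def pick_path(paths: Dict[str, Tuple[float, float]], flow_gbps: float) -> str:
        """paths maps a path name to (capacity_gbps, current_load_gbps)."""
        feasible = {name: cap - load
                    for name, (cap, load) in paths.items()
                    if cap - load >= flow_gbps}
        if not feasible:
            raise RuntimeError("no candidate path can carry the flow at this rate")
        return max(feasible, key=feasible.get)

    candidates = {
        "path_A": (100.0, 70.0),   # 30 Gb/s spare
        "path_B": (100.0, 20.0),   # 80 Gb/s spare
    }
    print(pick_path(candidates, flow_gbps=40.0))   # -> "path_B"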


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Efficient LHC Data Distribution across 100Gbps Networks

Harvey Newman; Artur Barczyk; Azher Mughal; Sandor Rozsa; Ramiro Voicu; I. Legrand; Steven Lo; Dorian Kcira; Randall Sobie; Ian Gable; Colin Leavett-Brown; Yvan Savard; Thomas Tam; Marilyn Hay; Shawn Patrick McKee; Roy Hocket; Ben Meekhof; Sergio Timoteo

During Supercomputing 2012 (SC12), an international team of high energy physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt and other partners, smashed their previous records for data transfers using the latest generation of wide area network circuits. With three 100 gigabit/sec (100 Gbps) wide area network circuits [1] set up by the SCinet, Internet2, CENIC, CANARIE, BCnet, Starlight and US LHCNet network teams, and servers at each of the sites with 40 gigabit Ethernet (40GE) interfaces, the team reached a record transfer rate of 339 Gbps between Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. This nearly doubled last year's overall record, and eclipsed the record for a bidirectional transfer on a single link with a data flow of 187 Gbps between Victoria and Salt Lake.


IEEE/OSA Journal of Optical Communications and Networking | 2017

Next-generation exascale network integrated architecture for global science [Invited]

Harvey B Newman; M. Spiropulu; J. Balcas; D. Kcira; I. Legrand; Azher Mughal; J. R. Vlimant; Ramiro Voicu

The next-generation exascale network integrated architecture (NGENIA-ES) is a project specifically designed to accomplish new levels of network and computing capabilities in support of global science collaborations through the development of a new class of intelligent, agile networked systems. Its path to success is built upon our ongoing developments in multiple areas, strong ties among our high energy physics, computer and network science, and engineering teams, and our close collaboration with key technology developers and providers deeply engaged in the national strategic computing initiative (NSCI). This paper describes the building of a new class of distributed systems, our work with the leadership computing facilities (LCFs), the use of software-defined networking (SDN) methods, and the use of data-driven methods for the scheduling and optimization of network resources. Sections I-III present the challenges of data-intensive research and the important ingredients of this ecosystem. Sections IV-VI describe some crucial elements of the foreseen solution and some of the progress so far. Sections VII-IX go into the details of orchestration, software-defined networking, and scheduling optimization. Finally, Section X talks about engagement and partnerships, and Section XI gives a summary. References are given at the end.


Journal of Physics: Conference Series | 2012

Disk-to-Disk network transfers at 100 Gb/s

Artur Barczyk; Ian Gable; Marilyn Hay; Colin Leavett-Brown; I. Legrand; Kim Lewall; Shawn Patrick McKee; Donald McWilliam; Azher Mughal; Harvey B Newman; Sandor Rozsa; Yvan Savard; Randall Sobie; Thomas Tam; Ramiro Voicu

A 100 Gbps network was established between the California Institute of Technology conference booth at the Super Computing 2011 conference in Seattle, Washington and the computing center at the University of Victoria in Canada. A circuit was established over the BCNET, CANARIE and Super Computing (SCInet) networks using dedicated equipment. The small set of servers at the endpoints used a combination of 10GE and 40GE technologies, and SSD drives for data storage. The configuration of the network and the server configuration are discussed. We will show that the system was able to achieve disk-to-disk transfer rates of 60 Gbps and memory-to-memory rates in excess of 180 Gbps across the WAN. We will discuss the transfer tools, disk configurations, and monitoring tools used in the demonstration.
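
A rough sizing exercise explains why such demonstrations need many drives and multiple network interfaces per server; the per-SSD throughput assumed below is an illustrative figure, not a value reported in the paper.

    # Back-of-the-envelope sizing for a sustained disk-to-disk rate.
    import math

    def devices_needed(target_gbps: float, per_device_mbytes_s: float) -> int:
        """Ceiling of the target rate divided by per-device sequential throughput."""
        per_device_gbps = per_device_mbytes_s * 8 / 1000
        return math.ceil(target_gbps / per_device_gbps)

    # Assuming ~500 MB/s sustained per SSD, a 60 Gb/s stream needs roughly 15
    # drives in parallel at each end, and at least two 40GE (or six 10GE) NICs.
    print(devices_needed(60, 500))     # -> 15
    print(math.ceil(60 / 40))          # 40GE interfaces needed -> 2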


Journal of Physics: Conference Series | 2010

Advancement in networks for HEP community

Harvey Newman; Artur Barczyk; Azher Mughal

The key role of networks has been brought into focus as a result of the worldwide-distributed computing model adopted by the four LHC experiments, as a necessary response to the unprecedented data volumes and computational needs of the LHC physics program. As we approach LHC startup and the era of LHC physics, the focus has increased as the experiments develop the tools and methods needed to distribute, process, access and cooperatively analyze datasets with aggregate volumes of Petabytes of simulated data even now, rising to many Petabytes of real and simulated data during the first years of LHC operation.


Journal of Physics: Conference Series | 2015

Named Data Networking in Climate Research and HEP Applications

Susmit Shannigrahi; Christos Papadopoulos; Edmund M. Yeh; Harvey B Newman; Artur Barczyk; Ran Liu; Alex Sim; Azher Mughal; Inder Monga; Jean-Roch Vlimant; John Wu

Collaboration


Dive into Azher Mughal's collaborations.

Top Co-Authors

Ramiro Voicu (California Institute of Technology)

Artur Barczyk (California Institute of Technology)

Harvey B Newman (California Institute of Technology)

D. Kcira (Université catholique de Louvain)

Sandor Rozsa (California Institute of Technology)