Publication


Featured research published by Harvey Newman.


Proceedings of International Symposium on Grids and Clouds 2015 — PoS(ISGC2015) | 2016

Integrating Network-Awareness and Network-Management into PhEDEx

Vlad Lapadatescu; Andrew Melo; Azher Mughal; Harvey Newman; Artur Barczyk; Paul Sheldon; Ramiro Voicu; T. Wildish; K. De; I. Legrand; Artem Petrosyan; Bob Ball; Jorge Batista; Shawn Patrick McKee

ANSE (Advanced Network Services for Experiments) is an NSF-funded project which aims to incorporate advanced network-aware tools into the mainstream production workflows of the LHC's two largest experiments: ATLAS and CMS. For CMS, this translates into the integration of bandwidth-provisioning capabilities in PhEDEx, its data-transfer management tool. PhEDEx controls the large-scale data flows on the WAN across the experiment, typically handling 1 PB of data per week spread over 70 sites; this volume is only set to increase once the LHC resumes operations in 2015. The goal of ANSE is to improve the overall working efficiency of the experiments by allowing more deterministic times to completion for a designated set of data transfers, through the use of end-to-end dynamic virtual circuits with guaranteed bandwidth. Through our work in ANSE, we have enhanced PhEDEx, allowing it to control a circuit's lifecycle based on its own needs. By checking its current workload and past transfer history on normal links, PhEDEx is now able to make smart use of dynamic circuits, creating one only when it is worth doing so. Different circuit-management infrastructures can be used via a plug-in system, making the mechanism highly adaptable. In this paper, we present the progress made by ANSE with regard to PhEDEx. We show how our system has evolved since the prototype phase we presented last year, and how it is now able to make use of dynamic circuits as a production-quality service. We describe its updated software architecture and how the mechanism can be refactored and used as a stand-alone system in other software domains (such as ATLAS' PanDA). We conclude by describing the remaining work to be done in ANSE (for PhEDEx) and discuss future directions for continued development.
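
To make the circuit decision concrete, here is a minimal Python sketch of the kind of cost/benefit check the abstract describes (request a circuit only when the shared link cannot meet a transfer deadline but a guaranteed-bandwidth circuit still can). None of these names come from PhEDEx's actual interfaces, and the rates and sizes are illustrative assumptions:

```python
# Hypothetical sketch of the "is a circuit worth it?" check described above.
# None of these names come from PhEDEx; rates and sizes are illustrative.

def should_request_circuit(queued_bytes, shared_link_bps, deadline_s,
                           circuit_setup_s, circuit_bps):
    """Request a dynamic circuit only if the shared link cannot meet the
    transfer deadline but a guaranteed-bandwidth circuit still can."""
    eta_shared = queued_bytes * 8 / shared_link_bps               # seconds
    eta_circuit = circuit_setup_s + queued_bytes * 8 / circuit_bps
    return eta_shared > deadline_s >= eta_circuit

# 50 TB queued, 5 Gbps observed on the shared link, 12-hour deadline,
# 2-minute circuit setup, 20 Gbps guaranteed on the circuit:
print(should_request_circuit(50e12, 5e9, 12 * 3600, 120, 20e9))  # True
```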


Archive | 2014

OLiMPS. OpenFlow Link-layer MultiPath Switching

Harvey Newman; Artur Barczyk; Michael Bredel

The OLiMPS project’s goal was the development of an OpenFlow controller application allowing load balancing over multiple switched paths across a complex network topology. The second goal was to integrate the controller with Dynamic Circuit Network systems such as ESnet’s OSCARS. Both goals were achieved successfully, as laid out in this report.
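
As a rough illustration of link-layer multipath switching (not OLiMPS code; the class and the toy topology below are invented), a controller can pin each new flow to the least-loaded of several precomputed paths between the same endpoints:

```python
# Toy illustration of link-layer multipath load balancing (not OLiMPS code):
# each new flow is pinned to the least-loaded of several precomputed paths.
from collections import Counter

class MultipathBalancer:
    def __init__(self, paths):
        self.paths = paths       # candidate switch paths between the same endpoints
        self.flows = {}          # flow_id -> chosen path index
        self.load = Counter()    # path index -> number of active flows

    def assign(self, flow_id):
        idx = min(range(len(self.paths)), key=lambda i: self.load[i])
        self.flows[flow_id] = idx
        self.load[idx] += 1
        # a real controller would now install OpenFlow rules along this path
        return self.paths[idx]

b = MultipathBalancer([["s1", "s2", "s4"], ["s1", "s3", "s4"]])
print(b.assign("flow-A"))  # ['s1', 's2', 's4']
print(b.assign("flow-B"))  # ['s1', 's3', 's4']
```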


Proceedings of XII Advanced Computing and Analysis Techniques in Physics Research — PoS(ACAT08) | 2009

MonALISA: A Distributed Service System for Monitoring, Control and Global Optimization

I. Legrand; Harvey Newman; Ramiro Voicu; C. Grigoras; Catalin Cirstoiu; Ciprian Dobre

The MonALISA (Monitoring Agents in A Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by other services or clients. The distributed agents can collaborate and cooperate in performing a wide range of management, control and global optimization tasks using real time monitoring information.
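
The register/discover pattern at the heart of such dynamic services can be sketched in a few lines of Python; MonALISA itself builds on Java/JINI-based lookup services, so the names and the lease (TTL) scheme below are a simplified illustration, not the framework's API:

```python
# Simplified register/discover pattern; MonALISA uses JINI-based lookup
# services in Java, so these names and the TTL scheme are illustrative.
import time

class LookupService:
    def __init__(self, ttl_s=30.0):
        self.ttl_s = ttl_s   # registrations expire unless renewed
        self.entries = {}    # service name -> (address, registered_at)

    def register(self, name, address):
        self.entries[name] = (address, time.time())

    def discover(self, name):
        entry = self.entries.get(name)
        if entry is not None and time.time() - entry[1] < self.ttl_s:
            return entry[0]
        return None          # never registered, or lease expired

lookup = LookupService()
lookup.register("farm-monitor", "monalisa://farm01:9000")
print(lookup.discover("farm-monitor"))  # monalisa://farm01:9000
```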


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Efficient LHC Data Distribution across 100Gbps Networks

Harvey Newman; Artur Barczyk; Azher Mughal; Sandor Rozsa; Ramiro Voicu; I. Legrand; Steven Lo; Dorian Kcira; Randall Sobie; Ian Gable; Colin Leavett-Brown; Yvan Savard; Thomas Tam; Marilyn Hay; Shawn Patrick McKee; Roy Hocket; Ben Meekhof; Sergio Timoteo

During Supercomputing 2012 (SC12), an international team of high-energy physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt and other partners, smashed their previous records for data transfers using the latest generation of wide area network circuits. With three 100 gigabit/sec (100 Gbps) wide area network circuits [1] set up by the SCinet, Internet2, CENIC, CANARIE, BCnet, Starlight and US LHCNet network teams, and servers at each of the sites with 40 gigabit Ethernet (40GE) interfaces, the team reached a record transfer rate of 339 Gbps between Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. This nearly doubled the previous year's overall record, and eclipsed the record for a bidirectional transfer on a single link with a data flow of 187 Gbps between Victoria and Salt Lake.
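
For scale, a quick back-of-the-envelope conversion of the quoted record rate (treating 339 Gbps as sustained, which is an assumption made here purely for illustration):

```python
# Sustained data volume implied by the 339 Gbps aggregate rate quoted above.
rate_bps = 339e9
bytes_per_hour = rate_bps / 8 * 3600
print(f"{bytes_per_hour / 1e12:.0f} TB per hour")      # ~153 TB/hour
print(f"{bytes_per_hour * 24 / 1e15:.2f} PB per day")  # ~3.66 PB/day
```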


Archive | 2013

US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community

Harvey Newman; Artur Barczyk

US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis is taking place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet has the major role. US LHCNet manages and operates the transatlantic network infrastructure, including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10 Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability level in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design, including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, as well as the design of facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just in time to meet the needs, as demonstrated in past years during the changing LHC start-up plans.
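
Two of the quoted figures are easy to sanity-check with a short Python snippet (the availability target and link count come from the text above; the rest is arithmetic):

```python
# Allowed downtime under the >= 99.95% availability target, and the
# aggregate capacity of six OC-192 (10 Gbps) transatlantic links.
availability = 0.9995
hours_per_year = 365 * 24
print(f"max downtime: {(1 - availability) * hours_per_year:.1f} h/year")  # ~4.4
print(f"aggregate capacity: {6 * 10} Gbps")                               # 60
```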


Journal of Physics: Conference Series | 2010

Advancement in networks for HEP community

Harvey Newman; Artur Barczyk; Azher Mughal

The key role of networks has been brought into focus as a result of the worldwide-distributed computing model adopted by the four LHC experiments, a necessary response to the unprecedented data volumes and computational needs of the LHC physics program. As we approach LHC startup and the era of LHC physics, this focus has sharpened as the experiments develop the tools and methods needed to distribute, process, access and cooperatively analyze datasets with aggregate volumes of petabytes of simulated data even now, rising to many petabytes of real and simulated data during the first years of LHC operation.


Archive | 2016

Traffic Optimization for ExaScale Science Applications

Yang Yang; Qiao Xiang; Harvey Newman; Jing-Ye Zhang; Justas Balcas; Haizhou Du; Azher Mughal; Greg Bernstein


Archive | 2016

Software-Defined Networking: From Edge to Core

Harvey Newman; Richard Mount; Torre Wenaus


Proceedings of International Symposium on Grids and Clouds (ISGC) 2014 — PoS(ISGC2014) | 2014

Integrating network-awareness and network-management into PhEDEx: first results from the ANSE project

Vlad Lapadatescu; T. Wildish; Artur Barczyk; Harvey Newman; Ramiro Voicu; Shawn Patrick McKee; Jorge Batista; Bob Ball; K. De; Artem Petrosyan; Paul Sheldon; A. Melo


Archive | 2014

2014 Fourth International Workshop on Network-Aware Data Management (NDM 2014)

Vishal Ahuja; Matthew K. Farrens; Dipak Ghosal; Mehmet Balman; Eric Pouyoul; Brian Tierney; K R Krish; M. Safdar Iqbal; M. Mustafa Rafique; Ali Raza Butt; Artur Barczyk; Azher Mughal; Harvey Newman; I. Legrand; Michael Bredel; Ramiro Voicu; Vlad Lapadatescu; T. Wildish

Collaboration


Dive into Harvey Newman's collaboration.

Top Co-Authors

Artur Barczyk, California Institute of Technology
Azher Mughal, California Institute of Technology
Ramiro Voicu, California Institute of Technology
Brian Tierney, Lawrence Berkeley National Laboratory
K. De, University of Texas at Arlington
Michael Bredel, California Institute of Technology