Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Artur Barczyk is active.

Publication


Featured research published by Artur Barczyk.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Multipathing with MPTCP and OpenFlow

Ronald van der Pol; Sander Boele; F. Dijkstra; Artur Barczyk; Gerben van Malenstein; Jim Hao Chen; Joe Mambretti

Data sets in e-science are increasing exponentially in size. To transfer these huge data sets we need to make efficient use of all available network capacity. This means using multiple paths when available. In this paper a prototype of such a multipath network is presented. Several emerging network technologies are integrated to achieve the goal of efficient high end-to-end throughput. Multipath TCP is used by the end hosts to distribute the traffic across multiple paths and OpenFlow is used within the network to do the wide area traffic engineering. Extensive monitoring is part of the demonstration. A website will show the actual topology (including link outages), the paths provisioned through the network and traffic statistics on all links and the end-to-end aggregate throughput.
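
The division of labor described above, MPTCP at the end hosts and OpenFlow in the network, can be illustrated with a small sketch. This is not code from the demonstration: each MPTCP subflow is a distinct TCP 5-tuple, so a per-flow forwarding element can pin different subflows to different physical paths, and the connection sees the aggregate capacity of the paths it actually uses. The path names and capacities below are invented for illustration.

```python
import hashlib

# Illustrative only: map MPTCP subflows (distinct TCP 5-tuples) onto
# physical paths the way a per-flow forwarding element could.
PATHS = ["path-A", "path-B", "path-C"]              # hypothetical WAN paths
PATH_CAPACITY_GBPS = {"path-A": 10, "path-B": 10, "path-C": 10}

def pick_path(five_tuple):
    """Hash a (src_ip, dst_ip, proto, src_port, dst_port) tuple to a path."""
    digest = hashlib.sha256(repr(five_tuple).encode()).hexdigest()
    return PATHS[int(digest, 16) % len(PATHS)]

# Three subflows of one MPTCP connection: same hosts, different source ports.
subflows = [("10.0.0.1", "10.0.1.1", "tcp", sport, 5001)
            for sport in (40001, 40002, 40003)]

assignment = {sf: pick_path(sf) for sf in subflows}
for sf, path in assignment.items():
    print(f"subflow src_port={sf[3]} -> {path}")

# If each subflow can fill its path, the connection sees the combined capacity
# of the distinct paths it was hashed onto.
used_paths = set(assignment.values())
print("aggregate capacity (Gbps):", sum(PATH_CAPACITY_GBPS[p] for p in used_paths))
```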


Proceedings of the Third Workshop on Hot Topics in Software Defined Networking | 2014

Flow-based load balancing in multipathed layer-2 networks using OpenFlow and multipath-TCP

Michael Bredel; Zdravko Bozakov; Artur Barczyk; Harvey B Newman

In this paper we address the challenge of traffic optimization for big data flows in layer-2 networks. We present an OpenFlow controller implementation that removes the necessity of a Spanning Tree Protocol, allows for the usage of multiple paths, and enables in-network per-flow load balancing. Moreover, we demonstrate how systems deploying Multipath-TCP can benefit from our solution.
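
As a hedged illustration of the in-network, per-flow load-balancing idea (not the controller implementation from the paper), the sketch below assigns each new flow to the currently least-loaded of several parallel layer-2 paths and tracks the bytes carried on each, which is essentially the decision an OpenFlow controller makes when it installs a flow entry. All names are hypothetical.

```python
from collections import defaultdict

class PerFlowBalancer:
    """Toy model of flow-based load balancing over multiple layer-2 paths."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.bytes_on_path = defaultdict(int)   # observed load per path
        self.flow_to_path = {}                  # installed "flow entries"

    def assign(self, flow_id):
        """Pick the least-loaded path for a new flow and remember the choice."""
        if flow_id not in self.flow_to_path:
            path = min(self.paths, key=lambda p: self.bytes_on_path[p])
            self.flow_to_path[flow_id] = path
        return self.flow_to_path[flow_id]

    def account(self, flow_id, nbytes):
        """Update per-path counters, e.g. from switch flow statistics."""
        self.bytes_on_path[self.assign(flow_id)] += nbytes


balancer = PerFlowBalancer(["core-1", "core-2"])   # hypothetical parallel paths
balancer.account("flow-a", 8_000_000)
balancer.account("flow-b", 1_000_000)              # lands on the emptier path
print(balancer.flow_to_path)
print(dict(balancer.bytes_on_path))
```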


Journal of Physics: Conference Series | 2012

The DYNES Instrument: A Description and Overview

Jason Zurawski; Robert Ball; Artur Barczyk; Mathew Binkley; Jeff W. Boote; Eric L. Boyd; Aaron Brown; Robert Brown; Tom Lehman; Shawn Patrick McKee; Benjeman Meekhof; Azher Mughal; Harvey B Newman; Sandor Rozsa; Paul Sheldon; Alan J. Tackett; Ramiro Voicu; Stephen Wolff; Xi Yang

Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networks, where a delicate balance is required to serve both long-lived, high-capacity network flows as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on university campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
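
The core abstraction behind dynamic circuit services, a guaranteed-bandwidth channel with an explicit start and end time, can be sketched with a simple admission check. This is an illustration of the concept only, not the DYNES or OSCARS API; the site names, capacity and request structure are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CircuitRequest:
    src: str            # e.g. a campus end-site
    dst: str
    gbps: float         # guaranteed bandwidth
    start: float        # reservation start, epoch seconds
    end: float          # reservation end, epoch seconds

LINK_CAPACITY_GBPS = 100.0   # assumed capacity of the shared segment
reservations = []            # accepted CircuitRequest objects

def admit(req):
    """Accept the request only if, during every overlapping interval, the
    committed bandwidth plus the new request stays within the capacity."""
    overlapping = [r for r in reservations
                   if r.start < req.end and req.start < r.end]
    committed = sum(r.gbps for r in overlapping)
    if committed + req.gbps <= LINK_CAPACITY_GBPS:
        reservations.append(req)
        return True
    return False

print(admit(CircuitRequest("site-A", "site-B", 40, 0, 3600)))     # True
print(admit(CircuitRequest("site-C", "site-B", 80, 1800, 5400)))  # False: would exceed 100G
```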


Proceedings of International Symposium on Grids and Clouds 2015 — PoS(ISGC2015) | 2016

Integrating Network-Awareness and Network-Management into PhEDEx

Vlad Lapadatescu; Andrew Melo; Azher Mughal; Harvey Newman; Artur Barczyk; Paul Sheldon; Ramiro Voicu; T. Wildish; K. De; I. Legrand; Artem Petrosyan; Bob Ball; Jorge Batista; Shawn Patrick McKee

ANSE (Advanced Network Services for Experiments) is an NSF-funded project, which aims to incorporate advanced network-aware tools in the mainstream production workflows of LHC’s two largest experiments: ATLAS and CMS. For CMS, this translates into the integration of bandwidth provisioning capabilities in PhEDEx, its data-transfer management tool. PhEDEx controls the large-scale data flows on the WAN across the experiment, typically handling 1 PB of data per week, spread over 70 sites. This is only set to increase once LHC resumes operations in 2015. The goal of ANSE is to improve the overall working efficiency of the experiments, by allowing for more deterministic times to completion for a designated set of data transfers, through the use of end-to-end dynamic virtual circuits with guaranteed bandwidth. Through our work in ANSE, we have enhanced PhEDEx, allowing it to control a circuit’s lifecycle based on its own needs. By checking its current workload and past transfer history on normal links, PhEDEx is now able to make smart use of dynamic circuits, only creating one when it is worth doing so. Different circuit management infrastructures can be used via a plug-in system, making it highly adaptable. In this paper, we present the progress made by ANSE with regards to PhEDEx. We show how our system has evolved since the prototype phase we presented last year, and how it is now able to make use of dynamic circuits as a production-quality service. We describe its updated software architecture and how this mechanism can be refactored and used as a stand-alone system in other software domains (like ATLAS’ PanDA). We conclude by describing the remaining work to be done in ANSE (for PhEDEx) and discuss future directions for continued development.
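
A hedged sketch of the kind of decision described above, whether a dynamic circuit is worth creating for a given transfer, assuming the workflow knows its backlog size, the historical rate on the shared path, and the rate and setup latency of a circuit. This is not PhEDEx code, and all numbers below are invented for illustration.

```python
def circuit_worthwhile(backlog_tb,
                       shared_gbps,        # historical rate on the normal link
                       circuit_gbps,       # guaranteed rate of a dynamic circuit
                       setup_s=120.0,      # assumed circuit setup time
                       min_speedup=2.0):
    """Return True if a dedicated circuit would cut the estimated time to
    completion by at least `min_speedup` compared with the shared path."""
    bits = backlog_tb * 8e12
    t_shared = bits / (shared_gbps * 1e9)
    t_circuit = setup_s + bits / (circuit_gbps * 1e9)
    return t_shared / t_circuit >= min_speedup

# 200 TB backlog, 3 Gbps observed on the shared link, 20 Gbps circuit on offer.
print(circuit_worthwhile(200, shared_gbps=3, circuit_gbps=20))                   # True
# A small 0.5 TB transfer does not justify the circuit overhead at this threshold.
print(circuit_worthwhile(0.5, shared_gbps=3, circuit_gbps=20, min_speedup=5.0))  # False
```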


Archive | 2014

OLiMPS. OpenFlow Link-layer MultiPath Switching

Harvey Newman; Artur Barczyk; Michael Bredel

The OLiMPS project’s goal was the development of an OpenFlow controller application allowing load balancing over multiple switched paths across a complex network topology. The second goal was to integrate the controller with Dynamic Circuit Network systems such as ESnet’s OSCARS. Both goals were achieved successfully, as laid out in this report.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Efficient LHC Data Distribution across 100Gbps Networks

Harvey Newman; Artur Barczyk; Azher Mughal; Sandor Rozsa; Ramiro Voicu; I. Legrand; Steven Lo; Dorian Kcira; Randall Sobie; Ian Gable; Colin Leavett-Brown; Yvan Savard; Thomas Tam; Marilyn Hay; Shawn Patrick McKee; Roy Hocket; Ben Meekhof; Sergio Timoteo

During Supercomputing 2012 (SC12), an international team of high energy physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt and other partners, smashed their previous records for data transfers using the latest generation of wide area network circuits. With three 100 gigabit/sec (100 Gbps) wide area network circuits [1] set up by the SCinet, Internet2, CENIC, CANARIE and BCnet, Starlight and US LHCNet network teams, and servers at each of the sites with 40 gigabit Ethernet (40GE) interfaces, the team reached a record transfer rate of 339 Gbps between Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. This nearly doubled last year's overall record, and eclipsed the record for a bidirectional transfer on a single link with a data flow of 187 Gbps between Victoria and Salt Lake.
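
A back-of-the-envelope check of the figures quoted above; the per-circuit average and transfer-time estimate are simple arithmetic, not additional results from the demonstration.

```python
# Rough arithmetic around the SC12 figures quoted above.
aggregate_gbps = 339.0
circuits = 3                              # three 100 Gbps WAN circuits
print(f"average per circuit: {aggregate_gbps / circuits:.0f} Gbps "
      "(above 100 Gbps because traffic in both directions is counted)")

petabyte_bits = 8e15                      # 1 PB expressed in bits
seconds = petabyte_bits / (aggregate_gbps * 1e9)
print(f"1 PB at {aggregate_gbps:.0f} Gbps takes about {seconds / 3600:.1f} hours")
```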


Journal of Physics: Conference Series | 2012

Disk-to-Disk network transfers at 100 Gb/s

Artur Barczyk; Ian Gable; Marilyn Hay; Colin Leavett-Brown; I. Legrand; Kim Lewall; Shawn Patrick McKee; Donald McWilliam; Azher Mughal; Harvey B Newman; Sandor Rozsa; Yvan Savard; Randall Sobie; Thomas Tam; Ramiro Voicu

A 100 Gbps network was established between the California Institute of Technology conference booth at the Super Computing 2011 conference in Seattle, Washington and the computing center at the University of Victoria in Canada. A circuit was established over the BCNET, CANARIE and Super Computing (SCInet) networks using dedicated equipment. The small set of servers at the endpoints used a combination of 10GE and 40GE technologies, and SSD drives for data storage. The network and server configurations are discussed. We will show that the system was able to achieve disk-to-disk transfer rates of 60 Gbps and memory-to-memory rates in excess of 180 Gbps across the WAN. We will discuss the transfer tools, disk configurations, and monitoring tools used in the demonstration.
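
To give a feel for the storage side of such a demonstration, the sketch below works out how many drives each endpoint needs to sustain the quoted disk-to-disk rate. The per-drive sequential rate is an assumption for illustration, not a figure from the paper.

```python
import math

disk_to_disk_gbps = 60.0          # quoted disk-to-disk WAN rate
per_ssd_gbytes_per_s = 0.4        # assumed sustained sequential rate per SSD

required_gbytes_per_s = disk_to_disk_gbps / 8.0
ssds_per_endpoint = math.ceil(required_gbytes_per_s / per_ssd_gbytes_per_s)
print(f"{required_gbytes_per_s:.1f} GB/s needed "
      f"-> at least {ssds_per_endpoint} SSDs per endpoint")
```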


Journal of Physics: Conference Series | 2011

The Dynamics of Network Topology

Ramiro Voicu; I. Legrand; Harvey B Newman; Artur Barczyk; C. Grigoras; Ciprian Dobre

Network monitoring is vital to ensure proper network operation over time, and is tightly integrated with all the data intensive processing tasks used by the LHC experiments. In order to build a coherent set of network management services it is very important to collect in near real-time information about the network topology, the main data flows, traffic volume and the quality of connectivity. A set of dedicated modules was developed in the MonALISA framework to periodically perform network measurement tests between all sites. We developed global services to present in near real-time the entire network topology used by a community. For any LHC experiment such a network topology includes several hundred routers and tens of Autonomous Systems. Any changes in the global topology are recorded and this information can be easily correlated with traffic patterns. The evolution in time of the global network topology is shown in a dedicated GUI. Changes in the global topology at this level occur quite frequently and even small modifications in the connectivity map may significantly affect the network performance. The global topology graphs are correlated with active end-to-end network performance measurements, done with the Fast Data Transfer application, between all sites. Access to both real-time and historical data, as provided by MonALISA, is also important for developing services able to predict the usage pattern, to aid in efficiently allocating resources globally.
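
A hedged sketch of the topology-change detection idea described above (an illustration, not MonALISA code): periodic per-site-pair path measurements are compared against the previous snapshot, and any differences are recorded so they can later be correlated with traffic patterns. Site names and hops below are hypothetical.

```python
import time

def diff_topology(previous, current):
    """Return the site pairs whose measured router path changed."""
    changes = {}
    for pair, path in current.items():
        if previous.get(pair) != path:
            changes[pair] = {"old": previous.get(pair), "new": path}
    return changes

# Hypothetical snapshots: site pair -> ordered list of router hops.
snapshot_t0 = {("CERN", "Caltech"): ["r1", "r2", "r3"],
               ("CERN", "FNAL"):    ["r1", "r4"]}
snapshot_t1 = {("CERN", "Caltech"): ["r1", "r5", "r3"],   # path re-routed
               ("CERN", "FNAL"):    ["r1", "r4"]}

history = []   # timestamped record of topology changes for later correlation
for pair, change in diff_topology(snapshot_t0, snapshot_t1).items():
    history.append((time.time(), pair, change))
    print(pair, change)
```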


Journal of Physics: Conference Series | 2010

Advancement in networks for HEP community

Harvey Newman; Artur Barczyk; Azher Mughal

The key role of networks has been brought into focus as a result of the worldwide-distributed computing model adopted by the four LHC experiments, as a necessary response to the unprecedented data volumes and computational needs of the LHC physics program. As we approach LHC startup and the era of LHC physics, the focus has increased as the experiments develop the tools and methods needed to distribute, process, access and cooperatively analyze datasets with aggregate volumes of Petabytes of simulated data even now, rising to many Petabytes of real and simulated data during the first years of LHC operation.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

OpenFlow services for science: an international experimental research network demonstrating multi-domain automatic network topology discovery, direct dynamic path provisioning using edge signaling and control, integration with multipathing using MPTCP

Joe Mambretti; Jim Hao Chen; Fei Yeh; Chu-Sing Yang; Te-Lung Liu; Ronald van der Pol; Sander Boele; F. Dijkstra; Mon-Yen Luo; Artur Barczyk; Gerben van Malenstein

Collaboration


Dive into Artur Barczyk's collaboration.

Top Co-Authors

Azher Mughal (California Institute of Technology)
Ramiro Voicu (California Institute of Technology)
Harvey B Newman (California Institute of Technology)
Michael Bredel (California Institute of Technology)