
Publication


Featured research published by Ramiro Voicu.


Computer Physics Communications | 2009

MonALISA: An agent based, dynamic service system to monitor, control and optimize distributed systems

I. Legrand; Harvey B Newman; Ramiro Voicu; Catalin Cirstoiu; C. Grigoras; Ciprian Dobre; Adrian Muraru; Alexandru Costan; M. Dediu; Corina Stratan

The MonALISA (Monitoring Agents in a Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by other services or clients. The distributed agents can collaborate and cooperate in performing a wide range of management, control and global optimization tasks using real time monitoring information.
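The discovery mechanism the abstract describes can be pictured as a lease-based registry: services announce themselves, renew their lease periodically, and clients look them up by attributes. The sketch below is illustrative Python only; the names (`ServiceRegistry`, `register`, `discover`) are assumptions, not MonALISA's actual API.

```python
# Illustrative sketch of dynamic-service registration and discovery with
# leases. All names and the lease policy are assumptions for illustration.
import time

class ServiceRegistry:
    """A registry where autonomous services announce themselves with a lease."""
    def __init__(self, lease_seconds=30.0):
        self.lease_seconds = lease_seconds
        self._services = {}  # name -> (attributes, last renewal time)

    def register(self, name, attributes):
        """A service registers (or renews) itself as a dynamic service."""
        self._services[name] = (attributes, time.time())

    def discover(self, **wanted):
        """Clients discover live services matching the requested attributes."""
        now = time.time()
        hits = []
        for name, (attrs, renewed) in self._services.items():
            if now - renewed > self.lease_seconds:
                continue  # lease expired: the service is no longer advertised
            if all(attrs.get(k) == v for k, v in wanted.items()):
                hits.append(name)
        return hits

registry = ServiceRegistry()
registry.register("monitor-cern", {"type": "monitoring", "site": "CERN"})
registry.register("transfer-fnal", {"type": "transfer", "site": "FNAL"})
print(registry.discover(type="monitoring"))  # ['monitor-cern']
```

A service that stops renewing its lease silently disappears from discovery, which is what lets clients find only live services without central bookkeeping.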


ACM Queue | 2009

Monitoring and control of large systems with MonALISA

I. Legrand; Ramiro Voicu; Catalin Cirstoiu; C. Grigoras; L. Betev; Alexandru Costan

MonALISA developers describe how it works, the key design principles behind it, and the biggest technical challenges in building it.


Journal of Physics: Conference Series | 2011

Powering physics data transfers with FDT

Zdenek Maxa; Badar Ahmed; D. Kcira; I. Legrand; Azher Mughal; M. Thomas; Ramiro Voicu

We present a data transfer system for the grid environment built on top of the open source FDT tool (Fast Data Transfer) developed by Caltech in collaboration with the National University of Science and Technology (Pakistan). The enhancement layer above FDT consists of a client program, fdtcp (FDT copy), and an fdtd service (FDT daemon). This pair of components allows for GSI-authenticated data transfers and offers the user (or a data movement production service) an interface analogous to grid middleware data transfer services such as SRM (i.e. srmcp) or GridFTP (i.e. globus-url-copy). fdtcp/fdtd enables third-party, batched file transfers. An important aspect is monitoring by means of ApMon, the MonALISA active monitoring lightweight library, which provides real-time monitoring and arrival time estimates as well as a powerful troubleshooting mechanism. The actual transfer is carried out by the FDT application, an efficient application capable of reading and writing at disk speed over wide area networks. FDT's excellent performance was demonstrated, for example, during the SuperComputing 2009 Bandwidth Challenge. We also discuss the storage technology interface layer, specifically focusing on the open source Hadoop distributed file system (HDFS), presenting the recently developed FDT-HDFS sequential write adapter. The integration with CMS PhEDEx is described as well. The PhEDEx project (Physics Experiment Data Export) is responsible for facilitating large-scale CMS data transfers across the grid. Ongoing and future development involves interfacing with next generation network services developed by the OGF NSI-WG, GLIF and DICE groups, allowing for network resource reservation and scheduling.
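The third-party, batched transfer semantics described above can be illustrated with a small sketch: copy jobs are grouped by (source, destination) endpoint pair so that each daemon pair runs one batched session rather than many single transfers. This is a hedged Python sketch; the function and job format are invented for illustration and do not mirror fdtcp's real interface.

```python
# Illustrative model of batching third-party copy jobs per endpoint pair.
# Hostnames, paths, and the job tuple format are assumptions.

def batch_transfers(jobs):
    """Group (src_host, src_path, dst_host, dst_path) jobs by endpoint pair,
    so each source/destination daemon pair handles one batched session."""
    batches = {}
    for src_host, src_path, dst_host, dst_path in jobs:
        batches.setdefault((src_host, dst_host), []).append((src_path, dst_path))
    return batches

jobs = [
    ("se1.caltech.edu", "/store/a.root", "se2.cern.ch", "/store/a.root"),
    ("se1.caltech.edu", "/store/b.root", "se2.cern.ch", "/store/b.root"),
]
print(batch_transfers(jobs))  # one batch of two files for the single endpoint pair
```

Batching amortizes authentication and connection setup across many files, which matters when thousands of small transfers share the same pair of endpoints.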


Journal of Physics: Conference Series | 2012

The DYNES Instrument: A Description and Overview

Jason Zurawski; Robert Ball; Artur Barczyk; Mathew Binkley; Jeff W. Boote; Eric L. Boyd; Aaron Brown; Robert Brown; Tom Lehman; Shawn Patrick McKee; Benjeman Meekhof; Azher Mughal; Harvey B Newman; Sandor Rozsa; Paul Sheldon; Alan J. Tackett; Ramiro Voicu; Stephen Wolff; Xi Yang

Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networks; a delicate balance is required to serve both long-lived, high-capacity network flows and more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable-duration, guaranteed-bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end sites located on university campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
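The core idea of a dynamic circuit service, a guaranteed-bandwidth reservation for a fixed time window, can be sketched with a toy admission-control check. The class below is illustrative Python under simplifying assumptions (a single link, conservative overlap accounting); it is not DYNES's actual scheduler.

```python
# Toy admission control for guaranteed-bandwidth circuits on one link.
# Capacity, times, and names are illustrative assumptions.

class CircuitScheduler:
    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.reservations = []  # (start, end, gbps); end is exclusive

    def _load(self, start, end):
        # Conservative: count every reservation overlapping [start, end)
        # at its full bandwidth for the whole interval.
        return sum(g for s, e, g in self.reservations if start < e and s < end)

    def reserve(self, start, end, gbps):
        """Grant a guaranteed-bandwidth circuit only if it fits under capacity."""
        if self._load(start, end) + gbps > self.capacity:
            return False
        self.reservations.append((start, end, gbps))
        return True

sched = CircuitScheduler(capacity_gbps=10)
print(sched.reserve(0, 60, 8))   # True: the link is empty
print(sched.reserve(30, 90, 8))  # False: would exceed 10 Gbps during the overlap
print(sched.reserve(60, 90, 8))  # True: starts after the first circuit ends
```

Rejecting the middle request is exactly the "guarantee" part: admitted circuits never compete for bandwidth, unlike best-effort flows on the shared infrastructure.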


international symposium on parallel and distributed computing | 2008

A Monitoring Architecture for High-Speed Networks in Large Scale Distributed Collaborations

Alexandru Costan; Ciprian Dobre; Valentin Cristea; Ramiro Voicu

In this paper we present the architecture of a distributed framework that allows real-time, accurate monitoring of large-scale high-speed networks. An important component of a large-scale distributed collaboration is the complex network infrastructure on which it relies. To monitor and control the networking resources, an adequate instrument should be able to collect and store the relevant monitoring information, presenting meaningful perspectives and synthetic views of how the large distributed system performs. We therefore developed within the MonALISA monitoring framework a system able to collect, store, process and interpret the large volume of status information related to the US LHCNet research network. The system uses flexible mechanisms for data representation, providing access optimization and decision support; it is able to present both real-time and long-term historical information through global or specific views and to take further automated control actions based on them.
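One way to picture the real-time versus long-term history views mentioned above is downsampling: recent samples are kept at full resolution while older data is aggregated into fixed time buckets. The sketch below is illustrative Python; the bucket size and averaging policy are assumptions, not the system's actual storage scheme.

```python
# Illustrative downsampling of monitoring samples into time buckets,
# as a long-term history view. Bucket width and data are assumptions.

def history_view(samples, bucket):
    """Average (timestamp, value) samples into fixed-size time buckets."""
    buckets = {}
    for ts, v in samples:
        buckets.setdefault(ts // bucket, []).append(v)
    # one averaged point per bucket, keyed by the bucket's start time
    return {k * bucket: sum(vs) / len(vs) for k, vs in sorted(buckets.items())}

samples = [(0, 1.0), (10, 3.0), (60, 5.0), (70, 7.0)]
print(history_view(samples, bucket=60))  # {0: 2.0, 60: 6.0}
```

Storing aggregates instead of raw samples is what keeps years of history queryable while the raw stream stays available for the real-time view.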


Proceedings of International Symposium on Grids and Clouds 2015 — PoS(ISGC2015) | 2016

Integrating Network-Awareness and Network-Management into PhEDEx

Vlad Lapadatescu; Andrew Melo; Azher Mughal; Harvey Newman; Artur Barczyk; Paul Sheldon; Ramiro Voicu; T. Wildish; K. De; I. Legrand; Artem Petrosyan; Bob Ball; Jorge Batista; Shawn Patrick McKee

ANSE (Advanced Network Services for Experiments) is an NSF-funded project which aims to incorporate advanced network-aware tools in the mainstream production workflows of the LHC's two largest experiments: ATLAS and CMS. For CMS, this translates into the integration of bandwidth provisioning capabilities in PhEDEx, its data-transfer management tool. PhEDEx controls the large-scale data flows on the WAN across the experiment, typically handling 1 PB of data per week, spread over 70 sites. This is only set to increase once the LHC resumes operations in 2015. The goal of ANSE is to improve the overall working efficiency of the experiments by allowing for more deterministic times to completion for a designated set of data transfers, through the use of end-to-end dynamic virtual circuits with guaranteed bandwidth. Through our work in ANSE, we have enhanced PhEDEx, allowing it to control a circuit's lifecycle based on its own needs. By checking its current workload and past transfer history on normal links, PhEDEx is now able to make smart use of dynamic circuits, only creating one when it is worth doing so. Different circuit management infrastructures can be used via a plug-in system, making the mechanism highly adaptable. In this paper, we present the progress made by ANSE with regard to PhEDEx. We show how our system has evolved since the prototype phase we presented last year, and how it is now able to make use of dynamic circuits as a production-quality service. We describe its updated software architecture and how this mechanism can be refactored and used as a stand-alone system in other software domains (like ATLAS' PanDA). We conclude by describing the remaining work to be done in ANSE (for PhEDEx) and discuss future directions for continued development.
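The circuit decision the abstract describes, creating a circuit only when it is worth doing so, reduces to comparing estimated times to completion with and without the circuit, including the circuit setup overhead. The sketch below is an illustrative simplification in Python; the rates, overhead, and function name are assumptions, not PhEDEx code.

```python
# Illustrative "is a circuit worth it?" check: compare estimated time to
# completion on the normal link versus a circuit with setup overhead.
# All rates and the setup time are invented for illustration.

def use_circuit(bytes_to_move, normal_rate, circuit_rate, setup_seconds):
    """Return True if a circuit gives a shorter estimated time to completion."""
    t_normal = bytes_to_move / normal_rate
    t_circuit = setup_seconds + bytes_to_move / circuit_rate
    return t_circuit < t_normal

GB = 1e9
# A large backlog amortizes the setup cost; a small one does not.
print(use_circuit(100 * GB, normal_rate=0.1 * GB, circuit_rate=1.0 * GB, setup_seconds=120))  # True
print(use_circuit(1 * GB, normal_rate=0.1 * GB, circuit_rate=1.0 * GB, setup_seconds=120))    # False
```

In practice the "normal rate" term would come from the past transfer history the abstract mentions, which is why that history is an input to the decision.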


international conference on intelligent computer communication and processing | 2011

A monitoring framework for large scale networks

Ramiro Voicu; I. Legrand; Ciprian Dobre

Network monitoring is vital to ensure proper network operation over time, and is tightly integrated with the data intensive processing tasks used by modern large scale distributed systems. We present a set of dedicated services developed within the MonALISA framework to provide network management. Such services provide in near real-time the globally aggregated status of an entire network. The time evolution of the global network topology is presented in a dedicated GUI. Changes in the global topology at this level occur quite frequently, and even small modifications in the connectivity map may significantly affect the network performance. The global topology graphs are correlated with active end-to-end network performance measurements, done using the Fast Data Transfer application, between all sites. Access to both real-time and historical data, as provided by MonALISA, is also important for developing services able to predict usage patterns, to aid in efficiently allocating resources globally.


conference on computer as a tool | 2007

An Agent Based Framework to Monitor and Control High Performance Data Transfers

Ciprian Dobre; Ramiro Voicu; Adrian Muraru; I. Legrand

We present a distributed agent-based system used to monitor, configure and control complex, large-scale data transfers in the wide area network. The localhost information service agent (LISA) is a lightweight dynamic service that provides complete system and application monitoring, is capable of dynamically configuring system parameters, and can help in optimizing distributed applications. As part of the MonALISA (Monitoring Agents in a Large Integrated Services Architecture) system, LISA is an end-host agent capable of collecting any type of monitoring information, distributing it, and taking actions based on local or global decision units. The system has been used for the bandwidth challenge at SuperComputing 2006 to coordinate global large-scale data transfers using the Fast Data Transfer (FDT) application between hundreds of servers distributed on major grid sites involved in processing high energy physics data for the future Large Hadron Collider experiments.
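An end-host agent of the kind the abstract describes can be pictured as a loop that collects local metrics, reports them upstream, and applies a local decision rule. The Python sketch below is illustrative only; the metric names, threshold, and throttling action are assumptions, not LISA's real behaviour.

```python
# Illustrative end-host agent: collect metrics, report them, act locally.
# Metric names, the CPU threshold, and the action are assumptions.

class EndHostAgent:
    def __init__(self, read_metrics, report, cpu_limit=0.9):
        self.read_metrics = read_metrics  # callable -> dict of local metrics
        self.report = report              # callable pushing metrics upstream
        self.cpu_limit = cpu_limit
        self.actions = []

    def step(self):
        """One agent cycle: sample, report, and apply a local decision rule."""
        metrics = self.read_metrics()
        self.report(metrics)
        # local decision unit: throttle transfers when the host is overloaded
        if metrics.get("cpu", 0.0) > self.cpu_limit:
            self.actions.append("throttle_transfers")
        return metrics

reports = []
agent = EndHostAgent(lambda: {"cpu": 0.95, "net_mbps": 800}, reports.append)
agent.step()
print(agent.actions)  # ['throttle_transfers']
```

Keeping the decision rule on the host is what lets hundreds of servers react immediately, while the reported metrics still feed global coordination upstream.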


international conference on autonomic computing | 2011

Replication mechanisms for a distributed time series storage and retrieval service

Mugurel Ionut Andreica; I. Legrand; Ramiro Voicu

In this paper we present the prototype architecture of a distributed service which stores and retrieves time series data, together with the replication mechanisms employed in order to provide both reliability and load balancing. The entries of each time series are stored locally on the machines running the instances of the service. Each entry is eventually fully replicated on every service instance. Our replication mechanisms depend on whether each entry of a time series is received from a client by only one service instance or possibly by multiple instances.
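The eventual full replication described above can be modelled as an anti-entropy exchange: after a round, every instance holds the union of all locally ingested entries. The sketch below is a toy Python model under that assumption; names and structure are illustrative, not the service's API.

```python
# Toy model of eventual full replication of time series entries.
# Entry format and the single-round exchange are illustrative assumptions.

class Instance:
    def __init__(self, name):
        self.name = name
        self.store = []  # locally held (series, timestamp, value) entries

    def ingest(self, entry):
        """A client delivers an entry to this instance; it is stored locally."""
        self.store.append(entry)

def replicate(instances):
    """One anti-entropy round: every instance ends up with the union of entries."""
    union = sorted(set(e for inst in instances for e in inst.store))
    for inst in instances:
        inst.store = list(union)

a, b = Instance("a"), Instance("b")
a.ingest(("load", 1, 0.5))
b.ingest(("load", 2, 0.7))
replicate([a, b])
print(len(a.store), len(b.store))  # 2 2
```

Deduplicating via a set is what makes the round idempotent, which matters when, as the abstract notes, the same entry may reach more than one instance.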


intelligent data acquisition and advanced computing systems: technology and applications | 2011

Monitoring large scale network topologies

Ciprian Dobre; Ramiro Voicu; I. Legrand

Network monitoring is vital to ensure proper network operation over time, and is tightly integrated with all the data intensive processing tasks used by the LHC experiments. In order to build a coherent set of network management services it is very important to collect in near real-time information about the network topology, the main data flows, traffic volume and the quality of connectivity. A set of dedicated modules was developed in the MonALISA framework to periodically perform network measurement tests between all sites. We developed global services to present in near real-time the entire network topology used by a community. The time evolution of the global network topology is shown in a dedicated GUI. Changes in the global topology at this level occur quite frequently, and even small modifications in the connectivity map may significantly affect the network performance. The global topology graphs are correlated with active end-to-end network performance measurements, done using the Fast Data Transfer application, between all sites. Access to both real-time and historical data, as provided by MonALISA, is also important for developing services able to predict usage patterns, to aid in efficiently allocating resources globally.
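Detecting the small topology changes the abstract highlights amounts to diffing two connectivity-map snapshots. The sketch below is illustrative Python; the link representation and site names are assumptions, not the MonALISA modules' data model.

```python
# Illustrative diff of two topology snapshots: which links appeared and
# which disappeared. Link representation and names are assumptions.

def topology_diff(old_links, new_links):
    """Compare two sets of (node_a, node_b) links; return (added, removed)."""
    old, new = set(old_links), set(new_links)
    return sorted(new - old), sorted(old - new)

before = {("CERN", "FNAL"), ("CERN", "Caltech")}
after = {("CERN", "FNAL"), ("FNAL", "Caltech")}
added, removed = topology_diff(before, after)
print(added, removed)  # [('FNAL', 'Caltech')] [('CERN', 'Caltech')]
```

Even a one-link diff like this can reroute major data flows, which is why the abstract correlates topology changes with active end-to-end measurements.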

Collaboration


Dive into Ramiro Voicu's collaborations.

Top Co-Authors


Ciprian Dobre

Politehnica University of Bucharest


Azher Mughal

California Institute of Technology


Harvey B Newman

California Institute of Technology


Artur Barczyk

California Institute of Technology


D. Kcira

Université catholique de Louvain
