Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrei Hutanu is active.

Publication


Featured research published by Andrei Hutanu.


Future Generation Computer Systems | 2006

High-definition multimedia for multiparty low-latency interactive communication

Petr Holub; Luděk Matyska; Miloš Liška; Lukáš Hejtmánek; Jiří Denemark; Tomáš Rebok; Andrei Hutanu; Ravi Paruchuri; Jan Radil; Eva Hladká

We describe a high-quality collaborative environment that uses High-Definition (HD) video to achieve near-realistic perception of a remote site. The capture part, consisting of an HD camera, a Centaurus HD-SDI capture card, and the UltraGrid software, produces a 1.5 Gbps UDP data stream of uncompressed HD video that is transferred over a 10GE network interface to the high-speed IP network. The HD video stream is displayed using either a software-based solution with color depth down-sampling and field de-interlacing, or another Centaurus card. Data distribution to individual participants of the videoconference is achieved using a user-controlled UDP packet reflector based on the Active Element idea. The viability of this system was demonstrated at the iGrid 2005 conference with a three-way high-quality videoconference among sites in the Czech Republic, Louisiana, and California.
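The user-controlled UDP packet reflector at the heart of the distribution scheme forwards each incoming datagram to every participant except its sender. A minimal sketch of that forwarding logic (hypothetical class and names; the actual Active Element reflector handles real sockets, access control, and per-user stream processing):

```python
class PacketReflector:
    """Toy UDP packet reflector sketch: each incoming datagram is
    forwarded to every registered participant except its sender."""

    def __init__(self, participants):
        # participants: iterable of (host, port) address tuples
        self.participants = set(participants)

    def reflect(self, datagram, sender, send):
        # Forward the datagram to everyone but the originating site.
        for addr in self.participants:
            if addr != sender:
                send(datagram, addr)


# Usage: record forwarded packets instead of opening real sockets.
sent = []
reflector = PacketReflector([("a", 1), ("b", 2), ("c", 3)])
reflector.reflect(b"frame", ("a", 1),
                  lambda data, addr: sent.append((data, addr)))
# The datagram reaches the two other participants, not the sender.
```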


IEEE Visualization | 2004

Interactive Exploration of Large Remote Micro-CT Scans

Steffen Prohaska; Andrei Hutanu; Ralf Kähler; Hans-Christian Hege

Datasets of tens of gigabytes are becoming common in computational and experimental science. This development is driven by advances in imaging technology, producing detectors with growing resolutions, as well as by the availability of cheap processing power and memory capacity in commodity-based computing clusters. We describe the design of a visualization system that allows scientists to interactively explore large remote datasets in an efficient and flexible way. The system is broadly applicable and currently used by medical scientists conducting an osteoporosis research project. Human vertebral bodies are scanned using a high-resolution micro-CT scanner, producing scans of roughly 8 GB each. All participating research groups require access to the centrally stored data. Due to the rich internal bone structure, scientists need to interactively explore the full dataset at coarse levels, as well as visualize subvolumes of interest at the highest resolution. Our solution is based on HDF5 and GridFTP. When accessing data remotely, the HDF5 data processing pipeline is modified to support efficient retrieval of subvolumes. We reduce the overall latency and optimize throughput by executing high-level operations on the remote side. The GridFTP protocol is used to pass the HDF5 requests to a customized server. The approach takes full advantage of local graphics hardware for rendering. Interactive visualization is accomplished using a background thread to access the datasets stored in a multiresolution format. A hierarchical volume renderer provides seamless integration of high-resolution details with low-resolution overviews.
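Efficient subvolume retrieval of the kind described above has to map a requested region onto the blocks of a block-partitioned dataset, so that only intersecting blocks are fetched remotely. A minimal sketch of that mapping, using hypothetical names and a uniform block grid rather than the actual HDF5/GridFTP pipeline:

```python
def blocks_for_subvolume(origin, shape, block):
    """Return the (i, j, k) indices of all blocks in a uniformly
    block-partitioned 3D dataset that intersect the requested
    subvolume starting at `origin` with extent `shape`."""
    # For each axis, the range of block indices the subvolume spans.
    ranges = [range(o // b, (o + s - 1) // b + 1)
              for o, s, b in zip(origin, shape, block)]
    return [(i, j, k)
            for i in ranges[0]
            for j in ranges[1]
            for k in ranges[2]]


# A 10^3 region at the dataset origin with 8^3 blocks touches 2x2x2 blocks.
blocks = blocks_for_subvolume((0, 0, 0), (10, 10, 10), (8, 8, 8))
```

Only these block indices would then be requested from the remote server, rather than the whole multi-gigabyte scan.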


International Symposium on High-Capacity Optical Networks and Enabling Technologies | 2007

EnLIGHTened Computing: An architecture for co-allocating network, compute, and other grid resources for high-end applications

Lina Battestilli; Andrei Hutanu; Gigi Karmous-Edwards; Daniel S. Katz; Jon MacLaren; Joe Mambretti; John H. Moore; Seung-Jong Park; Harry G. Perros; Syam Sundar; Savera Tanwir; Steven R. Thorpe; Yufeng Xin

Many emerging high performance applications require distributed infrastructure that is significantly more powerful and flexible than traditional grids. Such applications require the optimization, close integration, and control of all grid resources, including networks. The EnLIGHTened (ENL) computing project has designed an architectural framework that allows grid applications to dynamically request (in-advance or on-demand) any type of grid resource: computers, storage, instruments, and deterministic, high-bandwidth network paths, including lightpaths. Based on application requirements, the ENL middleware communicates with grid resource managers and, when availability is verified, co-allocates all the necessary resources. ENL's domain network manager controls all network resource allocations to dynamically set up and delete dedicated circuits using generalized multiprotocol label switching (GMPLS) control plane signaling. In order to make optimal brokering decisions, the ENL middleware uses near-real-time performance information about grid resources. A prototype of this architectural framework on a national-scale testbed implementation has been used to demonstrate a small number of applications. Based on this, a set of changes for the middleware has been laid out and is being implemented.
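In-advance co-allocation succeeds only when every required resource can cover the requested time window; otherwise nothing is allocated. A toy sketch of that all-or-nothing check, with hypothetical data shapes (the real middleware negotiates with separate resource managers and GMPLS-controlled network paths):

```python
def coallocate(request, availability):
    """All-or-nothing in-advance co-allocation check: the request
    succeeds only if every required resource reports at least one
    free slot that fully covers the requested time window."""
    start, end = request["window"]
    for resource in request["resources"]:
        slots = availability.get(resource, [])
        if not any(s <= start and end <= e for s, e in slots):
            return False  # one resource unavailable: co-allocation fails
    return True


# Both resources cover the window (10, 20), so co-allocation succeeds.
ok = coallocate({"window": (10, 20), "resources": ["compute", "network"]},
                {"compute": [(0, 30)], "network": [(5, 25)]})
# A resource with no advertised availability makes the whole request fail.
missing = coallocate({"window": (10, 20), "resources": ["compute", "storage"]},
                     {"compute": [(0, 30)]})
```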


Future Generation Computer Systems | 2006

Distributed and collaborative visualization of large data sets using high-speed networks

Andrei Hutanu; Gabrielle Allen; Stephen David Beck; Petr Holub; Hartmut Kaiser; Archit Kulshrestha; Miloš Liška; Jon MacLaren; Ludek Matyska; Ravi Paruchuri; Steffen Prohaska; Edward Seidel; Brygg Ullmer; Shalini Venkataraman

We describe an architecture for distributed collaborative visualization that integrates video conferencing, distributed data management and grid technologies as well as tangible interaction devices for visualization. High-speed, low-latency optical networks support high-quality collaborative interaction and remote visualization of large data.


International Conference on Move to Meaningful Internet Systems | 2005

Shelter from the storm: building a safe archive in a hostile world

Jon MacLaren; Gabrielle Allen; Chirag Dekate; Dayong Huang; Andrei Hutanu; Chongjie Zhang

The storing of data and configuration files related to scientific experiments is vital if those experiments are to remain reproducible, or if the data is to be shared easily. The presence of historical (observed) data is also important in order to assist in model evaluation and development. This paper describes the design and implementation process for a data archive, which was required for a coastal modelling project. The construction of the archive is described in detail, from its design through to deployment and testing. As we will show, the archive has been designed to tolerate failures in its communications with external services, and also to ensure that no information is lost if the archive itself fails, i.e., upon restarting, the archive will still be in exactly the same state.
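The guarantee that the archive restarts in exactly the same state requires persisting state so that a crash can never leave a partially written file on disk. One standard way to obtain that property, sketched here with hypothetical function names (the paper does not specify this exact mechanism), is to write to a temporary file and atomically rename it over the old one:

```python
import json
import os
import tempfile


def save_state(path, state):
    """Write state atomically: write to a temp file in the same
    directory, fsync it, then rename over the old file. A crash
    leaves either the old or the new state, never a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)  # atomic on POSIX filesystems


def load_state(path, default):
    """Restore the last committed state, or a default on first start."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return default


# Usage: save and restore a small piece of archive state.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "state.json")
    save_state(p, {"pending_jobs": 3})
    restored = load_state(p, {})
```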


Collaborative Computing | 2007

Uncompressed HD video for collaborative teaching — an experiment

Andrei Hutanu; Ravi Paruchuri; Daniel Eiland; Miloš Liška; Petr Holub; Steven R. Thorpe; Yufeng Xin

This article describes a distributed classroom experiment carried out by five universities in the US and Europe at the beginning of 2007. This experiment was motivated by the emergence of new digital media technology supporting uncompressed high-definition video capture, transport, and display, as well as the networking services required for its deployment across wide distances. The participating institutes designed a distributed collaborative environment centered around the new technology and applied it to join the five sites into a single virtual classroom, where a real course was offered to the registered students. Here we present the technologies used in the experiment and the results of a technology evaluation conducted with the participating students, and we identify areas for future improvement of the system. While there are a few hurdles in the path of successfully deploying this technology on a large scale, our experiment shows that the new technology is sustainable and that the significant quality improvements it brings can help build an effective distributed and collaborative classroom environment.


Cluster Computing and the Grid | 2004

Remote partial file access using compact pattern descriptions

Thorsten Schütt; Andre Merzky; Andrei Hutanu; Florian Schintke

We present a method for efficient access to parts of remote files. The efficiency is achieved by using a file-format-independent compact pattern description that allows us to request several parts of a file in a single operation. This drastically reduces the number of remote operations and network latencies compared to common solutions. We measured the time to access parts of remote files with compact patterns, compared it with normal GridFTP remote partial file access, and observed a significant performance increase. Further, we discuss how the presented pattern access can be used for an efficient read from multiple replicas, and how this can be integrated into a data management system to support the storage of partial replicas for large-scale simulations.
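The key idea is that one compact, parameterized description replaces many individual byte-range requests. A minimal illustration using a hypothetical (offset, length, stride, count) pattern, not the paper's actual description format:

```python
def expand_pattern(offset, length, stride, count):
    """Expand a compact (offset, length, stride, count) pattern into
    the explicit list of (start, length) byte ranges it describes.
    The compact form travels to the server in one remote operation;
    the server expands it locally and reads all the ranges."""
    return [(offset + i * stride, length) for i in range(count)]


# One 4-tuple stands for three 4-byte reads spaced 16 bytes apart.
ranges = expand_pattern(0, 4, 16, 3)
```

Sending the single 4-tuple instead of three separate requests saves two network round trips; for the strided access patterns of large array files the savings grow with the number of ranges.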


Grid Computing | 2010

High-performance remote data access for remote visualization

Andrei Hutanu; Gabrielle Allen; Tevfik Kosar

Visualization of remote data needs software that is fundamentally designed for distributed processing and high-speed networks. Motivated by and following the requirements of a distributed visualization application, this article describes the design and implementation of a fast and configurable remote data access system called eavivdata. Because wide-area networks may have a high latency, the remote data access system uses an architecture that effectively hides latency. Four remote data access architectures are analyzed and the results show that an architecture that combines bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, also supporting high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system.
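The latency-hiding benefit of combining bulk and pipeline processing can be seen in a back-of-the-envelope model: sequential requests pay one round-trip latency each, while pipelined requests keep the link busy and pay the latency roughly once. The numbers below are illustrative only, not measurements from the paper:

```python
def sequential_time(n, latency, transfer):
    """n requests issued one at a time: every request waits for a
    full round trip before the next one is sent."""
    return n * (latency + transfer)


def pipelined_time(n, latency, transfer):
    """Idealized pipeline: requests are kept in flight, so the
    round-trip latency is paid once and transfers overlap it."""
    return latency + n * transfer


# 100 requests over a 50 ms-latency link, 1 ms transfer each:
slow = sequential_time(100, 50, 1)   # 5100 ms
fast = pipelined_time(100, 50, 1)    # 150 ms
```

Under this simple model the pipelined architecture is over 30x faster for small requests on a high-latency link, which is the qualitative effect the measured speedups reflect.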


Proceedings of the 15th ACM Mardi Gras conference on From lightweight mash-ups to lambda grids: Understanding the spectrum of distributed computing requirements, applications, tools, infrastructures, interoperability, and the incremental adoption of key capabilities | 2008

Network flow based resource brokering and optimization techniques for distributed data streaming over optical networks

Cornelius Toole; Andrei Hutanu

This article analyzes the problem of optimizing access to and transport of large remote data through intelligent resource selection and configuration. With the availability of high-speed optical networks, the main problem for remote data access is shifting from having enough network bandwidth to having enough data ready to saturate the network when requested by the application. Network bandwidth is now higher than disk bandwidth, which gives us the possibility of utilizing multiple distributed resources to saturate the network links. We consider two scenarios: one where only disks are used as data sources, and a more advanced one where compute resources in the network can be used as caches to increase instantaneous throughput. The problem we face is choosing and configuring the resources for these scenarios. This is non-trivial in general; however, because we use application-driven network resource allocation (which gives us predictability and determinism in terms of network performance), the problem becomes tractable. We discuss optimization algorithms that are applicable to this problem, and present an algorithm that divides the problem into two sub-problems that can be solved using existing network flow algorithms.
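In the disk-only scenario, the broker must pick enough data sources to saturate a provisioned network link. The paper reduces this to network flow sub-problems; the sketch below uses a much simpler greedy stand-in just to illustrate the selection problem (hypothetical names and bandwidth units):

```python
def select_sources(bandwidths, target):
    """Greedy stand-in for the brokering step: pick the fewest disk
    sources whose aggregate bandwidth meets a target link capacity,
    preferring the fastest sources first."""
    chosen, total = [], 0.0
    for name, bw in sorted(bandwidths.items(), key=lambda kv: -kv[1]):
        if total >= target:
            break
        chosen.append(name)
        total += bw
    return chosen, total


# Three disks feeding a 6 Gbps lightpath: the two fastest suffice.
chosen, total = select_sources({"disk1": 4.0, "disk2": 3.0, "disk3": 2.0}, 6.0)
```

The actual algorithm must also respect network topology and per-link capacities, which is why it is formulated as a flow problem rather than a simple greedy sum.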


Middleware for Grid Computing | 2006

Generic support for bulk operations in grid applications

Stephan Hirmer; Hartmut Kaiser; Andre Merzky; Andrei Hutanu; Gabrielle Allen

Within grid environments, latencies for remote operations of any kind can, as the number of operations increases, become a dominant factor for overall application performance. Amongst various approaches to latency hiding, bulk operations provide one possible solution for reducing latencies across large numbers of similar operations. The identification of bulks can, however, be a non-trivial exercise for application developers, often requiring changes to the implemented remote API, and hence direct code modifications to the applications themselves. In this paper we show how bulk operations can be integrated into existing API implementations, and identify the required properties of the API to make this approach feasible. We also show that our approach considers any type of bulk operation, and is independent of the underlying middleware support for bulks. We further describe a prototype implementation (within the SAGA C++ reference implementation effort), and present performance measurements for bulks of remote file copy operations.
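The core idea is that individual API calls are collected transparently and dispatched as one batch, so many similar remote operations pay the round-trip latency once. A minimal sketch with a hypothetical interface (not the SAGA API):

```python
class BulkFileCopier:
    """Sketch of transparent bulk collection: copy() calls are queued
    and executed as a single batch on flush(), so one remote round
    trip carries many operations. Hypothetical interface for
    illustration only."""

    def __init__(self, execute_batch):
        self.pending = []
        self.execute_batch = execute_batch  # sends one batched request

    def copy(self, src, dst):
        # Looks like an ordinary per-file call to the application.
        self.pending.append((src, dst))

    def flush(self):
        batch, self.pending = self.pending, []
        if batch:
            self.execute_batch(batch)


# Usage: record batches instead of performing real remote copies.
calls = []
copier = BulkFileCopier(lambda batch: calls.append(list(batch)))
copier.copy("a", "b")
copier.copy("c", "d")
copier.flush()
# Both copy operations travel in a single batched request.
```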

Collaboration


Dive into Andrei Hutanu's collaborations.

Top Co-Authors


Andre Merzky

Louisiana State University


Brygg Ullmer

Louisiana State University


Ravi Paruchuri

Louisiana State University


Jon MacLaren

Louisiana State University


Werner Benger

Louisiana State University


Cornelius Toole

Louisiana State University
