
Publications


Featured research published by Lucas Correia Villa Real.


IEEE Conference on Mass Storage Systems and Technologies | 2010

The Linear Tape File System

David Pease; Arnon Amir; Lucas Correia Villa Real; Brian Biskeborn; Michael Richmond; Atsushi Abe

While there are many financial and practical reasons to prefer tape storage over disk for various applications, the difficulty of using tape in a general way is a major inhibitor to its wider usage. We present a file system that takes advantage of a new generation of tape hardware to provide efficient access to tape using standard, familiar system tools and interfaces. The Linear Tape File System (LTFS) makes using tape as easy, flexible, portable, and intuitive as using other removable and sharable media, such as a USB drive.


ACM Multimedia | 2015

Dynamic Adjustment of Subtitles Using Audio Fingerprints

Lucas Correia Villa Real; Rodrigo Laiola Guimarães; Priscilla Avegliano

Anyone who ever downloaded subtitle files from the Internet has faced problems synchronizing them with the associated media files. Even with the efforts of communities on reviewing user-contributed subtitles and with mechanisms in movie players to automate the discovery of subtitles for a given media, users still face lip synchronization issues. In this work we conduct a study on several subtitle files associated with popular movies and TV series and analyze their differences. Based on that, we propose a two-phase subtitle synchronization method that annotates subtitles with audio fingerprints, which serve as synchronization anchors to the media player. Preliminary results obtained with our prototype suggest that our technique is effective and has minimal impact on the extension of subtitle formats and on media playback performance.
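The anchor-based correction can be illustrated with a small sketch. This is not the paper's implementation; the anchor pairs, the linear (offset plus drift) model, and all names below are assumptions made for the example:

```python
# Illustrative sketch of the two-phase idea: subtitle cues carry fingerprint
# anchors, and at playback time each anchor is matched against the audio to
# derive a linear time correction (drift factor a, offset b).

def fit_correction(anchors):
    """Least-squares fit t_audio = a * t_subtitle + b from (t_subtitle, t_audio) pairs."""
    n = len(anchors)
    sx = sum(s for s, _ in anchors)
    sy = sum(t for _, t in anchors)
    sxx = sum(s * s for s, _ in anchors)
    sxy = sum(s * t for s, t in anchors)
    denom = n * sxx - sx * sx
    if denom == 0:  # a single anchor only supports a pure offset
        return 1.0, sy / n - sx / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def resync(cues, anchors):
    """Map subtitle cue (start, end) times onto the audio timeline."""
    a, b = fit_correction(anchors)
    return [(a * start + b, a * end + b) for start, end in cues]
```

With two anchors that are both two seconds late, every cue is simply shifted by two seconds; anchors that disagree additionally correct drift.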


IEEE International Conference on Networking, Architecture and Storage | 2015

An I/O scheduler for dual-partitioned tapes

Lucas Correia Villa Real; Michael Richmond; Brian Biskeborn; David Pease

For a long time, tapes offered a single logical partition that was operated efficiently by dedicated software in batch mode. With the introduction of the LTO-5 standard, tapes now support logical partitioning and, with the arrival of the Linear Tape File System (LTFS), their contents are exposed through the file system interface as regular files and directories. As a consequence, the tape medium can now be accessed by concurrent processes that may create files in parallel. This creates two potential problems: interleaved data blocks and expensive partition switches. This paper presents the design and implementation of Unified, an I/O scheduler for LTFS that addresses these problems. We observe that delayed writes effectively decrease file fragmentation, and that buffering and redundant file copies reduce the number of partition switches. We also find that software-based read prefetching, commonly used to manage disk devices, does not improve read times on tapes but rather introduces potential overhead. With the techniques described in this paper, the Unified scheduler allows tape operations to be performed close to raw hardware speed.
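The delayed-write idea can be sketched in a few lines. This is an illustrative toy, not the Unified scheduler itself; the policy shown (buffer blocks per file, flush grouped by partition) and all names are assumptions made for the example:

```python
# Toy delayed-write scheduler: writes from concurrent processes are buffered
# per (partition, file) and flushed in partition order, so each file's blocks
# land contiguously and the drive switches partitions at most once per flush.

from collections import defaultdict

class DelayedWriteScheduler:
    def __init__(self):
        self.pending = defaultdict(list)  # (partition, filename) -> buffered blocks

    def write(self, partition, filename, block):
        """Accept a write without sending it to the drive yet."""
        self.pending[(partition, filename)].append(block)

    def flush(self):
        """Return an ordered schedule: all files on one partition before the next."""
        schedule = []
        for key in sorted(self.pending):
            partition, filename = key
            for block in self.pending[key]:
                schedule.append((partition, filename, block))
        self.pending.clear()
        return schedule
```

Interleaved writes to two files on two partitions come out of `flush()` de-interleaved, with a single partition switch instead of one per block.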


European Conference on Parallel Processing | 2014

IFM: A Scalable High Resolution Flood Modeling Framework

Swati Singhal; Sandhya Aneja; Frank Liu; Lucas Correia Villa Real; Thomas George

Accurate and timely flood forecasts are essential for effective management of flood disasters, which have become increasingly frequent over the last decade. Obtaining such forecasts requires high resolution integrated weather and flood models with computational costs optimized to provide sufficient lead time. Existing overland flood modeling software packages do not readily scale to topography grids of large size and only permit coarse resolution modeling of large regions. In this paper, we present a highly scalable, integrated flood forecasting system called IFM that runs on both shared and distributed memory architectures, effectively allowing the computation of domains with billions of cells. In order to optimize IFM for large areas, we focus on the computationally expensive overland routing engine. We describe a parallelization scheme and novel strategies to partition irregular domains to minimize load imbalance in the presence of memory constraints, which result in a 40% reduction in time compared to the best uniform partitioning. We demonstrate the scalability of the proposed approach for up to 8192 processors on large scale real-world domains. Our model can provide a 48-hour flood forecast on a watershed of 656 million cells in under 5 minutes.
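A minimal sketch of load-aware contiguous partitioning conveys the flavor of the problem. It assumes the irregular domain has been reduced to per-strip counts of active (non-masked) cells; the paper's strategies additionally honor memory constraints and 2D structure, so this greedy split is only illustrative:

```python
# Greedy contiguous partitioning by active-cell load: close each part once it
# reaches the average load, while keeping enough strips for the remaining
# parts. A uniform split by strip count would ignore that some strips are
# mostly masked (zero active cells) and produce heavy imbalance.

def partition_strips(cell_counts, num_parts):
    """Split strip indices into num_parts contiguous groups with similar load."""
    total = sum(cell_counts)
    target = total / num_parts
    parts, current, load = [], [], 0
    for i, count in enumerate(cell_counts):
        current.append(i)
        load += count
        remaining_strips = len(cell_counts) - i - 1
        remaining_parts = num_parts - len(parts) - 1
        if load >= target and remaining_strips >= remaining_parts > 0:
            parts.append(current)
            current, load = [], 0
    parts.append(current)
    return parts
```

On a uniform domain this degenerates to an even split; on a domain whose active cells are clustered, the boundaries move toward the clusters.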


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

A hybrid parallelization approach for high resolution operational flood forecasting

Swati Singhal; Lucas Correia Villa Real; Thomas George; Sandhya Aneja; Yogish Sabharwal

Accurate and timely flood forecasts are becoming essential due to the increased incidence of flood-related disasters over the last few years. Such forecasts require a high resolution integrated flood modeling approach. In this paper, we present an integrated flood forecasting system with an automated workflow over the weather modeling, surface runoff estimation, and water routing components. We primarily focus on the water routing process, which is the most compute intensive phase, and present two parallelization strategies to scale it up to large grid sizes. Specifically, we employ nature-inspired decomposition of a simulation domain into watershed basins and propose a master-slave model of parallelization for distributed processing of the basins. We also propose an intra-basin shared memory parallelization approach using OpenMP. Empirical evaluation of the proposed parallelization strategies indicates a potential for high speedups for certain types of scenarios (e.g., a speedup of 13× with 16 threads using OpenMP parallelization for the large Rio de Janeiro basin).
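The two-level scheme can be caricatured in a few lines. The dynamic work queue below stands in for the distributed master-slave level, and the per-basin routine stands in for the OpenMP-parallel inner loop; the use of Python threads and all names are illustrative assumptions, not the paper's code:

```python
# Two-level parallelization sketch: basins are handed out dynamically to
# workers (the master-slave, distributed level), and each basin's cells are
# processed by an inner routine (shared-memory parallel in the paper).

from concurrent.futures import ThreadPoolExecutor

def route_basin(basin_cells):
    # Stand-in for intra-basin water routing; in the paper this loop over
    # cells is parallelized with OpenMP.
    return sum(basin_cells)

def route_domain(basins, num_workers=4):
    # Dynamic distribution: idle workers pull the next unprocessed basin,
    # which mitigates imbalance between large and small basins.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(route_basin, basins))
```

Because workers pull basins as they finish, one large basin does not leave the other workers idle for the whole run, mirroring the motivation for the master-slave design.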


ACM Multimedia | 2010

File-based media workflows using LTFS tapes

Arnon Amir; David Pease; Rainer Richter; Brian Biskeborn; Michael Richmond; Lucas Correia Villa Real

While digital video cameras have existed for over two decades, digital video cassettes are still the primary storage medium in professional video archives. One of the major inhibitors in the transition to file-based workflows and media archives is the lack of an affordable, portable, and archive-compatible storage medium for the vast amounts of content produced. We address this need by (a) defining the Linear Tape File System (LTFS) tape format for storing files, file properties, hierarchical directories, and extended attributes; (b) building file system software that allows LTFS tapes to be used in the same way as portable storage devices; and (c) leveraging LTFS to create efficient file-based media workflows. In the exhibit we present LTFS on LTO-5 (Linear Tape Open, Gen 5) tapes. We demonstrate file-based workflows with storyboards, video proxies, and partial video restore of MXF (Material Exchange Format) professional video content. LTFS on LTO-5 tape can be 20 times higher in capacity, 10 times faster, and 40 times cheaper than digital video cassette media. Furthermore, it combines the benefits of tape-based and file-based workflows. The new tape format streamlines file-based production, from video capture and transport to long-term archive. The tape format and file system implementation are available as open source.


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2015

Architecture Aware Resource Allocation for Structured Grid Applications: Flood Modelling Case

Vaibhav Saxena; Thomas George; Yogish Sabharwal; Lucas Correia Villa Real

Numerous problems in science and engineering involve discretizing the problem domain as a regular structured grid and make use of domain decomposition techniques to obtain solutions faster using high performance computing. However, load imbalance of the workloads among the various processing nodes can cause severe degradation in application performance. This problem is exacerbated when the computational workload is non-uniform and the processing nodes have varying computational capabilities. In this paper, we present novel local search algorithms for regular partitioning of a structured mesh across heterogeneous compute nodes in a distributed setting. The algorithms seek to assign larger workloads to processing nodes having higher computational capabilities while maintaining the regular structure of the mesh, in order to achieve a better load balance. We also propose a distributed memory (MPI) parallelization architecture that can be used to achieve a parallel implementation of scientific modelling software requiring structured grids on heterogeneous processing resources involving CPUs and GPUs. Our implementation can make use of the available CPU cores and multiple GPUs of the underlying platform simultaneously. Empirical evaluation on real world flood modelling domains on a heterogeneous architecture comprising multicore CPUs and GPUs suggests that the proposed partitioning approach can provide a performance improvement of up to 8× over a naive uniform partitioning.
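A capability-proportional split hints at the starting point that such local search algorithms would refine. The throughput-proportional row assignment below is an assumption for illustration, not the paper's algorithm:

```python
# Split N grid rows into contiguous blocks sized in proportion to each node's
# measured throughput, preserving the regular mesh structure. A local search
# would then perturb the boundaries to account for non-uniform workload.

def proportional_rows(num_rows, speeds):
    """Return per-node (start, end) row ranges proportional to node speed."""
    total = sum(speeds)
    sizes = [int(num_rows * s / total) for s in speeds]
    # Hand any leftover rows (from integer truncation) to the fastest nodes.
    leftover = num_rows - sum(sizes)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        sizes[i] += 1
    bounds, start = [], 0
    for size in sizes:
        bounds.append((start, start + size))
        start += size
    return bounds
```

A node advertised as three times faster receives three times as many rows, so on uniform work every node finishes at roughly the same time; non-uniform work is what motivates the paper's search-based refinement.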


USENIX Large Installation System Administration Conference | 2008

IZO: applications of large-window compression to virtual machine management

Mark A. Smith; Jan Pieper; Daniel Gruhl; Lucas Correia Villa Real


Archive | 2013

Apparatus and methods for co-located social integration and interactions

Kiran Mantripragada; Lucas Correia Villa Real; Nicole Sultanum


Archive | 2011

System, Method and Program Product for Providing Populace Centric Weather Forecasts

Victor Fernandes Cavalcante; Ricardo Herrmann; Kiran Mantripragada; Marco Aurelio Stelmar Netto; Lucas Correia Villa Real; Cleidson R. B. de Souza
