
Publication


Featured research published by Renan G. Cattelan.


ACM Transactions on Multimedia Computing, Communications, and Applications | 2008

Watch-and-comment as a paradigm toward ubiquitous interactive video editing

Renan G. Cattelan; César A. C. Teixeira; Rudinei Goularte; Maria da Graça Campos Pimentel

The literature reports research efforts allowing the editing of interactive TV multimedia documents by end-users. In this article we propose complementary contributions relative to end-user generated interactive video, video tagging, and collaboration. In earlier work we proposed the watch-and-comment (WaC) paradigm as the seamless capture of an individual's comments so that corresponding annotated interactive videos can be automatically generated. As a proof of concept, we implemented a prototype application, the WaCTool, that supports the capture of digital ink and voice comments over individual frames and segments of the video, producing a declarative document that specifies both the structure of the different media streams and their synchronization. In this article, we extend the WaC paradigm in two ways. First, user-video interactions are associated with edit commands and digital ink operations. Second, focusing on collaboration and distribution issues, we employ annotations as simple containers for context information, using them as tags to organize, store and distribute information in a P2P-based multimedia capture platform. We highlight the design principles of the watch-and-comment paradigm and demonstrate related results, including the current version of the WaCTool and its architecture. We also illustrate how an interactive video produced by the WaCTool can be rendered in an interactive video environment, the Ginga-NCL player, and include results from a preliminary evaluation.
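As a rough illustration of the idea, a WaC-style annotation can be modeled as a container that ties a comment (ink or voice) to a frame or segment of a video, with free-form tags for organizing and distributing it; the class and field names below are hypothetical sketches, not the actual WaCTool API:

```python
from dataclasses import dataclass, field

@dataclass
class WaCAnnotation:
    """Hypothetical container for a watch-and-comment style annotation."""
    video_id: str
    start_s: float   # start of the annotated segment (seconds)
    end_s: float     # equal to start_s for a single-frame comment
    kind: str        # "ink" or "voice"
    payload: bytes = b""  # raw ink strokes or audio samples
    tags: list = field(default_factory=list)  # used to organize/route in a P2P platform

# A frame-level ink comment at 12.5 s, tagged for later retrieval:
note = WaCAnnotation("lecture01", 12.5, 12.5, "ink", tags=["slide-3", "question"])
print(note.kind, note.tags)
```

A declarative document (such as the NCL document the abstract mentions) could then be generated from a list of such containers, one media anchor per annotation.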


Document Engineering | 2004

Interactive multimedia annotations: enriching and extending content

Rudinei Goularte; Renan G. Cattelan; José Antonio Camacho-Guerrero; Valter R. Inacio Jr.; Maria da Graça Campos Pimentel

This paper discusses an approach to the problem of annotating multimedia content. Our approach provides annotation as metadata for indexing, retrieval and semantic processing, as well as content enrichment. We use an underlying model for structured multimedia descriptions and annotations, allowing the establishment of spatial, temporal and linking relationships. We discuss aspects related to documents and annotations used to guide the design of an application that allows annotations to be made with pen-based interaction on Tablet PCs. As a result, a video stream can be annotated during capture. The annotation can be further edited, extended or played back synchronously.


Document Engineering | 2010

A social approach to authoring media annotations

Roberto Fagá Jr.; Vivian Genaro Motti; Renan G. Cattelan; César A. C. Teixeira; Maria da Graça Campos Pimentel

End-user generated content is responsible for the success of several collaborative applications, as can be seen on the web. The collaborative use of many of these applications is made possible by annotation features that allow users to include commentaries on each other's content. In this paper we first discuss the opportunity of defining vocabularies that allow third-party applications to integrate annotations into end-user generated documents, and present a proposal for such a vocabulary. We then illustrate the usefulness of our proposal by detailing a tool which allows users to add multimedia annotations to end-user generated video content.


Latin American Web Congress | 2004

M4Note: a multimodal tool for multimedia annotations

Rudinei Goularte; José Antonio Camacho-Guerrero; Valter R. Inacio Jr.; Renan G. Cattelan; Maria da Graça Campos Pimentel

This work discusses an approach to the problem of annotating multimedia content. Our approach provides annotation as metadata for indexing, retrieval and semantic processing, as well as content enrichment. We use an underlying model for structured multimedia descriptions and annotations, allowing the establishment of spatial, temporal and linking relationships. We discuss aspects related to documents and annotations used to guide the design of an application that allows annotations to be made with pen-based interaction on tablet PCs. As a result, a video stream can be annotated at the same time that it is captured. Moreover, the annotation can be edited, extended or played back synchronously afterwards.


European Conference on Interactive TV | 2008

Ubiquitous Interactive Video Editing Via Multimodal Annotations

Maria da Graça Campos Pimentel; Rudinei Goularte; Renan G. Cattelan; Felipe S. Santos; César A. C. Teixeira

Considering that, when users watch a video with someone else, they tend to make comments regarding its contents (such as a comment about someone appearing in the video), in previous work we exploited ubiquitous computing concepts to propose the watching-and-commenting authoring paradigm, in which a user's comments are automatically captured so as to automatically generate a corresponding annotated interactive video. In this paper we revisit and extend our previous work and detail our prototype that supports the watching-and-editing paradigm, discussing how a ubiquitous computing platform may explore digital ink and associated gestures to support the authoring of multimedia content while enhancing the social aspects of video watching.


ACM Symposium on Applied Computing | 2008

Inkteractors: interacting with digital ink

Renan G. Cattelan; César A. C. Teixeira; Hélder Ribas; Ethan V. Munson; Maria da Graça Campos Pimentel

Digital inking systems accept pen-based input from the user, process and archive the resulting data as digital ink. However, the reviewing techniques currently available for such systems are limited. In this paper we formalize operators that model the user interaction during digital ink capture. Such operators can be applied in situations where it is important to have a customized view of the inking activity. We describe the implementation of a player that allows the user, by selecting the desired operators, to interact with digitally annotated documents while reviewing them.
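The paper's operators are not reproduced here; as a hedged sketch under the assumption that strokes carry capture timestamps, such operators might be small composable functions over a stroke list, each producing a customized view of the inking activity (all names below are hypothetical):

```python
# Hypothetical sketch of "inkteractor"-style operators: each operator maps a
# list of timestamped strokes to a filtered or reordered list, so operators
# can be composed to build a customized view of the inking activity.

def by_interval(strokes, t0, t1):
    """Keep only strokes captured within [t0, t1] seconds."""
    return [s for s in strokes if t0 <= s["t"] <= t1]

def without_erased(strokes):
    """Drop strokes the user later erased."""
    return [s for s in strokes if not s.get("erased")]

def playback_order(strokes):
    """Replay strokes in capture order."""
    return sorted(strokes, key=lambda s: s["t"])

strokes = [{"t": 5.0}, {"t": 1.0, "erased": True}, {"t": 3.0}]
view = playback_order(without_erased(by_interval(strokes, 0, 4)))
print([s["t"] for s in view])  # [3.0]
```

A player along the lines described in the abstract could let the user pick which operators to apply while reviewing an annotated document.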


Multimedia Tools and Applications | 2008

Automatically linking live experiences captured with a ubiquitous infrastructure

Alessandra Alaniz Macedo; Laércio Augusto Baldochi; José Antonio Camacho-Guerrero; Renan G. Cattelan; Maria da Graça Campos Pimentel

Ubiquitous computing aims at providing services to users in everyday environments such as the home. One research theme in this area is that of building capture and access applications which support information to be recorded (captured) during a live experience toward automatically producing documents for review (accessed). The recording demands instrumented environments with devices such as microphones, cameras, sensors and electronic whiteboards. Since each experience is usually related to many others (e.g. several meetings of a project), there is a demand for mechanisms supporting the automatic linking among documents relative to different experiences. In this paper we present original results relative to the integration of our previous efforts in the Infrastructure for Capturing, Accessing, Linking, Storing and Presenting information (CALiSP).


ACM Symposium on Applied Computing | 2009

User-media interaction with interactive TV

César A. C. Teixeira; Erick Lazaro Melo; Renan G. Cattelan; Maria da Graça Campos Pimentel

Watching TV is a practice many people enjoy and feel comfortable with. We propose capturing the user's interactions with the remote control while watching TV: such detailed information is valuable to many applications and services. We discuss our proposed approach in the context of the Brazilian Interactive Digital TV platform.
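As an illustrative sketch only (the paper targets the Brazilian Digital TV middleware, not this code), the captured interaction could be a timestamped event log keyed by remote-control action, serializable for downstream services such as audience measurement or recommendation; the class and method names are assumptions:

```python
import json
import time

class InteractionLog:
    """Hypothetical log of remote-control key presses, one timestamped event each."""

    def __init__(self):
        self.events = []

    def record(self, action, value=None, t=None):
        # Record a single interaction; t defaults to the current wall clock.
        self.events.append({"t": time.time() if t is None else t,
                            "action": action, "value": value})

    def to_json(self):
        # Serialize for transfer to analysis/recommendation services.
        return json.dumps(self.events)

log = InteractionLog()
log.record("power_on", t=0.0)
log.record("channel", 5, t=2.1)
log.record("volume", 12, t=3.4)
print(len(log.events))  # 3
```

Fine-grained logs like this are what make both instant interactions (a key press) and interval-based ones (a viewing session) available to services at the same scale.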


International Conference on Design of Communication | 2009

Context information exchange and sharing in a peer-to-peer community: a video annotation scenario

Roberto Fagá Jr.; Bruno C. Furtado; Felipe Maximino; Renan G. Cattelan; Maria da Graça Campos Pimentel

The literature reports many efforts toward supporting video annotation. In this paper, we present a peer-to-peer model for the exchange of context information and user annotations over video. The Context-Aware Peer-to-Peer Architecture (CAPPA) exploits the automatic capture of the user-interaction with personal devices, employs ontologies to store context information, and uses the context information to organize users in P2P groups for the collaborative exchange of information. We present our proposed model by discussing a prototype system, called CAPPA Service (CAPPAS), which allows users to create and to join peer-to-peer groups using Web-based social communities. Using CAPPAS, users are able to create multimodal annotations while watching videos. The service deploys a capture mode which can be adapted according to the context information collected from the P2P network. CAPPAS customization features include the adaptation of its graphical interfaces according to context information, and the automatic suggestion of text completion during annotation.


Multimedia Tools and Applications | 2010

Taking advantage of contextualized interactions while users watch TV

César A. C. Teixeira; Erick Lazaro Melo; Renan G. Cattelan; Maria da Graça Campos Pimentel

While watching TV, viewers use the remote control to turn the TV set on and off, change channel and volume, adjust the image and audio settings, etc. Worldwide, research institutes collect information about audience measurement, which can also be used to provide personalization and recommendation services, among others. Interactive digital TV offers viewers the opportunity to interact with interactive applications associated with the broadcast program, and its infrastructure supports the capture of the user–TV interaction at fine-grained levels. In this paper we propose the capture of all the user interaction with a TV remote control, including short-term and instant interactions: we argue that the corresponding captured information can be used to create content pervasively and automatically, and that this content can be used by a wide variety of services, such as audience measurement, personalization and recommendation services. The capture of fine-grained data about instant and interval-based interactions also allows the underlying infrastructure to offer services at the same scale, such as annotation services and adaptive applications. We present the main modules of an infrastructure for TV-based services, along with a detailed example of a document used to record the user–remote control interaction. Our approach is evaluated by means of a proof-of-concept prototype which uses the Brazilian Digital TV System, the Ginga-NCL middleware.

Collaboration


Dive into Renan G. Cattelan's collaborations.

Top Co-Authors

Rafael Dias Araujo
Federal University of Uberlandia

Fabiano A. Dorça
Federal University of Uberlandia

Taffarel Brant-Ribeiro
Federal University of Uberlandia

César A. C. Teixeira
Federal University of São Carlos

Miller M. Mendes
Federal University of Uberlandia

Erick Lazaro Melo
Federal University of São Carlos

Igor Mendonça
Federal University of Uberlandia