
Publication


Featured research published by Karin Koogan Breitman.


International Conference on Cloud Computing | 2010

An Architecture for Distributed High Performance Video Processing in the Cloud

Rafael Pereira; Marcello Azambuja; Karin Koogan Breitman; Markus Endler

Video processing applications are notably data-intense, time-consuming, and resource-consuming. Upfront infrastructure investment is usually high, especially when dealing with applications where time-to-market is a crucial requirement, e.g., breaking news and journalism. Such infrastructures are often inefficient because, due to demand variations, resources may end up idle a good portion of the time. In this paper, we propose the Split&Merge architecture for high performance video processing, a generalization of the MapReduce paradigm that rationalizes the use of resources by exploiting on-demand computing. To illustrate the approach, we discuss an implementation of the Split&Merge architecture that reduces video encoding times to a fixed duration, independently of the size of the input video file, by using dynamic resource provisioning in the Cloud.
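
The split-process-merge pattern the abstract describes can be sketched in a few lines. The sketch below is illustrative only: encode_chunk is a hypothetical placeholder, and local worker processes stand in for the paper's dynamically provisioned cloud nodes.

```python
# Illustrative Split&Merge sketch: split the input into independent
# chunks, encode them in parallel, then merge the results in order.
from concurrent.futures import ProcessPoolExecutor

def split(frames, n_chunks):
    """Partition the frame sequence into contiguous chunks."""
    size = max(1, len(frames) // n_chunks)
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def encode_chunk(chunk):
    """Stand-in for the per-chunk encoding step run on one worker."""
    return [f"encoded({frame})" for frame in chunk]

def merge(encoded_chunks):
    """Concatenate independently encoded chunks, preserving order."""
    return [frame for chunk in encoded_chunks for frame in chunk]

if __name__ == "__main__":
    frames = [f"frame{i}" for i in range(100)]
    chunks = split(frames, n_chunks=8)
    with ProcessPoolExecutor() as pool:   # one task per chunk
        encoded = list(pool.map(encode_chunk, chunks))
    print(len(merge(encoded)), "frames encoded")
```

Because the chunks are independent, adding workers shortens wall-clock time, which is what lets the architecture hold encoding time roughly fixed regardless of input size.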


Journal of the Brazilian Computer Society | 1999

The world's a stage: a survey on requirements engineering using a real-life case study

Karin Koogan Breitman; Julio Cesar Sampaio do Prado Leite; Anthony Finkelstein

In this article we present a survey of the area of Requirements Engineering anchored in the analysis of a real-life case study, the London Ambulance Service [56]. We aim at bringing to context new methods, techniques, and tools that should be of help to both researchers and practitioners. The case study in question is of special interest in that it is available to the public and deals with a very large system, of which the software system is only a part. The survey is divided into four topics of interest: viewpoints, social aspects, evolution, and non-functional requirements. This division resulted from the work method adopted by the authors. Our main goal is to bridge recent findings in Requirements Engineering research to a real-world problem. In this light, we believe this article to be an important educational device.


IEEE International Conference on Semantic Computing | 2012

Publishing Statistical Data on the Web

Percy Salas; Michael Martin; Fernando Maia Da Mota; Sören Auer; Karin Koogan Breitman; Marco A. Casanova

Statistical data is one of the most important sources of information, relevant to large numbers of stakeholders in the governmental, scientific, and business domains alike. In this article, we give an overview of how statistical data can be managed on the Web. With OLAP2 Data Cube and CSV2 Data Cube, we present two complementary approaches to extracting and publishing statistical data. We also discuss the linking, repair, and visualization of statistical data. As a comprehensive use case, we report on the extraction and publishing on the Web of statistical data describing 10 years of life in Brazil.
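
As a rough illustration of what publishing a statistical observation as Linked Data involves, the sketch below emits one observation using the W3C RDF Data Cube vocabulary via rdflib. The dataset URI, dimensions, measure, and value are hypothetical; this is not the paper's OLAP2/CSV2 Data Cube tooling.

```python
# Illustrative sketch: one statistical observation expressed with the
# W3C RDF Data Cube vocabulary (qb:) and serialized as Turtle.
from rdflib import Graph, Literal, Namespace, RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/stats/")   # hypothetical base URI

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = EX["obs/2010-population"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/demographics"]))
g.add((obs, EX.refArea, Literal("Brazil")))      # dimension (hypothetical)
g.add((obs, EX.refPeriod, Literal(2010)))        # dimension (hypothetical)
g.add((obs, EX.population, Literal(190000000)))  # measure (placeholder value)

print(g.serialize(format="turtle"))
```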


IEEE Computer | 2007

Database Conceptual Schema Matching

Marco A. Casanova; Karin Koogan Breitman; Daniela F. Brauner; André Marins

A database conceptual schema is a high-level description of how database concepts are organized, typically as classes of objects and their attributes. A fundamental operation in many database applications, schema matching involves finding a mapping μ between the concepts in a source schema S and the concepts in a target schema T such that, if t = μ(s), then s and t have the same meaning. Along with data warehousing, query mediation relies heavily on schema matching. This application uses a mediator to translate user queries, formulated in terms of a common schema M, into queries that local databases can handle. The mediator must therefore be able to match each local schema with M. Query mediation is particularly challenging in the context of the Web, where the number of local databases, over which the mediator has little control, is enormous. We examine three major approaches to schema matching - syntactic, semantic, and a priori - using examples, with a focus on mediator design.
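
A minimal sketch of the syntactic approach, assuming hypothetical schemas S and T and plain name similarity as the matching criterion (real matchers combine this with semantic and a priori evidence):

```python
# Illustrative syntactic matcher: map each concept s of a source schema
# S to the most lexically similar concept t of a target schema T, i.e.
# build t = mu(s). The schemas and the 0.6 threshold are hypothetical.
from difflib import SequenceMatcher

def mu(source_concepts, target_concepts, threshold=0.6):
    """Return {s: t} for the best name-similarity match above threshold."""
    mapping = {}
    for s in source_concepts:
        best, score = max(
            ((t, SequenceMatcher(None, s.lower(), t.lower()).ratio())
             for t in target_concepts),
            key=lambda pair: pair[1],
        )
        if score >= threshold:
            mapping[s] = best
    return mapping

S = ["authorName", "bookTitle", "publisherName"]
T = ["author_name", "title", "publisher"]
print(mu(S, T))   # e.g., {'authorName': 'author_name', ...}
```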


International Conference on Enterprise Information Systems | 2009

Instance-Based OWL Schema Matching

Luiz André P. Paes Leme; Marco A. Casanova; Karin Koogan Breitman; Antonio L. Furtado

Schema matching is a fundamental issue in many database applications, such as query mediation and data warehousing. It becomes a challenge when different vocabularies are used to refer to the same real-world concepts. In this context, a convenient approach, sometimes called extensional, instance-based or semantic, is to detect how the same real world objects are represented in different databases and to use the information thus obtained to match the schemas. This paper describes an instance-based schema matching technique for an OWL dialect. The technique is based on similarity functions and is backed up by experimental results with real data downloaded from data sources found on the Web.
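
As an illustration of the instance-based idea, the sketch below matches properties whose observed instance values overlap strongly, scored with Jaccard similarity. The data, property names, and threshold are hypothetical stand-ins for the paper's similarity functions.

```python
# Illustrative instance-based matcher: two properties of different
# schemas match when the sets of values observed for their instances
# overlap strongly, even if the property names share no vocabulary.
def jaccard(a, b):
    """Jaccard similarity |a & b| / |a | b| of two value sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a or b else 0.0

source = {"person:name": {"Ana", "Bruno", "Carla"},
          "person:city": {"Rio de Janeiro", "Niteroi"}}
target = {"author:fullName": {"Ana", "Bruno", "Diego"},
          "author:location": {"Rio de Janeiro", "Sao Paulo", "Niteroi"}}

for sp, values in source.items():
    best = max(target, key=lambda tp: jaccard(values, target[tp]))
    if jaccard(values, target[best]) >= 0.5:   # hypothetical threshold
        print(f"{sp}  ->  {best}")
```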


Lecture Notes in Computer Science | 2003

Lexicon based ontology construction

Karin Koogan Breitman; Julio Cesar Sampaio do Prado Leite

In order to secure interoperability and allow autonomous agent interaction, software for the Web will be required to provide machine-processable ontologies. Traditional deliverables of the software development process are the code; technical documentation, to support development and maintenance; and user documentation, to provide user support. In the case of Web applications, ontologies will also be part of the deliverables. Ontologies will allow machines to process and integrate Web resources intelligently, enable quick and accurate Web search, and facilitate communication between a multitude of heterogeneous Web-accessible agents [1]. We understand that the responsibility, not only for making this requirement explicit, but also for implementing the ontology, belongs to software engineers. Currently the development of ontologies is more of a craft than a systematic discipline. We propose a process for the systematic construction of ontologies, centered on the concept of application languages. This concept is rooted in a representation scheme called the language extended lexicon (LEL). We demonstrate our approach with an example in which we implement a machine-processable ontology for a meeting scheduler using the ontology language DAML+OIL.
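
To give a flavor of lexicon-based construction, the sketch below derives ontology classes from a few hypothetical LEL entries: each symbol becomes a class, its notion becomes a comment, and "is a" links become subclass axioms. It emits OWL via rdflib rather than the paper's DAML+OIL, and the entries and mapping rules are illustrative only.

```python
# Illustrative sketch: deriving ontology classes from LEL-style entries.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/meeting-scheduler#")  # hypothetical
lexicon = {
    "Meeting":       {"notion": "A gathering of participants at a date and place.", "is_a": None},
    "UrgentMeeting": {"notion": "A meeting that must occur within 24 hours.", "is_a": "Meeting"},
    "Scheduler":     {"notion": "Agent that negotiates a meeting date.", "is_a": None},
}

g = Graph()
g.bind("owl", OWL)
for symbol, entry in lexicon.items():
    cls = EX[symbol]
    g.add((cls, RDF.type, OWL.Class))                 # symbol -> class
    g.add((cls, RDFS.comment, Literal(entry["notion"])))
    if entry["is_a"]:                                 # "is a" -> subclass
        g.add((cls, RDFS.subClassOf, EX[entry["is_a"]]))

print(g.serialize(format="turtle"))
```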


IEEE Intelligent Systems | 2012

Open government data in Brazil

Karin Koogan Breitman; Percy Salas; Marco A. Casanova; Daniel Saraiva; José Viterbo; Regis Pires Magalhães; Ednylton Franzosi; Miriam Chaves

This article discusses the current status of open government data in Brazil and summarizes the lessons learned from publishing Brazilian government data as linked data.


IEEE Computer | 2010

When TV Dies, Will It Go to the Cloud?

Karin Koogan Breitman; Markus Endler; Rafael Pereira; Marcello Azambuja

Coupled with the expected growth in bandwidth over the next decade, cloud computing will change the face of TV. The Internet brought the potential to completely reinvent TV. First, it let users see what they wanted, when they wanted, while suppressing the need for additional hardware. Second, and more importantly, the Net removes the barrier that separates producers, distributors, and consumers. Third, the Internet allows mixing and matching of multisource content. It has become commonplace for networks to mix their own footage with user-generated content to provide a more holistic experience. From a technical viewpoint, however, huge challenges remain, including the ability to process, index, store, and distribute nearly limitless amounts of data. This is why cloud computing will play a major role in redefining TV in the next few years.


Computer Physics Communications | 2014

Uncertainty quantification through the Monte Carlo method in a cloud computing setting

Americo Cunha; Rafael Nasser; Rubens Sampaio; Hélio Lopes; Karin Koogan Breitman

The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method in the context of cloud computing. This strategy is based on the MapReduce paradigm and allows an efficient distribution of tasks in the cloud. The methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results for statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive due to its high scalability and low cost. Additionally, the results regarding processing time and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
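
A minimal sketch of the map/reduce split the abstract describes, assuming a toy one-parameter model in place of the costly structural-dynamics realization: each worker computes a batch of independent realizations (map), and partial sums are combined into low-order moments (reduce).

```python
# Illustrative MapReduce-style Monte Carlo: workers compute batches of
# realizations; partial sums yield the mean and variance.
import random
from concurrent.futures import ProcessPoolExecutor

def realization(seed):
    """One MC realization of a toy response with an uncertain parameter."""
    rng = random.Random(seed)
    k = rng.gauss(1.0, 0.1)   # uncertain stiffness-like parameter (toy)
    return 1.0 / k            # toy response quantity of interest

def map_batch(seeds):
    """Map step: partial sums of the response and its square."""
    values = [realization(s) for s in seeds]
    return sum(values), sum(v * v for v in values), len(values)

if __name__ == "__main__":
    n, workers = 100_000, 8
    batches = [range(i, n, workers) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_batch, batches))
    # Reduce step: combine partial sums into low-order moments.
    s1 = sum(p[0] for p in partials)
    s2 = sum(p[1] for p in partials)
    cnt = sum(p[2] for p in partials)
    mean = s1 / cnt
    variance = s2 / cnt - mean ** 2
    print(f"mean={mean:.4f} variance={variance:.4f}")
```

Because realizations are independent, the reduce step only needs the partial sums, which keeps inter-node communication minimal and is what makes the method map so naturally onto cloud resources.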


Data Compression Conference | 2011

A Cloud Based Architecture for Improving Video Compression Time Efficiency: The Split & Merge Approach

Rafael Pereira; Karin Koogan Breitman

In this paper we argue that combining mature video compression techniques with emergent cloud computing technology, using the Split&Merge architecture, can drastically improve the time efficiency of the compression process.

Collaboration


Dive into Karin Koogan Breitman's collaborations.

Top Co-Authors

Marco A. Casanova
Pontifical Catholic University of Rio de Janeiro

Antonio L. Furtado
Pontifical Catholic University of Rio de Janeiro

Simone Diniz Junqueira Barbosa
Pontifical Catholic University of Rio de Janeiro

Percy Salas
Pontifical Catholic University of Rio de Janeiro

José Viterbo
Federal Fluminense University

Rafael Pereira
Pontifical Catholic University of Rio de Janeiro

Julio Cesar Sampaio do Prado Leite
Pontifical Catholic University of Rio de Janeiro

Daniela F. Brauner
Pontifical Catholic University of Rio de Janeiro