Márcia Cristina Cera
Universidade Federal do Pampa
Publications
Featured research published by Márcia Cristina Cera.
Brazilian Journal of Computers in Education | 2012
Márcia Cristina Cera; Mateus Henrique Dal Forno; Vanessa Gindri Vieira
Constant technological advances require professionals to have, in addition to specific knowledge, skills such as proactivity, initiative, self-directed learning, communication, and teamwork. In traditional higher education, however, the focus generally falls on knowledge acquisition, and activities that encourage such skills remain optional. Seeking to narrow the gap between academia and the job market, the Software Engineering (ES) course at UNIPAMPA was conceived. The method chosen to bring theory and practice together was Problem Based Learning (PBL). Besides teaching basic concepts, the course reserves part of each semester to practice them in disciplines called Problem Solving. In these, students are encouraged to solve a real problem by developing a computer system. Divided into teams, they apply software engineering concepts and practices and exercise the coordination and management of their team's project, simulating the environment of a software development company. The aim of this paper is to show how the use of PBL can contribute both to the learning of software engineering and to fostering professionally oriented skills. As a case study, the paper presents the design and operation of Problem Solving I, which is offered to freshman students of the ES course. Feedback from the class of 2011 showed that, even with only a preliminary overview of the course, students recognized the importance of PBL as a means to bridge the gap between theory and the professional practice of software engineering.
Journal of Parallel and Distributed Computing | 2016
Arthur Francisco Lorenzon; Márcia Cristina Cera; Antonio Carlos Schneider Beck
Thread-level parallelism (TLP) is widely exploited in embedded and general-purpose multicore processors (GPPs) to increase performance. However, parallelizing an application involves extra executed instructions and accesses to the shared memory for communication and synchronization. The overhead of accessing the shared memory, which is very costly in terms of delay and energy because it sits at the bottom of the hierarchy, varies with the communication model and the level of data exchange/synchronization of the application. On top of that, multicore processors are implemented with different architectures, organizations, and memory subsystems. In this complex scenario, we evaluate 14 parallel benchmarks implemented with 4 different parallel programming interfaces (PPIs), with distinct communication rates and TLP, running on five representative multicore processors targeted at general-purpose and embedded systems. We show that while the former present the best performance and the latter are the most energy efficient, no single option offers the best result for both. We also demonstrate that in applications with low levels of communication, what matters is the communication model, not a specific PPI. Applications with high communication demands, on the other hand, have a huge search space that can be explored. For those, Pthreads is the most efficient PPI for Intel processors, while OpenMP is the best for ARM ones. MPI is the worst choice in almost any scenario and becomes very inefficient as the TLP increases. We also evaluate the energy-delay-x product (ED^xP), weighting performance against energy by varying the value of x. In a representative case where energy is the most important, three different processors can be the best alternative for different values of x. Finally, we explore how static power influences total energy consumption, showing that its increase brings benefits to ARM multiprocessors, with the opposite effect on Intel ones.
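The ED^xP metric this abstract weights can be sketched in a few lines: energy multiplied by delay raised to x, so larger x values favor faster processors. The energy and delay figures below are invented for illustration only; they are not measurements from the paper.

```python
# Illustrative ED^x P (energy-delay-x product) comparison.
# The energy/delay numbers are hypothetical, not data from the paper.

def edxp(energy_j, delay_s, x):
    """ED^x P = energy * delay^x; larger x weights performance more heavily."""
    return energy_j * delay_s ** x

# (energy in joules, delay in seconds) for two hypothetical cores
processors = {
    "embedded_core": (2.0, 8.0),  # frugal but slow
    "gpp_core":      (6.0, 3.0),  # power-hungry but fast
}

for x in (0, 1, 2, 3):
    best = min(processors, key=lambda p: edxp(*processors[p], x))
    print(f"x={x}: best -> {best}")
```

With these made-up numbers the frugal core wins for x = 0 and 1, while the fast core wins once x ≥ 2, mirroring the paper's observation that the best processor changes with x.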
ieee computer society annual symposium on vlsi | 2015
Arthur Francisco Lorenzon; Anderson Luiz Sartor; Márcia Cristina Cera; Antonio Carlos Schneider Beck
Thread-level parallelism (TLP) exploitation for embedded systems has been a challenge for software developers: while it is necessary to take advantage of the availability of multiple cores, it is also mandatory to consume less energy. To speed up the development process and make it as transparent as possible, software designers use parallel programming interfaces (PPIs). However, as will be shown in this paper, each one implements different ways to exchange data, influencing performance, energy consumption, and energy-delay product (EDP), which vary across different embedded processors. By evaluating four PPIs and three multicore processors, we demonstrate that it is possible to save up to 62% in energy consumption and achieve up to 88% of EDP improvements by just switching the PPI, and that the efficiency (i.e., the best possible use of the available resources) decreases as the number of threads increases in almost all cases, but at distinct rates.
international symposium on circuits and systems | 2015
Arthur Francisco Lorenzon; Márcia Cristina Cera; Antonio Carlos Schneider Beck
Energy consumption in multicore embedded systems has become a constant concern. Thread-level parallelism exploitation may reduce energy consumption because, as execution time drops, less of the processor's static power is consumed. However, as will be shown in this paper, the influence of static power on energy consumption and energy-delay product will depend on how significant it is in the processor. By evaluating different levels of static power with respect to total power consumption in two embedded processors (ARM and Atom), we demonstrate that if the right value of static power consumption is tuned during design and manufacturing, it is possible to save up to 35% in energy consumption and achieve up to 20% of improvements in EDP efficiency (i.e., the best possible use of the available resources). We also show that the more communication the parallel application has, the lower the impact of the processor's static power on total energy consumption.
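The effect described above can be sketched with a toy model: static power is paid for the whole execution, so the shorter parallel runtime saves static energy, and the saving grows with the static share. All power and time values below are hypothetical, chosen only to show the trend.

```python
# Toy energy model illustrating why the static power share matters for TLP.
# All values are invented for illustration; none come from the paper.

def total_energy(p_static_w, p_dynamic_w, exec_time_s, n_cores=1):
    """Static power is drawn for the whole run; dynamic power
    scales with the number of active cores."""
    return (p_static_w + n_cores * p_dynamic_w) * exec_time_s

t_seq = 10.0           # hypothetical sequential runtime (s)
speedup = 3.2          # hypothetical parallel speedup on 4 cores
t_par = t_seq / speedup

for p_static in (0.5, 2.0):   # low vs high static power (W)
    e_seq = total_energy(p_static, 1.0, t_seq)
    e_par = total_energy(p_static, 1.0, t_par, n_cores=4)
    print(f"P_static={p_static}W: parallel saving = {1 - e_par / e_seq:.1%}")
```

Under this model the saving from parallelization jumps from about 6% to about 38% as the static share grows, consistent with the abstract's point that static power's significance in the processor decides how much TLP helps.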
computer software and applications conference | 2015
Arthur Francisco Lorenzon; Anderson Luiz Sartor; Márcia Cristina Cera; Antonio Carlos Schneider Beck
Thread-Level Parallelism (TLP) exploitation for embedded systems has been a challenge for software developers: while it is necessary to take advantage of the availability of multiple cores, it is also mandatory to consume less energy. To speed up the development process and make it as transparent as possible, software designers use Parallel Programming Interfaces (PPIs). However, as will be shown in this paper, each PPI implements different ways to exchange data using shared memory regions, influencing performance, energy consumption, and Energy-Delay Product (EDP), which vary across different embedded processors. By evaluating four PPIs and three multicore processors (ARM A8, A9 and Intel Atom), we demonstrate that by simply switching PPI it is possible to save up to 59% in energy consumption and achieve up to 85% of EDP improvements, in the most significant case. We also show that the efficiency (i.e., the best possible use of the available resources) decreases as the number of threads increases in almost all cases, but at distinct rates.
2013 III Brazilian Symposium on Computing Systems Engineering | 2013
Arthur Francisco Lorenzon; Márcia Cristina Cera; Antonio Carlos Schneider Beck Filho
The popularity of multicore embedded systems brings new challenges to the development of parallel applications: at the same time it is necessary to exploit the availability of multiple cores, it is also mandatory to consume less energy. To speed up the development process and make it as transparent as possible to the programmer, parallelism is exploited through Application Programming Interfaces (APIs). Each of these APIs implements different ways to exchange data. However, data exchange occurs between the threads in shared memory regions, which have higher energy consumption. Therefore, each API presents different energy costs to communicate. In this paper, we present the first step towards showing that different APIs have different impacts on energy consumption, through the analysis of the communication mechanism each one employs, the number of memory accesses necessary for the communication, and the number of executed instructions according to the API used. Our results show that OpenMP presents a higher communication overhead, with more memory accesses and instructions executed, than Pthreads.
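The two communication styles the paper contrasts can be sketched in miniature: threads that synchronize on a shared variable versus threads that send explicit messages. This is an illustrative Python analogy, not the paper's C benchmarks, and the worker logic is invented.

```python
# Minimal contrast of two communication models: implicit sharing
# (threads updating one guarded variable) versus explicit message
# passing (partial results sent through a queue). Illustrative only.
import queue
import threading

def shared_memory_sum(values, n_threads=4):
    total, lock = [0], threading.Lock()   # shared state guarded by a lock
    def worker(chunk):
        s = sum(chunk)
        with lock:                        # synchronization on shared memory
            total[0] += s
    chunks = [values[i::n_threads] for i in range(n_threads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads: t.start()
    for t in threads: t.join()
    return total[0]

def message_passing_sum(values, n_threads=4):
    q = queue.Queue()                     # explicit messages, no shared total
    def worker(chunk):
        q.put(sum(chunk))                 # send a partial result
    chunks = [values[i::n_threads] for i in range(n_threads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(q.get() for _ in range(n_threads))

data = list(range(100))
assert shared_memory_sum(data) == message_passing_sum(data) == 4950
```

Both versions compute the same result; what differs is where the synchronization cost is paid, which is the kind of mechanism-level difference the paper measures in memory accesses and executed instructions.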
2012 13th Symposium on Computer Systems | 2012
Arthur Francisco Lorenzon; Márcia Cristina Cera; Fabio Diniz Rossi
MPI-2 dynamic process creation brings new opportunities, since it adds flexibility to MPI applications. However, the spawning of processes is synchronous and establishes a hierarchical communication relationship among processes. This paper investigates the impact of MPI-2 features on the Game of Life problem. We implement different versions of our test application aiming to identify both the spawning and the hierarchical communication overheads. Our results show the impact observed in our test environment. In addition, we discuss some directions to reduce the overhead by taking advantage of MPI features.
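The Game of Life test application can be sketched as follows. The paper spawns MPI-2 processes (via MPI_Comm_spawn) that report back through a parent; here, as a simplified stand-in, each grid row is updated by a separately launched worker thread. The grid and decomposition are illustrative, not the paper's setup.

```python
# One Game of Life generation with per-row workers, as a loose analogy
# to dynamically spawned MPI-2 processes. Illustrative sketch only.
import threading

def life_step(grid):
    """Compute one generation for a toroidal 2D grid of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]

    def update_row(r):                    # one "spawned worker" per row
        for c in range(cols):
            alive = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if alive == 3 or (alive == 2 and grid[r][c]) else 0

    workers = [threading.Thread(target=update_row, args=(r,))
               for r in range(rows)]
    for w in workers: w.start()
    for w in workers: w.join()            # parent gathers all results
    return nxt

# A "blinker" oscillates between a vertical and a horizontal bar.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
```

In the MPI-2 version every `update_row` would be a spawned process, and the join step would be the hierarchical parent-child communication whose overhead the paper measures.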
signal processing systems | 2015
Arthur Francisco Lorenzon; Márcia Cristina Cera; Antonio Carlos Schneider Beck
Anais do Salão Internacional de Ensino, Pesquisa e Extensão | 2016
Adriano Marques Garcia; Márcia Cristina Cera; Arthur Francisco Lorenzon
Anais do Salão Internacional de Ensino, Pesquisa e Extensão | 2016
Thayson Rafael Karlinski; Márcia Cristina Cera; Arthur Francisco Lorenzon; Antonio Carlos Schneider Beck