Francisco J. Mesa-Martinez
University of California, Santa Cruz
Publication
Featured researches published by Francisco J. Mesa-Martinez.
international symposium on computer architecture | 2007
Francisco J. Mesa-Martinez; Joseph Nayfach-Battilana; Jose Renau
Simulation environments are an indispensable tool in the design, prototyping, performance evaluation, and analysis of computer systems. A simulator must be able to faithfully reflect the behavior of the system being analyzed; to ensure this accuracy, the simulator must be validated against empirical data. Modern processors provide enough performance counters to validate the majority of the performance models; nevertheless, the information provided is not enough to validate power and thermal models. To address some of the difficulties associated with the validation of power and thermal models, this paper proposes an infrared measurement setup to capture the run-time power consumption and thermal characteristics of modern chips. We use infrared cameras with high spatial resolution (10×10 μm) and high frame rate (125 fps) to capture thermal maps. To generate a detailed power breakdown (leakage and dynamic) for each processor floorplan unit, we employ genetic algorithms. The genetic algorithm finds a power equation for each floorplan block that produces the measured temperature for a given thermal package. For the AMD Athlon processor analyzed in this paper, the discrepancy between the predicted power and the externally measured power consumption is less than 1%. As an example of applicability, we compare the obtained measurements with CACTI power models, and propose extensions to existing thermal models to increase accuracy.
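To make the genetic-algorithm step concrete, here is a minimal sketch of the idea: search for per-block power values whose predicted thermal map matches the measured one. The linear thermal model T = R·P + T_amb, the matrix values, and the GA parameters are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: use a genetic algorithm to find a per-block power breakdown whose
# predicted thermal map matches the measured one. The linear thermal model
# T = R @ P + T_amb and all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_blocks = 4                                      # floorplan units (e.g., FPU, ALU, L1, L2)
R = rng.uniform(0.5, 2.0, (n_blocks, n_blocks))   # assumed thermal model matrix (K/W)
T_amb = 45.0                                      # ambient/package temperature (deg C)

true_power = np.array([8.0, 5.0, 3.0, 6.0])       # hidden ground truth (W)
T_measured = R @ true_power + T_amb               # stands in for the IR thermal map

def fitness(p):
    # Negative squared error between predicted and measured temperatures.
    return -np.sum((R @ p + T_amb - T_measured) ** 2)

pop = rng.uniform(0.0, 15.0, (60, n_blocks))      # initial candidate power vectors
for gen in range(300):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]         # keep the best 20 candidates
    # Crossover: average random pairs of elite parents.
    parents = elite[rng.integers(0, 20, (40, 2))]
    children = parents.mean(axis=1)
    # Mutation: small Gaussian perturbation, clipped to non-negative power.
    children += rng.normal(0.0, 0.2, children.shape)
    pop = np.clip(np.vstack([elite, children]), 0.0, None)

best = max(pop, key=fitness)
print("estimated per-block power (W):", np.round(best, 2))
print("true per-block power (W):     ", true_power)
```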
architectural support for programming languages and operating systems | 2010
Francisco J. Mesa-Martinez; Ehsan K. Ardestani; Jose Renau
Temperature is a dominant factor in the performance, reliability, and leakage power consumption of modern processors. As a result, increasing numbers of researchers evaluate thermal characteristics in their proposals. In this paper, we measure a real processor, focusing on its thermal characterization while executing diverse workloads. Our results show that in real designs, thermal transients operate at larger time scales than their performance and power counterparts. Conventional thermal simulation methodologies based on profile-based simulation or statistical sampling, such as SimPoint, tend to explore very limited execution spans. Short simulation times can lead to mismatches between performance and thermal phases. To illustrate these issues, we characterize and classify, from a thermal standpoint, the SPEC2000 and SPEC2006 applications traditionally used in the evaluation of architectural proposals. This paper concludes with a list of recommendations regarding thermal modeling considerations based on our experimental insights.
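The claim that thermal transients operate at larger time scales than performance events can be made concrete with a first-order thermal RC model; the sketch below shows how little of a thermal transient a short simulated span captures. The time constant and span values are illustrative assumptions, not measurements from the paper.

```python
# First-order thermal RC sketch of why short simulation spans miss thermal
# transients. The time constant and spans are illustrative assumptions;
# real packages and die regions have their own constants.
import math

tau = 0.1                         # assumed thermal time constant (s)
spans = [1e-5, 1e-3, 0.1, 1.0]    # simulated execution spans (s)

for t in spans:
    settled = 1.0 - math.exp(-t / tau)   # fraction of the step response reached
    print(f"span {t:>8.5f} s -> {settled:6.1%} of the thermal transient captured")
```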
international symposium on microarchitecture | 2005
Cyrus Bazeghi; Francisco J. Mesa-Martinez; Jose Renau
Microprocessor design complexity is growing rapidly. As a result, current development costs for top-of-the-line processors are staggering, and are doubling every 4 years. As we design ever larger and more complex processors, it is becoming increasingly difficult to estimate how much time it will take to design and verify them. To compound this problem, processor design cost estimation still does not have a quantitative approach. Although designing a processor is very resource consuming, there is little work measuring, understanding, and estimating the effort required. To address this problem, this paper introduces μComplexity, a methodology to measure and estimate processor design effort. μComplexity consists of three main parts, namely a procedure to account for the contributions of the different components in the design, accurate statistical regression of experimental measures using a nonlinear mixed-effects model, and a productivity adjustment to account for the productivities of different teams. We use μComplexity to evaluate a series of design effort estimators on several processor designs. Our analysis shows that the number of lines of HDL code, the sum of the fan-ins of the logic cones in the design, and a linear combination of the two metrics are good design effort estimators. On the other hand, power, area, frequency, number of flip-flops, and number of standard cells are poor estimators of design effort. We also show that productivity adjustments are necessary to produce accurate estimations.
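As a rough illustration of the regression step, the sketch below fits design effort against HDL lines of code and summed logic-cone fan-ins with a per-team productivity adjustment. The data and the plain linear least-squares form are illustrative assumptions; the paper itself uses a nonlinear mixed-effects model.

```python
# Simplified sketch in the spirit of uComplexity: regress design effort on
# HDL lines of code and summed logic-cone fan-ins, after scaling out a
# per-team productivity factor. The data and the plain linear least-squares
# form are illustrative assumptions; the paper uses a nonlinear
# mixed-effects model.
import numpy as np

# Hypothetical per-design measurements: [HDL LOC, sum of logic-cone fan-ins]
X = np.array([[12_000, 45_000],
              [30_000, 110_000],
              [55_000, 190_000],
              [80_000, 300_000]], dtype=float)
effort = np.array([14.0, 32.0, 60.0, 95.0])      # person-months (hypothetical)
productivity = np.array([1.0, 1.1, 0.9, 1.0])    # per-team adjustment factors

# Normalize effort by team productivity, then fit effort ~ a*LOC + b*fanin.
y = effort * productivity
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b = coeffs
print(f"effort ~= {a:.2e} * LOC + {b:.2e} * fan-in")
print("predicted person-months:", np.round(X @ coeffs / productivity, 1))
```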
Proceedings of the 2007 workshop on Experimental computer science | 2007
Francisco J. Mesa-Martinez; Michael F. Brown; Joseph Nayfach-Battilana; Jose Renau
The modeling of power and thermal behavior of processors requires challenging validation processes, which may be complex and undependable. In order to ameliorate some of the difficulties associated with the validation of power and thermal models, this paper describes an infrared measurement setup that simultaneously captures run-time power consumption, thermal characteristics, and performance activity counters from modern processors. We use infrared cameras with high spatial resolution (10×10 μm) and high frame rate (125 Hz) to capture thermal maps. Power measurements are obtained with a multimeter, while performance counters are obtained after modifying the operating system (Linux), both at a sampling rate of 1 kHz. The synchronized traces can then be used in the validation process of possible thermal, power, and processor activity models.
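A minimal sketch of the trace-alignment step, assuming linear interpolation is acceptable: the 125 Hz thermal stream is resampled onto the 1 kHz power/counter timebase so every power sample has a synchronized temperature reading. The sampling rates come from the abstract; the signal shapes are synthetic.

```python
# Minimal sketch of aligning the captured traces: a 125 Hz thermal stream
# against 1 kHz power samples. The sampling rates come from the abstract;
# the synthetic signals are made up for illustration.
import numpy as np

duration = 2.0                                   # seconds of capture
t_thermal = np.arange(0.0, duration, 1 / 125)    # 125 Hz thermal frames
t_power = np.arange(0.0, duration, 1 / 1000)     # 1 kHz multimeter samples

temp = 50 + 5 * np.sin(2 * np.pi * 0.5 * t_thermal)   # hot-spot temperature (C)
power = 20 + 8 * np.sin(2 * np.pi * 0.5 * t_power)    # package power (W)

# Upsample the thermal trace onto the 1 kHz timebase so every power sample
# has a synchronized temperature reading.
temp_at_1khz = np.interp(t_power, t_thermal, temp)
trace = np.column_stack([t_power, power, temp_at_1khz])
print(trace[:3])   # rows of [time (s), power (W), temperature (C)]
```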
international parallel and distributed processing symposium | 2008
Francisco J. Mesa-Martinez; Michael F. Brown; Joseph Nayfach-Battilana; Jose Renau
The modeling of power and thermal behavior of modern processors requires challenging validation approaches, which may be complex and in some cases unreliable. In order to address some of the difficulties associated with the validation of power and thermal models, this document describes an infrared measurement setup that simultaneously captures run-time power consumption and thermal characteristics of a processor. We use infrared cameras with high spatial resolution (10×10 μm) and high frame rate (125 Hz) to capture thermal maps. Power measurements are obtained with a multimeter at a sampling rate of 1 kHz. The synchronized traces can then be used in the validation process of possible thermal and power processor activity models.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2013
Ehsan K. Ardestani; Francisco J. Mesa-Martinez; Gabriel Southern; Elnaz Ebrahimi; Jose Renau
Power densities in modern processors induce thermal issues that limit performance. Power and thermal models add complexity to architectural simulators, limiting the depth of analysis. Prohibitive execution time overheads may be circumvented using sampling techniques. While these approaches work well when characterizing processor performance, they introduce new challenges when applied to the thermal domain. This paper aims to improve the accuracy and performance of sampled thermal simulation at the architectural level. To the best of our knowledge, this paper is the first to evaluate the impact of statistical sampling on thermal metrics through direct temperature measurements performed at runtime. Experiments confirm that sampling can accurately estimate certain thermal metrics. However, extra care must be taken to preserve the accuracy of temperature estimation in a sampled simulation, mainly because, on average, thermal phases are much longer than performance phases. Based on these insights, we introduce a framework that extends statistical sampling techniques, used at the performance and power stages, to the thermal domain. The resulting technique yields an integrated performance, power, and temperature simulator that maintains accuracy while reducing simulation time by orders of magnitude. In particular, this paper shows how dynamic frequency and voltage adaptations can be evaluated in a statistically sampled simulation. We conclude by showing how the increased simulation speed benefits architects in the exploration of the design space.
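The observation that thermal phases are much longer than performance phases can be illustrated with a first-order thermal model: a sampled simulation that resets thermal state at each sample window misestimates temperature, while carrying the state across windows does not. The model, constants, and power trace below are illustrative assumptions, not the paper's framework.

```python
# Sketch of why sampled simulation needs special handling for temperature:
# a first-order thermal filter has memory far longer than a detailed sample
# window, so estimating temperature from isolated windows without carrying
# thermal state goes wrong. All constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
dt, tau, r_th, t_amb = 1e-3, 0.1, 1.0, 45.0      # step (s), time constant (s), K/W, C
power = rng.choice([10.0, 30.0], size=20_000, p=[0.7, 0.3])  # phased power trace (W)

def thermal(powers, t0):
    # Discrete first-order thermal model: T tracks P*r_th + t_amb with lag tau.
    T, out = t0, []
    for p in powers:
        T += (dt / tau) * (p * r_th + t_amb - T)
        out.append(T)
    return np.array(out)

full = thermal(power, t_amb)                     # reference: simulate everything

# Naive sampling: simulate 100-step windows every 1000 steps, resetting T.
windows = [slice(i, i + 100) for i in range(0, len(power), 1000)]
naive = np.concatenate([thermal(power[w], t_amb) for w in windows])
carried = np.concatenate([full[w] for w in windows])

print(f"max T, full trace     : {full.max():.1f} C")
print(f"max T, naive sampling : {naive.max():.1f} C  (thermal state reset)")
print(f"max T, same samples   : {carried.max():.1f} C  (state carried over)")
```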
VLSI-SoC (Selected Papers) | 2009
Cyrus Bazeghi; Francisco J. Mesa-Martinez; Jose Renau
Design complexity is rapidly becoming a limiting factor in the design of modern high-performance digital systems. The increasing levels of design effort required to improve and implement critical processor and system structures have led to staggering design costs. As we design ever larger and more complex systems, it is becoming increasingly difficult to estimate how much time it takes to design and verify them. Novel quantitative and optimization approaches are needed to understand and deal with the limiting effects induced by design complexity, which remain for the most part hidden from the architect. To address part of these shortcomings, this work introduces μComplexity and μPCBComplexity, a set of methodologies to measure and estimate design effort for modern processor and PCB (printed circuit board) designs.
IEEE Transactions on Very Large Scale Integration Systems | 2007
Cyrus Bazeghi; Francisco J. Mesa-Martinez; Brian Greskamp; Josep Torrellas; Jose Renau
System design complexity is growing rapidly. As a result, current development costs are constantly increasing. It is becoming increasingly difficult to estimate how much time it will take to design and verify these designs, which are getting denser and more complex. To compound this problem, circuit design cost estimation still does not have a quantitative approach. Although designing a system is very resource consuming, there is little work invested in measuring, understanding, and estimating the effort required. To address part of the current shortcomings, this paper introduces μPCBComplexity, a methodology to measure and estimate PCB (printed circuit board) design effort. PCBs are the central component of many systems and require large amounts of resources to properly design and verify. μPCBComplexity consists of two main parts: a procedure to account for the contributions of the different elements in the design, and a nonlinear statistical regression of experimental measures in order to determine a good design effort metric. We use μPCBComplexity to evaluate a series of design effort estimators for twelve PCB designs. By using the proposed μPCBComplexity metric, designers can estimate PCB design effort.
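As a sketch of the nonlinear-regression step, the following fits design effort against a single hypothetical PCB measure (net count) with a power-law model; the data and model form are invented for illustration, and the paper derives its metric from several design elements.

```python
# Sketch of a nonlinear regression in the spirit of uPCBComplexity: fit
# design effort against a PCB size measure with a power-law model.
# The metric (net count), the data, and the form effort = k * nets**alpha
# are hypothetical; the paper builds its metric from several design elements.
import numpy as np
from scipy.optimize import curve_fit

nets = np.array([150, 420, 800, 1500, 2600, 4000], dtype=float)  # per-board nets
effort = np.array([2.0, 5.5, 9.0, 16.0, 24.0, 35.0])             # person-months

def model(n, k, alpha):
    return k * n ** alpha

(k, alpha), _ = curve_fit(model, nets, effort, p0=(0.1, 1.0))
print(f"effort ~= {k:.3f} * nets^{alpha:.2f}")
print("relative errors:", np.round((model(nets, k, alpha) - effort) / effort, 2))
```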
international symposium on microarchitecture | 2007
Francisco J. Mesa-Martinez; Jose Renau
international conference on parallel architectures and compilation techniques | 2006
Francisco J. Mesa-Martinez; Michael C. Huang; Jose Renau