Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Citron is active.

Publication


Featured research published by Daniel Citron.


high performance computer architecture | 1995

Creating a wider bus using caching techniques

Daniel Citron; Larry Rudolph

The effective bandwidth of a bus and external communication ports can be increased by using a variant of data compression techniques that compacts words instead of data streams. The compaction is performed by caching the high-order bits in a table and sending the index into the table along with the low-order bits. A coherent table at the receiving end expands the word into its original form. Compaction/expansion units can be placed between processor and memory, between processor and local bus, and between devices that access the system bus. Simulations have shown that over 90% of all information transferred can be sent in a single cycle when using a 32-bit processor connected by a 16-bit-wide bus to a 32-bit memory module. This holds for all forms of information (addresses, data, and instructions) and when a cache-based processor is used.
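The compaction scheme can be sketched in a few lines of code. The following is a minimal, illustrative model rather than the authors' implementation: it caches the high-order 16 bits of a 32-bit word in a small table and transmits either a table index plus the low 16 bits (a hit) or the full word in two transfers (a miss); the table size, field widths, and direct-mapped indexing are assumptions made for illustration.

    # Illustrative sketch of word compaction by caching high-order bits.
    # Table size, field widths, and indexing are assumptions, not the
    # configuration evaluated in the paper.
    TABLE_SIZE = 16          # entries in the high-order-bits table
    LOW_MASK = 0xFFFF        # low 16 bits always travel uncompressed

    class Compactor:
        def __init__(self):
            self.table = [None] * TABLE_SIZE   # kept coherent at both ends

        def index(self, high):
            return high % TABLE_SIZE           # direct-mapped for simplicity

        def send(self, word):
            """Return the transfers needed to transmit one 32-bit word."""
            high, low = word >> 16, word & LOW_MASK
            i = self.index(high)
            if self.table[i] == high:
                return [("HIT", i, low)]               # one narrow transfer
            self.table[i] = high                       # receiver updates too
            return [("MISS", i, high), ("LOW", low)]   # two transfers

        def receive(self, transfers):
            kind, i, value = transfers[0]
            if kind == "HIT":
                return (self.table[i] << 16) | value
            self.table[i] = value                      # value is the high bits
            return (value << 16) | transfers[1][1]

    sender, receiver = Compactor(), Compactor()
    for w in (0x00400010, 0x00400014, 0x00400018):     # nearby addresses hit
        assert receiver.receive(sender.send(w)) == w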


international symposium on computer architecture | 2003

MisSPECulation: partial and misleading use of SPEC CPU2000 in computer architecture conferences

Daniel Citron

A majority of the papers published in leading computer architecture conferences use SPEC CPU2000, or its predecessor SPEC CPU95, which has become the de facto standard for measuring processor and/or memory-hierarchy performance. However, in most cases only a subset of the suite's benchmarks is simulated. For example, of the 27 papers published in ISCA 2002, 16 used SPEC CINT2000, only 4 used the whole suite, and only 3 papers explained their omissions. This paper quantifies the extent of this phenomenon in the ISCA, Micro, and HPCA conferences: 173 papers were surveyed, 115 used benchmarks from SPEC CINT, but only 23 used the whole suite. If this trend continues, by the year 2005 0% of the papers will use the full CINT2000 suite, a year after CPU2004 is due to be announced. We claim that results based upon a subset of a benchmark suite are speculative and conflict with Amdahl's Law. The law implies that we must present the speedup of using the proposed technique on the whole suite. Projecting the law (by statistically supplying values for the missing benchmarks) onto several published papers reduces promising results to average ones. Speedups are reduced from 1.42 to 1.16 in one case, from 1.43 to 1.13 in another, and from 1.76 to 1.15 in a third. Finally, we have found that the disregard for CFP2000 is unwarranted in papers that explore the data cache domain: the suite displays a higher data-cache miss rate than CINT2000, which is used more frequently.
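The projection argument can be made concrete with a small calculation. The sketch below assumes, as one plausible reading, that the suite-wide result is summarized by a geometric mean and that benchmarks that were not simulated are assigned a speedup of 1; the subset size (12 of 26 benchmarks) is hypothetical and not taken from any of the surveyed papers.

    # Projecting a subset speedup onto the full suite, assuming a geometric
    # mean summary and a speedup of 1 for omitted benchmarks. The subset
    # size (12 of 26) is hypothetical.
    def projected_speedup(subset_speedup, simulated, total):
        # (subset_speedup**simulated * 1.0**(total - simulated)) ** (1/total)
        return subset_speedup ** (simulated / total)

    print(projected_speedup(1.42, 12, 26))   # ~1.18 under these assumptions

The larger the omitted fraction, the further the projected suite-wide speedup shrinks, which is the effect quantified above by the paper's own statistical projections (e.g., 1.42 down to 1.16).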


ACM SIGARCH Computer Architecture News | 2006

The harmonic or geometric mean: does it really matter?

Daniel Citron; Adham Hurani; Alaa Gnadrey

For several decades, computer scientists have been arguing over which mean is more appropriate for summarizing computer performance: the harmonic or the geometric. We show that many test cases used in the past to discredit one mean or the other are either artificial or incidental; changing only one of the benchmarks may result in totally different conclusions. In addition, we conclude that for the SPEC CPU2000 benchmark suite, the choice of averaging has very little influence on the relative standing of different machines. Therefore, the decision to purchase one system rather than another should not be influenced by the type of averaging used.
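For concreteness, the two summaries can be compared on made-up per-benchmark performance ratios; the numbers below are purely hypothetical and merely illustrate the kind of comparison the paper performs on SPEC CPU2000 data.

    # Harmonic vs. geometric mean of per-benchmark performance ratios.
    # The ratios are hypothetical, for illustration only.
    from math import prod

    def harmonic_mean(xs):
        return len(xs) / sum(1.0 / x for x in xs)

    def geometric_mean(xs):
        return prod(xs) ** (1.0 / len(xs))

    machine_a = [1.2, 0.9, 1.5, 1.1]   # ratios relative to a reference machine
    machine_b = [1.0, 1.3, 1.1, 1.4]

    for name, ratios in (("A", machine_a), ("B", machine_b)):
        print(name, round(harmonic_mean(ratios), 3), round(geometric_mean(ratios), 3))
    # Machine B comes out ahead under both means here, illustrating the
    # paper's observation that the choice of mean rarely changes rankings.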


virtual execution environments | 2005

Instrumenting annotated programs

Marina Biberstein; Vugranam C. Sreedhar; Bilha Mendelson; Daniel Citron; Alberto Giammaria

Instrumentation is commonly used to track application behavior: to collect program profiles, to monitor component health and performance, to aid in component testing, and more. Program annotation enables developers and tools to pass extra information to later stages of software development and execution. For example, the .NET runtime relies on annotations for a significant portion of the services it provides. Both mechanisms are evolving into important parts of software development in the context of modern platforms such as Java and .NET. Instrumentation tools are generally not aware of the semantics of information passed via the annotation mechanism. This is especially true for post-compiler (e.g., run-time) instrumentation. The problem is that instrumentation may affect the correctness of annotations, rendering them invalid or misleading and producing unforeseen side effects during program execution. This problem has not been addressed so far. In this paper, we show the subtle interaction that takes place between annotations and instrumentation using several real-life examples. Many annotations are intended to provide information for the runtime; the virtual environment is a prominent annotation consumer and must be aware of this conflict. It may also be required to provide runtime support to other annotation consumers. We propose an annotation taxonomy and show how instrumentation affects various annotations that were used in research and in industrial applications. We show how annotations can expose enough information about themselves to prevent the instrumentation from accidentally corrupting them. We demonstrate this approach on our annotations benchmark.
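The conflict can be illustrated with a toy example. The sketch below is purely hypothetical (it is not the paper's benchmark or taxonomy): an annotation records a digest of a function's bytecode, and wrapping the function with ordinary timing instrumentation silently invalidates what the annotation describes.

    # Toy illustration of instrumentation invalidating an annotation.
    # The annotation scheme here is hypothetical, not the paper's.
    import functools, hashlib, time

    def annotate_code_hash(fn):
        """Annotation: record a digest of the function's bytecode."""
        fn.code_hash = hashlib.sha256(fn.__code__.co_code).hexdigest()
        return fn

    def instrument(fn):
        """Typical timing instrumentation, unaware of the annotation's meaning."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                wrapper.elapsed = time.perf_counter() - start
        return wrapper

    @annotate_code_hash
    def kernel(x):
        return x * x

    original_hash = kernel.code_hash
    kernel = instrument(kernel)          # post-hoc instrumentation

    # The annotation was copied onto the wrapper, so consumers still see the
    # old digest, but it no longer matches the code object they can inspect:
    # the annotation has silently become misleading.
    assert kernel.code_hash == original_hash
    assert hashlib.sha256(kernel.__code__.co_code).hexdigest() != original_hash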


embedded software | 2004

Reducing program image size by extracting frozen code and data

Daniel Citron; Gadi Haber; Roy Levin

Constraints on the memory size of embedded systems require reducing the image size of executing programs. Common techniques include code compression and reduced instruction sets. We propose a novel technique that eliminates large portions of the executable image without compromising execution time (due to decompression) or code generation (due to reduced instruction sets). Frozen code and data portions are identified using profiling techniques and removed from the loadable image. They are replaced with branches to code stubs that load them in the unlikely case that they are accessed. The executable is sustained in a runnable mode. Analysis of the frozen portions reveals that most are error and uncommon-input handlers. Only a minority (less than 1%) of the code that was identified as frozen during a training run is also accessed with production datasets. The technique was applied to three benchmark suites (SPEC CINT2000, SPEC CFP2000, and MediaBench) and results in image size reductions of up to 73%, 92%, and 85% per suite; the average reductions are 59%, 79%, and 78%, respectively.
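At a high level the mechanism resembles lazy loading of cold code. The sketch below is a loose language-level analogy (the paper operates on native executable images): functions never touched during a training run are replaced by stubs that fetch and install the real body on first call. The names and the "cold store" are hypothetical.

    # Loose analogy of extracting frozen code: functions never executed
    # during a training run are swapped for stubs that load them on first
    # use. Names and the cold store are hypothetical.
    cold_store = {}          # stands in for code kept outside the loaded image
    executed = set()         # filled in by a profiling/training run

    def freeze_unused(namespace, candidates):
        for name in candidates:
            if name in executed:
                continue                      # hot: keep it in the image
            cold_store[name] = namespace[name]

            def stub(*args, _name=name, **kwargs):
                # Unlikely path: pull the frozen body back in, then run it.
                real = cold_store[_name]
                namespace[_name] = real       # patch so later calls are direct
                return real(*args, **kwargs)

            namespace[name] = stub

    def handle_error(msg):                    # typical frozen code: rare error handling
        return "error: " + msg

    def hot_path(x):
        return x + 1

    executed.add("hot_path")                  # observed during the training run
    ns = {"handle_error": handle_error, "hot_path": hot_path}
    freeze_unused(ns, ["handle_error", "hot_path"])
    assert ns["hot_path"](1) == 2                        # untouched
    assert ns["handle_error"]("disk") == "error: disk"   # loaded on demand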


PACS'03: Proceedings of the Third International Conference on Power-Aware Computer Systems | 2003

Look it up or do the math: an energy, area, and timing analysis of instruction reuse and memoization

Daniel Citron; Dror G. Feitelson

Instruction reuse and memoization exploit the fact that during a program run there are operations that execute more than once with the same operand values. By saving previous occurrences of instructions (operands and result) in dedicated, on-chip lookup tables, it is possible to avoid re-execution of these instructions. This has been shown to be efficient in a naive model that assumes single-cycle table lookup. We now extend the analysis to consider the energy, area, and timing overheads of maintaining such tables. We show that reuse opportunities abound in the SPEC CPU2000 benchmark suite, and that by judiciously selecting table configurations it is possible to exploit these opportunities with a minimal penalty. Energy consumption can be further reduced by employing confidence counters, which enable instructions that have a history of failed memoizations to be filtered out. We conclude by identifying those instructions that profit most from memoization, and the conditions under which it is beneficial.
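The table-plus-confidence-counter idea can be modeled in a few lines. The sketch below is a software model with illustrative parameters (table size, counter width, and threshold are assumptions, not the configurations evaluated in the paper): results are cached by operation and operands, and a small saturating counter stops probing instructions whose memoizations keep failing.

    # Software model of a memoization table with confidence counters.
    # Table size, counter width, and threshold are illustrative assumptions.
    TABLE_SIZE = 256
    CONF_MAX, CONF_MIN = 3, 0       # 2-bit saturating confidence counter

    table = [None] * TABLE_SIZE     # entries: (op, a, b, result)
    confidence = {}                 # per static instruction, keyed by pc

    def execute(op, a, b):
        return {"add": a + b, "mul": a * b}[op]

    def memoized_execute(pc, op, a, b):
        if confidence.get(pc, CONF_MAX) == CONF_MIN:
            return execute(op, a, b)             # history of failures: skip the table
        slot = hash((op, a, b)) % TABLE_SIZE
        entry = table[slot]
        if entry is not None and entry[:3] == (op, a, b):
            confidence[pc] = min(CONF_MAX, confidence.get(pc, CONF_MAX - 1) + 1)
            return entry[3]                      # reuse the result, skip re-execution
        confidence[pc] = max(CONF_MIN, confidence.get(pc, CONF_MAX - 1) - 1)
        result = execute(op, a, b)
        table[slot] = (op, a, b, result)
        return result

    for _ in range(3):                           # repeated operands hit the table
        assert memoized_execute(pc=0x40, op="mul", a=6, b=7) == 42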


empirical software engineering and measurement | 2013

Evaluating the FITTEST Automated Testing Tools: An Industrial Case Study

Cu D. Nguyen; Bilha Mendelson; Daniel Citron; Onn Shehory; Tanja E. J. Vos; Nelly Condori-Fernandez

This paper aims at evaluating a set of automated tools from the FITTEST EU project within an industrial case study. The case study was conducted at the IBM Research lab in Haifa, by a team responsible for building the testing environment for future development versions of an IBM system management product. The main function of that product is resource management in a networked environment. This case study has investigated whether current IBM Research testing practices could be improved or complemented by using some of the automated testing tools that were developed within the FITTEST EU project. Although the existing Test Suite from IBM Research (TSibm) that was selected for comparison is substantially smaller than the Test Suite generated by FITTEST (TSfittest), the effectiveness of TSfittest, measured by injected-fault coverage, is significantly higher (50% for TSibm vs. 70% for TSfittest). With respect to efficiency, after normalizing the execution times we found that TSfittest runs faster (9.18 vs. 6.99), because TSfittest includes shorter tests. Within IBM Research, and for the testing of the target product in the simulated environment, the FITTEST tools can increase the effectiveness of the current practice, and the test cases they generate automatically can help identify the source of detected faults more efficiently. Moreover, the FITTEST tools have shown the ability to automate testing within a real industry case.


IBM Journal of Research and Development | 2011

Testing large-scale cloud management

Daniel Citron; Aviad Zlotnick

Testing for correctness and reliability is a major challenge in the development and deployment of cloud computing platforms. Testing a cloud composed of hundreds to thousands of servers is often cost-prohibitive because of the extensive amount of hardware required. Simulation and emulation, i.e., traditional alternatives to hardware, are too abstract or too slow for testing production code in environments with many servers. We propose a testing approach that combines simulation and emulation in a cloud simulator that runs on a single processor yet enables testing of cloud management software as if the software were managing hundreds of servers and thousands of virtual machine instances. This approach alleviates a significant obstacle on the path to high-quality cloud computing systems.
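The combined approach can be pictured as running real management logic against many lightweight simulated hosts in one process. The fragment below is only an illustrative analogy, not IBM's simulator; all class and function names are hypothetical.

    # Illustrative analogy: hundreds of simulated hosts in one process answer
    # the calls that management logic would normally send to hypervisors.
    # All names here are hypothetical.
    class SimulatedHost:
        def __init__(self, name, capacity):
            self.name, self.capacity, self.vms = name, capacity, []

        def start_vm(self, vm_id):                 # emulated management API
            if len(self.vms) >= self.capacity:
                raise RuntimeError(f"{self.name}: out of capacity")
            self.vms.append(vm_id)

    def place_vms(hosts, vm_ids):
        """Management logic under test: naive least-loaded placement."""
        for vm_id in vm_ids:
            min(hosts, key=lambda h: len(h.vms)).start_vm(vm_id)

    hosts = [SimulatedHost(f"host{i}", capacity=16) for i in range(500)]
    place_vms(hosts, [f"vm{i}" for i in range(4000)])
    assert max(len(h.vms) for h in hosts) == 8     # 4000 VMs balanced over 500 hosts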


international parallel and distributed processing symposium | 2004

Overlapping memory operations with circuit evaluation in reconfigurable computing

Yosi Ben-Asher; Daniel Citron; Gadi Haber

We consider the problem of compiling programs, written in a general high-level programming language, into hardware circuits executed by an FPGA (field-programmable gate array) unit. In particular, we consider the problem of synthesizing nested loops that frequently access array elements stored in an external memory (outside the FPGA). We propose an aggressive compilation scheme, based on loop unrolling and code flattening techniques, where array references from/to the external memory are overlapped with uninterrupted hardware evaluation of the synthesized loop's circuit. We implement a restricted programming language called DOL based on the proposed compilation scheme, and our experimental results provide preliminary evidence that aggressive compilation can be used to compile large code segments into circuits, including overlapping of hardware operations and memory references.
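The overlap is essentially software pipelining of external-memory accesses against evaluation. The sketch below is a software analogy only (the paper synthesizes hardware circuits from DOL programs): the read for iteration i+1 is issued on a helper thread while iteration i is evaluated. The function names and the threading model are illustrative assumptions.

    # Software analogy of overlapping external-memory reads with evaluation:
    # the fetch for iteration i+1 is issued while iteration i is computed.
    # Names are illustrative, not the DOL compiler's output.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def external_read(array, i):
        time.sleep(0.001)            # models external-memory latency
        return array[i]

    def evaluate(x):
        return 3 * x + 1             # stands in for the synthesized circuit

    def pipelined_sum(array):
        total = 0
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = pool.submit(external_read, array, 0)
            for i in range(len(array)):
                value = pending.result()             # data for this iteration
                if i + 1 < len(array):               # overlap: start the next read
                    pending = pool.submit(external_read, array, i + 1)
                total += evaluate(value)             # compute while the read runs
        return total

    data = list(range(100))
    assert pipelined_sum(data) == sum(3 * x + 1 for x in data)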


ieee symposium on visual languages | 1994

Parallel roadmaps: discrete to continuous

Iaakov Exman; Daniel Citron

Parallel Roadmaps are apparently naive visual constructs that remain intelligible and provide invaluable debugging clues as program complexity increases. With a few activities, roadmaps are discrete and detailed. With large numbers of activities, continuous regions are formed by the coalescing of adjacent activities. The main thesis of this paper is that a smooth transition from discrete to continuous maps, without changing the basic graphical entities, is an effective means to deal with parallelism at largely different scales.

Collaboration


Dive into Daniel Citron's collaborations.

Top Co-Authors


Dror G. Feitelson

Hebrew University of Jerusalem

Tanja E. J. Vos

Polytechnic University of Valencia
