Publication


Featured research published by Marcel Zalmanovici.


Proceedings of the 1st International Forum on Next-Generation Multicore/Manycore Technologies | 2008

Performance analysis and visualization tools for cell/B.E. multicore environment

Duc Vianney; Gadi Haber; Andre Heilper; Marcel Zalmanovici

Porting, optimizing, and tuning code has become a challenging task in multicore/many-core environments. It requires a different set of performance visualization tools to handle the complexity of the many cores and the volume of performance data in order to find opportunities for optimization. This paper discusses the performance visualization tools available for the Cell/B.E. under the IBM Software Development Kit (SDK) for Multicore Acceleration Version 3.0. It also presents a methodology for porting, optimizing, and tuning Cell applications using those tools. The paper starts with a simple scalar program example, which can also be found in the IBM tutorial for Cell programming, and then describes all the steps needed to make it fully tuned and scaled for the Cell Broadband Engine.


ACM Transactions on Architecture and Code Optimization | 2013

JIT technology with C/C++: Feedback-directed dynamic recompilation for statically compiled languages

Dorit Nuzman; Revital Eres; Sergei Dyshel; Marcel Zalmanovici; José G. Castaños

The growing gap between the advanced capabilities of static compilers as reflected in benchmarking results and the actual performance that users experience in real-life scenarios makes client-side dynamic optimization technologies imperative to the domain of static languages. Dynamic optimization of software distributed in the form of a platform-agnostic Intermediate-Representation (IR) has been very successful in the domain of managed languages, greatly improving upon interpreted code, especially when online profiling is used. However, can such feedback-directed IR-based dynamic code generation be viable in the domain of statically compiled, rather than interpreted, languages? We show that fat binaries, which combine the IR together with the statically compiled executable, can provide a practical solution for software vendors, allowing their software to be dynamically optimized without the limitation of binary-level approaches, which lack the high-level IR of the program, and without the warm-up costs associated with the IR-only software distribution approach. We describe and evaluate the fat-binary-based runtime compilation approach using SPECint2006, demonstrating that the overheads it incurs are low enough to be successfully surmounted by dynamic optimization. Building on Java JIT technologies, our results already improve upon common real-world usage scenarios, including very small workloads.
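The fat-binary idea above can be sketched as a toy runtime: a function ships with both a precompiled implementation and its IR, and an invocation counter triggers feedback-directed recompilation once the function turns hot. This is a minimal Python illustration, not the paper's actual C/C++ infrastructure; the `HOT_THRESHOLD`, `FatFunction`, and `jit_compile` names are invented for the sketch, and plain Python source stands in for the platform-agnostic IR.

```python
HOT_THRESHOLD = 1000  # invocation count that triggers recompilation (illustrative)

class FatFunction:
    """Models a function shipped in a 'fat binary': a statically compiled
    implementation plus its IR, which a runtime can re-optimize once the
    function is observed to be hot."""
    def __init__(self, static_impl, ir):
        self.impl = static_impl
        self.ir = ir          # IR kept alongside the binary (here: Python source)
        self.calls = 0        # online profile: invocation counter
        self.jitted = False

    def __call__(self, *args):
        self.calls += 1
        if not self.jitted and self.calls >= HOT_THRESHOLD:
            # The function is hot: recompile it from the embedded IR.
            self.impl = jit_compile(self.ir)
            self.jitted = True
        return self.impl(*args)

def jit_compile(ir):
    """Stand-in for the dynamic optimizer: in this toy we simply exec the
    IR source into a fresh namespace and return the resulting function."""
    ns = {}
    exec(ir, ns)
    return ns["f"]
```

In the real system the statically compiled code runs from the start, so there is no interpreter warm-up; the sketch only shows the hot-function trigger, not the binary-level patching.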


Foundations of Software Engineering | 2016

Cluster-based test suite functional analysis

Marcel Zalmanovici; Orna Raz; Rachel Tzoref-Brill

A common industrial challenge is that of analyzing large legacy free text test suites in order to comprehend their functional content. The analysis results are used for different purposes, such as dividing the test suite into disjoint functional parts for automation and management purposes, identifying redundant test cases, and extracting models for combinatorial test generation while reusing the legacy test suite. Currently the analysis is performed manually, which hinders the ability to analyze many such large test suites due to time and resource constraints. We report on our practical experience in automated analysis of real-world free text test suites from six different industrial companies. Our novel, cluster-based approach provides significant time savings for the analysis of the test suites, varying from a reduction of 35% to 97% compared to the human time required, thus enabling functional analysis in many cases where manual analysis is infeasible in practice.
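As a rough illustration of the cluster-based idea (not the paper's actual algorithm), free-text test descriptions can be tokenized and greedily grouped by lexical similarity. The Jaccard measure, the greedy single-pass strategy, and the threshold value here are all assumptions of this sketch:

```python
def tokenize(text):
    """Split a free-text test description into a set of lowercase tokens."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_tests(tests, threshold=0.3):
    """Greedy single-pass clustering: assign each test to the first cluster
    whose representative token set is similar enough, else open a new one."""
    clusters = []  # list of (representative_tokens, [test indices])
    for i, text in enumerate(tests):
        tokens = tokenize(text)
        for rep, members in clusters:
            if jaccard(rep, tokens) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((tokens, [i]))
    return [members for _, members in clusters]
```

For example, `cluster_tests(["verify login with valid password", "verify login with invalid password", "check report export to PDF", "check report export to CSV"])` groups the two login tests and the two export tests into separate clusters.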


Automated Software Engineering | 2012

Refactoring techniques for aggressive object inlining in Java applications

Yosi Ben Asher; Tomer Gal; Gadi Haber; Marcel Zalmanovici

Object Inlining (OI) is a known optimization in object-oriented programming in which referenced objects of class B are inlined into their referencing objects of class A by making all fields and methods of class B part of class A. The optimization eliminates all the new operations of B-type objects from class A and at the same time replaces all indirect accesses from A to fields of B with direct accesses. To the best of our knowledge, in spite of the significant performance potential of the OI optimization, reported performance measurements have been relatively moderate. This is because an aggressive OI optimization requires complex analysis and code transformations to overcome problems such as multiple references to the inlinable object, object references that escape their object scope, etc.

To extract the full potential of OI, we propose a two-stage process. The first stage includes automatic analysis of the source code that informs the user, via comments in the IDE, about the code transformations needed to enable or maximize the potential of the OI optimization. In the second stage, the OI optimization is applied automatically to the source code as a code refactoring operation or, preferably, as part of the compilation process prior to the javac run.

We show that this half-automated technique helps to extract the full potential of OI. The proposed OI refactoring process also determines the order in which the object inlinings are applied and enables inlining of objects created inside a method, thus reaching better performance gains. This work also includes an evaluation of the effects of the OI optimization on multithreaded applications running on multicore machines.

The comments and the OI transformation were implemented in the Eclipse JDT (Java Development Tools) plugin. The system was then applied to the SPECjbb2000 source code, along with profiling data collected by the Eclipse TPTP plugin. The proposed system achieved a 46% improvement in performance.
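Although the paper targets Java and the Eclipse JDT, the basic before/after shape of object inlining can be shown with a small Python analogue. The class and field names below are invented for illustration; the real transformation operates on Java source and must handle the aliasing and escape issues the abstract mentions:

```python
# Before object inlining: A holds a reference to a separately allocated B.
class B:
    def __init__(self, x, y):
        self.x, self.y = x, y

class ABefore:
    def __init__(self):
        self.b = B(1, 2)            # extra allocation plus a pointer field
    def total(self):
        return self.b.x + self.b.y  # indirect access through self.b

# After object inlining: B's fields become fields of A, so the allocation
# of B disappears and field accesses become direct.
class AAfter:
    def __init__(self):
        self.b_x, self.b_y = 1, 2   # fields of B inlined into A
    def total(self):
        return self.b_x + self.b_y  # direct field access
```

The two versions are behaviorally equivalent here; the optimization pays off by removing the allocation and the extra dereference on every field access.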


International Conference on Software Engineering | 2017

Proactive and pervasive combinatorial testing

Dale E. Blue; Orna Raz; Rachel Tzoref-Brill; Paul Wojciak; Marcel Zalmanovici

Combinatorial testing (CT) is a well-known technique for improving the quality of test plans while reducing testing costs. Traditionally, CT is used by testers at testing phase to design a test plan based on a manual definition of the test space. In this work, we extend the traditional use of CT to other parts of the development life cycle. We use CT at early design phase to improve design quality. We also use CT after test cases have been created and executed, in order to find gaps between design and test. For the latter use case we deploy a novel technique for a semi-automated definition of the test space, which significantly reduces the effort associated with manual test space definition. We report on our practical experience in applying CT for these use cases to three large and heavily deployed industrial products. We demonstrate the value gained from extending the use of CT by (1) discovering latent design flaws with high potential impact, and (2) correlating CT-uncovered gaps between design and test with field reported problems.
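For readers unfamiliar with CT, its core idea of covering all pairwise parameter interactions with far fewer tests than the full cross product can be sketched with a naive greedy generator. This is a textbook-style illustration, not the CT engine used in the paper, and the parameter model in the example is made up:

```python
from itertools import combinations, product

def all_pairs(params):
    """All (param, value) pairs of values that some test must cover together."""
    names = list(params)
    pairs = set()
    for n1, n2 in combinations(names, 2):
        for v1 in params[n1]:
            for v2 in params[n2]:
                pairs.add(((n1, v1), (n2, v2)))
    return pairs

def greedy_pairwise(params):
    """Repeatedly pick the full-factorial row covering the most uncovered pairs."""
    names = list(params)
    uncovered = all_pairs(params)
    tests = []
    while uncovered:
        best_row, best_cov = None, -1
        for row in product(*(params[n] for n in names)):
            assignment = set(zip(names, row))
            cov = sum(1 for p in uncovered if set(p) <= assignment)
            if cov > best_cov:
                best_row, best_cov = row, cov
        tests.append(dict(zip(names, best_row)))
        chosen = set(zip(names, best_row))
        uncovered -= {p for p in uncovered if set(p) <= chosen}
    return tests
```

For three binary parameters, the full factorial has 8 tests, while the greedy suite covers every pair in fewer rows; real CT engines scale this idea to much larger models and higher interaction strengths.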


Proceedings of the 2nd International Workshop on Quality-Aware DevOps | 2016

Coverage-based metrics for cloud adaptation

Yonit Magid; Rachel Tzoref-Brill; Marcel Zalmanovici

This work introduces novel combinatorial coverage based metrics for deciding upon automated Cloud infrastructure adaptation. Our approach utilizes a Combinatorial Testing engine, traditionally used for testing at the development phase, in order to measure the load behavior of a system in production. We determine how much the measured load behavior at runtime differs from the one observed during testing. We further estimate the involved risk of encountering untested behavior in the current configuration of the system as well as when transitioning to a new Cloud configuration using possible adaptation actions such as migration and scale-out. Based on our risk assessment, a Cloud adaptation engine may consequently decide on an adaptation action in order to transform the system to a configuration with a lesser associated risk. Our work is part of a larger project that deals with automated Cloud infrastructure adaptation. We introduce the overall approach for automated adaptation, as well as our coverage-based metrics for risk assessment and the algorithms to calculate them. We demonstrate our metrics on an example setting consisting of two sub-components with multiple instances, comprising a typical installation of a telephony application.
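The gist of comparing combinatorial coverage at test time against what is observed in production can be illustrated with a small sketch. The attribute names and the single-number ratio below are simplifications invented here; the paper's actual metrics and risk model are richer:

```python
from itertools import combinations

def value_pairs(config):
    """All attribute-value pairs appearing together in one observed configuration."""
    items = sorted(config.items())
    return {frozenset(p) for p in combinations(items, 2)}

def untested_ratio(tested_configs, runtime_configs):
    """Fraction of runtime-observed pairs never seen during testing --
    a crude proxy for the risk of running in untested territory."""
    tested = set()
    for c in tested_configs:
        tested |= value_pairs(c)
    runtime = set()
    for c in runtime_configs:
        runtime |= value_pairs(c)
    if not runtime:
        return 0.0
    return len(runtime - tested) / len(runtime)
```

A Cloud adaptation engine could compute such a ratio for the current configuration and for each candidate adaptation action, then prefer the transition with the lower untested fraction.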


Archive | 2007

Device, System, and Method of Computer Program Optimization

Guy Bashkansky; Gad Haber; Marcel Zalmanovici


Archive | 2008

Iterative Compilation Supporting Entity Instance-Specific Compiler Option Variations

Guy Bashkansky; Gad Haber; Yaakov Yaari; Marcel Zalmanovici


Archive | 2012

Flattening Conditional Statements

Marcel Zalmanovici


Archive | 2008

Apparatus for and Method for Life-Time Test Coverage for Executable Code

Daniel Citron; Itzhack Goldberg; Moshe Klausner; Marcel Zalmanovici
