Publications


Featured research published by Yehuda Naveh.


IBM Journal of Research and Development | 2007

Workforce optimization: identification and assignment of professional workers using constraint programming

Yehuda Naveh; Yossi Richter; Yaniv Altshuler; Donna L. Gresh; Daniel P. Connors

Matching highly skilled people to available positions is a high-stakes task that requires careful consideration by experienced resource managers. A wrong decision may result in significant loss of value due to understaffing, underqualification or overqualification of assigned personnel, and high turnover of poorly matched workers. While the importance of quality matching is clear, dealing with pools of hundreds of jobs and resources in a dynamic market generates a significant amount of pressure to make decisions rapidly. We present a novel solution designed to bridge the gap between the need for high-quality matches and the need for timeliness. By applying constraint programming, a subfield of artificial intelligence, we are able to deal successfully with the complex constraints encountered in the field and reach near-optimal assignments that take into account all resources and positions in the pool. The considerations include constraints on job role, skill level, geographical location, language, potential retraining, and many more. Constraints are applied at both the individual and team levels. This paper introduces the technology and then describes its use by IBM Global Services, where large numbers of service and consulting employees are considered when forming teams assigned to customer projects.
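
The abstract describes the constraint model only at a high level; as a rough, hypothetical sketch of the kind of matching it refers to (job role, skill level, location, language, and one worker per position), the plain-Python backtracking search below illustrates the idea. The data and constraints are invented, and the actual IBM constraint-programming engine and its constraints are far richer.

# Minimal sketch of constraint-based worker-to-position matching.
# Hypothetical data and constraints; the paper's model also covers team-level
# constraints, retraining, and many other considerations.
workers = {
    "alice": {"role": "consultant", "skill": 4, "location": "NY", "languages": {"en"}},
    "bob":   {"role": "architect",  "skill": 5, "location": "TX", "languages": {"en", "es"}},
    "carol": {"role": "consultant", "skill": 3, "location": "NY", "languages": {"en", "fr"}},
}

positions = {
    "p1": {"role": "consultant", "min_skill": 3, "location": "NY", "language": "en"},
    "p2": {"role": "architect",  "min_skill": 4, "location": "TX", "language": "es"},
}

def feasible(worker, position):
    # Unary constraints on a single assignment: role, skill level, location, language.
    w, p = workers[worker], positions[position]
    return (w["role"] == p["role"]
            and w["skill"] >= p["min_skill"]
            and w["location"] == p["location"]
            and p["language"] in w["languages"])

def assign(open_positions, used=frozenset()):
    # Depth-first backtracking search; each worker fills at most one position.
    if not open_positions:
        return {}
    position, rest = open_positions[0], open_positions[1:]
    for worker in workers:
        if worker not in used and feasible(worker, position):
            sub = assign(rest, used | {worker})
            if sub is not None:
                return {position: worker, **sub}
    return None  # no consistent assignment exists

print(assign(list(positions)))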


High-Level Design Validation and Test | 2002

X-Gen: a random test-case generator for systems and SoCs

Roy Emek; Itai Jaeger; Yehuda Naveh; Gadi Bergman; Guy Aloni; Yoav Katz; Monica Farkash; Igor Dozoretz; Alex Goldin

We present X-Gen, a model-based test-case generator designed for systems and systems on a chip (SoC). X-Gen provides a framework and a set of building blocks for system-level test-case generation. At the core of this framework lies a system model, which consists of component types, their configuration, and the interactions between them. Building blocks include commonly used concepts such as memories, registers, and address translation mechanisms. Once a system is modeled, X-Gen provides a rich language for describing test cases. Through this language, users can specify requests that cover the full spectrum from highly directed tests to completely random ones. X-Gen is currently in preliminary use at IBM for the verification of two different designs: a high-end multi-processor server and a state-of-the-art SoC.
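
As a purely illustrative sketch of the concepts named in the abstract (a system model of component types, their configuration, and interactions, plus requests that range from directed to random), here is a toy generator in Python. All names and structures are hypothetical and do not reflect X-Gen's actual modeling language.

# Toy "system model" and request layer, for illustration only.
import random
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    kind: str                      # e.g. "cpu", "memory", "bridge"

@dataclass
class SystemModel:
    components: list               # Component instances (the configuration)
    interactions: list             # (initiator_kind, target_kind, operation)

@dataclass
class Request:
    # A partially directed request; fields left as None are randomized.
    initiator_kind: str = None
    operation: str = None

def generate(model, request, rng):
    legal = [(i, t, op) for (i, t, op) in model.interactions
             if request.initiator_kind in (None, i)
             and request.operation in (None, op)]
    initiator_kind, target_kind, operation = rng.choice(legal)
    initiator = rng.choice([c for c in model.components if c.kind == initiator_kind])
    target = rng.choice([c for c in model.components if c.kind == target_kind])
    return {"initiator": initiator.name, "target": target.name, "op": operation}

model = SystemModel(
    components=[Component("cpu0", "cpu"), Component("cpu1", "cpu"),
                Component("mem0", "memory")],
    interactions=[("cpu", "memory", "read"), ("cpu", "memory", "write")])

rng = random.Random(0)
print(generate(model, Request(operation="write"), rng))   # directed on the operation
print(generate(model, Request(), rng))                    # fully random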


AI Magazine | 2007

Constraint-based random stimuli generation for hardware verification

Yehuda Naveh; Michal Rimon; Itai Jaeger; Yoav Katz; Michael Vinov; Eitan Marcus; Gil Shurek

We report on random stimuli generation for hardware verification at IBM as a major application of various artificial intelligence technologies, including knowledge representation, expert systems, and constraint satisfaction. For more than a decade we have developed several related tools, with huge payoffs. Research and development around this application are still thriving, as we continue to cope with the ever-increasing complexity of modern hardware systems and demanding business environments.
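
The following minimal sketch, in plain Python, shows the general flavor of constraint-based random stimulus generation: instruction fields are drawn in random order subject to declared constraints, with backtracking on dead ends. The domains and constraints are invented for illustration and do not reflect IBM's actual generators.

# Minimal, hypothetical sketch of constraint-based random stimulus generation.
import random

DOMAINS = {
    "opcode":  ["load", "store"],
    "src_reg": list(range(8)),
    "dst_reg": list(range(8)),
    "address": list(range(0, 256, 4)),          # word-aligned addresses only
}

CONSTRAINTS = [
    lambda v: v["src_reg"] != v["dst_reg"],                  # distinct registers
    lambda v: v["opcode"] != "store" or v["address"] < 128,  # stores hit low memory
]

def consistent(partial):
    for check in CONSTRAINTS:
        try:
            if not check(partial):
                return False
        except KeyError:          # constraint refers to a still-unassigned field
            pass
    return True

def generate(rng=random.Random()):
    def search(assignment, remaining):
        if not remaining:
            return assignment
        var, rest = remaining[0], remaining[1:]
        for value in rng.sample(DOMAINS[var], len(DOMAINS[var])):  # random value order
            candidate = {**assignment, var: value}
            if consistent(candidate):
                solution = search(candidate, rest)
                if solution is not None:
                    return solution
        return None
    return search({}, list(DOMAINS))

print(generate())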


High-Level Design Validation and Test | 2005

Harnessing machine learning to improve the success rate of stimuli generation

Shai Fine; Ari Freund; Itai Jaeger; Yishay Mansour; Yehuda Naveh; Avi Ziv

The initial state of a design under verification has a major impact on the ability of stimuli generators to successfully generate the requested stimuli. For complexity reasons, most stimuli generators use sequential solutions without planning ahead. Therefore, in many cases, they fail to produce consistent stimuli due to an inadequate selection of the initial state. We propose a new method, based on machine learning techniques, to improve generation success by learning the relationship between the initial state vector and generation success. We applied the proposed method in two different settings, with the objective of improving generation success and coverage in processor and system level generation. In both settings, the proposed method significantly reduced generation failures and enabled faster coverage.
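
As a hypothetical illustration of the idea of learning the relationship between the initial-state vector and generation success, the sketch below fits a logistic-regression model on synthetic data and uses it to prefer promising initial states. The paper's actual features, models, and data are different.

# Hypothetical sketch: learn which initial states tend to produce successful
# generation, then bias the choice of initial state accordingly.
import math
import random

rng = random.Random(1)

def synthetic_example():
    # Toy data: success is more likely when bits 0 and 2 of the state are set.
    state = [rng.randint(0, 1) for _ in range(8)]
    p_success = 0.2 + 0.35 * state[0] + 0.35 * state[2]
    return state, int(rng.random() < p_success)

train = [synthetic_example() for _ in range(2000)]

# Fit logistic regression by stochastic gradient descent.
weights, bias = [0.0] * 8, 0.0
for _ in range(20):
    for x, y in train:
        z = bias + sum(w * xi for w, xi in zip(weights, x))
        p = 1.0 / (1.0 + math.exp(-z))
        grad = p - y
        bias -= 0.05 * grad
        weights = [w - 0.05 * grad * xi for w, xi in zip(weights, x)]

def predicted_success(state):
    z = bias + sum(w * xi for w, xi in zip(weights, state))
    return 1.0 / (1.0 + math.exp(-z))

# Prefer the most promising of a handful of random candidate initial states.
candidates = [[rng.randint(0, 1) for _ in range(8)] for _ in range(16)]
best = max(candidates, key=predicted_success)
print(best, round(predicted_success(best), 3))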


International Conference on Service Operations and Logistics, and Informatics | 2007

Optimatch: Applying Constraint Programming to Workforce Management of Highly-skilled Employees

Yossi Richter; Yehuda Naveh; Donna L. Gresh; Daniel P. Connors

Today many companies face the challenge of matching highly-skilled professionals to high-end positions in large organizations and human deployment agencies. Unlike traditional Workforce Management problems such as shift scheduling, highly-skilled employees are professionally distinguishable from each other and hence non-interchangeable. Our work specifically focuses on the services industry, where much of the revenue comes from the assignment of highly professional workers. Here, inaccurate matches may result in significant monetary losses and other negative effects. We deal with very large pools of both positions and employees, where optimal decisions should be made rapidly in a dynamic environment. Since traditional Operations Research (OR) methods fail to address this problem, we employ Constraint Programming (CP), a subfield of Artificial Intelligence with strong algorithmic foundations. Our CP model builds on new constraint propagators designed for this problem (but applicable elsewhere), as well as on information retrieval methods used for analyzing the complex text describing high-end professionals and positions. Optimatch, which is based on this technology and is being used by IBM services organizations, provides strong experimental results.
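
One ingredient mentioned above is the use of information-retrieval methods on the free text describing professionals and positions. A toy version of that ingredient, using simple token-overlap scoring, might look as follows; the actual Optimatch text analysis is considerably more sophisticated, and the names below are hypothetical.

# Toy text-overlap scoring of people against positions, for illustration only.
def tokens(text):
    return set(text.lower().replace(",", " ").split())

def overlap(person_text, position_text):
    a, b = tokens(person_text), tokens(position_text)
    return len(a & b) / len(a | b)               # Jaccard similarity

people = {
    "p1": "senior java developer, web services, banking domain",
    "p2": "sap consultant, supply chain, german speaker",
}
positions = {
    "j1": "java web services developer for a banking client",
    "j2": "supply chain consultant with sap experience",
}

# Rank candidates per position; such scores could then feed a CP matching model.
for job, job_text in positions.items():
    ranked = sorted(((overlap(p_text, job_text), pid) for pid, p_text in people.items()),
                    reverse=True)
    print(job, ranked)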


IEEE Transactions on Applied Superconductivity | 2001

Physics of high-Jc Nb/AlOx/Nb Josephson junctions and prospects of their applications

Yehuda Naveh; Dmitri V. Averin; Konstantin K. Likharev

At critical current densities of the order of 100 kA/cm², tunnel Josephson junctions become overdamped and may be used in RSFQ circuits without external shunting, dramatically increasing circuit density. However, the physics of electron transport in such high-Jc junctions differs from the usual direct tunneling and until recently remained unclear. We have found that the observed dc I-V curves of niobium-trilayer junctions with Jc = 210 kA/cm² can be explained quantitatively by resonant tunneling through strongly disordered barriers. According to this interpretation, the random spread of critical current in high-Jc junctions may be rather small (below 1% r.m.s.) even in deep-submicron junctions, making VLSI RSFQ circuits, with density above 10 MJJ/cm², feasible.
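
For background not stated in the abstract itself: the standard criterion for a Josephson junction being overdamped is that the Stewart-McCumber parameter be of order one or less, and for a roughly fixed IcR product and specific capacitance this parameter scales inversely with the critical current density, which is why raising Jc into the 100 kA/cm² range removes the need for external shunt resistors. As a reminder (assumed textbook background, not taken from the paper):

\[
  \beta_c \;=\; \frac{2\pi I_c R^2 C}{\Phi_0} \;\propto\; \frac{c_s}{j_c} \;\lesssim\; 1,
\]

where \(I_c\) is the critical current, \(R\) the junction resistance, \(C\) the junction capacitance, \(\Phi_0\) the flux quantum, and \(c_s\) the specific capacitance per unit area.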


High-Level Design Validation and Test | 2003

Scheduling of transactions for system-level test-case generation

Roy Emek; Yehuda Naveh

We present a methodology for scheduling system-level transactions generated by a test-case generator. A system, in this context, may be composed of multiple processors, buses, bus-bridges, memories, etc. The methodology is based on an exploration of scheduling abilities in a hardware system. At its core is a language for specifying transactions and their ordering. Through the use of hierarchy, the language makes it possible to apply high-level scheduling requests. The methodology is realized in X-Gen, a system-level test-case generator used in IBM. The model and algorithm used by this tool are also discussed.
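
As a hypothetical sketch of a hierarchical scheduling request of the kind the abstract alludes to, the snippet below flattens a tree of "seq" (ordered) and "par" (unordered) nodes into one legal transaction order. X-Gen's actual specification language and scheduler are certainly richer; all names here are invented.

# Toy hierarchical scheduling of transactions, for illustration only.
import random

rng = random.Random(0)

def schedule(node):
    # Flatten a (kind, payload) tree into one legal transaction order.
    kind, body = node
    if kind == "txn":
        return [body]
    if kind == "seq":                             # children must keep their order
        order = []
        for child in body:
            order.extend(schedule(child))
        return order
    if kind == "par":                             # children may interleave freely;
        parts = [schedule(child) for child in body]   # pick one random interleaving
        order = []
        while any(parts):
            part = rng.choice([p for p in parts if p])
            order.append(part.pop(0))
        return order
    raise ValueError(kind)

request = ("seq", [
    ("txn", "cpu0 writes mem0"),
    ("par", [("txn", "cpu1 reads mem0"),
             ("txn", "dma engine copies to mem1")]),
    ("txn", "cpu0 reads mem1"),
])

print(schedule(request))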


Principles and Practice of Constraint Programming | 2006

Generalizing alldifferent: the somedifferent constraint

Yossi Richter; Ari Freund; Yehuda Naveh

We introduce the SomeDifferent constraint as a generalization of AllDifferent. SomeDifferent requires that the values assigned to some pairs of variables be different. It has many practical applications. For example, in workforce management, it may enforce the requirement that the same worker is not assigned to two jobs that overlap in time. Propagation of the constraint for hyper-arc consistency is NP-hard. We present a propagation algorithm with worst-case time complexity O(n^3 β^n), where n is the number of variables and β ≈ 3.5 (ignoring a trivial dependence on the representation of the domains). We also elaborate on several heuristics which greatly reduce the algorithm's running time in practice. We provide experimental results, obtained on a real-world workforce management problem and on synthetic data, which demonstrate the feasibility of our approach.
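
To make the constraint concrete, here is a brute-force sketch that enumerates the solutions of a small SomeDifferent instance and prunes domain values that appear in no solution (hyper-arc consistency). The enumeration is exponential, consistent with the NP-hardness noted above; the paper's propagator and heuristics are far more efficient.

# Brute-force sketch of the SomeDifferent constraint, for illustration only.
from itertools import product

def solutions(domains, differ_pairs):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(assignment[a] != assignment[b] for a, b in differ_pairs):
            yield assignment

def prune(domains, differ_pairs):
    # Keep only values supported by at least one solution.
    supported = {n: set() for n in domains}
    for sol in solutions(domains, differ_pairs):
        for name, value in sol.items():
            supported[name].add(value)
    return {n: [v for v in domains[n] if v in supported[n]] for n in domains}

domains = {"x": [1], "y": [1, 2], "z": [1, 2, 3]}
differ = [("x", "y"), ("y", "z")]      # x and z are allowed to be equal
print(prune(domains, differ))          # {'x': [1], 'y': [2], 'z': [1, 3]}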


International Conference on Computer Design | 2004

Quality improvement methods for system-level stimuli generation

Roy Emek; Itai Jaeger; Yoav Katz; Yehuda Naveh

Functional verification of systems is aimed at validating the integration of previously verified components. It deals with complex designs, and invariably suffers from scarce resources. We present a set of methods, collectively known as testing knowledge, aimed at increasing the quality of automatically generated system-level test-cases. Testing knowledge reduces the time and effort required to achieve high coverage of the verified design.
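
The abstract does not spell out individual methods. Purely as a hypothetical example of the kind of biasing such testing knowledge could encode, the snippet below prefers addresses near page boundaries and collisions with earlier accesses over uniformly random ones; the paper's actual methods are not described at this level of detail.

# Hypothetical value-biasing heuristic; all constants are invented.
import random

rng = random.Random(2)
PAGE = 4096
history = []

def biased_address(lo=0, hi=1 << 20):
    roll = rng.random()
    if roll < 0.3 and history:                    # collide with an earlier access
        return rng.choice(history)
    if roll < 0.6:                                # land on or next to a page boundary
        page = rng.randrange(lo, hi, PAGE)
        return min(hi - 1, max(lo, page + rng.choice([-1, 0, 1])))
    return rng.randrange(lo, hi)                  # otherwise uniformly random

for _ in range(5):
    address = biased_address()
    history.append(address)
    print(hex(address))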


Knowledge Discovery and Data Mining | 2013

Analysis of advanced meter infrastructure data of water consumption in apartment buildings

Einat Kermany; Hanna Mazzawi; Dorit Baras; Yehuda Naveh; Hagai Michaelis

We present our experience of using machine learning techniques over data originating from advanced meter infrastructure (AMI) systems for water consumption in a medium-size city. We focus on two new use cases that are of special importance to city authorities. One use case is the automatic identification of malfunctioning meters, with a focus on distinguishing them from legitimate non-consumption such as during periods when the household residents are on vacation. The other use case is the identification of leaks or theft in the unmetered common areas of apartment buildings. These two use cases are highly important to city authorities both because of the lost revenue they imply and because of the hassle to the residents in cases of delayed identification. Both cases are inherently complex to analyze and require advanced data mining techniques in order to achieve high levels of correct identification. Our results provide faster and more accurate detection of malfunctioning meters as well as leaks in the common areas. This results in significant tangible value to the authorities in terms of an increase in technician efficiency and a decrease in the amount of wasted, non-revenue water.
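
As a hypothetical sketch of two simple signals in the spirit of the two use cases, the code below computes the gap between a building's main meter and the sum of its apartment meters (a persistent gap suggesting a common-area leak or theft) and flags zero-consumption runs too long to be a vacation. Everything here (data layout, thresholds) is illustrative; the paper's actual features and models are more advanced.

# Two toy detection signals, for illustration only.
def common_area_loss(main_meter_daily, apartment_meters_daily):
    # Daily gap between the building's main meter and the sum of apartment
    # meters; a persistent positive gap suggests a leak or theft in the
    # unmetered common areas.
    return [main - sum(apartments)
            for main, apartments in zip(main_meter_daily, apartment_meters_daily)]

def looks_like_faulty_meter(hourly_reads, max_vacation_days=21):
    # Zero consumption for longer than a plausible vacation is flagged as a
    # possible malfunctioning meter rather than legitimate non-consumption.
    longest_zero_run = current_run = 0
    for reading in hourly_reads:
        current_run = current_run + 1 if reading == 0 else 0
        longest_zero_run = max(longest_zero_run, current_run)
    return longest_zero_run > max_vacation_days * 24

main = [10.0, 11.0, 12.5]                          # building main meter, m^3/day
apartments = [[3.0, 4.0], [3.5, 4.2], [3.6, 4.1]]  # per-apartment meters, m^3/day
print(common_area_loss(main, apartments))          # persistent gap of a few m^3/day
print(looks_like_faulty_meter([0.0] * 24 * 30))    # 30 days of zeros -> True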
