Publication


Featured research published by Armin Shmilovici.


IEEE Transactions on Semiconductor Manufacturing | 2002

Data mining for improving a cleaning process in the semiconductor industry

Dan Braha; Armin Shmilovici

As device geometry continues to shrink, micro-contaminants have an increasingly negative impact on yield. By diminishing the contamination problem, semiconductor manufacturers will significantly improve wafer yield. This paper presents a comprehensive and successful application of data mining methodologies to the refinement of a new dry cleaning technology that utilizes a laser beam for the removal of micro-contaminants. Experiments with three classification-based data mining methods (decision tree induction, neural networks, and composite classifiers) have been conducted. The composite classifier architecture has been shown to yield higher accuracy than any of the individual classifiers on its own. The paper suggests that data mining methodologies may be particularly useful when data is scarce, and the various physical and chemical parameters that affect the process exhibit highly complex interactions. Another implication is that on-line monitoring of the cleaning process using data mining may be highly effective.
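The composite-classifier idea can be illustrated with a minimal majority-vote sketch. The synthetic dataset, base learners, and hyperparameters below are placeholders for illustration only, not the paper's process data or architecture.

```python
# Minimal sketch of a composite (majority-vote) classifier combining a decision
# tree and a neural network, in the spirit of the comparison described above.
# Data and hyperparameters are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
composite = VotingClassifier(estimators=[("tree", tree), ("net", net)], voting="hard")

for name, clf in [("tree", tree), ("net", net), ("composite", composite)]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))  # composite often matches or beats each base learner
```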


Technometrics | 2003

Context-Based Statistical Process Control

Irad Ben-Gal; Gail Morag; Armin Shmilovici

Most statistical process control (SPC) methods are not suitable for monitoring nonlinear and state-dependent processes. This article introduces the context-based SPC (CSPC) methodology for state-dependent data generated by a finite-memory source. The key idea of the CSPC is to monitor the statistical attributes of a process by comparing two context trees at any monitoring period of time. The first is a reference tree that represents the “in control” reference behavior of the process; the second is a monitored tree, generated periodically from a sample of sequenced observations, that represents the behavior of the process at that period. The Kullback–Leibler (KL) statistic is used to measure the relative “distance” between these two trees, and an analytic distribution of this statistic is derived. Monitoring the KL statistic indicates whether there has been any significant change in the process that requires intervention. An example of buffer-level monitoring in a production system demonstrates the viability of the new method with respect to conventional methods.
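A minimal sketch of the monitoring statistic follows, assuming fixed-depth contexts rather than the paper's full context-tree construction: estimate the next-symbol distribution per context for a reference window and a monitored window, then sum the per-context KL divergences. All names and thresholds are illustrative.

```python
# Simplified sketch of a CSPC-style statistic: KL divergence between the
# next-symbol distributions of a reference sequence and a monitored sequence,
# conditioned on fixed-depth contexts (a simplification of the context tree).
from collections import Counter, defaultdict
from math import log

def context_counts(seq, depth=2):
    counts = defaultdict(Counter)
    for i in range(depth, len(seq)):
        counts[tuple(seq[i - depth:i])][seq[i]] += 1
    return counts

def kl_statistic(reference, monitored, depth=2, alpha=1e-6):
    ref, mon = context_counts(reference, depth), context_counts(monitored, depth)
    symbols = sorted(set(reference) | set(monitored))
    stat = 0.0
    for ctx, mon_c in mon.items():
        n_mon = sum(mon_c.values())
        ref_c = ref.get(ctx, Counter())
        n_ref = sum(ref_c.values())
        for s in symbols:
            # Smoothed empirical probabilities for monitored (p) and reference (q).
            p = (mon_c[s] + alpha) / (n_mon + alpha * len(symbols))
            q = (ref_c[s] + alpha) / (n_ref + alpha * len(symbols))
            stat += (n_mon / len(monitored)) * p * log(p / q)
    return stat

in_control = [0, 1, 0, 1, 0, 1] * 20
shifted = [0, 0, 0, 1, 0, 0] * 20
print(kl_statistic(in_control, in_control[:60]))  # small value: no change
print(kl_statistic(in_control, shifted))          # larger value: process shift
```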


Computing in Economics and Finance | 2003

Using a Stochastic Complexity Measure to Check the Efficient Market Hypothesis

Armin Shmilovici; Yael Alon-Brimer; Shmuel Hauser

The weak form of the Efficient Market Hypothesis (EMH) states that the current market price fully reflects the information in past prices and rules out prediction based on price data alone. Recent tests of stock-return time series have not rejected this weak-form hypothesis. This research offers another test of the weak form of the EMH that leads to different conclusions for some time series. The stochastic complexity of a time series is a measure of the number of bits needed to represent and reproduce the information in the series. In an efficient market, the time series cannot be compressed, because there are no patterns and the stochastic complexity is high. In this research, Rissanen's context tree algorithm is used to identify recurring patterns in the data and exploit them for compression. The weak form of the EMH is tested for 13 international stock indices and for all the stocks that comprise the Tel-Aviv 25 index (TA25), using sliding windows of 50, 75, and 100 consecutive daily returns. Statistically significant compression is detected in ten of the international stock index series. In aggregate, 60% to 84% of the TA25 stocks tested demonstrate compressibility beyond randomness, indicating potential market inefficiency.
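The logic of the test can be sketched as follows, with zlib standing in for Rissanen's context tree algorithm and all thresholds and data being illustrative placeholders: if the discretized return sequence compresses reliably better than its shuffled surrogates, the series contains structure beyond randomness.

```python
# Sketch of a compressibility check on a return series.  zlib is used here only
# as a stand-in general-purpose compressor; the paper uses Rissanen's context
# tree algorithm.  Thresholds and data are illustrative placeholders.
import random
import zlib

def discretize(returns):
    # Map each return to a symbol: 0 = down, 1 = flat, 2 = up.
    return bytes(0 if r < -0.001 else 2 if r > 0.001 else 1 for r in returns)

def compressibility_gap(returns, n_shuffles=200, seed=0):
    rng = random.Random(seed)
    symbols = discretize(returns)
    original = len(zlib.compress(symbols, 9))
    shuffled_sizes = []
    for _ in range(n_shuffles):
        perm = list(symbols)
        rng.shuffle(perm)
        shuffled_sizes.append(len(zlib.compress(bytes(perm), 9)))
    # Fraction of shuffles that compress at least as well as the original series;
    # a small fraction suggests structure beyond randomness.
    return sum(s <= original for s in shuffled_sizes) / n_shuffles

# Example with a weakly patterned synthetic series (placeholder data).
rng = random.Random(1)
returns = [0.002 if i % 3 == 0 else rng.gauss(0, 0.01) for i in range(300)]
print(compressibility_gap(returns))
```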


Data Mining and Knowledge Discovery | 2005

Support Vector Machines

Armin Shmilovici

Support Vector Machines (SVMs) are a family of related methods for supervised learning, applicable to both classification and regression problems. An SVM classifier constructs a maximum-margin hyperplane in a transformed input space that separates the example classes while maximizing the distance to the nearest cleanly separated examples. The parameters of the solution hyperplane are derived from a quadratic programming optimization problem. Here, we provide several formulations and discuss some key concepts.
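For reference, the quadratic program mentioned above can be written in the standard soft-margin primal form; the notation (regularization constant C, feature map φ, slacks ξ) follows common textbook usage rather than the entry itself:

```latex
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{subject to}\quad
y_i\left(w^\top \phi(x_i) + b\right) \ge 1 - \xi_i,\qquad \xi_i \ge 0,\ \ i = 1,\dots,n.
```

The corresponding dual problem involves the training points only through inner products φ(x_i)·φ(x_j), which is what allows kernel substitution.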


IEEE Transactions on Semiconductor Manufacturing | 2003

On the use of decision tree induction for discovery of interactions in a photolithographic process

Dan Braha; Armin Shmilovici

This paper delineates a comprehensive and successful application of decision tree induction to 1054 records of production lots taken from a lithographic process with 45 processing steps. Complex interaction effects among manufacturing equipment that lead to increased product variability have been detected. The extracted information has been confirmed by the process engineers, and used to improve the lithographic process. The paper suggests that decision tree induction may be particularly useful when data is multidimensional, and the various process parameters and machinery exhibit highly complex interactions. Another implication is that on-line monitoring of the manufacturing process (e.g., closed-loop critical dimensions control) using data mining may be highly effective.
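A minimal sketch of this kind of rule extraction is shown below. The synthetic lot data, feature names, and injected interaction are placeholders; the paper's 1054 production lots and 45 processing steps are not reproduced here.

```python
# Sketch: fit a decision tree on lot records (machine used at each processing
# step as features, a pass/fail label) and print the induced rules.  Data,
# feature names, and tree depth are illustrative placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n_lots, n_steps = 1000, 5
X = rng.integers(0, 3, size=(n_lots, n_steps))        # machine id per step
# Inject an interaction: machine 2 at step 1 together with machine 0 at step 3
# raises the failure rate; this is the kind of pattern the tree should expose.
p_fail = 0.05 + 0.4 * ((X[:, 1] == 2) & (X[:, 3] == 0))
y = rng.random(n_lots) < p_fail

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=[f"step_{i}_machine" for i in range(n_steps)]))
```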


Convention of Electrical and Electronics Engineers in Israel | 2010

Using the confusion matrix for improving ensemble classifiers

Nadav David Marom; Lior Rokach; Armin Shmilovici

A code matrix converts a multi-class problem into an ensemble of binary classifiers. We suggest a new unweighted framework for iteratively extending the code matrix based on the confusion matrix. The confusion matrix holds important information that the suggested framework exploits: evaluating it at each iteration makes it possible to decide which one-against-all classifier should be added to the current code matrix next. We demonstrate the benefits of the method by applying it to an Error Correcting Code based ensemble and to AdaBoost. We use orthogonal arrays as the basic code matrix.
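The iteration described above can be sketched as follows: after evaluating the current ensemble, the confusion matrix points to the most confused class, and a one-against-all column for that class is appended to the code matrix. The data, base learner, decoding rule, and stopping condition below are placeholders, not the paper's framework.

```python
# Sketch of extending a code matrix via the confusion matrix: append a
# one-against-all column for the class with the most off-diagonal confusion.
# Base learner, data, and iteration count are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier

n_classes = 4
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=n_classes, random_state=0)
code_matrix = np.empty((n_classes, 0), dtype=int)   # start with an empty code matrix
classifiers = []

def predict_with_code_matrix(code_matrix, classifiers, X):
    # Each column is a binary dichotomy; decode by nearest code word (Hamming).
    outputs = np.column_stack([clf.predict(X) for clf in classifiers])
    distances = (outputs[:, None, :] != code_matrix[None, :, :]).sum(axis=2)
    return distances.argmin(axis=1)

for _ in range(n_classes):
    if code_matrix.shape[1] == 0:
        worst = 0                                    # bootstrap with class 0
    else:
        pred = predict_with_code_matrix(code_matrix, classifiers, X)
        cm = confusion_matrix(y, pred, labels=list(range(n_classes)))
        worst = (cm - np.diag(np.diag(cm))).sum(axis=1).argmax()  # most confused class
    column = (np.arange(n_classes) == worst).astype(int)          # one-against-all code word
    code_matrix = np.column_stack([code_matrix, column])
    clf = DecisionTreeClassifier(random_state=0).fit(X, column[y])  # binary subproblem
    classifiers.append(clf)

print(code_matrix)
```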


Data Mining and Knowledge Discovery | 2008

Pessimistic cost-sensitive active learning of decision trees for profit maximizing targeting campaigns

Lior Rokach; Lihi Naamani; Armin Shmilovici

In business applications such as direct marketing, decision-makers are required to choose the action that maximizes a utility function. Cost-sensitive learning methods can help them achieve this goal. In this paper, we introduce Pessimistic Active Learning (PAL). PAL employs a novel pessimistic measure, which relies on confidence intervals and is used to balance the exploration/exploitation trade-off. In order to acquire an initial sample of labeled data, PAL applies orthogonal arrays of fractional factorial design. PAL was tested on ten datasets using a decision tree inducer. A comparison of these results to those of other methods indicates PAL's superiority.
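The pessimistic idea can be illustrated with a simple lower confidence bound on expected profit per candidate segment; note that PAL's actual measure is defined in the paper and differs from this sketch, and the profit figures, counts, and z value below are placeholders.

```python
# Sketch of a pessimistic acquisition criterion: rank candidate customer
# segments by a lower confidence bound on estimated response rate times profit,
# so exploitation is tempered by uncertainty.  Not PAL's exact measure.
from math import sqrt

def pessimistic_value(successes, trials, profit_per_success, cost, z=1.96):
    p_hat = successes / trials
    margin = z * sqrt(p_hat * (1 - p_hat) / trials)   # normal-approximation CI half-width
    p_low = max(0.0, p_hat - margin)                   # pessimistic response rate
    return p_low * profit_per_success - cost

segments = {
    "A": (30, 100),   # 30 responders out of 100 contacted
    "B": (4, 10),     # higher observed rate, but far fewer observations
}
for name, (s, n) in segments.items():
    print(name, round(pessimistic_value(s, n, profit_per_success=50, cost=5), 2))
# Segment A wins despite the lower raw rate, because its estimate is better supported.
```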


4th International IEEE Conference on Intelligent Systems | 2008

A methodology for the design of a fuzzy data warehouse

Lior Sapir; Armin Shmilovici; Lior Rokach

A data warehouse is a special database used for storing business-oriented information for future analysis and decision-making. In business scenarios where some of the data or the business attributes are fuzzy, it may be useful to construct a warehouse that can support the analysis of fuzzy data. Here, we outline how Kimball's methodology for the design of a data warehouse can be extended to the construction of a fuzzy data warehouse. A case study demonstrates the viability of the methodology.
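A minimal sketch of what a fuzzy dimension can look like follows: a member belongs to a dimension category with a degree of membership, and a fact-table measure is aggregated weighted by those degrees. The schema, membership function, and numbers are illustrative placeholders, not the paper's or Kimball's design.

```python
# Sketch of a fuzzy dimension: each customer belongs to the "young" age group
# with a degree of membership, and revenue from the fact table is aggregated
# weighted by that degree.  Schema and numbers are illustrative placeholders.
def membership_young(age):
    # Full membership below 25, none above 40, linear in between.
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15

facts = [  # (customer_age, revenue)
    (22, 120.0),
    (31, 200.0),
    (45, 310.0),
]

young_revenue = sum(membership_young(age) * rev for age, rev in facts)
total_membership = sum(membership_young(age) for age, _ in facts)
print("fuzzy revenue attributed to 'young':", round(young_revenue, 1))
print("weighted average revenue for 'young':", round(young_revenue / total_membership, 1))
```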


Journal of Manufacturing Systems | 1992

Heuristics for dynamic selection and routing of parts in an FMS

Armin Shmilovici; Oded Maimon

As part of a hierarchical production control system, a policy is proposed for the admittance of new parts to a flexible manufacturing system (FMS), together with three different heuristics for dynamic routing of parts. The heuristics, which consider the flexibility of the process plans with respect to machines and operations, are based on fixed priorities, least reduction in entropy, and minimum flow resistance. The heuristics are simulated for different FMS characteristics and the results are analyzed.
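The least-reduction-in-entropy idea can be sketched as follows: among the machines feasible for the next operation, pick the one whose selection preserves the most flexibility (entropy) over the remaining routing alternatives. The routing table and the assumption that a chosen machine becomes unavailable for later operations are deliberate simplifications for illustration, not the paper's FMS model.

```python
# Sketch of an entropy-based routing heuristic: route the next operation to the
# machine whose selection leaves the largest entropy over the remaining routing
# alternatives, i.e. preserves the most flexibility.  Illustrative placeholder data.
from math import log

def entropy(options):
    # Entropy of a uniform choice over the remaining alternatives per operation.
    return sum(log(len(alts)) for alts in options if alts)

def choose_machine(plan):
    # plan: list of operations, each a set of machines that can perform it.
    current, *remaining = plan
    best = None
    for machine in current:
        # Simplifying assumption: using a machine now removes it from the
        # alternatives of the remaining operations of this part.
        reduced = [alts - {machine} for alts in remaining]
        score = entropy(reduced)
        if best is None or score > best[1]:
            best = (machine, score)
    return best[0]

plan = [{"M1", "M2"}, {"M2", "M3"}, {"M2", "M3"}]
print(choose_machine(plan))  # "M1": it keeps more downstream alternatives open
```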


Fuzzy Sets and Systems | 1998

On the solution of differential equations with fuzzy spline wavelets

Armin Shmilovici; Oded Maimon

Fuzzy systems built with spline wavelets can approximate any function in a general Hilbert space. Since wavelets are used extensively for the efficient solution of various types of differential equations, it is demonstrated that fuzzy spline wavelets can be used for the solution of the same types of problems. The advantage of using fuzzy spline wavelets for such problems is that the solution enjoys the excellent numerical and computational characteristics of the fast wavelet transform, while retaining the explanatory power of fuzzy systems. The method is demonstrated with the feedforward control of a flexible robotic arm.
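The link being exploited can be stated compactly, as a sketch of the idea rather than the paper's exact notation: a fuzzy rule base whose membership functions are spline wavelets evaluates to a spline-wavelet expansion,

```latex
f(x) \;=\; \sum_{i} c_i\, \psi_i(x),
```

where the ψ_i are the spline (wavelet) membership functions and the c_i are the rule consequents, so the fast wavelet transform can operate directly on the coefficients c_i when the expansion is used to represent the solution of a differential equation.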

Collaboration


Dive into Armin Shmilovici's collaborations.

Top Co-Authors

Lior Rokach, Ben-Gurion University of the Negev
Lihi Naamani, Ben-Gurion University of the Negev
David Ben-Shimon, Ben-Gurion University of the Negev
Shmuel Hauser, Ben-Gurion University of the Negev
Dan Braha, New England Complex Systems Institute
Chang Liu, Ben-Gurion University of the Negev