Publications


Featured research published by Ofer Melnik.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Mixed group ranks: preference and confidence in classifier combination

Ofer Melnik; Yehuda Vardi; Cun-Hui Zhang

Classifier combination holds the potential of improving performance by combining the results of multiple classifiers. For domains with very large numbers of classes, such as biometrics, we present an axiomatic framework of desirable mathematical properties for combination functions of rank-based classifiers. This framework represents a continuum of combination rules, including the Borda Count, Logistic Regression, and Highest Rank combination methods as extreme cases. Intuitively, this framework captures how the two complementary concepts of general preference for specific classifiers and confidence in any specific result (as indicated by ranks) can be balanced while maintaining consistent rank interpretation. Mixed Group Ranks (MGR) is a new combination function that balances preference and confidence by generalizing these other functions. We demonstrate that MGR is an effective combination approach through multiple experiments on data sets with large numbers of classes and classifiers from the FERET face recognition study.
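
As a toy illustration of this continuum (not the paper's exact MGR estimator; the ranks and weights below are hypothetical), the following sketch contrasts Borda count, highest rank, and a weighted mixture in which per-classifier weights express preference while small ranks express confidence:

# Toy rank-combination rules; ranks[i, j] = rank classifier i gives class j (1 = best).
import numpy as np

ranks = np.array([[1, 2, 3, 4],
                  [2, 1, 4, 3],
                  [1, 3, 2, 4]])

def borda(ranks):
    # Borda count: sum ranks across classifiers; lowest total wins.
    return ranks.sum(axis=0)

def highest_rank(ranks):
    # Highest rank: the best (minimum) rank any classifier assigns to a class.
    return ranks.min(axis=0)

def mixed(ranks, w):
    # Weighted mixture (a stand-in for MGR's balancing act): weights w
    # encode preference for classifiers; equal weights recover Borda,
    # and a dominant weight approaches that classifier's own ranking.
    return (w[:, None] * ranks).sum(axis=0)

w = np.array([0.6, 0.3, 0.1])   # hypothetical preference weights
for name, score in [("Borda", borda(ranks)),
                    ("Highest rank", highest_rank(ranks)),
                    ("Mixed", mixed(ranks, w))]:
    print(name, "-> winner: class", int(np.argmin(score)))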


Congress on Evolutionary Computation | 2000

A game-theoretic investigation of selection methods used in evolutionary algorithms

Sevan G. Ficici; Ofer Melnik; Jordan B. Pollack

The replicator equation used in evolutionary game theory (EGT) assumes that strategies reproduce in direct proportion to their payoffs; this is akin to the use of fitness-proportionate selection in an evolutionary algorithm (EA). In this paper, we investigate how various other selection methods commonly used in EAs can affect the discrete-time dynamics of EGT. In particular, we show that the existence of evolutionarily stable strategies (ESS) is sensitive to the selection method used. Rather than maintain the dynamics and equilibria of EGT, the selection methods we test either impose a fixed-point dynamic virtually unrelated to the payoffs of the game matrix, or they give limit cycles or induce chaos. These results are significant to the field of evolutionary computation because EGT can be understood as a coevolutionary algorithm operating under ideal conditions: an infinite population, noiseless payoffs and complete knowledge of the phenotype space. Thus, certain selection methods, which may operate effectively in simple evolution, are pathological in an ideal-world coevolutionary algorithm, and therefore dubious under real-world conditions.
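
A minimal sketch of the effect, assuming the standard Hawk-Dove game (V = 2, C = 3, polymorphic ESS at p = V/C = 2/3) and an idealized top-tau truncation rule rather than the paper's full set of methods:

V, C = 2.0, 3.0   # Hawk-Dove payoff parameters; polymorphic ESS at V / C

def fitness(p):
    # Expected payoffs for Hawk and Dove when a fraction p plays Hawk.
    fH = p * (V - C) / 2 + (1 - p) * V
    fD = (1 - p) * V / 2
    return fH, fD

def proportionate(p, offset=2.0):
    # Fitness-proportionate selection = discrete replicator map;
    # a constant offset keeps fitnesses positive.
    fH, fD = fitness(p)
    mean = p * (fH + offset) + (1 - p) * (fD + offset)
    return p * (fH + offset) / mean

def truncation(p, tau=0.5):
    # Top-tau truncation: all survivors reproduce equally, so the size
    # of the fitness gap is ignored entirely.
    fH, fD = fitness(p)
    if fH > fD:
        return min(1.0, p / tau)
    if fD > fH:
        return 1.0 - min(1.0, (1 - p) / tau)
    return p

p_prop = p_trunc = 0.1
for _ in range(200):
    p_prop, p_trunc = proportionate(p_prop), truncation(p_trunc)
print("ESS          :", V / C)              # 0.666...
print("proportionate:", round(p_prop, 3))   # settles near the ESS
print("truncation   :", round(p_trunc, 3))  # fixates on all-Hawk (1.0)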


Machine Learning | 2002

Decision Region Connectivity Analysis: A Method for Analyzing High-Dimensional Classifiers

Ofer Melnik

In this paper we present a method to extract qualitative information from any classification model that uses decision regions to generalize (e.g., feed-forward neural nets, SVMs). The method's complexity is independent of the dimensionality of the input data or model, making it computationally feasible for the analysis of even very high-dimensional models. The qualitative information extracted by the method can be directly used to analyze the classification strategies employed by a model, and also to compare strategies across different model types.
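
The core probe behind this kind of analysis can be sketched in a few lines (an illustrative reconstruction, not the published algorithm; same_region and the thresholded linear model below are hypothetical): two same-class inputs are tested for membership in one decision region by sampling along the segment that joins them, at a cost independent of the input dimension:

import numpy as np

def same_region(predict, x, y, n_probe=100):
    # predict: any black-box classifier mapping an input vector to a label.
    # Sample points along the segment from x to y; if every sample gets
    # the same label, x and y sit in one connected decision region
    # (up to the sampling resolution).
    if predict(x) != predict(y):
        return False
    return all(predict((1 - t) * x + t * y) == predict(x)
               for t in np.linspace(0.0, 1.0, n_probe))

# Hypothetical black-box model: a thresholded linear unit in 1000 dimensions.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
predict = lambda v: int(v @ w > 0)
a, b = rng.standard_normal(1000), rng.standard_normal(1000)
print(same_region(predict, a, b))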


International Symposium on Neural Networks | 1998

A gradient descent method for a neural fractal memory

Ofer Melnik; Jordan B. Pollack

It has been demonstrated that higher-order recurrent neural networks exhibit an underlying fractal attractor as an artifact of their dynamics. These fractal attractors offer a very efficient mechanism to encode visual memories in a neural substrate, since even a simple twelve-weight network can encode a very large set of different images. The main problem in this memory model, which so far has remained unaddressed, is how to train the networks to learn these different attractors. Following other neural training methods, this paper proposes a gradient descent method to learn the attractors. The method is based on an error function which examines the effects of the current network transform on the desired fractal attractor. It is tested across a bank of different target fractal attractors and at different noise levels. The results show positive performance across three error measures.
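
The memory model itself is easy to sketch (with illustrative weights, not ones learned by the paper's gradient descent method): two contractive 2-D affine maps, twelve weights in total, iterated by the chaos game, draw a fractal attractor that serves as the stored image. The maps here are those of the Heighway dragon, chosen only as a familiar example:

import numpy as np

# Two contractive 2-D affine maps x -> A x + b: 12 weights in total.
A1, b1 = np.array([[0.5, -0.5], [0.5, 0.5]]), np.array([0.0, 0.0])
A2, b2 = np.array([[-0.5, -0.5], [0.5, -0.5]]), np.array([1.0, 0.0])

# Chaos game: iterate a randomly chosen map; the visited points settle
# onto the fractal attractor (here the Heighway dragon).
rng = np.random.default_rng(0)
x, points = np.zeros(2), []
for _ in range(20000):
    A, b = (A1, b1) if rng.random() < 0.5 else (A2, b2)
    x = A @ x + b
    points.append(x.copy())
pts = np.array(points[100:])   # drop the transient

# Rasterize the attractor into a coarse binary image (the stored "memory").
lo, hi = pts.min(axis=0), pts.max(axis=0)
ij = ((pts - lo) / (hi - lo) * 63).astype(int)
img = np.zeros((64, 64), dtype=bool)
img[ij[:, 1], ij[:, 0]] = True
print(img.sum(), "of 4096 cells visited")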


International Symposium on Neural Networks | 2000

RAAM for infinite context-free languages

Ofer Melnik; Simon D. Levy; Jordan B. Pollack

With its ability to represent variable-sized trees in fixed-width patterns, recursive auto-associative memory (RAAM) is a bridge between connectionist and symbolic systems. In the past, due to limitations in our understanding, its development plateaued. By examining RAAM from a dynamical systems perspective we overcome most of the problems that previously plagued it. In fact, using a dynamical systems analysis we can now prove that RAAM is capable not only of generating parts of a context-free language (a^n b^n) but of expressing the whole language.
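
The dynamical-systems view can be made concrete with an idealized fractal stack (a hand-built stand-in for the trained RAAM decoder, not the network itself): pushing a symbol contracts a fixed-width state in [0, 1) into a subinterval, popping expands it back out, so recognizing a^n b^n needs one push per a and one pop per b:

def push(s, bit):
    # Contract the state into the lower or upper half of [0, 1).
    return (s + bit) / 2.0

def pop(s):
    # Expand the state back out, recovering the most recent symbol.
    bit = 1 if s >= 0.5 else 0
    return 2.0 * s - bit, bit

def accepts(string):
    # Recognize a^n b^n (n >= 1). One push per 'a', one pop per 'b'.
    # Binary floating point keeps this exact up to a depth of ~50.
    s, seen_b = 0.0, False
    for ch in string:
        if ch == 'a':
            if seen_b:
                return False
            s = push(s, 1)
        elif ch == 'b':
            seen_b = True
            s, bit = pop(s)
            if bit != 1:          # popped an empty stack
                return False
        else:
            return False
    return seen_b and s == 0.0    # stack must be empty again

for w in ["ab", "aaabbb", "aab", "abb", "ba", ""]:
    print(repr(w), accepts(w))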


Cognitive Systems Research | 2002

Theory and scope of exact representation extraction from feed-forward networks

Ofer Melnik; Jordan B. Pollack

An algorithm to extract representations from feed-forward threshold networks is outlined. The representation is based on polytopic decision regions in the input space, and is exact, not an approximation. Using this exact representation we explore scope questions, such as when and where networks form artifacts, or what we can tell about network generalization from its representation. The exact nature of the algorithm also lends itself to theoretical questions about representation extraction in general, such as how factors like input dimensionality, the number of hidden units and layers, and the interpretation of the network output relate to the potential complexity of the network's function.
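
The key idea admits a small sketch (assuming a single hidden layer and a toy 2-D network; the paper's algorithm handles multiple layers): each threshold unit cuts the input space with a hyperplane, every vector of on/off unit states names a candidate polytope, and a linear-programming feasibility check keeps the realizable ones. The feasible polytopes whose hidden pattern turns the output on are, exactly, the class-1 decision region:

import itertools
import numpy as np
from scipy.optimize import linprog

# Toy threshold network: 2 inputs, 2 hidden threshold units, 1 output unit.
W = np.array([[1.0, 1.0], [1.0, -1.0]])   # hidden weights
b = np.array([-0.5, 0.0])                 # hidden biases
v, c = np.array([1.0, 1.0]), -1.5         # output weights and bias

regions = []
for signs in itertools.product([1, -1], repeat=len(b)):
    s = np.array(signs)
    # Polytope for this on/off pattern: s_i * (W_i x + b_i) >= 0,
    # rewritten as A_ub x <= b_ub for the LP feasibility test.
    A_ub, b_ub = -(s[:, None] * W), s * b
    feasible = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                       bounds=[(-10, 10)] * 2).success
    h = (s > 0).astype(float)             # hidden outputs on this polytope
    if feasible and v @ h + c > 0:        # does this polytope fire the output?
        regions.append(tuple(signs))
print("class-1 polytopes (hidden on/off patterns):", regions)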


International Symposium on Neural Networks | 2000

Exact representations from feed-forward networks

Ofer Melnik; Jordan B. Pollack

We present an algorithm to extract representations from multiple-hidden-layer, multiple-output feed-forward perceptron threshold networks. The representation is based on polytopic decision regions in the input space and is exact, not an approximation like most other network-analysis methods. Multiple examples show some of the knowledge that can be extracted from networks by using this algorithm, including the geometrical form of artifacts and bad generalization. We compare threshold and sigmoidal networks with respect to the expressiveness of their decision regions, and also prove lower bounds for any algorithm which extracts decision regions from arbitrary neural networks.


International Conference on Multiple Classifier Systems | 2005

A probability model for combining ranks

Ofer Melnik; Yehuda Vardi; Cun-Hui Zhang

Mixed Group Ranks is a parametric method for combining rank-based classifiers that is effective for many-class problems. Its parametric structure combines qualities of voting methods with best-rank approaches. In [1] the parameters of MGR were estimated using a logistic loss function. In this paper we describe how MGR can be cast as a probability model. In particular, we show that using an exponential probability model, an algorithm for efficient maximum likelihood estimation of its parameters can be devised. While casting MGR as an exponential probability model offers provable asymptotic properties (consistency), the interpretability of probabilities allows for flexibility and natural integration of MGR mixture models.
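
A toy stand-in for the general idea (not the paper's MGR likelihood): model the rank assigned to the true class with an exponential probability model over ranks 1..K and fit its single parameter by maximum likelihood:

import numpy as np
from scipy.optimize import minimize_scalar

K = 100                                      # number of classes
rng = np.random.default_rng(1)
# Synthetic ranks of the true class (stand-in data, not FERET).
true_ranks = np.clip(rng.geometric(0.3, size=500), 1, K)

def neg_log_lik(theta):
    # Exponential model over ranks: P(r) = exp(-theta * r) / Z(theta).
    r = np.arange(1, K + 1)
    logZ = np.log(np.exp(-theta * r).sum())
    return (theta * true_ranks + logZ).sum()

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print("MLE theta:", round(fit.x, 3))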


Book chapter | 2004

Selection in Coevolutionary Algorithms and the Inverse Problem

Sevan G. Ficici; Ofer Melnik; Jordan B. Pollack

The inverse problem in the collective intelligence framework concerns how the private utility functions of agents can be engineered so that their selfish behaviors collectively give rise to a desired world state. In this chapter we examine several selection and fitness-sharing methods used in coevolution and consider their operation with respect to the inverse problem. The methods we test are truncation and linear-rank selection and competitive and similarity-based fitness sharing. Using evolutionary game theory to establish the desired world state, our analyses show that variable-sum games with polymorphic Nash equilibria are problematic for these methods. Rather than converge to polymorphic Nash equilibria, the methods we test produce cyclic behavior, chaos, or attractors that lack game-theoretic justification and therefore fail to solve the inverse problem. The private utilities of the evolving agents may thus be viewed as poorly factored: improved private utility does not correspond to improved world utility.


International Symposium on Neural Networks | 2000

Using graphs to analyze high-dimensional classifiers

Ofer Melnik; Jordan B. Pollack

We present a method to extract qualitative information from any classification model that uses decision regions, independent of the dimensionality of the data and model. The qualitative information can be directly used to analyze the classification strategies employed by a model, and also to directly compare strategies across different models. We apply the method to compare two types of classifiers using real-world high-dimensional data.
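
A sketch of the graph construction (illustrative, not the published procedure; the circle classifier and probe points are hypothetical): nodes are sample inputs, an edge joins two inputs whose straight segment never leaves a decision region, and connected components summarize the model's strategies:

import numpy as np

def same_region(predict, x, y, n=50):
    # Segment probe: do x and y share one decision region?
    return (predict(x) == predict(y) and
            all(predict((1 - t) * x + t * y) == predict(x)
                for t in np.linspace(0.0, 1.0, n)))

# Hypothetical classifier (inside vs. outside the unit circle) and samples.
predict = lambda v: int(v[0] ** 2 + v[1] ** 2 > 1.0)
rng = np.random.default_rng(2)
pts = rng.uniform(-2.0, 2.0, size=(15, 2))

# Union-find over the probe points; edges come from the segment probe.
parent = list(range(len(pts)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        if same_region(predict, pts[i], pts[j]):
            parent[find(i)] = find(j)
print(len({find(i) for i in range(len(pts))}), "graph components")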

Collaboration


Dive into Ofer Melnik's collaborations.

Top Co-Authors

Simon D. Levy

Washington and Lee University
