Publication

Featured research published by Nathan Intrator.


Magnetic Resonance in Medicine | 2009

Free water elimination and mapping from diffusion MRI

Ofer Pasternak; Nir A. Sochen; Yaniv Gur; Nathan Intrator; Yaniv Assaf

Relating brain tissue properties to diffusion tensor imaging (DTI) is limited when an image voxel contains partial volume of brain tissue with free water, such as cerebrospinal fluid or edema, rendering the DTI indices no longer useful for describing the underlying tissue properties. We propose here a method for separating diffusion properties of brain tissue from surrounding free water while mapping the free water volume. This is achieved by fitting a bi-tensor model for which a mathematical framework is introduced to stabilize the fitting. Applying the method on datasets from a healthy subject and a patient with edema yielded corrected DTI indices and a more complete tract reconstruction that passed next to the ventricles and through the edema. We were able to segment the edema into areas according to the condition of the underlying tissue. In addition, the volume of free water is suggested as a new quantitative contrast of diffusion MRI. The findings suggest that free water is not limited to the borders of the brain parenchyma; it therefore contributes to the architecture surrounding neuronal bundles and may indicate specific anatomical processes. The analysis requires a conventional DTI acquisition and can be easily merged with existing DTI pipelines.
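
The core of the method is a two-compartment (bi-tensor) signal model. A minimal forward-model sketch is given below; the free-water diffusivity constant and the example tensor are illustrative, and the paper's actual contribution, the regularized fitting framework, is not reproduced here:

```python
import numpy as np

D_WATER = 3.0e-3  # free-water diffusivity at body temperature, mm^2/s

def bi_tensor_signal(b, g, D_tissue, f, s0=1.0):
    """Signal predicted by the two-compartment model: a tissue tensor
    compartment with volume fraction f plus an isotropic free-water
    compartment with fixed diffusivity."""
    adc = g @ D_tissue @ g                    # tissue diffusivity along g
    return s0 * (f * np.exp(-b * adc) + (1.0 - f) * np.exp(-b * D_WATER))

# Example: isotropic tissue tensor, one gradient direction, b = 1000 s/mm^2
D = 0.7e-3 * np.eye(3)
g = np.array([1.0, 0.0, 0.0])
s = bi_tensor_signal(1000.0, g, D, f=0.8)
```

With f = 1 the model reduces to the usual single-tensor DTI signal, which is why a conventional DTI acquisition suffices.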


Connection Science | 1996

Bootstrapping with Noise: An Effective Regularization Technique

Yuval Raviv; Nathan Intrator

Bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feedforward networks and for other statistical methods such as generalized additive models. It is shown that the noisy bootstrap performs best in conjunction with weight-decay regularization and ensemble averaging. The two-spiral problem, a highly nonlinear, noise-free dataset, is used to demonstrate these findings. The combination of noisy bootstrap and ensemble averaging is also shown to be useful for generalized additive modelling, and is demonstrated on the well-known Cleveland heart data.
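
The noisy-bootstrap idea can be sketched in a few lines. The linear least-squares "learner", noise level, and ensemble size below are illustrative stand-ins for the feedforward networks of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_bootstrap(X, y, n_models, noise_std):
    """Yield bootstrap resamples with Gaussian noise added to the inputs,
    the smoothing/capacity-control device of the noisy bootstrap."""
    n = len(X)
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)       # resample with replacement
        yield X[idx] + rng.normal(0.0, noise_std, size=(n, X.shape[1])), y[idx]

# Toy usage: average linear least-squares fits over 25 noisy resamples
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=200)

coefs = [np.linalg.lstsq(Xb, yb, rcond=None)[0]
         for Xb, yb in noisy_bootstrap(X, y, n_models=25, noise_std=0.1)]
w_ens = np.mean(coefs, axis=0)                 # ensemble-averaged weights
```

Averaging the fitted models over the noisy resamples gives the smoothing effect; weight decay would be layered on top in the paper's best-performing configuration.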


Neural Networks | 1992

Invited Article: Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions

Nathan Intrator; Leon N. Cooper

In this paper, we present an objective function formulation of the Bienenstock, Cooper, and Munro (BCM) theory of visual cortical plasticity that permits us to demonstrate the connection between the unsupervised BCM learning procedure and various statistical methods, in particular, that of Projection Pursuit. This formulation provides a general method for stability analysis of the fixed points of the theory and enables us to analyze the behavior and the evolution of the network under various visual rearing conditions. It also allows comparison with many existing unsupervised methods. This model has been shown successful in various applications such as phoneme and 3D object recognition. We thus have the striking and possibly highly significant result that a biological neuron is performing a sophisticated statistical procedure.
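
A minimal numerical sketch of the BCM modification rule with its sliding threshold, the quantity the objective-function formulation stabilizes. The learning rate, averaging time constant, and Gaussian toy input are illustrative assumptions, not the paper's simulations:

```python
import numpy as np

# BCM modification rule (sketch): the weight update is Hebbian-like with
# a modification function phi(c) = c*(c - theta), and theta slides to
# track the average squared postsynaptic activity, stabilizing the rule.
def bcm_step(w, x, theta, eta=0.01, tau=100.0):
    c = w @ x                                 # postsynaptic activity
    w = w + eta * c * (c - theta) * x         # BCM weight update
    theta = theta + (c * c - theta) / tau     # sliding threshold
    return w, theta

rng = np.random.default_rng(1)
w = 0.1 * rng.normal(size=4)                  # small random initial weights
theta = 1.0
for _ in range(5000):
    w, theta = bcm_step(w, rng.normal(size=4), theta)
```

The projection-pursuit connection comes from the fact that this rule seeks projections whose activity distribution deviates from Gaussian; on the isotropic Gaussian input above the weights simply stay bounded near zero.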


Network: Computation in Neural Systems | 1997

Optimal ensemble averaging of neural networks

Ury Naftaly; Nathan Intrator; D. Horn

Based on an observation about the different effect of ensemble averaging on the bias and variance portions of the prediction error, we discuss training methodologies for ensembles of networks. We demonstrate the effect of variance reduction and present a method of extrapolation to the limit of an infinite ensemble. A significant reduction of variance is obtained by averaging just over the initial conditions of the neural networks, without varying architectures or training sets. The minimum of the ensemble prediction error is reached later than that of a single network. In the vicinity of the minimum, the ensemble prediction error appears to be flatter than that of the single network, thus simplifying the optimal stopping decision. The results are demonstrated on sunspots data, where the predictions are among the best obtained, and on data set B of the 1993 energy prediction competition.
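
The variance-reduction effect of averaging over initial conditions alone can be sketched with tiny random-feature "networks" standing in for the paper's models. By convexity, the squared error of the ensemble mean never exceeds the mean squared error of its members:

```python
import numpy as np

# Tiny random-feature "networks": a fixed random hidden layer with a
# least-squares output layer. Different seeds play the role of different
# initial conditions; architecture and data are identical across members.
x = np.linspace(-1.0, 1.0, 100)
y = np.sin(3.0 * x)

def train_member(seed, n_hidden=30):
    r = np.random.default_rng(seed)
    W = 3.0 * r.normal(size=n_hidden)          # random input weights
    b = r.normal(size=n_hidden)                # random biases
    H = np.tanh(np.outer(x, W) + b)            # hidden activations (100, 30)
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H @ w_out

preds = np.stack([train_member(s) for s in range(10)])
avg_err = np.mean((preds - y) ** 2)               # mean member error
ens_err = np.mean((preds.mean(axis=0) - y) ** 2)  # ensemble error
```

The inequality ens_err <= avg_err holds pointwise by Jensen's inequality, which is the bias/variance observation the paper builds on: averaging cancels the variance contributed by the initial conditions.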


International Journal on Document Analysis and Recognition | 1999

Offline cursive script word recognition: a survey

Tal Steinherz; Ehud Rivlin; Nathan Intrator

We review the field of offline cursive word recognition. We mainly deal with the various methods that were proposed to realize the core of recognition in a word recognition system. These methods are discussed in view of the two most important properties of such a system: the size and nature of the lexicon involved, and whether or not a segmentation stage is present. We classify the field into three categories: segmentation-free methods, which compare a sequence of observations derived from a word image with similar references of words in the lexicon; segmentation-based methods, which look for the best match between consecutive sequences of primitive segments and letters of a possible word; and the perception-oriented approach, which covers methods that perform a human-like reading technique, in which anchor features found all over the word are used to bootstrap a few candidates for a final evaluation phase.


Neural Computation | 1999

Boosted mixture of experts: an ensemble learning scheme

Ran Avnimelech; Nathan Intrator

We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to classification, or a variant of the boosting algorithm (Schapire, 1990). As a variant of the mixture of experts, it can be made appropriate for general classification and regression problems by initializing the partition of the data set to different experts in a boostlike manner. If viewed as a variant of the boosting algorithm, its main gain is the use of a dynamic combination model for the outputs of the networks. Results are demonstrated on a synthetic example and a digit recognition task from the NIST database and compared with classical ensemble approaches.
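
The dynamic combination model can be sketched as a softmax gate producing per-input mixing weights, in contrast to the fixed weights of a plain ensemble average. The gate parameterization and toy experts below are illustrative, not the paper's networks:

```python
import numpy as np

def softmax_gate(X, V):
    """Per-input mixing weights over experts (rows sum to 1)."""
    z = X @ V
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def moe_predict(X, experts, V):
    g = softmax_gate(X, V)                    # dynamic combination weights
    outs = np.column_stack([f(X) for f in experts])
    return (g * outs).sum(axis=1)             # per-input convex combination

# Toy example: the gate routes by the sign of the (single) input feature
experts = [lambda X: X[:, 0] ** 2, lambda X: -X[:, 0]]
V = np.array([[5.0, -5.0]])
X = np.array([[-1.0], [0.0], [2.0]])
y = moe_predict(X, experts, V)
```

The boost-like ingredient of the paper is in how the data partition over experts is initialized during training; the snippet only shows the combination step at prediction time.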


Neural Computation | 1999

Boosting regression estimators

Ran Avnimelech; Nathan Intrator

There is interest in extending the boosting algorithm (Schapire, 1990) to fit a wide range of regression problems. The threshold-based boosting algorithm for regression used an analogy between classification errors and big errors in regression. We focus on the practical aspects of this algorithm and compare it to other attempts to extend boosting to regression. The practical capabilities of this model are demonstrated on the laser data from the Santa Fe times-series competition and the Mackey-Glass time series, where the results surpass those of standard ensemble average.
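
The classification-error analogy can be sketched as a reweighting step: residuals above a threshold count as "errors" and are up-weighted for the next regressor. The boost factor and toy data are illustrative assumptions:

```python
import numpy as np

def reweight_big_errors(y_true, y_pred, w, threshold, boost=2.0):
    """Up-weight points whose residual exceeds the threshold, then
    renormalize -- the regression analogue of boosting misclassified
    examples."""
    big = np.abs(y_true - y_pred) > threshold   # "misclassified" points
    w = np.where(big, w * boost, w)             # up-weight hard points
    return w / w.sum()                          # renormalize to sum to 1

y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_pred = np.array([0.1, 1.0, 0.5, 3.05])        # one big error at index 2
w0 = np.full(4, 0.25)
w1 = reweight_big_errors(y_true, y_pred, w0, threshold=0.5)
```

Each subsequent regressor would then be trained on data resampled according to these weights, concentrating capacity on the hard points.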


Psychology of Learning and Motivation | 1997

Learning as extraction of low-dimensional representations.

Shimon Edelman; Nathan Intrator

This chapter argues that the effectiveness of living representational systems suggests that there must be something special about such systems that allows them to harbor representations of the world. The phenomenon of representation is more likely another natural category, which developed under evolutionary pressure in response to certain traits of the world with which the system interacts. Some of the relevant properties of the world contribute more than others in any given case of successful representation. The chapter proposes that over and above those diverse properties there is a unifying principle: various aspects of the world are represented successfully insofar as they are expressed in a low-dimensional space. It also suggests that the possibility of effective representation stems from the low-dimensional nature of real-world classification tasks: an intelligent system would do well merely by reflecting the low-dimensional distal space internally. This undertaking is not as straightforward as it sounds. The perceptual front end to any sophisticated representational system starts with a high-dimensional measurement stage, whose task is mainly to ensure that none of the relevant dimensions of stimulus variation are lost in the process of encoding, because the relevant dimensions of distal stimulus variation are neither known in advance nor immediately available internally. The ultimate performance of the system therefore depends on its ability to reduce the dimensionality of the measurement space back to an acceptable level, on par with that of the original, presumably low-dimensional, distal stimulus space.
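
The dimensionality-reduction claim can be illustrated with PCA recovering an intrinsically two-dimensional "distal" space from 50-dimensional measurements. The synthetic data and noise level are assumptions for illustration, not from the chapter:

```python
import numpy as np

rng = np.random.default_rng(4)

# 50-D measurements generated from a 2-D latent ("distal") space plus a
# little measurement noise; PCA via SVD recovers the low dimensionality.
latent = rng.normal(size=(300, 2))            # intrinsic 2-D structure
mixing = rng.normal(size=(2, 50))             # high-dimensional embedding
X = latent @ mixing + 0.01 * rng.normal(size=(300, 50))

Xc = X - X.mean(axis=0)                       # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)               # variance ratio per component
```

Nearly all of the variance concentrates in the first two components, mirroring the chapter's point that the high-dimensional measurement stage loses nothing essential once the low-dimensional structure is extracted.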


International Conference on Pattern Recognition | 1994

Face recognition using a hybrid supervised/unsupervised neural network

Nathan Intrator; Daniel Reisfeld; Yehezkel Yeshurun

Face recognition schemes that are applied directly to gray-level pixel images are presented. Two methods for reducing overfitting, a common problem in high-dimensional classification schemes, are presented and the superiority of their combination is demonstrated. The classification scheme is preceded by preprocessing devoted to reducing the viewpoint and scale variability in the data.


IEEE Transactions on Signal Processing | 1998

Classification of underwater mammals using feature extraction based on time-frequency analysis and BCM theory

Quyen Q. Huynh; Leon N. Cooper; Nathan Intrator; Harel Z Shouval

Underwater mammal sound classification is demonstrated using a novel application of wavelet time-frequency decomposition and feature extraction using a Bienenstock, Cooper, and Munro (1982) (BCM) unsupervised network. Different feature extraction methods and different wavelet representations are studied. The system achieves outstanding classification performance even when tested with mammal sounds recorded at very different locations (from those used for training). The improved results suggest that nonlinear feature extraction from wavelet representations outperforms different linear choices of basis functions.
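
As a stand-in for the paper's wavelet front end, a Haar decomposition with per-level energies gives a minimal time-frequency feature sketch; the paper's actual wavelet choices and the BCM feature extraction stage are not reproduced here:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: approximations and
    details from pairwise sums and differences."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def wavelet_energies(x, levels=3):
    """Energy per decomposition level: a crude time-frequency feature."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        feats.append(float(np.sum(d * d)))    # detail energy at this scale
    feats.append(float(np.sum(x * x)))        # residual approximation energy
    return feats

x0 = np.arange(8.0)                           # toy signal, length 2**3
feats = wavelet_energies(x0)
```

Because the transform is orthonormal, the per-level energies sum to the signal energy; such level-wise features would then feed the unsupervised (BCM) extraction stage in the paper's pipeline.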

Collaboration

Top co-authors of Nathan Intrator:

Talma Hendler
Weizmann Institute of Science

Noam Gavriely
Technion – Israel Institute of Technology