Publication


Featured research published by Barak A. Pearlmutter.


IEEE Symposium on Security and Privacy | 1999

Detecting intrusions using system calls: alternative data models

Christina Warrender; Stephanie Forrest; Barak A. Pearlmutter

Intrusion detection systems rely on a wide variety of observable data to distinguish between legitimate and illegitimate activities. We study one such observable: sequences of system calls into the kernel of an operating system. Using system-call data sets generated by several different programs, we compare the ability of different data modeling methods to represent normal behavior accurately and to recognize intrusions. We compare the following methods: simple enumeration of observed sequences; comparison of relative frequencies of different sequences; a rule induction technique; and hidden Markov models (HMMs). We discuss the factors affecting the performance of each method and conclude that for this particular problem, weaker methods than HMMs are likely sufficient.
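
The simplest of the compared models, enumeration of observed sequences, can be sketched as follows (a minimal stide-style detector; the window length and system-call names are illustrative, not taken from the paper's data sets):

```python
def train_normal_db(trace, k=3):
    """Collect the set of all length-k subsequences seen in a normal trace."""
    return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

def mismatch_rate(trace, db, k=3):
    """Fraction of length-k windows in a test trace absent from the database."""
    windows = [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]
    return sum(w not in db for w in windows) / len(windows)

# Hypothetical system-call traces for illustration
normal = ["open", "read", "mmap", "read", "close", "open", "read", "mmap"]
db = train_normal_db(normal, k=3)

attack = ["open", "read", "execve", "socket", "read", "close"]
print(mismatch_rate(attack, db))   # 1.0: every window is novel
print(mismatch_rate(normal, db))   # 0.0
```

In this framing, a high rate of windows absent from the normal database flags a trace as anomalous.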


Neural Computation | 2001

Blind Source Separation by Sparse Decomposition in a Signal Dictionary

Michael Zibulevsky; Barak A. Pearlmutter

The blind source separation problem is to extract the underlying source signals from a set of linear mixtures, where the mixing matrix is unknown. This situation is common in acoustics, radio, medical signal and image processing, hyperspectral imaging, and other areas. We suggest a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary (for instance, a wavelet frame or a learned dictionary) in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability. We consider the general case of more sources than mixtures, but also derive a more efficient algorithm in the case of a nonovercomplete dictionary and an equal number of sources and mixtures. Experiments with artificial signals and musical sounds demonstrate significantly better separation than other known techniques.
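
A minimal numerical sketch of the two-stage idea, under simplifying assumptions not made in the paper (two sources, two mixtures, and sources sparse directly in the time domain rather than in a wavelet or learned dictionary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Sparse sources: each is zero most of the time, so most observed samples
# are dominated by a single source.
S = rng.laplace(size=(2, n)) * (rng.random((2, n)) < 0.05)
A_true = np.array([[0.94, 0.30],
                   [0.34, 0.95]])          # "unknown" mixing matrix
X = A_true @ S                             # observed mixtures

# Stage 1: high-energy samples cluster along the columns of A; estimate the
# column directions with a crude 2-means on orientations.
keep = np.linalg.norm(X, axis=0) > 0.5
P = X[:, keep]
P = P * np.sign(P[0])                      # fold antipodal points together
P = P / np.linalg.norm(P, axis=0)          # unit direction vectors
c = np.eye(2)                              # initial centroids
for _ in range(20):
    lab = np.argmax(c.T @ P, axis=0)       # nearest centroid by cosine
    for k in range(2):
        m = P[:, lab == k].mean(axis=1)
        c[:, k] = m / np.linalg.norm(m)
A_hat = c

# Stage 2: square system, so unmix by inverting the estimated mixing matrix.
S_hat = np.linalg.inv(A_hat) @ X
```

The estimated columns of A_hat line up with the directions of the true mixing matrix, after which unmixing in the square case is a matrix inversion.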


Neural Computation | 1989

Learning state space trajectories in recurrent neural networks

Barak A. Pearlmutter

Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and the w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown. This type of recurrent network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech.
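
As an illustrative discrete-time analogue (the paper treats continuous-time networks, and this is not its algorithm), gradient descent on a trajectory error via backpropagation through time for h_{t+1} = tanh(W h_t), with E = ½ Σ_t ||h_t − target_t||², can be sketched as:

```python
import numpy as np

def unroll(W, h0, T):
    hs = [h0]
    for _ in range(T):
        hs.append(np.tanh(W @ hs[-1]))
    return hs

def loss(W, h0, targets):
    hs = unroll(W, h0, len(targets))
    return 0.5 * sum(np.sum((hs[t + 1] - targets[t]) ** 2)
                     for t in range(len(targets)))

def bptt_grad(W, h0, targets):
    hs = unroll(W, h0, len(targets))
    dW = np.zeros_like(W)
    delta = np.zeros_like(h0)
    for t in range(len(targets), 0, -1):
        delta = delta + (hs[t] - targets[t - 1])   # error injected at step t
        pre = (1.0 - hs[t] ** 2) * delta           # back through tanh
        dW += np.outer(pre, hs[t - 1])
        delta = W.T @ pre                          # pass back through dynamics
    return dW

rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((2, 2))
h0 = np.array([0.5, -0.5])
# Illustrative target trajectory sampled from a circle
targets = [0.5 * np.array([np.sin(0.5 * t), np.cos(0.5 * t)])
           for t in range(1, 9)]

E0 = loss(W, h0, targets)
for _ in range(300):
    W -= 0.02 * bptt_grad(W, h0, targets)   # trajectory error decreases
```

The gradient can be checked against a finite difference, and the trajectory error drops under plain gradient descent.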


Neural Computation | 1994

Fast exact multiplication by the Hessian

Barak A. Pearlmutter

Just storing the Hessian H (the matrix of second derivatives ∂²E/∂w_i∂w_j of the error E with respect to each pair of weights) of a large neural network is difficult. Since a common use of a large matrix like H is to compute its product with various vectors, we derive a technique that directly calculates Hv, where v is an arbitrary vector. To calculate Hv, we first define a differential operator R_v{f(w)} = (∂/∂r) f(w + rv)|_{r=0}, note that R_v{∇_w E} = Hv and R_v{w} = v, and then apply R_v{·} to the equations used to compute ∇_w E. The result is an exact and numerically stable procedure for computing Hv, which takes about as much computation, and is about as local, as a gradient evaluation. We then apply the technique to a one-pass gradient calculation algorithm (backpropagation), a relaxation gradient calculation algorithm (recurrent backpropagation), and two stochastic gradient calculation algorithms (Boltzmann machines and weight perturbation). Finally, we show that this technique can be used at the heart of many iterative techniques for computing various properties of H, obviating any need to calculate the full Hessian.
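
The identity at the heart of the technique, Hv = (∂/∂r) ∇E(w + rv)|_{r=0}, can be illustrated numerically on a toy quadratic error (the paper's procedure is exact, obtained by transforming the gradient computation itself; the finite difference below only demonstrates the identity):

```python
import numpy as np

# Toy error E(w) = ½ wᵀ A w, whose Hessian is exactly A (A symmetric).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def grad_E(w):
    return A @ w                 # ∇E for the quadratic error

def hessian_vector(w, v, r=1e-5):
    # Central difference in r approximates (∂/∂r) ∇E(w + r v) at r = 0
    return (grad_E(w + r * v) - grad_E(w - r * v)) / (2 * r)

w = np.array([0.5, -1.0])
v = np.array([1.0, 2.0])
print(hessian_vector(w, v))      # ≈ A @ v = [5., 5.]
```

For the quadratic case the directional derivative of the gradient recovers A @ v exactly, which is what the exact R_v{·} transformation computes for general networks without ever forming H.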


International Colloquium on Grammatical Inference | 1998

Results of the Abbadingo One DFA Learning Competition and a New Evidence-Driven State Merging Algorithm

Kevin J. Lang; Barak A. Pearlmutter; Rodney A. Price

This paper first describes the structure and results of the Abbadingo One DFA Learning Competition. The competition was designed to encourage work on algorithms that scale well—both to larger DFAs and to sparser training data. We then describe and discuss the winning algorithm of Rodney Price, which orders state merges according to the amount of evidence in their favor. A second winning algorithm, of Hugues Juille, will be described in a separate paper.


NeuroImage | 2002

Linear Spatial Integration for Single-Trial Detection in Encephalography

Lucas C. Parra; Chris Alvino; Akaysha Tang; Barak A. Pearlmutter; Nick Yeung; Allen Osman; Paul Sajda

Conventional analysis of electroencephalography (EEG) and magnetoencephalography (MEG) often relies on averaging over multiple trials to extract statistically relevant differences between two or more experimental conditions. In this article we demonstrate single-trial detection by linearly integrating information over multiple spatially distributed sensors within a predefined time window. We report an average single-trial discrimination performance of Az of approximately 0.80 and fraction correct between 0.70 and 0.80, across three distinct encephalographic data sets. We restrict our approach to linear integration, as it allows the computation of a spatial distribution of the discriminating component activity. In the present set of experiments the resulting component activity distributions are shown to correspond to the functional neuroanatomy consistent with the task (e.g., contralateral sensorimotor cortex and anterior cingulate). Our work demonstrates how a purely data-driven method for learning an optimal spatial weighting of encephalographic activity can be validated against the functional neuroanatomy.
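
A toy sketch of linear spatial integration on simulated data (the sensor count, trial count, and least-squares discriminant are illustrative stand-ins; the paper learns the weighting from real EEG/MEG trials):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_trials = 16, 400
# Hypothetical trials: sensor activity in a fixed time window; condition 1
# adds a fixed spatial pattern on top of per-sensor noise.
pattern = rng.standard_normal(n_sensors)
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_sensors)) + np.outer(y, pattern)

# Linear spatial integration: one weight per sensor plus a bias, fit by
# least squares against the condition labels.
Xb = np.column_stack([X, np.ones(n_trials)])
w = np.linalg.lstsq(Xb, y, rcond=None)[0]
acc = np.mean(((Xb @ w) > 0.5) == y)   # single-trial fraction correct
```

Because the model is linear, the learned per-sensor weights w[:-1] recover the discriminating spatial pattern, which is what permits comparison against the functional neuroanatomy.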


International Journal of Imaging Systems and Technology | 2005

Survey of Sparse and Non-Sparse Methods in Source Separation

Paul D. O'Grady; Barak A. Pearlmutter; Scott Rickard

Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is accomplished by taking into account the structure of the mixing process and by making assumptions about the sources. When the information about the mixing process and sources is limited, the problem is called ‘blind’. By assuming that the sources can be represented sparsely in a given basis, recent research has demonstrated that solutions to previously problematic blind source separation problems can be obtained. In some cases, solutions are possible to problems intractable by previous non-sparse methods. Indeed, sparse methods provide a powerful approach to the separation of linear mixtures of independent data. This paper surveys the recent arrival of sparse blind source separation methods and the previously existing non-sparse methods, providing insights and appropriate hooks into the literature along the way.


IEEE Signal Processing Magazine | 2008

Hemodynamics for Brain-Computer Interfaces

Fiachra Matthews; Barak A. Pearlmutter; Tomas E. Ward; C. Soraghan; Charles Markham

This article brings together the various elements that constitute the signal processing challenges presented by a hemodynamics-driven functional near-infrared spectroscopy (fNIRS) based brain-computer interface (BCI). We discuss the use of optically derived measures of cortical hemodynamics as control signals for next generation BCIs. To this end we present a suitable introduction to the underlying measurement principle, we describe appropriate instrumentation and highlight how and where performance improvements can be made to current and future embodiments of such devices. Key design elements of a simple fNIRS-BCI system are highlighted while in the process identifying signal processing problems requiring improved solutions and suggesting methods by which this might be accomplished.


International Conference on Independent Component Analysis and Signal Separation | 2004

Soft-LOST: EM on a Mixture of Oriented Lines

Paul O'Grady; Barak A. Pearlmutter

Robust clustering of data into overlapping linear subspaces is a common problem. Here we consider one-dimensional subspaces that cross the origin. This problem arises in blind source separation, where the subspaces correspond directly to columns of a mixing matrix. We present an algorithm that identifies these subspaces using an EM procedure, where the E-step calculates posterior probabilities assigning data points to lines and the M-step repositions the lines to match the points assigned to them. This method, combined with a transformation into a sparse domain and an L1-norm optimisation, constitutes a blind source separation algorithm for the under-determined case.
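
A compact sketch of the E- and M-steps on synthetic two-dimensional data (the softness parameter beta and the exponential distance weighting are illustrative simplifications of the paper's mixture model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data scattered around two lines through the origin.
dirs_true = np.array([[np.cos(0.3), np.sin(0.3)],
                      [np.cos(1.2), np.sin(1.2)]])   # one unit vector per line
t = rng.laplace(size=400)
lab = rng.integers(0, 2, size=400)
X = t[:, None] * dirs_true[lab] + 0.02 * rng.standard_normal((400, 2))

d = dirs_true + 0.2 * rng.standard_normal((2, 2))    # perturbed initial lines
d /= np.linalg.norm(d, axis=1, keepdims=True)
beta = 50.0                                          # hypothetical softness
for _ in range(30):
    # E-step: soft assignments from squared distance of each point to each line
    proj = X @ d.T
    dist2 = (X ** 2).sum(1, keepdims=True) - proj ** 2
    w = np.exp(-beta * (dist2 - dist2.min(1, keepdims=True)))
    w /= w.sum(1, keepdims=True)
    # M-step: reposition each line as the dominant eigenvector of the
    # scatter matrix of the points, weighted by their assignments
    for k in range(2):
        C = (X * w[:, [k]]).T @ X
        d[k] = np.linalg.eigh(C)[1][:, -1]
```

After a few iterations the estimated lines align with the underlying subspaces, which in the blind-source-separation setting correspond to columns of the mixing matrix.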


2006 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing | 2006

Convolutive Non-Negative Matrix Factorisation with a Sparseness Constraint

Paul D. O'Grady; Barak A. Pearlmutter

Discovering a representation which allows auditory data to be parsimoniously represented is useful for many machine learning and signal processing tasks. Such a representation can be constructed by non-negative matrix factorisation (NMF), a method for finding parts-based representations of non-negative data. We present an extension to NMF that is convolutive and includes a sparseness constraint. In combination with a spectral magnitude transform, this method discovers auditory objects and their associated sparse activation patterns.
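
A sketch of the non-convolutive special case: NMF multiplicative updates with an L1 sparseness penalty on the activations (the penalty weight and normalisation scheme are illustrative choices; the paper's method additionally uses convolutive, time-shifted bases, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy non-negative data: two spectral "parts" with sparse activations.
W_true = np.abs(rng.standard_normal((20, 2)))
H_true = np.abs(rng.standard_normal((2, 50))) * (rng.random((2, 50)) < 0.3)
V = W_true @ H_true + 1e-6

r, lam = 2, 0.1                 # rank and (hypothetical) sparsity weight
W = np.abs(rng.standard_normal((20, r))) + 0.1
H = np.abs(rng.standard_normal((r, 50))) + 0.1
for _ in range(200):
    # Multiplicative updates for ||V - WH||² + lam * sum(H); non-negativity
    # is preserved because every factor in the update is non-negative.
    H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    W /= W.sum(axis=0, keepdims=True)   # normalise bases so the penalty bites

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the audio setting of the paper, V would be a spectral magnitude representation, the columns of W the discovered auditory objects, and the rows of H their sparse activation patterns.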

Collaboration


Dive into Barak A. Pearlmutter's collaborations.

Top Co-Authors

Michael Zibulevsky

Technion – Israel Institute of Technology

Anthony M. Zador

Cold Spring Harbor Laboratory

Kevin J. Lang

Carnegie Mellon University
