Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Larry M. Manevitz is active.

Publication


Featured researches published by Larry M. Manevitz.


Neural Networks | 1996

Approximating functions by neural networks: a constructive solution in the uniform norm

Mark Meltser; Moshe Shoham; Larry M. Manevitz

A method for constructively approximating functions in the uniform (i.e., maximal error) norm by successive changes in the weights and number of neurons in a neural network is developed. This is a realization of the approximation results of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe, White, Gallant, Funahashi, Leshno et al., and others. The constructive approximation in the uniform norm is more appropriate for a number of examples, such as robotic arm motion, and stands in contrast with more standard methods, such as back-propagation, which approximate only in the average error norm. Copyright 1996 Elsevier Science Ltd
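The abstract contrasts approximation in the uniform (maximal error) norm with the average-error norm used by back-propagation. As a rough illustration, not the paper's constructive procedure, the sketch below fits nested random-feature tanh networks to sin(x) by least squares and measures the uniform error on a grid; all parameters and the target function are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and evaluation grid.
x = np.linspace(0.0, np.pi, 200)
f = np.sin(x)

def fit_and_max_error(n_neurons, W, b):
    """Fit output weights by least squares and return the uniform
    (maximal) error on the grid, using the first n_neurons hidden units."""
    H = np.tanh(np.outer(x, W[:n_neurons]) + b[:n_neurons])  # hidden activations
    H = np.column_stack([H, np.ones_like(x)])                # output bias column
    coef, *_ = np.linalg.lstsq(H, f, rcond=None)
    return np.max(np.abs(H @ coef - f))

# A fixed pool of random hidden units; the larger network reuses the
# smaller network's units, so the two models are nested.
W = rng.normal(0.0, 2.0, size=40)
b = rng.uniform(-np.pi, np.pi, size=40)

err_small = fit_and_max_error(5, W, b)
err_large = fit_and_max_error(40, W, b)
print(err_small, err_large)  # the larger network's maximal error is smaller
```

Note that least squares minimizes the average-error norm; the paper's point is precisely that controlling the maximal error instead requires a different, constructive procedure.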


BMC Bioinformatics | 2009

Classification and biomarker identification using gene network modules and support vector machines

Malik Yousef; Mohamed Ketany; Larry M. Manevitz; Louise C. Showe; Michael K. Showe

Background: Classification using microarray datasets is usually based on a small number of samples for which tens of thousands of gene expression measurements have been obtained. The selection of the genes most significant to the classification problem is a challenging issue in high-dimension data analysis and interpretation. A previous study with SVM-RCE (Recursive Cluster Elimination) suggested that classification based on groups of correlated genes sometimes exhibits better performance than classification using single genes. Large databases of gene interaction networks provide an important resource for the analysis of genetic phenomena and for classification studies using interacting genes. We now demonstrate that an algorithm which integrates network information with recursive feature elimination based on SVM exhibits good performance and improves the biological interpretability of the results. We refer to the method as SVM with Recursive Network Elimination (SVM-RNE).

Results: Initially, one thousand genes selected by t-test from a training set are filtered so that only genes that map to a gene network database remain. The Gene Expression Network Analysis Tool (GXNA) is applied to the remaining genes to form n clusters of genes that are highly connected in the network. Linear SVM is used to classify the samples using these clusters, and a weight is assigned to each cluster based on its importance to the classification. The least informative clusters are removed while retaining the remainder for the next classification step. This process is repeated until an optimal classification is obtained.

Conclusion: More than 90% accuracy can be obtained in classification of selected microarray datasets by integrating the interaction network information with the gene expression information from the microarrays. The Matlab version of SVM-RNE can be downloaded from http://web.macam.ac.il/~myousef
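The recursive elimination loop the abstract describes (score gene clusters with a linear classifier, drop the least informative cluster, repeat) can be sketched on synthetic data. This is not the SVM-RNE pipeline: the GXNA network clustering is replaced by fixed synthetic clusters, and the linear SVM by a simple centroid-difference linear scorer, so the sketch needs only NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class expression data: 4 "network clusters" of 5 genes each.
# Only cluster 0 carries a class-dependent mean shift; the rest are noise.
n_per_class, genes_per_cluster, n_clusters = 20, 5, 4
X = rng.normal(size=(2 * n_per_class, n_clusters * genes_per_cluster))
y = np.array([0] * n_per_class + [1] * n_per_class)
X[y == 1, :genes_per_cluster] += 2.0  # the informative cluster

clusters = {c: list(range(c * genes_per_cluster, (c + 1) * genes_per_cluster))
            for c in range(n_clusters)}

def cluster_score(genes):
    """Importance of a gene cluster: squared weights of a centroid-difference
    linear classifier (a crude stand-in for the linear-SVM weights that
    SVM-RNE assigns to each cluster)."""
    w = X[y == 1][:, genes].mean(axis=0) - X[y == 0][:, genes].mean(axis=0)
    return float(np.sum(w ** 2))

# Recursive elimination: repeatedly drop the least informative cluster.
while len(clusters) > 1:
    worst = min(clusters, key=lambda c: cluster_score(clusters[c]))
    del clusters[worst]

print(list(clusters))  # the informative cluster survives elimination
```

In the real method the surviving clusters would be re-classified with SVM at each round and the stopping point chosen by classification performance rather than by a fixed target count.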


Neural Plasticity | 2015

Decoding the Formation of New Semantics: MVPA Investigation of Rapid Neocortical Plasticity during Associative Encoding through Fast Mapping

Tali Atir-Sharon; Asaf Gilboa; Hananel Hazan; Ester Koilis; Larry M. Manevitz

Neocortical structures typically only support slow acquisition of declarative memory; however, learning through fast mapping may facilitate rapid learning-induced cortical plasticity and hippocampal-independent integration of novel associations into existing semantic networks. During fast mapping the meaning of new words and concepts is inferred, and durable novel associations are incidentally formed, a process thought to support early childhood's exuberant learning. The anterior temporal lobe, a cortical semantic memory hub, may critically support such learning. We investigated encoding of semantic associations through fast mapping using fMRI and multivoxel pattern analysis. Subsequent memory performance following fast mapping was more efficiently predicted using anterior temporal lobe than hippocampal voxels, while standard explicit encoding was best predicted by hippocampal activity. Searchlight algorithms revealed additional activity patterns that predicted successful fast mapping semantic learning located in lateral occipitotemporal and parietotemporal neocortex and ventrolateral prefrontal cortex. By contrast, successful explicit encoding could be classified by activity in medial and dorsolateral prefrontal and parahippocampal cortices. We propose that fast mapping promotes incidental rapid integration of new associations into existing neocortical semantic networks by activating related, nonoverlapping conceptual knowledge. In healthy adults, this is better captured by unique anterior and lateral temporal lobe activity patterns, while hippocampal involvement is less predictive of this kind of learning.


IEEE Convention of Electrical and Electronics Engineers in Israel | 2012

Early diagnosis of Parkinson's disease via machine learning on speech data

Hananel Hazan; Dan Hilu; Larry M. Manevitz; Lorraine O. Ramig; Shimon Sapir

Using two distinct data sets (from the USA and Germany) of healthy controls and patients with early or mild stages of Parkinson's disease, we show that machine learning tools can be used for the early diagnosis of Parkinson's disease from speech data. This could potentially be applicable before physical symptoms appear. In addition, we show that while the training phase of the machine learning process from one country can be reused in the other, different features dominate in each country, presumably because of language differences. Three results are presented: (i) separate training and testing by each country (close to the 85% range); (ii) pooled training and testing (about the 80% range); and (iii) cross-country (training in one and testing in the other) (about the 75% range). We discovered that different feature sets were needed for each country (language).
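The three evaluation regimes the abstract enumerates (within-country, pooled, and cross-country) can be sketched on synthetic stand-in data. Everything here is hypothetical: the "speech features" are Gaussian blobs whose informative dimensions differ per country (mirroring the finding that different features dominate per language), and the classifier is a nearest-centroid model rather than the tools used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_country(shift, n=100, d=6):
    """Synthetic 'speech features': controls vs. early-PD classes separated
    along country-specific feature directions (a stand-in for recordings)."""
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += shift
    return X, y

def centroid_accuracy(Xtr, ytr, Xte, yte):
    """Train a nearest-centroid classifier and report test accuracy."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

# Different features dominate in each "country": disjoint shift directions.
shift_us = np.array([2.0, 2.0, 0, 0, 0, 0])
shift_de = np.array([0, 0, 2.0, 2.0, 0, 0])
X_us, y_us = make_country(shift_us)
X_de, y_de = make_country(shift_de)

within = centroid_accuracy(X_us[:70], y_us[:70], X_us[70:], y_us[70:])
pooled = centroid_accuracy(np.vstack([X_us[:70], X_de[:70]]),
                           np.concatenate([y_us[:70], y_de[:70]]),
                           np.vstack([X_us[70:], X_de[70:]]),
                           np.concatenate([y_us[70:], y_de[70:]]))
cross = centroid_accuracy(X_us, y_us, X_de, y_de)
print(within, pooled, cross)  # within > pooled > cross, as in the abstract
```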


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2000

Document classification on neural networks using only positive examples (poster session)

Larry M. Manevitz; Malik Yousef

In this paper, we show how a simple feed-forward neural network can be trained to filter documents when only positive information is available, and that this method seems to be superior to more standard methods, such as tf-idf retrieval based on an “average vector”. A novel experimental finding is that retrieval is enhanced substantially in this context by carrying out a certain kind of uniform transformation (“Hadamard”) of the information prior to the training of the network.
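The abstract does not spell out the transform's details, but the standard Sylvester-Hadamard construction gives the flavor of such a uniform, invertible preprocessing step. The sketch below (with a made-up tf-idf-style vector, and the network training itself omitted) shows that the transform spreads each feature across all coordinates while losing no information.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# A toy tf-idf-style document vector (hypothetical values for illustration).
doc = np.array([0.0, 0.7, 0.1, 0.0, 0.4, 0.0, 0.2, 0.0])

n = len(doc)
H = hadamard(n)
transformed = (H @ doc) / np.sqrt(n)  # orthonormal, so lengths are preserved

# The transform is invertible (H is symmetric and H @ H = n * I), so no
# information is lost; it only mixes each feature into all coordinates
# before the network is trained.
recovered = (H @ transformed) / np.sqrt(n)
print(np.allclose(recovered, doc))  # True
```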


Neurocomputing | 1997

Assigning meaning to data: Using sparse distributed memory for multilevel cognitive tasks

Larry M. Manevitz; Yigal Zemach

It is shown how a single homogeneous SDM memory can be organized to link between low level information and high level correlations. To illustrate this, we report on experiments run in a unified memory retrieval system, that combined pattern recognition of individual English characters followed by the assignment of ‘meaning’ to a string by giving it a Hebrew translation. Symmetry allows the reverse action on the same memory (i.e. Hebrew character identification followed by translation of a string to English).
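A minimal Kanerva-style sparse distributed memory, not the paper's multilevel English/Hebrew system, can illustrate the underlying SDM mechanics: patterns are written to all hard locations within a Hamming radius of the address, and reads sum the counters of the activated locations. All sizes and the radius below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

DIM, N_LOCATIONS, RADIUS = 64, 2000, 26

hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, DIM))
counters = np.zeros((N_LOCATIONS, DIM))

def active(address):
    """Locations whose hard address is within Hamming RADIUS of the cue."""
    return (hard_addresses != address).sum(axis=1) <= RADIUS

def write(address, data):
    """Distribute the data over all activated locations' counters."""
    counters[active(address)] += 2 * data - 1

def read(address):
    """Pool the activated counters and threshold back to a binary pattern."""
    return (counters[active(address)].sum(axis=0) >= 0).astype(int)

pattern = rng.integers(0, 2, size=DIM)
write(pattern, pattern)   # autoassociative storage

noisy = pattern.copy()
noisy[:3] ^= 1            # corrupt 3 of the 64 cue bits
print((read(noisy) == pattern).all())  # the clean pattern is recovered
```

The "multilevel" use in the paper layers such a memory so that low-level retrievals (character recognition) feed higher-level associations (translation) within the same homogeneous store.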


Computing Systems in Engineering | 1993

Heuristic finite element node numbering

Larry M. Manevitz; Dan Givoli; Micha Margi

The design and implementation of a preliminary (MARK I) version of a system for the numbering of the nodes of a given finite element mesh is described. The numbering is performed in such a way as to minimize the average bandwidth of the finite element stiffness matrix. The numbering procedure is heuristic and is designed to mimic the methods of a human expert numberer. The performance of the system is demonstrated by means of a number of two-dimensional examples.
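The paper's expert-mimicking heuristic is not reproduced here, but the classical Cuthill-McKee renumbering (the standard baseline for bandwidth reduction) shows what such a system optimizes: relabeling mesh nodes so that connected nodes get nearby numbers, shrinking the stiffness-matrix bandwidth. The chain "mesh" below is a deliberately tiny illustration.

```python
from collections import deque

# Adjacency of a 12-node "mesh" (here a simple chain) whose nodes were
# labeled in an unhelpful order, inflating the stiffness-matrix bandwidth.
order_on_chain = [0, 6, 2, 8, 4, 10, 1, 7, 3, 9, 5, 11]
edges = list(zip(order_on_chain, order_on_chain[1:]))
adj = {v: set() for v in range(12)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

def bandwidth(numbering):
    """Max |number(i) - number(j)| over edges: the matrix bandwidth."""
    pos = {v: i for i, v in enumerate(numbering)}
    return max(abs(pos[a] - pos[b]) for a, b in edges)

def cuthill_mckee(adj):
    """Classical Cuthill-McKee: breadth-first search from a minimum-degree
    node, visiting neighbors in order of increasing degree (the standard
    algorithmic baseline, not the paper's expert-system heuristic)."""
    start = min(adj, key=lambda v: len(adj[v]))
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in sorted(adj[v] - seen, key=lambda u: len(adj[u])):
            seen.add(u); queue.append(u)
    return order

before = bandwidth(list(range(12)))
after = bandwidth(cuthill_mckee(adj))
print(before, after)  # → 9 1
```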


IEEE Convention of Electrical and Electronics Engineers in Israel | 2012

Temporal pattern recognition via temporal networks of temporal neurons

Alex Frid; Hananel Hazan; Larry M. Manevitz

We show that real-valued continuous functions can be recognized in a reliable way, with good generalization ability, using an adapted version of the Liquid State Machine (LSM) that receives direct real-valued input. Furthermore, this system works without the need for preliminary extraction of signal-processing features, avoiding the discretization and encoding that have plagued earlier attempts at this task. We show this is effective on a simulated signal designed to have the properties of a physical trace of human speech. The main changes to the basic liquid state machine paradigm are (i) external stimulation of neurons by normalized real values, (ii) adaptation of the integrate-and-fire neurons in the liquid to have a history-dependent sliding threshold, and (iii) topological constraints on the network connectivity.
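Change (ii), the history-dependent sliding threshold, can be sketched for a single neuron: each spike raises the firing threshold, which then decays back toward its resting value, so a constant input produces lengthening inter-spike intervals. The parameters below are hypothetical, not those of the paper's liquid.

```python
# A single integrate-and-fire neuron with a history-dependent sliding
# threshold, driven by a constant real-valued input (illustrative values).
theta0, theta = 1.0, 1.0   # resting and current threshold
jump, decay = 0.5, 0.02    # threshold jump per spike; decay rate toward theta0
v, inp = 0.0, 0.1          # membrane value and constant input per step

spike_times = []
for t in range(1000):
    v += inp                              # integrate the real-valued input
    if v >= theta:                        # fire and reset
        spike_times.append(t)
        v = 0.0
        theta += jump                     # firing raises the threshold ...
    theta += (theta0 - theta) * decay     # ... which then slides back down

isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
print(isis[0], isis[-1])  # early inter-spike intervals are shorter
```

The sliding threshold gives the neuron a firing history, which is one way the liquid can encode temporal structure without discretizing the input.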


International Journal of Pattern Recognition and Artificial Intelligence | 2007

Efficient Collaborative Filtering in Content-Addressable Spaces

Shlomo Berkovsky; Yaniv Eytani; Larry M. Manevitz

Collaborative Filtering (CF) is currently one of the most popular and most widely used personalization techniques. It generates personalized predictions based on the assumption that users with similar tastes prefer similar items. One of the major drawbacks of the CF from the computational point of view is its limited scalability since the computational effort required by the CF grows linearly both with the number of available users and items. This work proposes a novel efficient variant of the CF employed over a multidimensional content-addressable space. The proposed approach heuristically decreases the computational effort required by the CF algorithm by limiting the search process only to potentially similar users. Experimental results demonstrate that the proposed heuristic approach is capable of generating predictions with high levels of accuracy, while significantly improving the performance in comparison with the traditional implementations of the CF.
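The core CF step the abstract describes, predicting a rating from similar users while searching only a candidate set rather than all users, can be sketched as follows. The content-addressable space itself is not implemented; the `candidates` list simply stands in for the potentially similar users it would return, and the ratings are invented toy values.

```python
import numpy as np

# Toy ratings matrix (0 = unrated); rows are users, columns are items.
R = np.array([
    [5, 4, 1, 0],   # user 0: rating for item 3 is unknown
    [5, 4, 1, 5],   # user 1: identical tastes to user 0 on co-rated items
    [1, 2, 5, 1],   # user 2: opposite tastes
    [1, 1, 5, 2],   # user 3: opposite tastes
])

def cosine(u, v):
    """Cosine similarity over co-rated items only."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(user, item, candidates, k=2):
    """Predict a rating from the k most similar users, searched only among
    `candidates` rather than the full user population."""
    sims = sorted(((cosine(R[user], R[v]), v) for v in candidates
                   if v != user and R[v, item] > 0), reverse=True)[:k]
    num = sum(s * R[v, item] for s, v in sims)
    den = sum(abs(s) for s, v in sims)
    return num / den if den else 0.0

# With a good candidate set, the like-minded user dominates the prediction.
print(predict(0, 3, candidates=[1, 2, 3], k=1))  # → 5.0
```

Restricting the similarity search to the candidate set is what breaks the linear growth in the number of users that the abstract identifies as CF's scalability bottleneck.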


Neural Processing Letters | 1997

Interweaving Kohonen maps of different dimensions to handle measure zero constraints on topological mappings

Larry M. Manevitz

The usual “Kohonen” algorithm uses samples of points in a domain to develop a topological correspondence between a grid of “neurons” and a continuous domain. “Topological” means that near points are mapped to near points. However, for many applications there are additional constraints, given by sets of measure zero, which are not preserved by this method because of insufficient sampling. In particular, boundary points do not typically map to boundary points: the likelihood of a sample point from a two-dimensional domain falling on the boundary is zero for continuous data and extremely small for numerical data. A specific application (assigning meshes for the finite element method) was recently solved by interweaving a two-dimensional Kohonen mapping on the entire grid with a one-dimensional Kohonen mapping on the boundary. While the precise method of interweaving was heuristic, the underlying rationale seems widely applicable. This general method is problem-independent and suggests a direct generalization to higher dimensions as well.
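A one-dimension-lower analogue of the interweaving idea can be sketched directly: a 1-D Kohonen map on [0, 1] trained only on interior samples leaves its end neurons away from the boundary (the boundary {0, 1} has measure zero), so updates on the boundary points are interwoven with the ordinary training steps. This is a crude illustration in the spirit of the paper, not its 2-D mesh-assignment system, and all learning rates and schedules are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ten neuron weights for a 1-D Kohonen map over the domain [0, 1].
w = np.sort(rng.uniform(0.2, 0.8, size=10))

def som_step(x, lr, radius):
    """Standard 1-D Kohonen update: move the winning neuron and its
    grid neighbors toward the sample x."""
    win = int(np.argmin(np.abs(w - x)))
    for j in range(len(w)):
        if abs(j - win) <= radius:
            w[j] += lr * (x - w[j])

for t in range(2000):
    som_step(rng.uniform(0.0, 1.0), lr=0.1, radius=1)   # interior phase
    if t % 5 == 0:                                      # interwoven phase:
        w[0] += 0.1 * (0.0 - w[0])                      # pull end neurons
        w[-1] += 0.1 * (1.0 - w[-1])                    # onto the boundary

print(round(float(w[0]), 2), round(float(w[-1]), 2))  # near 0 and 1
```

Without the interwoven phase the end neurons settle strictly inside the domain, which is exactly the measure-zero failure the paper addresses.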

Collaboration


Dive into Larry M. Manevitz's collaborations.

Top Co-Authors

Dan Givoli

Technion – Israel Institute of Technology
