
Publication


Featured research published by José M. Jerez.


Biological Cybernetics | 2007

Neuronal selectivity, population sparseness, and ergodicity in the inferior temporal visual cortex

Leonardo Franco; Edmund T. Rolls; Nikolaos C. Aggelopoulos; José M. Jerez

The sparseness of the encoding of stimuli by single neurons and by populations of neurons is fundamental to understanding the efficiency and capacity of representations in the brain, and was addressed as follows. The selectivity and sparseness of firing to visual stimuli of single neurons in the primate inferior temporal visual cortex were measured for a set of 20 visual stimuli, including objects and faces, in macaques performing a visual fixation task. Only neurons with significantly different responses to the stimuli were analysed. The firing rate distribution of 36% of the neurons was exponential. Twenty-nine percent of the neurons had too few low rates to be fitted by an exponential distribution and were fitted by a gamma distribution. Interestingly, the raw firing rate distribution taken across all neurons fitted an exponential distribution very closely. The sparseness a_s, or selectivity, of the representation of the set of 20 stimuli provided by each of these neurons (which takes a maximal value of 1.0) had an average across all neurons of 0.77, indicating a rather distributed representation. The sparseness of the representation of a given stimulus by the whole population of neurons, the population sparseness a_p, also had an average value of 0.77. The similarity of the average single-neuron selectivity a_s and the population sparseness a_p for any one stimulus taken at any one time shows that the representation is weakly ergodic. For this to occur, the different neurons must have uncorrelated tuning profiles to the set of stimuli.
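
Both measures are the same statistic applied along different axes of a neurons-by-stimuli firing-rate matrix. The sketch below is a minimal illustration, assuming the standard Rolls-Tovee sparseness definition a = (mean r)^2 / mean(r^2); the data and variable names are placeholders, not from the paper.

```python
import numpy as np

def sparseness(rates):
    """Rolls-Tovee sparseness: a = (mean r)^2 / mean(r^2); maximal value 1.0."""
    r = np.asarray(rates, dtype=float)
    return (r.mean() ** 2) / (r ** 2).mean()

# rates[i, j]: firing rate of neuron i to stimulus j (hypothetical data)
rng = np.random.default_rng(0)
rates = rng.gamma(shape=1.0, scale=10.0, size=(50, 20))

# Single-neuron selectivity a_s: sparseness of each neuron's profile over stimuli
a_s = np.array([sparseness(rates[i, :]) for i in range(rates.shape[0])])

# Population sparseness a_p: sparseness over neurons for each single stimulus
a_p = np.array([sparseness(rates[:, j]) for j in range(rates.shape[1])])

# Weak ergodicity corresponds to the two averages agreeing
print(a_s.mean(), a_p.mean())
```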


PLOS ONE | 2014

A microRNA Signature Associated with Early Recurrence in Breast Cancer

Luis G. Pérez-Rivas; José M. Jerez; Rosario Carmona; Vanessa de Luque; Luis Vicioso; M. Gonzalo Claros; Enrique Viguera; Bella Pajares; Alfonso Sánchez; Nuria Ribelles; Emilio Alba; José Lozano

Recurrent breast cancer occurring after the initial treatment is associated with poor outcome. A bimodal relapse pattern after surgery for primary tumor has been described, with peaks of early and late recurrence occurring at about 2 and 5 years, respectively. Although several clinical and pathological features have been used to discriminate between low- and high-risk patients, the identification of molecular biomarkers with prognostic value remains an unmet need in the current management of breast cancer. Using microarray-based technology, we have performed a microRNA expression analysis in 71 primary breast tumors from patients who either remained disease-free at 5 years post-surgery (group A) or developed early (group B) or late (group C) recurrence. Unsupervised hierarchical clustering of microRNA expression data segregated tumors into two groups, mainly corresponding to patients with early recurrence and those with no recurrence. Microarray data analysis and RT-qPCR validation led to the identification of a set of 5 microRNAs (the 5-miRNA signature) differentially expressed between these two groups: miR-149, miR-10a, miR-20b, miR-30a-3p and miR-342-5p. All five microRNAs were down-regulated in tumors from patients with early recurrence. We show here that the 5-miRNA signature defines a high-risk group of patients with shorter relapse-free survival and has predictive value to discriminate non-relapsing versus early-relapsing patients (AUC = 0.993, p-value < 0.05). Network analysis based on miRNA-target interactions curated by public databases suggests that down-regulation of the 5-miRNA signature in the subset of early-relapsing tumors would result in an overall increased proliferative and angiogenic capacity. In summary, we have identified a set of recurrence-related microRNAs with potential prognostic value to identify patients who will likely develop metastasis early after primary breast surgery.
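
As a rough illustration of the analysis pipeline described above (unsupervised hierarchical clustering of expression data, then scoring a small signature by AUC), the sketch below uses scipy and scikit-learn on synthetic placeholder data; the study itself used microarray profiling with RT-qPCR validation, and every name and number here is illustrative only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# expr[i, j]: expression of miRNA j in tumor i (hypothetical; 71 tumors as in the study)
expr = rng.normal(size=(71, 300))
relapse = rng.integers(0, 2, size=71)  # 1 = early recurrence (placeholder labels)

# Unsupervised hierarchical clustering of tumors on the full expression matrix
tree = linkage(expr, method="average", metric="correlation")
groups = fcluster(tree, t=2, criterion="maxclust")

# Score a small signature (here an arbitrary 5 columns) with a simple classifier
signature = expr[:, :5]
clf = LogisticRegression().fit(signature, relapse)
auc = roc_auc_score(relapse, clf.predict_proba(signature)[:, 1])
print(groups[:10], auc)
```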


IEEE Transactions on Circuits and Systems | 2008

A New Decomposition Algorithm for Threshold Synthesis and Generalization of Boolean Functions

José Luis Subirats; José M. Jerez; Leonardo Franco

A new algorithm for obtaining efficient architectures composed of threshold gates that implement arbitrary Boolean functions is introduced. The method reduces the complexity of a given target function by splitting the function according to the variable with the highest influence. The procedure is applied iteratively until a set of threshold functions is obtained, leading to reduced-depth architectures in which the obtained threshold functions form the nodes and an AND or an OR function is the output of the architecture. The algorithm is tested on a large set of benchmark functions and the results are compared to previously existing solutions, showing a considerable reduction in the number of gates and levels of the obtained architectures. An extension of the method to partially defined functions is also introduced, and the generalization ability of the method is analyzed.
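
A minimal sketch of the influence-driven splitting step is given below, assuming the usual definition of a variable's influence (the fraction of inputs on which flipping that variable flips the function's value); the threshold-realizability test and the recombination into an AND/OR output stage are omitted, so this is not the paper's full procedure.

```python
from itertools import product

def influence(truth, n, i):
    """Fraction of inputs x where flipping bit i changes f(x)."""
    flips = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[i] ^= 1
        if truth[x] != truth[tuple(y)]:
            flips += 1
    return flips / 2 ** n

def split_on_max_influence(truth, n):
    """Split f into its two cofactors on the highest-influence variable."""
    i = max(range(n), key=lambda i: influence(truth, n, i))
    cof0 = {x: v for x, v in truth.items() if x[i] == 0}
    cof1 = {x: v for x, v in truth.items() if x[i] == 1}
    return i, cof0, cof1

# Toy target: XOR of 3 variables (not a threshold function, so it must be split)
n = 3
truth = {x: sum(x) % 2 for x in product([0, 1], repeat=n)}
i, cof0, cof1 = split_on_max_influence(truth, n)
print("split on x%d" % i)
```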


TAEBC-2009 | 2009

Constructive Neural Networks

Leonardo Franco; David A. Elizondo; José M. Jerez

The book is a collection of invited papers on constructive methods for neural networks. Most of the chapters are extended versions of works presented at the special session on constructive neural network algorithms of the 18th International Conference on Artificial Neural Networks (ICANN 2008), held September 3-6, 2008 in Prague, Czech Republic. The book is devoted to constructive neural networks and other incremental learning algorithms that constitute an alternative to standard trial-and-error methods for searching for adequate architectures. It comprises 15 articles which provide an overview of the most recent advances in the techniques being developed for constructive neural networks and their applications. It will be of interest to researchers in industry and academia and to post-graduate students interested in the latest advances and developments in the field of artificial neural networks.


Neural Processing Letters | 2009

Neural network architecture selection: can function complexity help?

Iván Gómez; Leonardo Franco; José M. Jerez

This work analyzes the problem of selecting an adequate neural network architecture for a given function, comparing existing approaches and introducing a new one based on the complexity of the function under analysis. Numerical simulations using a large set of Boolean functions are carried out, and a comparative analysis of the results is performed according to the architectures that the different techniques suggest and the generalization ability obtained in each case. The results show that a procedure that utilizes the complexity of the function can help to achieve almost optimal results, despite some variability in the generalization ability within similar complexity classes of functions.
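
The abstract does not specify which complexity measure is used, but a common proxy for Boolean function complexity is average sensitivity; the sketch below, under that assumption, computes it from a truth table and maps it to a hidden-layer size through a purely illustrative rule that is not the paper's procedure.

```python
from itertools import product

def average_sensitivity(truth, n):
    """Mean number of single-bit flips that change f(x), averaged over inputs x."""
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            total += truth[x] != truth[tuple(y)]
    return total / 2 ** n

def suggest_hidden_units(truth, n, scale=4):
    """Illustrative rule only: more complex functions get larger architectures."""
    return max(1, round(scale * average_sensitivity(truth, n)))

n = 4
parity = {x: sum(x) % 2 for x in product([0, 1], repeat=n)}  # maximally sensitive
print(average_sensitivity(parity, n), suggest_hidden_units(parity, n))
```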


Constructive Neural Networks | 2009

Constructive Neural Network Algorithms for Feedforward Architectures Suitable for Classification Tasks

Maria do Carmo Nicoletti; João Roberto Bertini; David A. Elizondo; Leonardo Franco; José M. Jerez

This chapter presents and discusses several well-known constructive neural network algorithms suitable for constructing feedforward architectures aimed at classification tasks involving two classes. The algorithms are divided into two groups: those directed by the minimization of classification errors and those based on a sequential model. Although the focus is on two-class classification algorithms, the chapter also briefly comments on the multiclass versions of several two-class algorithms, highlights some of the most popular constructive algorithms for regression problems, and refers to several other alternative algorithms.


BMC Cancer | 2013

Differential outcome of concurrent radiotherapy plus epidermal growth factor receptor inhibitors versus radiotherapy plus cisplatin in patients with human papillomavirus-related head and neck cancer

Bella Pajares; J. M. Trigo; Maria Dolores Toledo; Martina Álvarez; C González-Hermoso; Antonio Rueda; José Antonio Medina; Vanessa de Luque; José M. Jerez; Emilio Alba

Background: Human papillomavirus (HPV)-related head and neck cancer has been associated with an improved prognosis in patients treated with radiotherapy (RT) +/− chemotherapy (CT); however, RT combined with epidermal growth factor receptor (EGFR) inhibitors has not been fully studied in this group of patients.

Methods: Immunohistochemical expression of p16 and PCR of HPV16 DNA were retrospectively analyzed in tumor blocks from 108 stage III/IV head and neck cancer patients treated with RT+CT (56) or RT+EGFR inhibitors (52). Disease-free survival (DFS) and overall survival (OS) were analyzed by the Kaplan-Meier method.

Results: DNA of HPV16 was found in 12 of 108 tumors (11%) and p16 positivity in 18 tumors (17%), with similar rates in both arms of treatment. After a median follow-up time of 35 months (range 6–135), p16-positive patients treated with RT+EGFR inhibitors showed improved survival compared with those treated with RT+CT (2-year OS 88% vs. 60%, HR 0.18; 95% CI 0.04 to 0.88; p = 0.01; and 2-year DFS 75% vs. 47%, HR 0.17; 95% CI 0.03 to 0.8; p = 0.01). However, no differences were observed in p16-negative patients (2-year OS 56% vs. 53%, HR 0.97; 95% CI 0.55 to 1.7; p = 0.9; and 2-year DFS 43% vs. 45%, HR 0.99; 95% CI 0.57 to 1.7; p = 0.9).

Conclusions: This is the first study to show that p16-positive patients may benefit more from RT+EGFR inhibitors than conventional RT+CT. These results are hypothesis-generating and should be confirmed in prospective trials.
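
For readers unfamiliar with the Kaplan-Meier method used above, the sketch below shows a typical two-arm survival comparison with the lifelines library; the follow-up times and event indicators are synthetic placeholders, not data from the study.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Synthetic follow-up times (months) and event indicators for two treatment arms
t_egfr = rng.exponential(60, size=18)  # RT + EGFR inhibitors (placeholder)
t_ct = rng.exponential(30, size=18)    # RT + CT (placeholder)
e_egfr = rng.integers(0, 2, size=18)   # 1 = event observed, 0 = censored
e_ct = rng.integers(0, 2, size=18)

# Kaplan-Meier estimate of the survival function for one arm
kmf = KaplanMeierFitter()
kmf.fit(t_egfr, event_observed=e_egfr, label="RT+EGFRi")
print(kmf.survival_function_.head())

# Log-rank test for a difference between the two survival curves
res = logrank_test(t_egfr, t_ct, event_observed_A=e_egfr, event_observed_B=e_ct)
print(res.p_value)
```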


IEEE Transactions on Industrial Informatics | 2014

FPGA Implementation of the C-Mantec Neural Network Constructive Algorithm

Francisco Ortega-Zamorano; José M. Jerez; Leonardo Franco

Competitive majority network trained by error correction (C-Mantec), a recently proposed constructive neural network algorithm that generates very compact architectures with good generalization capabilities, is implemented in a field programmable gate array (FPGA). A clear difference from most existing neural network implementations (most of them based on the backpropagation algorithm) is that C-Mantec automatically generates an adequate neural architecture while training on the data is performed. All the steps involved in the implementation, including the on-chip learning phase, are fully described, and a deep analysis of the results is carried out using two sets of benchmark problems. The results show a clear increase in computation speed in comparison to the standard personal computer (PC)-based implementation, demonstrating the usefulness of the intrinsic parallelism of FPGAs for neurocomputational tasks and the suitability of the hardware version of the C-Mantec algorithm for application to real-world problems.


Integrated Computer-aided Engineering | 2017

Layer multiplexing FPGA implementation for deep back-propagation learning

Francisco Ortega-Zamorano; José M. Jerez; Iván Gómez; Leonardo Franco

Training of large-scale neural networks, like those used nowadays in deep learning schemes, requires long computational times or the use of high-performance computing solutions such as those based on cluster computation, GPU boards, etc. As a possible alternative, in this work the Back-Propagation learning algorithm is implemented in an FPGA board using a layer multiplexing scheme, in which a single layer of neurons is physically implemented in parallel but can be reused any number of times in order to simulate multi-layer architectures. An on-chip implementation of the algorithm is carried out using a training/validation scheme in order to avoid overfitting effects. The hardware implementation is tested on several configurations, making it possible to simulate architectures comprising up to 127 hidden layers with a maximum of 60 neurons per layer. We confirmed the correct implementation of the algorithm and compared the computational times against C and Matlab code executed on a multicore supercomputer, observing a clear advantage for the proposed FPGA scheme. The layer multiplexing scheme provides a simple and flexible approach compared to standard implementations of the Back-Propagation algorithm, representing an important step towards the FPGA implementation of deep neural networks, one of the most novel and successful existing models for prediction problems.
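
The layer multiplexing idea (one physical layer reused across time steps) can be mimicked in software: the sketch below is a loose NumPy analogy rather than the hardware design, loading each layer's weights into the same fixed-size buffer before every forward step; sizes and activation are arbitrary choices.

```python
import numpy as np

MAX_NEURONS = 60  # capacity of the single physically implemented layer

def forward(x, layer_weights):
    """Forward pass reusing one fixed-size 'physical' layer for every level."""
    buf = np.zeros((MAX_NEURONS, MAX_NEURONS + 1))  # the one hardware layer
    a = x
    for W in layer_weights:  # W: (n_out, n_in + 1), with n_out <= MAX_NEURONS
        n_out, n_in1 = W.shape
        buf[:n_out, :n_in1] = W                      # time-multiplex: load weights
        z = buf[:n_out, :n_in1] @ np.append(a, 1.0)  # bias folded into last column
        a = np.tanh(z)
    return a

rng = np.random.default_rng(2)
sizes = [8] + [20] * 5 + [1]  # 5 hidden layers, all within the 60-neuron budget
weights = [rng.normal(size=(sizes[i + 1], sizes[i] + 1))
           for i in range(len(sizes) - 1)]
print(forward(rng.normal(size=8), weights))
```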


Neural Networks | 2012

C-Mantec: A novel constructive neural network algorithm incorporating competition between neurons

José Luis Subirats; Leonardo Franco; José M. Jerez

C-Mantec is a novel neural network constructive algorithm that combines competition between neurons with a stable modified perceptron learning rule. Neuron learning is governed by the thermal perceptron rule, which ensures stability of the acquired knowledge while the architecture grows and while the neurons compete for new incoming information. Competition makes it possible for existing neurons to keep learning even after new units have been added to the network, provided the incoming information is similar to their stored knowledge, and this constitutes a major difference from existing constructive algorithms. The new algorithm is tested on two different sets of benchmark problems: a Boolean function set used in logic circuit design and a well-studied set of real-world problems. Both sets were used to analyze the size of the constructed architectures and the generalization ability obtained, and to compare the results with those from other standard and well-known classification algorithms. The problem of overfitting is also analyzed, and a new built-in method to avoid its effects is devised and successfully applied within an active learning paradigm that filters noisy examples. The results show that the new algorithm generates very compact neural architectures with state-of-the-art generalization capabilities.
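
A minimal sketch of the thermal perceptron update at the heart of C-Mantec is given below, assuming Frean's formulation in which the weight change is damped by a factor (T/T0)·exp(-|phi|/T), so that confidently classified examples barely perturb stored knowledge; the competition and neuron-growing logic of C-Mantec itself is omitted, and the toy task is a placeholder.

```python
import numpy as np

def thermal_perceptron_step(w, x, target, T, T0):
    """One thermal perceptron update (after Frean), as used inside C-Mantec.

    w: weights with bias as last component; x: input with trailing 1.0;
    target: desired output in {0, 1}; T: current temperature; T0: initial one.
    """
    phi = w @ x                       # synaptic potential
    y = 1 if phi >= 0 else 0
    if y != target:
        # Update shrinks both with the temperature and with |phi|, so
        # large-margin (confident) decisions are only slightly disturbed.
        w = w + (target - y) * (T / T0) * np.exp(-abs(phi) / T) * x
    return w

rng = np.random.default_rng(3)
w = rng.normal(size=3)
T0 = T = 10.0
for _ in range(200):                  # learn AND(x1, x2) as a toy example
    x1, x2 = rng.integers(0, 2, size=2)
    w = thermal_perceptron_step(w, np.array([x1, x2, 1.0]), x1 & x2, T, T0)
    T = max(0.1, T * 0.99)            # anneal the temperature
print(w)
```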
