Publication


Featured research published by Iván Gómez.


Journal of Photochemistry and Photobiology B: Biology | 1998

Effects of solar radiation on photosynthesis, UV-absorbing compounds and enzyme activities of the green alga Dasycladus vermicularis from southern Spain

Iván Gómez; Eduardo Pérez-Rodríguez; Benjamín Viñegla; Félix L. Figueroa; Ulf Karsten

The effect of different wavebands of solar radiation (photosynthetically active radiation (PAR), ultraviolet A (UV-A) and ultraviolet B (UV-B)), produced by use of cut-off filters, on chlorophyll fluorescence of the green alga Dasycladus vermicularis was assessed in summer and autumn 1996 at a shallow site in Cabo de Gata-Níjar, southern Spain. Similar experiments were carried out under outdoor conditions at Málaga during summer and autumn 1997. In plants growing under in situ natural light conditions (2.5 m depth), the yield of variable chlorophyll fluorescence (ΔF/Fm′) decreases with increasing sunlight. The full solar spectrum (PAR + UV-A + UV-B) has more accentuated, longer-lasting effects on fluorescence than irradiation deprived of UV-B. In general, decreases in ΔF/Fm′ do not exceed 30% in the three treatments. Under outdoor conditions, photoinhibition, measured as a decrease in optimum quantum efficiency (Fv/Fm), varies between 40 and 75% with no obvious differences between treatments; however, recovery of photosynthesis after shade exposure is faster in plants treated with PAR + UV-A. Daily changes in nitrate reductase (NR) and carbonic anhydrase (CA) activities are antagonistic during the onset of natural radiation. The concentration of UV-absorbing compounds with maximum absorption at 348 and 332 nm is higher than that reported for other green algae. These compounds increase in plants exposed to the full solar spectrum (PAR + UV-A + UV-B) and decrease under PAR alone and PAR + UV-A conditions at noon, which points to a possible photoprotective mechanism. Overall, the data show that D. vermicularis is able to tolerate high solar radiation. Two physiological strategies appear to be active: dynamic photoinhibition at noon and an enhanced concentration of UV-screening substances.
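The fluorescence quantities in the abstract follow the standard PAM-fluorometry definitions: ΔF/Fm′ = (Fm′ − F)/Fm′ for light-adapted samples and Fv/Fm = (Fm − F0)/Fm for dark-adapted ones. A minimal sketch of these standard formulas (the numeric values are illustrative, not data from the paper):

```python
def effective_quantum_yield(F, Fm_prime):
    """Light-adapted yield ΔF/Fm' = (Fm' - F) / Fm'."""
    return (Fm_prime - F) / Fm_prime

def optimum_quantum_efficiency(F0, Fm):
    """Dark-adapted optimum efficiency Fv/Fm = (Fm - F0) / Fm."""
    return (Fm - F0) / Fm

# Illustrative numbers: percentage drop in Fv/Fm relative to a control
control = optimum_quantum_efficiency(F0=0.2, Fm=1.0)
midday = optimum_quantum_efficiency(F0=0.3, Fm=0.58)
photoinhibition = 100 * (1 - midday / control)  # roughly the 40% end of the reported range
```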


Neural Processing Letters | 2009

Neural network architecture selection: can function complexity help?

Iván Gómez; Leonardo Franco; José M. Jerez

This work analyzes the problem of selecting an adequate neural network architecture for a given function, comparing existing approaches and introducing a new one based on the complexity of the function under analysis. Numerical simulations using a large set of Boolean functions are carried out, and a comparative analysis of the results is made according to the architectures that the different techniques suggest and the generalization ability obtained in each case. The results show that a procedure that utilizes the complexity of the function can help to achieve almost optimal results, despite some variability in the generalization ability within classes of functions of similar complexity.


Integrated Computer-Aided Engineering | 2017

Layer multiplexing FPGA implementation for deep back-propagation learning

Francisco Ortega-Zamorano; José M. Jerez; Iván Gómez; Leonardo Franco

Training of large-scale neural networks, like those used nowadays in Deep Learning schemes, requires long computational times or high-performance computation solutions such as cluster computation, GPU boards, etc. As a possible alternative, in this work the Back-Propagation learning algorithm is implemented on an FPGA board using a layer multiplexing scheme, in which a single layer of neurons is physically implemented in parallel but can be reused any number of times in order to simulate multi-layer architectures. An on-chip implementation of the algorithm is carried out using a training/validation scheme in order to avoid overfitting effects. The hardware implementation is tested on several configurations, permitting the simulation of architectures comprising up to 127 hidden layers with up to 60 neurons per layer. We confirmed the correct implementation of the algorithm and compared the computational times against C and Matlab code executed on a multicore supercomputer, observing a clear advantage for the proposed FPGA scheme. The layer multiplexing scheme provides a simple and flexible approach in comparison to standard implementations of the Back-Propagation algorithm, representing an important step towards the FPGA implementation of deep neural networks, one of the most novel and successful existing models for prediction problems.
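The core idea of layer multiplexing — one physical layer reused with different weight sets instead of instantiating every layer — can be sketched in software. This is only a hedged analogue of the hardware scheme, with made-up sizes (the 127-layer depth mirrors the abstract's maximum):

```python
import numpy as np

def forward_multiplexed(x, weight_sets):
    """Reuse one 'physical' layer: the same matrix-multiply unit is
    iterated over per-layer weight sets, simulating a deep network."""
    a = x
    for W, b in weight_sets:
        a = np.tanh(a @ W + b)  # same compute block, different weights each pass
    return a

rng = np.random.default_rng(0)
# 127 small layers of 8 neurons each (illustrative sizes)
layers = [(rng.standard_normal((8, 8)) * 0.1, np.zeros(8)) for _ in range(127)]
out = forward_multiplexed(np.ones(8), layers)
```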


Cognitive Computation | 2010

Multiclass Pattern Recognition Extension for the New C-Mantec Constructive Neural Network Algorithm

José Luis Subirats; José M. Jerez; Iván Gómez; Leonardo Franco

The new C-Mantec algorithm constructs compact neural network architectures for classification problems, incorporating new features like competition between neurons and a built-in filtering stage for noisy examples. It was originally designed for tackling two-class problems, and in this work the extension of the algorithm to multiclass problems is analyzed. Three different approaches are investigated for extending the algorithm to multi-category pattern classification tasks: One-Against-All (OAA), One-Against-One (OAO), and P-Against-Q (PAQ). A set of benchmark problems of different sizes is used in order to analyze the prediction accuracy of the three implemented multiclass schemes and to compare the results to those obtained using three other standard classification algorithms.
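The One-Against-All reduction mentioned above trains one binary "class k vs. the rest" classifier per class and predicts by taking the class with the largest score. A hedged sketch, with simple least-squares linear scorers standing in for the C-Mantec-trained networks:

```python
import numpy as np

def oaa_fit(X, y, n_classes):
    """Fit one linear 'class k vs. rest' scorer per class
    (an illustrative stand-in for a binary neural network)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    W = []
    for k in range(n_classes):
        t = np.where(y == k, 1.0, -1.0)        # class k vs. the rest
        W.append(np.linalg.lstsq(Xb, t, rcond=None)[0])
    return np.array(W)

def oaa_predict(W, X):
    """Predict the class whose binary scorer gives the highest score."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 2])
W = oaa_fit(X, y, 3)
pred = oaa_predict(W, X)
```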


International Conference on Artificial Neural Networks | 2006

Optimal synthesis of Boolean functions by threshold functions

José Luis Subirats; Iván Gómez; José M. Jerez; Leonardo Franco

We introduce a new method for obtaining optimal architectures that implement arbitrary Boolean functions using threshold functions. The standard threshold circuits using threshold gates and weights are replaced by nodes that directly compute a threshold function of their inputs. The method can be considered exhaustive: if a solution exists, the algorithm will eventually find it. At all stages, different optimization strategies are introduced in order to make the algorithm as efficient as possible. The method is applied to the synthesis of circuits that implement a flip-flop and a multi-configurable gate. The advantages and disadvantages of the method are analyzed.
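A threshold node of the kind described here outputs 1 exactly when the weighted sum of its inputs reaches a threshold. A minimal sketch (the specific weights and thresholds are textbook examples, not taken from the paper's synthesis procedure, which searches for such parameters optimally):

```python
def threshold_node(inputs, weights, theta):
    """Output 1 iff the weighted input sum reaches the threshold theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

# Classic single-node realizations of AND and OR
AND = lambda a, b: threshold_node((a, b), (1, 1), 2)
OR = lambda a, b: threshold_node((a, b), (1, 1), 1)

# XOR is not a threshold function of its raw inputs: it needs a second layer
XOR = lambda a, b: threshold_node((OR(a, b), AND(a, b)), (1, -1), 1)
```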


International Conference on Artificial Neural Networks | 2006

Neural network architecture selection: size depends on function complexity

Iván Gómez; Leonardo Franco; José Luis Subirats; José M. Jerez

The relationship between generalization ability, neural network size and function complexity has been analyzed in this work. The dependence of the generalization process on the complexity of the function implemented by the neural architecture is studied using a recently introduced measure for the complexity of Boolean functions. Furthermore, an association rule discovery (ARD) technique is used to find associations among subsets of items in the whole set of simulation results. The main result of the paper is that, for a set of quasi-randomly generated Boolean functions, large neural networks generalize better on high-complexity functions than smaller networks, which perform better on low- and medium-complexity functions.
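One simplified reading of a boundary-counting complexity notion for Boolean functions is the fraction of Hamming-distance-1 input pairs on which the outputs differ. This sketch captures only that first-order idea, written from the general concept rather than the paper's exact definition of the measure:

```python
from itertools import product

def first_order_complexity(f, n):
    """Fraction of Hamming-distance-1 neighbor pairs with differing outputs,
    enumerated over all 2**n inputs (each unordered pair counted twice)."""
    differing = total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip one input bit
            differing += f(x) != f(tuple(y))
            total += 1
    return differing / total

parity = lambda x: sum(x) % 2  # maximally sensitive: every bit flip changes output
constant = lambda x: 0         # minimally sensitive
```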


The Scientific World Journal | 2014

The Generalization Complexity Measure for Continuous Input Data

Iván Gómez; Sergio A. Cannas; Omar Osenda; José M. Jerez; Leonardo Franco

We introduce in this work an extension of the generalization complexity measure to continuous input data. The measure, originally defined in Boolean space, quantifies the complexity of data in relation to the prediction accuracy that can be expected when using a supervised classifier such as a neural network or an SVM. We first extend the original measure to continuous functions and then, using an approach based on the set of Walsh functions, consider the practically relevant case of a finite number of data points (input/output pairs). Using a set of trigonometric functions, a model is constructed that relates the size of the hidden layer of a neural network to the complexity. Finally, we demonstrate the application of the introduced complexity measure, through the generated model, to the problem of estimating an adequate neural network architecture for real-world data sets.
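The Walsh functions mentioned above form an orthogonal ±1-valued basis on 2**n points and can be generated by the Sylvester-Hadamard construction. A sketch of the basis generation only (how the paper then uses the basis is not reproduced here):

```python
import numpy as np

def walsh_matrix(n):
    """Rows are the 2**n Walsh functions of length 2**n (Hadamard order),
    built by the recursive Sylvester construction H -> [[H, H], [H, -H]]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh_matrix(3)  # 8 mutually orthogonal ±1 functions sampled on 8 points
```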


Distributed Computing and Artificial Intelligence | 2016

Deep Neural Network Architecture Implementation on FPGAs Using a Layer Multiplexing Scheme

Francisco Ortega-Zamorano; José M. Jerez; Iván Gómez; Leonardo Franco

In recent years, predictive models based on Deep Learning strategies have achieved enormous success in several domains, including pattern recognition tasks, language translation, software design, etc. Deep Learning uses a combination of techniques to achieve its prediction accuracy, but essentially all existing approaches are based on multi-layer neural networks with deep architectures, i.e., several layers of processing units containing a large number of neurons. As the simulation of large networks requires heavy computational power, GPUs and cluster-based computation strategies have been successfully used. In this work, a layer multiplexing scheme is presented that permits the simulation of deep neural networks on FPGA boards. As a demonstration of the usefulness of the scheme, deep architectures trained by the classical Back-Propagation algorithm are simulated on FPGA boards and compared to standard implementations, showing the computation-speed advantages of the proposed scheme.


Computer Methods and Programs in Biomedicine | 2016

Supervised discretization can discover risk groups in cancer survival analysis

Iván Gómez; Nuria Ribelles; Leonardo Franco; Emilio Alba; José M. Jerez

Discretization of continuous variables is a common practice in medical research to identify groups of patients at risk. This work compares the performance of a gold-standard categorization procedure (the TNM+A protocol) with that of three supervised discretization methods from Machine Learning (CAIM, ChiM and DTree) in the stratification of patients with breast cancer. The performance of the discretization algorithms was evaluated based on the results obtained after applying standard survival analysis procedures such as Kaplan-Meier curves, Cox regression and predictive modelling. The results show that the application of alternative discretization algorithms could provide clinicians with valuable information for the diagnosis and outcome of the disease. Patient data were collected from the Medical Oncology Service of the Hospital Clínico Universitario (Málaga, Spain), considering a follow-up period from 1982 to 2008.
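The Kaplan-Meier curves used to compare the discretized risk groups estimate survival as a product over event times: S(t) = ∏ (1 − d_i / n_i), where d_i deaths occur among n_i subjects at risk. A minimal sketch with made-up follow-up data (not patient data from the study):

```python
def kaplan_meier(times, events):
    """Return (time, survival) points for the Kaplan-Meier estimator.
    events[i] = 1 means the event occurred at times[i]; 0 means censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:  # group tied times
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # step down at each event time
            curve.append((t, surv))
        at_risk -= removed                # events and censorings leave the risk set
    return curve

curve = kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1])  # illustrative data only
```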


International Conference on Neural Information Processing | 2012

Data discretization using the extreme learning machine neural network

Juan Jesús Carneros; José M. Jerez; Iván Gómez; Leonardo Franco

Data discretization is an important preprocessing step for several computational methods that work only with binary input data. In this work, a method for discretizing continuous data based on the Extreme Learning Machine neural network architecture is developed and tested. The new method does not use data labels for performing the discretization process and is thus suitable for both supervised and unsupervised data; also, as it is based on the Extreme Learning Machine, it is very fast even for large input data sets. The efficiency of the new method is analyzed on several benchmark functions, testing the classification accuracy obtained with raw and discretized data, and also in comparison to results from the application of a state-of-the-art supervised discretization algorithm. The results indicate the suitability of the developed approach.
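The speed of the approach comes from the Extreme Learning Machine itself: a random, untrained hidden layer followed by output weights fit in a single least-squares step. A hedged sketch of the ELM core only (the paper's discretization built on top of it is not reproduced; the target function below is illustrative):

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """Random hidden layer + one pseudoinverse solve for the output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # random, untrained feature map
    beta = np.linalg.pinv(H) @ T    # single least-squares step, no iteration
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
T = (X[:, 0] * X[:, 1] > 0).astype(float)  # a simple nonlinear toy target
model = elm_fit(X, T, 50, rng)
acc = np.mean((elm_predict(model, X) > 0.5) == T)
```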
