Publication


Featured research published by Antonio Cañas.


Microprocessors and Microsystems | 2006

Hardware description of multi-layer perceptrons with different abstraction levels

Eva M. Ortigosa; Antonio Cañas; Eduardo Ros; Pilar Martínez Ortigosa; Sonia Mota; Javier Díaz

This paper presents different hardware implementations of a multi-layer perceptron (MLP) for speech recognition. The designs have been defined at two abstraction levels: register transfer level and a higher, algorithmic-like level. The implementations have been developed and tested on reconfigurable hardware (FPGA) for embedded systems. We also present a comparative study of the costs of the two approaches in terms of silicon area, speed and required computational resources. The study is completed with different implementation versions with diverse degrees of parallelism. The final aim is to compare the methodologies applied at the two abstraction levels for designing hardware MLPs or similar computational structures.
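
As a point of reference for the kind of computation described above, the following sketch is a fixed-point software model of an MLP forward pass, of the sort that is typically mapped to register-transfer or algorithmic-level hardware descriptions. The layer sizes, the Q8 fixed-point format and the saturating activation are illustrative assumptions, not parameters taken from the paper.

import numpy as np

# Fixed-point software reference of an MLP forward pass (illustrative only).
FRAC_BITS = 8               # assumed Q8 fraction width
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    # Quantize floats to integer fixed-point values (round to nearest).
    return np.round(np.asarray(x) * SCALE).astype(np.int64)

def fixed_mlp_forward(x_fx, weights_fx, biases_fx):
    # Forward pass using integer arithmetic only, as a hardware model would.
    a = x_fx
    for W, b in zip(weights_fx, biases_fx):
        acc = (W @ a) // SCALE + b          # multiply-accumulate, rescale to Q8
        a = np.clip(acc, 0, SCALE)          # cheap saturating activation
    return a

rng = np.random.default_rng(0)
layer_sizes = [10, 8, 4]                    # assumed topology
weights = [to_fixed(0.5 * rng.standard_normal((n_out, n_in)))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [to_fixed(np.zeros(n_out)) for n_out in layer_sizes[1:]]
outputs = fixed_mlp_forward(to_fixed(rng.standard_normal(10)), weights, biases)
print(outputs / SCALE)                      # back to real values for inspection

Keeping the arithmetic entirely in integers is what makes the same description usable as a bit-accurate reference when exploring serial versus parallel hardware versions.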


International Work-Conference on Artificial and Natural Neural Networks | 1993

Optimization of a Competitive Learning Neural Network by Genetic Algorithms

Juan Julián Merelo Guervós; M. Patón; Antonio Cañas; Alberto Prieto; Federico Morán

In this paper we present the use of a genetic algorithm (GA) for the optimization, in clustering tasks, of a new kind of fast-learning neural network. The network uses a combination of supervised and unsupervised learning that makes it suitable for automatic tuning, by means of the GA, of the learning parameters and initial weights in order to obtain the highest recognition score. Simulation results are presented showing that, for relatively simple clustering tasks, the GA finds in a few generations the parameters of the network that lead to a classification accuracy close to 100%.
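
The sketch below illustrates the general idea under loose assumptions: a simple genetic algorithm searches over a learning rate and an initial-weight scale for a toy winner-take-all competitive layer, scoring each candidate by clustering accuracy. The data set, parameter ranges and GA operators are illustrative choices, not those used in the paper.

import numpy as np

rng = np.random.default_rng(1)

# Toy clustering data: three Gaussian blobs (illustrative, not from the paper).
centers = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 3.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
labels = np.repeat(np.arange(3), 50)

def train_competitive(lr, weight_scale, epochs=20):
    # Winner-take-all competitive layer; returns clustering accuracy (purity).
    W = weight_scale * rng.standard_normal((3, 2))    # one prototype per cluster
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])
    pred = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
    acc = 0.0
    for k in range(3):
        if np.any(pred == k):
            acc += np.max(np.bincount(labels[pred == k], minlength=3))
    return acc / len(X)

def genetic_search(pop_size=12, generations=10):
    # GA over (learning rate, initial-weight scale): elitist selection,
    # cloning of the best half and Gaussian mutation of the offspring.
    pop = rng.uniform([0.001, 0.01], [0.5, 2.0], size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([train_competitive(lr, ws) for lr, ws in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        children = parents[rng.integers(len(parents), size=pop_size // 2)].copy()
        children += 0.05 * rng.standard_normal(children.shape)
        pop = np.vstack([parents, np.clip(children, 1e-3, 2.0)])
    return pop[np.argmax([train_competitive(lr, ws) for lr, ws in pop])]

print("best (learning rate, initial-weight scale):", genetic_search())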


Field-Programmable Logic and Applications | 2003

FPGA Implementation of Multi-layer Perceptrons for Speech Recognition

Eva M. Ortigosa; Pilar Martínez Ortigosa; Antonio Cañas; Eduardo Ros; Rodrigo Agís; Julio Ortega

In this work we present different hardware implementations of a multi-layer perceptron for speech recognition. The designs have been defined at two abstraction levels: register transfer level (VHDL) and a higher, algorithmic-like level (Handel-C). The implementations have been developed and tested on reconfigurable hardware (FPGA) for embedded systems. A study of the costs (silicon area), speed and required computational resources of the two approaches is presented.


International Journal of Neural Systems | 2000

Short-term prediction of chaotic time series by using RBF network with regression weights

Ignacio Rojas; Jesús González; Antonio Cañas; Antonio F. Díaz; Fernando Rojas; Manuel Sánchez Rodríguez

We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced; this eliminates the symmetry restriction and gives the neurons in the hidden layer greater flexibility for function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented that adapts the structure of the network, making it possible to create new hidden units and also to detect and remove inactive ones. A salient feature of the network is that the overall output is calculated as the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated using the problem of short-term prediction of chaotic time series.
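
A minimal sketch of the ideas described above, assuming a per-dimension asymmetric width: each hidden unit applies a pseudo-Gaussian with a left and a right sigma, its output weights form a local linear (regression) model, and the overall output is the normalized weighted average over the receptive fields. All parameter values below are placeholders for illustration.

import numpy as np

def pseudo_gaussian(x, c, sigma_left, sigma_right):
    # Asymmetric (pseudo-Gaussian) activation: a different width on each side
    # of the centre c, per input dimension (illustrative formulation).
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2)).prod(axis=-1)

def pg_rbf_output(x, centers, sig_l, sig_r, reg_weights, reg_bias):
    # Normalized PG-RBF output: weighted average of the local linear models
    # (regression weights), weighted by each unit's receptive-field activation.
    phi = np.array([pseudo_gaussian(x, c, sl, sr)
                    for c, sl, sr in zip(centers, sig_l, sig_r)])
    local = reg_weights @ x + reg_bias        # one linear model per hidden unit
    return np.sum(phi * local) / (np.sum(phi) + 1e-12)

# Tiny usage example with assumed parameter values.
rng = np.random.default_rng(2)
K, D = 4, 3                                   # hidden units, input dimension
centers = rng.standard_normal((K, D))
sig_l = np.full((K, D), 0.8)
sig_r = np.full((K, D), 1.2)                  # asymmetry: right side wider
reg_weights = rng.standard_normal((K, D))
reg_bias = rng.standard_normal(K)
print(pg_rbf_output(rng.standard_normal(D), centers, sig_l, sig_r,
                    reg_weights, reg_bias))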


Archive | 2007

SWAD: Web System for Education Support

Antonio Cañas; D.J. Calandria; Eva M. Ortigosa; Eduardo Ros; Antonio F. Díaz

This chapter presents a platform for supporting education tasks, which we call SWAD (a Spanish acronym for Web System for Education Support). The platform has been gradually developed over the last 7 years and is currently used at the University of Granada in more than 578 subjects of different degrees. We describe the various web services the platform provides for students and educators, such as electronic index cards, class photographs, document downloading, student self-assessment through multiple-choice exams, online checking of grades, internal web mail, discussion forums and an electronic blackboard. The chapter also gives details about its implementation and provides evaluation statistics about its use and users' opinions after testing the platform.


International Work-Conference on Artificial and Natural Neural Networks | 2009

FPGA Implementation of a Perceptron-like Neural Network for Embedded Applications

Eva M. Ortigosa; Antonio Cañas; Eduardo Ros; Richard R. Carrillo

In this work we present several hardware implementations of a standard multi-layer perceptron and a modified version called the extended multi-layer perceptron. The implementations have been developed and tested on an FPGA prototyping board. The designs have been defined using a high-level hardware description language, which enables the study of different implementation versions with diverse levels of parallelism. The test-bed application addressed is speech recognition. The contribution presented in this paper can be seen as a low-cost portable system that can be easily modified. We include a short study of the implementation costs (silicon area), speed and required computational resources.


Analog Integrated Circuits and Signal Processing | 2002

Parameter Configurations for Hole Extraction in Cellular Neural Networks (CNN)

Mancia Anguita; F. Javier Fernández; Antonio F. Díaz; Antonio Cañas; Francisco J. Pelayo

It is shown that the holes of the objects in an input image may be obtained with a CT-CNN [1] or a DT-CNN [2] in a single transient using just one linear parameter configuration. A set of local rules is given that describes how a CNN with a linear configuration may extract the holes of the objects of an input image in a single transient. The parameter configuration for DT-CNNs or CT-CNNs is obtained as the solution of a single linear programming problem that includes robustness as an objective. The tolerances to multiplicative and additive errors caused by circuit inaccuracies have been deduced for the proposed linear hole-extraction configurations, and these tolerable errors have been corroborated by simulations. The error tolerance and speed of the proposed CT-CNN linear configuration for hole extraction are compared with those of the CT-CNN nonlinear configuration found in the literature [3].
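
For orientation, the following sketch shows a generic discrete-time CNN iteration, where the "linear parameter configuration" is the pair of 3x3 templates plus a bias. The template values below are placeholders chosen only to illustrate the update rule; they are not the hole-extraction configuration derived in the paper, which is obtained by solving a linear programming problem.

import numpy as np
from scipy.ndimage import correlate

# Placeholder feedback template A, control template B and bias: they only
# illustrate the update rule, NOT the paper's hole-extraction configuration.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 4.0, 0.0],
              [0.0, 0.0, 0.0]])
BIAS = -1.0

def dtcnn_run(u, a=A, b=B, bias=BIAS, max_steps=100):
    # Iterate y(k+1) = f(A*y(k) + B*u + bias) with a bipolar output function
    # until the transient settles (image and state values are in {-1, +1}).
    y = np.ones_like(u)                                        # assumed initial state
    feed = correlate(u, b, mode='constant', cval=-1.0) + bias  # constant input term
    for _ in range(max_steps):
        x = correlate(y, a, mode='constant', cval=-1.0) + feed
        y_next = np.where(x >= 0.0, 1.0, -1.0)
        if np.array_equal(y_next, y):
            break
        y = y_next
    return y

# Toy bipolar input image: +1 object on a -1 background, with a hole inside.
u = -np.ones((8, 8))
u[2:6, 2:6] = 1.0
u[3:5, 3:5] = -1.0
print(dtcnn_run(u))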


Foundations of Computer Science | 2001

The lightweight protocol CLIC: performance of an MPI implementation on CLIC

A.F. Díaz; Julio Ortega; Antonio Cañas; F.J. Fernández; Alberto Prieto

CLIC is a lightweight protocol that has recently been proposed for efficient communication in clusters running the Linux operating system. Besides optimizing communication performance by reducing latencies and increasing bandwidth figures even for short messages, the proposed communication layer also meets other requirements such as multiprogramming, portability, protection against corrupted programs, reliable message delivery and direct access to the network for all applications. Thus, instead of removing the operating system kernel from the critical path and creating a user-level network interface, the operating system support has been optimized to provide reliable and efficient network software while avoiding the TCP/IP protocol stack. The LAM-MPI communication layer has been implemented on top of the proposed protocol, and the communication performance has been tested on a cluster of Linux PCs interconnected with Fast Ethernet.
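
A ping-pong micro-benchmark is the usual way such latency and bandwidth figures are measured for an MPI layer. The sketch below uses mpi4py and is generic illustration code, not the benchmark employed in the paper.

import numpy as np
from mpi4py import MPI

# Ping-pong between ranks 0 and 1: half the round-trip time is the latency,
# message size divided by that time is the bandwidth.
# Run with two processes, e.g.: mpirun -np 2 python this_script.py
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
REPS = 1000

for size in (1, 64, 1024, 65536):            # message sizes in bytes
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send([buf, MPI.BYTE], dest=1, tag=0)
            comm.Recv([buf, MPI.BYTE], source=1, tag=0)
        elif rank == 1:
            comm.Recv([buf, MPI.BYTE], source=0, tag=0)
            comm.Send([buf, MPI.BYTE], dest=0, tag=0)
    dt = MPI.Wtime() - t0
    if rank == 0:
        half_rtt = dt / (2 * REPS)
        print(f"{size:6d} B  latency {half_rtt * 1e6:8.1f} us  "
              f"bandwidth {size / half_rtt / 1e6:8.2f} MB/s")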


Archive | 2006

FPGA Implementation of a Fully and Partially Connected MLP

Antonio Cañas; Eva M. Ortigosa; Eduardo Ros; Pilar Martínez Ortigosa

In this work, we present several hardware implementations of a standard Multi-Layer Perceptron (MLP) and a modified version called the eXtended Multi-Layer Perceptron (XMLP). This extended version is an MLP-like feed-forward network with two-dimensional layers and configurable connection pathways. The interlayer connectivity can be restricted according to well-defined patterns, which produces a faster and smaller system with similar classification capabilities. The presented hardware implementations of this network model take full advantage of this optimization feature. Furthermore, the software version of the XMLP allows configurable activation functions and batched backpropagation with different smoothing-momentum alternatives. The hardware implementations have been developed and tested on an FPGA prototyping board. The designs have been defined using two different abstraction levels: register transfer level (VHDL) and a higher, algorithmic-like level (Handel-C). We compare the two description strategies and study different implementation versions with diverse degrees of parallelism. The test-bed application addressed is speech recognition. The implementations described here could be used for low-cost portable systems. We include a short study of the implementation costs (silicon area), speed and required computational resources.
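
As a loose illustration of restricted interlayer connectivity, the sketch below arranges units on a 2-D grid and lets each output unit connect only to a small window of the input grid, using a binary mask over the weight matrix. The grid sizes and window width are assumptions made for the example, not the XMLP's actual connection patterns.

import numpy as np

def local_connectivity_mask(in_shape, out_shape, window=3):
    # Binary mask (out_units x in_units): 1 where a connection is allowed.
    in_h, in_w = in_shape
    out_h, out_w = out_shape
    mask = np.zeros((out_h * out_w, in_h * in_w))
    for oy in range(out_h):
        for ox in range(out_w):
            cy = int(oy * in_h / out_h)       # receptive-field centre (input coords)
            cx = int(ox * in_w / out_w)
            for iy in range(max(0, cy - window // 2),
                            min(in_h, cy + window // 2 + 1)):
                for ix in range(max(0, cx - window // 2),
                                min(in_w, cx + window // 2 + 1)):
                    mask[oy * out_w + ox, iy * in_w + ix] = 1.0
    return mask

def masked_layer(x, W, b, mask):
    # One partially connected layer: masked weights mean pruned multiplications
    # when the structure is mapped to hardware.  Sigmoid activation assumed.
    z = (W * mask) @ x + b
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
mask = local_connectivity_mask((8, 8), (4, 4), window=3)
W = 0.1 * rng.standard_normal(mask.shape)
b = np.zeros(16)
y = masked_layer(rng.standard_normal(64), W, b, mask)
print(f"connections used: {int(mask.sum())} of {mask.size}")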


International Work-Conference on Artificial and Natural Neural Networks | 2009

XMLP: a Feed-Forward Neural Network with Two-Dimensional Layers and Partial Connectivity

Antonio Cañas; Eva M. Ortigosa; Antonio F. Díaz; Julio Ortega

This work presents an MLP-like feed-forward network with partially connected two-dimensional layers and other features, such as configurable activation functions and batched backpropagation with different smoothing-momentum alternatives. We name this model the eXtended Multi-Layer Perceptron (XMLP) because it extends the connectivity of the MLP. Here we describe its architecture, the various activation functions it can use, its learning algorithm, and the possible use of discretization intended for hardware implementation. We also show a configurable graphic tool developed to train and simulate any MLP-like network (totally or partially connected). Finally, we present some results on speech recognition in order to compare total and partial connectivity, as well as continuous and discrete operation.
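
As an aside on the discretization mentioned above, a common way to realize a non-linear activation in fixed-point hardware is a small lookup table. The sketch below tabulates a sigmoid with 64 entries over an assumed input range; the table size, input range and 8-bit output width are illustrative choices, not the values used in the XMLP.

import numpy as np

TABLE_BITS = 6                          # 64-entry table (assumed size)
IN_RANGE = 8.0                          # inputs clipped to [-8, 8) (assumed range)
_x = np.linspace(-IN_RANGE, IN_RANGE, 1 << TABLE_BITS, endpoint=False)
SIGMOID_LUT = np.round(255.0 / (1.0 + np.exp(-_x))).astype(np.uint8)   # 8-bit outputs

def lut_sigmoid(x):
    # Approximate sigmoid by indexing into the precomputed table.
    idx = ((np.asarray(x) + IN_RANGE) / (2.0 * IN_RANGE)) * (1 << TABLE_BITS)
    idx = np.clip(idx, 0, (1 << TABLE_BITS) - 1).astype(int)
    return SIGMOID_LUT[idx]

print(lut_sigmoid(np.array([-6.0, -1.0, 0.0, 1.0, 6.0])))   # rises from ~0 to ~255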
