Publication


Featured research published by Christian A. Morillas.


Eurasip Journal on Image and Video Processing | 2012

Real-time tone mapping on GPU and FPGA

Raquel Ureña; Pablo Martínez-Cañada; Juan Manuel Gómez-López; Christian A. Morillas; Francisco J. Pelayo

Low-level computer vision algorithms have high computational requirements. In this study, we present two real-time architectures, using resource-constrained FPGA and GPU devices, for the computation of a new algorithm that performs tone mapping, contrast enhancement, and glare mitigation. Our goal is to implement this operator in a portable, battery-operated device, in order to obtain a low-vision aid specially aimed at visually impaired people who struggle to manage in environments where illumination is not uniform or changes rapidly. This aid processes the input of a camera in real time, with minimum latency, and shows the enhanced image on a head-mounted display (HMD). The proposed operator has therefore been implemented on two battery-operated platforms, one based on the NVIDIA ION2 GPU and another on the Spartan III FPGA, which perform at rates of 30 and 60 frames per second, respectively, when working with VGA-resolution images (640 × 480).
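The published operator combines tone mapping with contrast enhancement and glare mitigation; as a hedged illustration of the tone-mapping stage only, the following is a minimal global log-average operator of the classic Reinhard style (the function name and parameter are assumptions for this sketch, not the paper's algorithm):

```python
import numpy as np

def log_tone_map(luminance, alpha=0.5):
    """Minimal global tone mapping (illustrative sketch only).

    Compresses a high-dynamic-range luminance map into [0, 1) using
    log-average scaling; the published operator additionally performs
    contrast enhancement and glare mitigation.
    """
    eps = 1e-6
    # Log-average ("key") of the scene, robust to zero-valued pixels
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = alpha * luminance / log_avg
    # Simple global compression: monotonic, maps [0, inf) into [0, 1)
    return scaled / (1.0 + scaled)

# Four pixels spanning four orders of magnitude of luminance
hdr = np.array([[0.01, 0.1], [1.0, 100.0]])
ldr = log_tone_map(hdr)
```

The division by the log-average normalizes exposure per scene, which is what makes a single fixed compression curve usable under rapidly changing illumination.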


Neural Processing Letters | 2010

Use of Phase in Brain–Computer Interfaces based on Steady-State Visual Evoked Potentials

M. A. Lopez-Gordo; Alberto Prieto; Francisco J. Pelayo; Christian A. Morillas

Brain–computer interfaces based on steady-state visual evoked potentials (SSVEP-BCIs) typically extract the amplitude of these potentials for classification. Phase has not yet been widely used in on-line classification, since it requires a very accurate real-time system that keeps the stimulation, recording, and processing synchronized. In this paper we present an experiment, based on AM modulation of flickering stimuli, that demonstrates two points: first, the phase shifts of different stimuli can be recovered from those of the corresponding SSVEPs without the need for a real-time system; second, this information can be used efficiently to develop a BCI based on the classification of the phase shifts of the SSVEPs. Since phase is statistically independent of amplitude, their joint use in classification would improve the performance of this type of BCI.
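The amplitude and phase the abstract refers to can be estimated from the complex Fourier coefficient of the EEG at the stimulation frequency. A minimal sketch (function and parameter names are assumptions, not the paper's pipeline):

```python
import numpy as np

def ssvep_amp_phase(eeg, fs, f_stim):
    """Estimate amplitude and phase of an SSVEP at the stimulation
    frequency by projecting onto a complex exponential (illustrative
    sketch; assumes an integer number of cycles in the window)."""
    n = len(eeg)
    t = np.arange(n) / fs
    # Complex Fourier coefficient at f_stim
    coef = np.mean(eeg * np.exp(-2j * np.pi * f_stim * t))
    amplitude = 2 * np.abs(coef)
    phase = np.angle(coef)
    return amplitude, phase

# Synthetic 10 Hz "SSVEP" with a known 0.8 rad phase shift
fs, f = 250.0, 10.0
t = np.arange(0, 2.0, 1 / fs)          # 2 s window = 20 full cycles
signal = 1.5 * np.cos(2 * np.pi * f * t + 0.8)
amp, ph = ssvep_amp_phase(signal, fs, f)
```

Because the recovered phase is relative to the analysis window rather than to stimulus onset, classifying *differences* of phase between stimuli, as the abstract describes, avoids the need for hard real-time synchronization.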


Neurocomputing | 2004

Translating image sequences into spike patterns for cortical neuro-stimulation

Francisco J. Pelayo; Samuel F. Romero; Christian A. Morillas; Antonio Martínez; Eduardo Ros; Eduardo Fernández

This paper describes a bioinspired preprocessing and coding system devised for producing optimal multi-electrode stimulation at the cortical level, starting from image sequences and working at video rates. A hybrid platform with software and reconfigurable hardware delivers a continuously varying stream of pulses or spike patterns. The main objective of this work is to build a portable system for a visual neuro-prosthesis that stimulates efficiently an array of intra-cortically implanted microelectrodes. A set of parameters can be adjusted in the processing and spike-coding modules to trade off technology constraints against the biological plausibility of their functional features.
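The core idea of translating images into spike patterns can be sketched with generic Poisson rate coding, in which pixel intensity sets a per-pixel firing probability. This is an illustrative stand-in; the published system uses its own bioinspired retina preprocessing and coding stages, and all names here are assumptions:

```python
import numpy as np

def intensity_to_spikes(frame, dt=0.001, steps=100, max_rate=200.0, seed=0):
    """Rate-code normalized pixel intensities into Poisson spike trains.

    frame    -- 2-D array of intensities in [0, 1]
    dt       -- time step in seconds
    steps    -- number of time steps to simulate
    max_rate -- firing rate (Hz) for a fully bright pixel
    """
    rng = np.random.default_rng(seed)
    rates = frame * max_rate                      # Hz per pixel
    p = rates * dt                                # spike probability per step
    # spikes[t, i, j] is True if pixel (i, j) fires at time step t
    spikes = rng.random((steps,) + frame.shape) < p
    return spikes

frame = np.array([[0.0, 0.5], [1.0, 0.25]])       # normalized intensities
spikes = intensity_to_spikes(frame, steps=1000)
```

In a real prosthesis each pixel (or receptive field) would map to a microelectrode address, and the trade-off the abstract mentions appears here as the choice of `dt`, `max_rate`, and coding scheme.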


Clinical Neurophysiology | 2011

Customized stimulation enhances performance of independent binary SSVEP-BCIs.

M. A. Lopez-Gordo; Alberto Prieto; Francisco J. Pelayo; Christian A. Morillas

OBJECTIVE Brain-computer interfaces based on steady-state visual evoked potentials (SSVEP-BCIs) achieve the highest performance, due to their multiclass nature, in paradigms in which different visual stimuli are shown. Independent binary SSVEP-BCIs, in which it is not necessary to gaze at the stimuli, have been presented previously, at the cost of performance. Although the energy of the SSVEPs is largely affected by the temporal and spatial frequencies of the stimulus, there are no studies in the BCI literature on their combined impact on the final performance of SSVEP-BCIs. The objective of this study is to present an experiment that evaluates the best configuration of the visual stimulus for each subject, thus minimizing the decline in performance of independent binary SSVEP-BCIs. METHODS The participants attended to and ignored a single structured stimulus configured with one combination of spatial and temporal frequencies at a time. They were instructed to gaze at a central point during the whole experiment. The best combination of spatial and temporal frequencies for each subject, in terms of signal-to-noise ratio (SNR), was then determined. RESULTS The SNR showed a significant dependency on the combination of frequencies, such that only a reduced set of these combinations yielded an optimal SNR. The selection of an inappropriate stimulus configuration may degrade the information transmission rate (ITR) just as it degrades the SNR. CONCLUSIONS The appropriate selection of the optimal spatial and temporal frequencies determines the performance of independent binary SSVEP-BCIs. This selection is critical to overcoming their low performance; hence, the frequencies should be adjusted independently for each subject. SIGNIFICANCE Independent binary SSVEP-BCIs can be used by patients who are unable to control their gaze sufficiently. The correct selection of the spatial and temporal frequencies has a considerable benefit on their otherwise low ITR and must be taken into account. To find the most suitable frequencies, a test similar to the one presented in this study should be performed beforehand for each SSVEP-BCI user, a step not documented in previous BCI studies.
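A common way to quantify the SNR used for this kind of per-subject screening is the power at the target frequency bin divided by the mean power of neighboring bins. The following is an illustrative sketch of that definition, not necessarily the exact metric used in the study:

```python
import numpy as np

def ssvep_snr(eeg, fs, f_stim, n_neighbors=4):
    """SNR at f_stim: target-bin power over mean power of the
    n_neighbors bins on each side (illustrative definition)."""
    n = len(eeg)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    k = int(round(f_stim * n / fs))               # target frequency bin
    neighbors = [k + d for d in range(-n_neighbors, n_neighbors + 1) if d != 0]
    return power[k] / np.mean(power[neighbors])

# Synthetic response: 12 Hz SSVEP buried in broadband noise
fs = 250.0
t = np.arange(0, 4.0, 1 / fs)                     # 4 s epoch
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12 * t) + 0.1 * rng.standard_normal(len(t))
snr = ssvep_snr(eeg, fs, 12.0)
```

Sweeping the stimulus configuration and keeping the combination that maximizes this ratio per subject mirrors the calibration procedure the abstract recommends.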


Neurocomputing | 2007

A neuroengineering suite of computational tools for visual prostheses

Christian A. Morillas; Samuel F. Romero; Antonio Martínez; Francisco J. Pelayo; Leonardo Reyneri; Markus Bongard; Eduardo B. Fernandez

The cooperation between neuroscience and biomedical engineering has given rise to a recent but growing research field known as neuroengineering. We follow its principles to build a system that provides basic descriptions of the visual world to the brain's cortex. We describe a set of software and hardware tools to interface with neural tissue, in order to transmit visual information encoded in a bioinspired, neural-like form. The set is composed of a retina-like encoder and a platform to optimize electrical stimulation parameters for a multi-electrode implant. The main objective is to progress towards a functional visual neuroprosthesis for the blind.


International Journal of Neural Systems | 2016

A Computational Framework for Realistic Retina Modeling

Pablo Martínez-Cañada; Christian A. Morillas; Begoña Pino; Eduardo Ros; Francisco J. Pelayo

Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.
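One of the classic reusable microcircuits in such retina models is center-surround antagonism, typically modeled as a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. A minimal 1-D sketch of that building block (illustrative only; not the platform's API):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def center_surround(signal, sigma_c=1.0, sigma_s=3.0, radius=9):
    """Difference-of-Gaussians microcircuit: narrow center minus
    broad surround. Uniform regions give ~0 response; edges are
    enhanced, as in retinal ganglion cell receptive fields."""
    center = np.convolve(signal, gaussian_kernel(sigma_c, radius), mode="same")
    surround = np.convolve(signal, gaussian_kernel(sigma_s, radius), mode="same")
    return center - surround

# A step edge: flat regions cancel, the edge produces a biphasic response
step = np.concatenate([np.zeros(30), np.ones(30)])
resp = center_surround(step)
```

Chaining such microcircuits with temporal filters and static nonlinearities is the kind of composition the framework generalizes, so the same block can appear in adaptation and motion-sensitivity models alike.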


international conference on artificial neural networks | 2005

Automatic generation of bio-inspired retina-like processing hardware

Antonio Martínez; Leonardo Reyneri; Francisco J. Pelayo; Samuel F. Romero; Christian A. Morillas; Begoña Pino

This paper describes a tool devised for the automatic design of bioinspired visual processing models using reconfigurable digital hardware. The system is well suited to the analysis of vision models, especially those with real-time requirements. We obtain a synthesizable FPGA/ASIC design starting from a high-level description of a retina, which is created and simulated through an ad hoc program. Our tool allows a thorough simulation of the visual model at different abstraction levels, from functional simulation of the visual specifications up to hardware-oriented simulation of the developed FPGA model. The main objective of this work is to build a portable and flexible system for a visual neuro-prosthesis that stimulates efficiently an array of intra-cortically implanted microelectrodes. A set of parameters can be adjusted at every step of the design flow to maximize the flexibility of the model; furthermore, these parameters allow the different specialists involved in the development to modify well-known characteristics of the model.


international conference on artificial neural networks | 2005

A computational tool to test neuromorphic encoding schemes for visual neuroprostheses

Christian A. Morillas; Samuel F. Romero; Antonio Martínez; Francisco J. Pelayo; Eduardo B. Fernandez

Recent advances in microelectrode arrays open the door both to a better understanding of the way the brain works and to the restoration of damaged perceptive and motor functions. In the case of sensory inputs, direct multi-channel interfacing with the brain for neuro-stimulation requires a computational layer capable of handling the translation from external stimuli into appropriate trains of spikes. The work presented here aims to provide automated and reconfigurable transformation of visual inputs into addresses of microelectrodes in a cortical implant for the blind. The development of such neuroprostheses will help reveal the neural language of the brain for the representation of perceptions, and offers hope to people with profound visual impairment. Our system serves as a highly flexible test bench for almost any kind of retina model, and allows these models to be validated against multi-electrode recordings from experiments with biological retinas. The current version is a PC-based platform; a compact stand-alone device is under development to provide the autonomy and portability required for chronic implants. This tool is useful for psychologists, neurophysiologists, and neural engineers, as it offers a way to deal with the complexity of multi-channel electrical interfaces to the brain.


computational color imaging workshop | 2015

First Stage of a Human Visual System Simulator: The Retina

Pablo Martínez-Cañada; Christian A. Morillas; J. Nieves; Begoña Pino; Francisco J. Pelayo

We propose a configurable simulation platform that reproduces the analog neural behavior of different models of the human visual system at its early stages. Our software can efficiently simulate many of the biological mechanisms found in retina cells, such as chromatic opponency in the red-green and blue-yellow pathways, signal gathering through chemical synapses and gap junctions, and variations in neuron density and receptive field size with eccentricity. Based on an image-processing approach, simulated neurons perform spatiotemporal and color processing of the input visual stimuli, generating the visual maps of every intermediate stage, which correspond to membrane potentials and synaptic currents. An interface with neural network simulators has been implemented, which makes it possible to reproduce the spiking output of specific cells, such as ganglion cells, and to integrate the platform with models of higher brain areas. Simulations of different retina models related to the color-opponent mechanisms, obtained from electrophysiological experiments, show the capability of the platform to reproduce their neural response.
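The chromatic opponency mentioned above can be illustrated at its simplest as per-pixel red-green and blue-yellow difference channels. This sketch uses RGB values as a crude stand-in for L/M/S cone responses; it is an assumption-laden illustration, not the simulator's model:

```python
import numpy as np

def opponent_channels(rgb):
    """Single-pixel chromatic opponency from RGB stand-ins for cone
    inputs: red-green = R - G, blue-yellow = B - (R + G)/2."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    red_green = r - g                  # +1 for pure red, -1 for pure green
    blue_yellow = b - (r + g) / 2      # +1 for pure blue, -1 for pure yellow
    return red_green, blue_yellow

img = np.array([[[1.0, 0.0, 0.0],      # red
                 [0.0, 1.0, 0.0]],     # green
                [[0.0, 0.0, 1.0],      # blue
                 [1.0, 1.0, 0.0]]])    # yellow
rg, by = opponent_channels(img)
```

In the actual retina (and in realistic simulators) these differences are computed spatially, between center and surround cone pools, rather than within a single pixel.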


Journal of Rehabilitation Research and Development | 2013

Visual training and emotional state of people with retinitis pigmentosa

Helena Chacón-López; Francisco J. Pelayo; María Dolores López-Justicia; Christian A. Morillas; Raquel Ureña; Antonio Chacón-Medina; Begoña Pino

The purpose of the study was to improve the visual functioning of people with restricted contrast sensitivity (CS), such as those with retinitis pigmentosa (RP), by means of a visual training program. Twenty-six volunteers with RP participated, distributed in two groups: 15 made up the experimental group (who received the training program) and 11 served as a control group (without training). Participants were evaluated before beginning training, on completion, and 3 mo after completion for CS with the Pelli-Robson Contrast Sensitivity (P&R) test, visual functioning with the Visual Function Questionnaire (VFQ), and emotional state with the Beck Depression Inventory (BDI). The training program is based on software that generates luminous stimuli of varying duration and intensity and registers the stimuli perceived by the subject. The outcomes showed significant posttraining differences in the experimental group in depression (F(1,14) = 5.42; p < 0.04), VFQ (Z = -2.27; p < 0.02), and P&R in the right eye (Z = -1.99; p < 0.046) and left eye (Z = -2.30; p < 0.02), but not binocularly (Z = -0.96; p < 0.34). The experimental group made significant progress in all variables and these effects remained after 3 mo, which suggests that the program could be a helpful addition to RP rehabilitation and help mitigate the damage.
