Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Manuel Graña is active.

Publication


Featured research published by Manuel Graña.


Neurocomputing | 2016

Hyperspectral image nonlinear unmixing and reconstruction by ELM regression ensemble

Borja Ayerdi; Manuel Graña

Unmixing is the estimation of the composition of hyperspectral image pixels, specified as the fractional abundances of the constituent materials, achieving image segmentation at sub-pixel resolution. Linear unmixing assumes that pixels are convex combinations of endmember spectra, so endmember identification is required prior to unmixing. In our approach to non-linear unmixing by Extreme Learning Machine (ELM) regression ensembles, we do not need to perform endmember identification, which is implicit in the non-linear transformation. Instead, we provide estimates of the fractional abundances of predefined material classes, which have been characterized by pure pixels extracted from the image according to the available ground truth. In this paper we introduce a formal discussion of the convergence properties of ELM regression ensembles that endorses the empirical results. The analysis shows that they converge to the exact regression value as the number of components of the ensemble grows, provided that the output is the average of the individual outputs. Moreover, the proposed approach allows for a general validation procedure based on the reconstruction error over the entire hyperspectral image. The reconstruction error can be estimated using the mapping from fractional abundances to reconstructed spectra, also achieved by ELM regression ensembles. Therefore, validation can be carried out independently of the training data, which can be used entirely for model construction. Experimental results on well-known benchmark images show that the approach has a clear advantage over state-of-the-art unmixing approaches.
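As a rough illustration of the ensemble-averaging idea described above, here is a minimal numpy sketch of an ELM regression ensemble mapping spectra to fractional abundances. The class, data, and hidden-layer size are hypothetical toy constructs, not the authors' implementation:

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer Extreme Learning Machine: random hidden
    weights, output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=40, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng()

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                    # random nonlinear features
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def ensemble_predict(models, X):
    # Average the individual outputs, matching the convergence argument:
    # the mean tends to the exact regression value as the ensemble grows.
    return np.mean([m.predict(X) for m in models], axis=0)

# Toy data: 20-band spectra (rows) -> fractional abundances of 3 classes.
rng = np.random.default_rng(0)
X = rng.random((200, 20))
y = rng.dirichlet(np.ones(3), size=200)   # abundances sum to 1
models = [ELMRegressor(rng=rng).fit(X[:150], y[:150]) for _ in range(10)]
pred = ensemble_predict(models, X[150:])
```

A second ensemble trained in the reverse direction (abundances to spectra) would give the reconstruction-error validation mentioned in the abstract.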


Neurocomputing | 2015

Local activity features for computer aided diagnosis of schizophrenia on resting-state fMRI

Alexandre Savio; Manuel Graña

Resting-state functional Magnetic Resonance Imaging (rs-fMRI) is increasingly used for the identification of image biomarkers of brain diseases or psychiatric conditions, such as schizophrenia. The machine learning approach followed in this paper consists of feature extraction followed by classification experiments. Feature extraction methods that preserve spatial information make it possible to recover the anatomical localization of the voxels that provide discriminant information. Such locations may be further studied to assess their biological meaning as biomarkers for the disease. The power of this approach lies in the predictive accuracy of the classifier, so that features leading to higher accuracy are assumed to have greater value as biomarkers. In this paper we apply this approach to brain local activity measures computed over rs-fMRI data from schizophrenia patients and healthy control subjects obtained from a publicly available database (COBRE), which allows for the confirmation or falsification of our results. The extensive experimental work provides evidence that local activity measures, such as Regional Homogeneity (ReHo), may be useful for the intended purposes.
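The pipeline sketched in the abstract (spatial features, a classifier, then inspecting discriminant locations) can be illustrated with a minimal numpy example. Everything here is synthetic and hypothetical: a nearest-centroid classifier stands in for whichever classifier the paper used, and the "ReHo maps" are random vectors with an injected signal:

```python
import numpy as np

# Hypothetical setup: each subject is a flattened per-voxel activity map
# (e.g. ReHo values); labels 0 = healthy control, 1 = patient.
rng = np.random.default_rng(1)
n_subjects, n_voxels = 60, 500
X = rng.normal(size=(n_subjects, n_voxels))
y = rng.integers(0, 2, size=n_subjects)
X[y == 1, :50] += 0.8          # discriminant signal in the first 50 voxels

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Hold-out evaluation: accuracy is the evidence that the features carry
# discriminant information.
cents = fit_centroids(X[:40], y[:40])
acc = (predict(cents, X[40:]) == y[40:]).mean()

# Because features are per-voxel, the classifier weights map back to
# anatomical locations worth inspecting as candidate biomarkers.
weights = np.abs(cents[1] - cents[0])
top_voxels = np.argsort(weights)[::-1][:10]
```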


Archive | 2013

Knowledge Engineering, Machine Learning and Lattice Computing with Applications

Manuel Graña; Carlos Toro; Robert J. Howlett; Lakhmi C. Jain

Investigation of Random Subspace and Random Forest Regression Models Using Data with Injected Noise
A Genetic Algorithm vs. Local Search Methods for Solving the Orienteering Problem in Large Networks
Dynamic Structure of Volterra-Wiener Filter for Reference Signal Cancellation in Passive Radar
A Web Browsing Cognitive Model
Optimization of Approximate Decision Rules Relative to Number of Misclassifications: Comparison of Greedy and Dynamic Programming Approaches
Set-Based Detection and Isolation of Intersampled Delays and Packet Dropouts in Networked Control
Analysis and Synthesis of the System for Processing of Sign Language Gestures and Translation of Mimic Subcode in Communication with Deaf People
Prediction of Software Quality Based on Variables from the Development Process
Mining High Performance Managers Based on the Results of Psychological Tests
Semantics Preservation in Schema Mappings within Data Exchange Systems
Multi-relational Learning for Recommendation of Matches between Semantic Structures
Semantically Enhanced Text Stemmer (SETS) for Cross-Domain Document Clustering
Ontology Recomposition
Structured Construction of Knowledge Repositories with MetaConcept
Association between Teleology and Knowledge Mapping
Boosting Retrieval of Digital Spoken Content
Constructing the Integral OLAP-Model for Scientific Activities Based on FCA
Reasoning about Dialogical Strategies
Low-Cost Computer Vision Based Automatic Scoring of Shooting Targets
Using Bookmaker Odds to Predict the Final Result of Football Matches


Cybernetics and Systems | 2016

Particle Swarm Optimization Quadrotor Control for Cooperative Aerial Transportation of Deformable Linear Objects

Julian Estevez; Jose Manuel Lopez-Guede; Manuel Graña

We present a cooperative aerial robot system for the transportation of hoses. The hose–robot attachment makes the whole system physically interconnected but not rigid, so control design becomes a difficult nonlinear optimization problem. A hose in a quasi-stationary state can be modeled by sections of catenary curves. We use proportional-integral-derivative (PID) controllers for both quadrotor attitude and trajectory control, tuned by particle swarm optimization (PSO). In this work we test PSO minimizing an energy function to tune the PID controllers for the horizontal motion of quadrotor teams transporting hoses under different stress conditions.
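The tuning scheme in the abstract (PSO minimizing an energy-like cost over PID gains) can be sketched in a few dozen lines. This is a toy stand-in, not the authors' setup: the plant is a single-axis double integrator rather than a quadrotor carrying a hose, and all cost and PSO parameters are made-up:

```python
import numpy as np

def pid_cost(gains, dt=0.02, steps=300):
    """Simulate a unit-mass double integrator (a crude 1-D quadrotor axis)
    under a PID position controller; return the integrated squared error."""
    kp, ki, kd = gains
    x = v = integ = 0.0
    target, prev_err, cost = 1.0, 1.0, 0.0
    for _ in range(steps):
        err = target - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        v += u * dt            # unit mass: acceleration = control force
        x += v * dt
        cost += err * err * dt
    return cost

def pso(f, dim=3, n=20, iters=60, lo=0.0, hi=20.0, seed=2):
    """Plain global-best PSO over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pcost = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([f(p) for p in pos])
        improved = cost < pcost
        pbest[improved], pcost[improved] = pos[improved], cost[improved]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

best_gains, best_cost = pso(pid_cost)
```

In the paper's setting, the cost function would instead simulate the quadrotor team with the catenary hose model under the different stress conditions.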


Information Sciences | 2015

Reinforcement Learning endowed with safe veto policies to learn the control of Linked-Multicomponent Robotic Systems

Borja Fernandez-Gauna; Manuel Graña; Jose Manuel Lopez-Guede; Ismael Etxeberria-Agiriano; Igor Ansoategui

Performing reinforcement learning-based control of systems whose state space has many Undesired Terminal States (UTS) experiences severe convergence problems. We define UTS as terminal states without associated positive reward information. They appear in the training of over-constrained systems, when breaking a constraint implies that all the effort invested during a learning episode is lost without gathering any constructive information about how to achieve the target task. The random exploration performed by RL algorithms is unfruitful until the system reaches a final state bearing some reward that may be used to update the state-action value functions; hence UTS seriously impede the convergence of the learning process. The most efficient learning strategies avoid reaching any UTS, ensuring that each learning episode provides useful reward information. Safe Modular State Action Veto (Safe-MSAV) policies learn specifically how to avoid state transitions leading to a UTS. The application of MSAV makes state space exploration much more efficient; the larger the ratio of UTS to the total number of states, the greater the improvement. Safe-MSAV uses independent concurrent modules, each dealing with a separate kind of UTS. We report experiments on the control of Linked Multicomponent Robotic Systems (L-MCRS) showing a dramatic decrease in the computational resources required, ensuring faster as well as more accurate results than conventional exploration strategies that do not implement explicit mechanisms to avoid falling into UTS.
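The veto mechanism described above can be illustrated on a toy gridworld. This sketch is a simplified stand-in for Safe-MSAV (a single veto table rather than independent concurrent modules, and a gridworld rather than an L-MCRS); the grid size, trap locations, and learning parameters are all invented for illustration:

```python
import random

random.seed(3)
N = 6                                    # 6x6 grid, start (0, 0), goal (5, 5)
UTS = {(1, 1), (3, 1), (2, 3), (4, 4)}   # trap cells: terminal, no reward
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GOAL = (N - 1, N - 1)
Q = {}                                   # state-action values
veto = set()                             # (state, action) pairs known to enter a UTS

def step(s, a):
    ns = (min(max(s[0] + a[0], 0), N - 1), min(max(s[1] + a[1], 0), N - 1))
    if ns in UTS:
        return ns, 0.0, True             # undesired terminal state: no reward signal
    if ns == GOAL:
        return ns, 1.0, True
    return ns, 0.0, False

def choose(s, eps=0.2):
    # Exploration restricted to non-vetoed actions, the core of the idea.
    allowed = [a for a in ACTIONS if (s, a) not in veto] or ACTIONS
    if random.random() < eps:
        return random.choice(allowed)
    return max(allowed, key=lambda a: Q.get((s, a), 0.0))

for _ in range(1500):                    # learning episodes
    s, done, t = (0, 0), False, 0
    while not done and t < 300:
        t += 1
        a = choose(s)
        ns, r, done = step(s, a)
        if done and r == 0.0:
            veto.add((s, a))             # learn the veto: never try this move again
        best_next = 0.0 if done else max(Q.get((ns, b), 0.0) for b in ACTIONS)
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + 0.1 * (r + 0.9 * best_next - q)
        s = ns
```

Once a transition into a trap has been observed, it is vetoed forever, so subsequent episodes spend their exploration budget only on moves that can still yield reward information.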


Archive | 2010

Multi-Robot Systems Control Implementation

Jose Manuel Lopez-Guede; Ekaitz Zulueta; Borja Fernández; Manuel Graña

Nowadays it is clear that multi-robot systems offer several advantages that are very difficult to attain with single-robot systems. However, to leave the simulators and the academic environment they must meet a mandatory condition: these systems must be economically attractive in order to increase their adoption in realistic scenarios. Since multi-robot systems are composed of several robots that are generally similar, if an economic optimization is achieved in one of them, that optimization can be replicated in each member of the team. In this paper we show how to implement low-level controllers with small computational needs that can be used in each of the subsystems that must be controlled in each robot belonging to a multi-robot system. A robot in a multi-robot system needs greater computational capacity, because it has to perform tasks derived from being part of the team, for example coordination and communication with the remaining members, and, occasionally, cooperative deduction of the global strategy of the team. One of the theoretical advantages of multi-robot systems is that the cost of the team should be lower than the cost of a single robot with the same capabilities. To make this idea come true, the cost of each member must stay below a certain value, and this can be achieved by equipping each of them with very cheap computational systems. Some of the cheapest and most flexible devices for implementing control systems are Field Programmable Gate Arrays (FPGAs). If we could implement a control loop using a very simple FPGA structure, the economic cost of each unit could be about 10 dollars. On the other hand, under a pessimistic view, the subsystems to be controlled might be hard to control using classic and well-known control schemes such as PID controllers. In this situation we can use other advanced control approaches that try to emulate the human brain, such as Predictive Control.
This kind of control works with a world model, calculating predictions of the response the system will show under given stimuli, and it obtains the best way to control the subsystem knowing the desired behavior from the current moment until a certain instant later. Predictive controller tuning is a process carried out using analytical and manual methods. Such tuning is computationally expensive, but it is done only once, and in this paper we do not deal with that problem. However, despite the great advantage of predictive control, which can handle systems that classic control cannot, it has a great drawback: it is very computationally expensive while it is working. In Section 4 we will review the cause of this.
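The kind of low-cost control loop the chapter has in mind can be illustrated with a fixed-point discrete PID: integer adds, multiplies and shifts are exactly the operations that map cheaply onto FPGA fabric. The Q8 scaling, gains, and toy plant below are all illustrative assumptions, shown in Python only for readability:

```python
# Fixed-point discrete PID: only integer adds, multiplies and shifts,
# the arithmetic that translates directly to a very small FPGA design.
SHIFT = 8                              # Q8 fixed point: gain value = raw / 256

def make_pid(kp, ki, kd):
    """Gains given as Q8 integers, e.g. kp=128 means 0.5."""
    state = {"integ": 0, "prev": 0}
    def update(setpoint, measured):
        err = setpoint - measured      # plain integer error
        state["integ"] += err
        deriv = err - state["prev"]    # backward difference
        state["prev"] = err
        return (kp * err + ki * state["integ"] + kd * deriv) >> SHIFT
    return update

# Toy closed loop: drive an integer first-order plant toward setpoint 100.
pid = make_pid(kp=128, ki=8, kd=64)    # 0.5, 0.03125, 0.25 in Q8
x = 0
for _ in range(200):
    x += pid(100, x) >> 2              # crude plant: state += u / 4
```

A predictive controller, by contrast, would re-solve an optimization over a prediction horizon at every step, which is the run-time cost the text contrasts against this few-multiplier loop.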


Archive | 2016

Integrating Electronic Health Records in Clinical Decision Support Systems

Eider Sanchez; Carlos Toro; Manuel Graña

Electronic Health Records (EHR) are systematic collections of digital health information about individual patients or populations. They provide ready access to the complete medical history of the patient, which is useful for decision-making activities. In this paper we focus on a secondary benefit of EHR: the reuse of the implicit knowledge embedded in them to improve knowledge about the mechanisms of a disease and/or the effectiveness of treatments. In fact, all the patient data registries stored in EHR implicitly reflect the different clinical decisions made by the clinical professionals who participated in the care of patients (e.g., criteria followed during decision making, patient parameters taken into account, effect of the treatments prescribed). This work proposes a methodology that allows the management of EHR not only as data containers and information repositories, but also as clinical knowledge repositories. Moreover, we propose an architecture for the extraction of knowledge from EHR. Such knowledge can be fed into a Clinical Decision Support System (CDSS), in a way that could yield benefits for the development of innovations by clinicians, health managers and medical researchers.


Bioinformatics and Biomedicine | 2015

Selected aspects of electronic health record analysis from the big data perspective

Boguslaw Cyganek; Manuel Graña; Andrzej Kasprzak; Krzysztof Walkowiak; Michal Wozniak

The electronic health record (EHR) groups all digital documents related to a given patient, such as the anamnesis, laboratory test results, prescriptions, and recorded medical signals such as ECG or images. Dealing with such data we face a plethora of problems: diverse data formats, unstructured data (such as doctors' notes), huge and fast-growing volume, and so on. Hence, the EHR should be considered a complex data representation. Accordingly, taking into account its complexity, heterogeneity, fast growth and size, we need special tools to analyse such medical big data. Such tools should be able to analyse datasets characterized by the so-called 4Vs (volume, velocity, variety, and veracity). Nevertheless, we should also add a fifth V, value, because the deployment of analytics tools only makes sense if it leads to healthcare improvement (such as personalised patient care, decreasing unnecessary hospitalizations, or reducing patient readmissions). In this paper we focus on selected aspects of EHR analysis from the big data perspective.


Bioinformatics and Biomedicine | 2015

Electronic Health Record: A review

Manuel Graña; Konrad Jackwoski

The Electronic Health Record (EHR) is becoming the central information object for various aspects of healthcare and medical-related industries, from pharmaceuticals to bioengineering. This review presents the state of affairs in several aspects of EHR, including security and privacy, data mining, design of decision support systems, acceptance by users and producers of health resources, and system implementation. In the last three years the number of publications has grown exponentially, therefore it is rather difficult to be exhaustive, and the more technical aspects are expected to be quickly superseded by new advances.


Cybernetics and Systems | 2016

Experience-Based Electronic Health Records

Naiara Muro; Eider Sanchez; Carlos Toro; Eduardo Carrasco; Sebastián A. Ríos; Frank Guijarro; Manuel Graña

Electronic Health Records are clinical information repositories that have been proposed primarily to provide access to all the clinical data of a patient. They have been formally defined by a dual model composed of a reference model and an archetype model. This dual approach enables semantic interoperability, allowing different systems to understand each other. In this work we extend the current structure with a third Decisional Model that allows reasoning over the embedded clinical contents. Such reasoning is based on the reuse of the clinical experience gained by the corresponding clinical professionals during different decision procedures.

Collaboration


Dive into Manuel Graña's collaborations.

Top Co-Authors

Jose Manuel Lopez-Guede
University of the Basque Country

Borja Fernandez-Gauna
University of the Basque Country

Eider Sanchez
University of the Basque Country

Ion Marqués
University of the Basque Country

Miguel Velez-Reyes
University of Texas at El Paso

Alexandre Savio
University of the Basque Country

Arkaitz Artetxe
University of the Basque Country