Publication


Featured research published by David Mera.


Marine Pollution Bulletin | 2012

Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

David Mera; José Manuel Cotos; José Varela-Pet; Oscar Garcia-Pineda

Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface, and several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation, as well as its implementation as part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architectures, the prototype was optimized to achieve a nearly 30% improvement in processing time.
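The abstract does not reproduce the algorithm itself; the following is a minimal, hypothetical sketch of the wind-adaptive thresholding idea it describes: the darkness threshold used to flag candidate oil pixels is shifted with wind speed, because stronger wind raises the background backscatter of clean sea. The function name, the linear threshold model, and all parameter values are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def adaptive_threshold(sar_db, wind_speed, base=-18.0, slope=0.5):
    """Label dark (candidate oil) pixels in a SAR backscatter image.

    sar_db     : 2-D array of backscatter values in dB
    wind_speed : local wind speed in m/s
    base, slope: illustrative parameters of a linear threshold model;
                 the threshold rises with wind speed because stronger
                 wind brightens the clean-sea background.
    """
    threshold = base + slope * wind_speed  # dB
    return sar_db < threshold  # boolean mask of dark-spot candidates

# Synthetic example: a dark patch embedded in brighter sea clutter.
rng = np.random.default_rng(0)
image = rng.normal(-10.0, 1.0, size=(64, 64))   # clean-sea backscatter
image[20:30, 20:30] = -20.0                     # simulated slick
mask = adaptive_threshold(image, wind_speed=6.0)
```

At 6 m/s the threshold is -15 dB, so the simulated -20 dB patch is labelled dark while the -10 dB background is not; in a real system the threshold model would be fitted against confirmed spills.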


Computers & Geosciences | 2014

Automatic decision support system based on SAR data for oil spill detection

David Mera; José Manuel Cotos; José Varela-Pet; Pablo García Rodríguez; Andrés Caro

Global trade is mainly supported by maritime transport, which generates significant pollution problems. Effective surveillance and intervention means are therefore necessary to ensure a proper response to environmental emergencies. Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillages on the ocean's surface, and several decision support systems have been based on this technology. This paper presents an automatic oil spill detection system based on SAR data, developed on the basis of confirmed spillages and adapted to an important international shipping route off the Galician coast (northwest Iberian Peninsula). The system is supported by an adaptive segmentation process based on wind data as well as a shape-oriented characterization algorithm. Moreover, two classifiers were developed and compared. Image testing revealed up to 95.1% candidate labeling accuracy. Shared-memory parallel programming techniques were used to improve the system's processing time by more than 25%.

Highlights:
- An automatic oil spill detection system based on SAR images was developed.
- A database with confirmed oil spills was used to develop the system.
- Image testing revealed up to 95.1% candidate labeling accuracy.
- Two classifiers were compared from the labeling accuracy viewpoint.
- The processing time was optimized via shared-memory parallelization techniques.


International Symposium on Multimedia | 2014

Towards Fast Multimedia Feature Extraction: Hadoop or Storm

David Mera; Michal Batko; Pavel Zezula

The current explosion of data has accelerated the evolution of various content-based indexing techniques that allow efficient search in multimedia data such as images. However, indexable features must first be extracted from the raw images before indexing. This necessary step can be very time consuming for large datasets, so parallelization is desirable to speed the process up. In this paper, we experimentally compare two approaches to distributing the task among multiple machines: the Apache Hadoop and Apache Storm projects.


Future Generation Computer Systems | 2010

Retelab: A geospatial grid web laboratory for the oceanographic research community

Carmen Cotelo; Andrés Gómez; J. Ignacio López; David Mera; José Manuel Cotos; J. Pérez Marrero; Constantino Vázquez

Retelab is a virtual laboratory for the oceanographic research community. It is supported by a Grid infrastructure, and its main objective is to provide an easy and useful tool for oceanographers in which computer skills are not an obstacle. To achieve these goals, Retelab includes improved versions of portal and Grid technologies related to security, data access, and job management. A solution based on a Role Access Management Model has been built for user access and registration, seeking a balance between simplicity and robustness. The sharing and discovery of scientific data is accomplished using a virtual database focused on metadata and designed specifically to store geospatial information. Finally, a comfortable and transparent procedure to submit and monitor jobs has been developed. It is based on the integration and adaptation of the GridWay metascheduler into the multiuser portal environment in such a way that a single UNIX account can use several proxy certificates. The virtual laboratory has been tested through the implementation and deployment of several oceanographic applications.


Computers & Geosciences | 2017

On the use of feature selection to improve the detection of sea oil spills in SAR images

David Mera; Verónica Bolón-Canedo; José Manuel Cotos; Amparo Alonso-Betanzos

Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills but also a high number of look-alikes. Feature extraction is a critical and computationally intensive phase in which each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved very useful in real domains for enhancing the generalization capabilities of classifiers while discarding irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We compared five FS methods: Correlation-based Feature Selection (CFS), the Consistency-based filter, Information Gain, ReliefF, and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied to a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier, and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works, which makes it possible to speed up the feature extraction phase without reducing classifier accuracy. Experiments also confirmed the significance of the geometrical features, since 75.0% of the distinct features selected by the applied FS methods, as well as 66.67% of the proposed 6-input feature vector, belong to this category.

Highlights:
- Five feature selection methods were applied to improve oil spill detection systems.
- Feature selection discarded irrelevant features and improved the classifier accuracy.
- The SVM-RFE feature selection method obtained the best accuracy results.
- A 6-input SVM classifier showed an accuracy of 87.1% and a Kappa statistic of 74.06%.
- 75.0% of the unique selected features belong to the geometrical category.
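Recursive Feature Elimination, the method the paper finds best, can be sketched in a few lines: repeatedly fit a linear model and drop the feature with the smallest absolute weight. The sketch below uses an ordinary least-squares fit as a stand-in for the linear SVM of SVM-RFE (the elimination loop is the same); the function name and data are illustrative, not the paper's.

```python
import numpy as np

def rfe_rank(X, y, n_keep):
    """Recursive Feature Elimination sketch: repeatedly fit a linear
    model on the remaining features and drop the one with the
    smallest absolute weight, until n_keep features survive.
    A least-squares fit stands in for the linear SVM of SVM-RFE.
    """
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        weakest = int(np.argmin(np.abs(w)))
        remaining.pop(weakest)
    return remaining

# Synthetic example: only features 0 and 3 carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=200)
print(sorted(rfe_rank(X, y, n_keep=2)))  # → [0, 3]
```

The retraining after each elimination is what distinguishes RFE from single-shot filter methods such as Information Gain or ReliefF: a feature's weight is re-estimated in the context of the features that remain.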


Multimedia Tools and Applications | 2017

Speeding up the multimedia feature extraction: a comparative study on the big data approach

David Mera; Michal Batko; Pavel Zezula

The current explosion of multimedia data is significantly increasing the amount of potential knowledge. However, getting to the actual information requires applying novel content-based techniques, which in turn require the time-consuming extraction of indexable features from the raw data. To deal with large datasets, this task needs to be parallelized. However, there are multiple approaches to choose from, each with its own benefits and drawbacks, and several parameters must be taken into consideration, for example the amount of available resources, the size of the data, and their availability. In this paper, we empirically evaluate and compare approaches based on Apache Hadoop, Apache Storm, Apache Spark, and Grid computing, employed to distribute the extraction task over an outsourced and distributed infrastructure.
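All four frameworks compared above distribute the same map-style workload: partition the dataset, apply one extractor function to every item, collect the results. The sketch below shows that pattern at the smallest scale with a Python thread pool; `extract_features` is a hypothetical stand-in for a costly descriptor extraction, and nothing here is taken from the paper's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image):
    """Hypothetical stand-in for a costly descriptor extraction
    (e.g. a deep-network or SIFT pass over one raw image).
    Here it returns a trivial summary so the sketch is runnable."""
    return (min(image), max(image), sum(image) / len(image))

images = [[i, i + 1, i + 2] for i in range(100)]  # fake raw images

# The map-style distribution the frameworks implement: the same
# extractor is applied to each item and results are collected
# in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    features = list(pool.map(extract_features, images))
```

In a real deployment the same extractor would be handed to Hadoop mappers, Storm bolts, or Spark map tasks; the trade-offs the paper measures come from how each framework schedules the partitions and moves the data, not from the extraction logic itself.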


Distributed Computing and Artificial Intelligence | 2009

A User Management Web System Based on Portlets for a Grid Environment Integrating Shibboleth, PURSe, PERMIS and Gridsphere

David Mera; José Manuel Cotos; José Ramon Rios Viqueira; José M. Varela

In this project, we propose the development of a distributed collaborative environment that will constitute a virtual laboratory for multidisciplinary research projects related to oceanographic remote sensing. We give an overview and the current state of this project, and in particular we present the security access management module. We propose a well-balanced solution between security and simplicity based on the integration of several technologies, where a user can either be registered through a web portal using a portlet system or access directly via Shibboleth. Grid access and job execution are controlled by a Role-Based Access Control (RBAC) system that makes use of attribute certificates for the users and a Public Key Infrastructure (PKI).


Archive | 2018

Reconstruction of Tomographic Images through Machine Learning Techniques

Xosé Fernández-Fuentes; David Mera; Andrés Gómez

Some problems in the fields of health and industry require obtaining information from the inside of a body without using invasive methods. Some techniques are able to produce qualitative images; however, these images are not enough for problems that require accurate quantitative knowledge. Normally, tomographic processes are used to explore the inside of a body. In this particular case, we use the method called Electrical Impedance Tomography (EIT). Its basic operation is as follows: (1) the electrical potential difference is measured at electrodes placed around the body; this part is known as the forward model. (2) Information about the inside of the body is recovered from the measured voltages; this is known as the inverse problem. There are several approximations for solving this inverse problem; however, these solutions are focused on obtaining qualitative images. In this paper, we show the main challenges of obtaining quantitative knowledge when Machine Learning techniques are used to solve this inverse problem.
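The chapter frames the inverse problem as learning a map from boundary voltages back to internal conductivity. A minimal sketch of that idea, on synthetic data and with a random linear forward model standing in for the real EIT physics (which requires solving a PDE), might look like this; every quantity, dimension, and the ridge-regression choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear forward model: boundary voltages as a fixed
# linear function of the internal conductivity field.
n_pixels, n_electrodes = 16, 8
A = rng.normal(size=(n_electrodes, n_pixels))

def forward(conductivity):
    return A @ conductivity  # simulated electrode measurements

# Training data: random conductivity fields and their measurements.
sigma_train = rng.normal(size=(500, n_pixels))
v_train = sigma_train @ A.T

# "Learn" the inverse map with ridge regression (closed form):
# W minimizes ||v_train @ W - sigma_train||^2 + lam * ||W||^2.
lam = 1e-3
W = np.linalg.solve(v_train.T @ v_train + lam * np.eye(n_electrodes),
                    v_train.T @ sigma_train)

# Reconstruct an unseen conductivity field from its voltages.
sigma_true = rng.normal(size=n_pixels)
sigma_hat = forward(sigma_true) @ W
```

Because there are fewer electrodes than pixels, the reconstruction cannot be exact; it reproduces the measured voltages while regularization picks one of the many consistent fields, which is precisely the ill-posedness that makes quantitative EIT hard.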


Multimedia Tools and Applications | 2018

GeoHbbTV: A framework for the development and evaluation of geographic interactive TV contents

David Luaces; José Ramon Rios Viqueira; Pablo Gamallo; David Mera; Julián Flores

Synchronizing TV contents with applications is a topic that has gained much interest in recent years. Reaching viewers through various channels (TV, web, mobile devices, etc.) has been shown to be a means of increasing the audience. Related to the above, the hybrid TV standard HbbTV (Hybrid Broadcast Broadband TV) synchronizes the broadcast of video and audio with applications that may be delivered through either the broadcast channel or a broadband network. Thus, HbbTV applications may be developed to provide contextual information for emitted TV shows and advertisements. This paper reports on the integration of the automatic generation of the geographic focus of text content with interactive TV. In particular, it describes a framework for the incorporation of geographic context into TV shows and its visualization through HbbTV. To achieve this, geographic named entities are first extracted from the available subtitles, and then the spatial extension of those entities is used for the production of context maps. An evaluation strategy has been devised and used to test alternative prototype implementations for TV newscasts in Spanish. Finally, to go beyond the initial solution proposed, some challenges for future research are also discussed.


Neural Computing and Applications | 2017

Polynomial Kernel Discriminant Analysis for 2D visualization of classification problems

Sadi Alawadi; M. Fernández-Delgado; David Mera; Senén Barro

In multivariate classification problems, 2D visualization methods can be very useful for understanding data properties, provided they transform the n-dimensional data into a set of 2D patterns that are similar to the original data from the classification point of view. This similarity can be understood as meaning that a classification method works similarly on the original n-dimensional and on the 2D mapped patterns, i.e., the classifier performance should not be much lower on the mapped than on the original patterns. We propose several simple and efficient mapping methods that allow classification problems to be visualized in 2D. In order to preserve the structure of the original classification problem, the mappings minimize different class overlap measures, combined with different functions (linear, quadratic and polynomial of several degrees) from
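The simplest instance of such a class-overlap-minimizing 2D mapping is Fisher's linear discriminant, which maximizes between-class scatter relative to within-class scatter. The sketch below implements that classical baseline, not the paper's polynomial-kernel method; data and names are illustrative.

```python
import numpy as np

def lda_2d(X, y):
    """Project n-dimensional patterns to 2D with Fisher's linear
    discriminant: the two leading eigenvectors of Sw^-1 Sb, where
    Sw is the within-class and Sb the between-class scatter matrix.
    A classical baseline for the overlap-minimizing mappings above.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1]
    return X @ vecs[:, order[:2]].real

# Three Gaussian classes in 5D, mapped to 2D.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=m, size=(50, 5))
               for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)
Z = lda_2d(X, y)
```

A classifier trained on `Z` should perform close to one trained on `X` when the classes are linearly separable; the paper's quadratic and polynomial mappings address the cases where a linear projection loses too much of that structure.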

Collaboration


Dive into David Mera's collaboration.

Top Co-Authors

José Manuel Cotos, University of Santiago de Compostela
Carmen Cotelo, Centro de Supercomputación de Galicia
José Ramon Rios Viqueira, University of Santiago de Compostela
M. Fernández-Delgado, University of Santiago de Compostela
José M. Varela, University of Santiago de Compostela
José Varela-Pet, University of Santiago de Compostela
Julián Flores, University of Santiago de Compostela
Sadi Alawadi, University of Santiago de Compostela
Senén Barro, University of Santiago de Compostela