Laura Fernández-Robles
University of León
Publications
Featured research published by Laura Fernández-Robles.
International Conference on Pattern Recognition | 2014
Oscar García-Olalla; Enrique Alegre; Laura Fernández-Robles; Víctor González-Castro
Local Oriented Statistical Information Booster (LOSIB) is a descriptor enhancer based on the extraction of gray level differences along several orientations; specifically, the mean of the differences along each orientation is considered. In this paper we carried out experiments with several classical texture descriptors to show that classification results are better when they are combined with LOSIB than without it. Both a parametric and a non-parametric classifier, Support Vector Machine (SVM) and k-Nearest Neighbours respectively, were applied to assess the new method. Furthermore, two different texture datasets, KTH-TIPS-2a and Brodatz32, were evaluated to prove the robustness of LOSIB. Global descriptors such as WCF4 (Wavelet Co-occurrence Features), which extracts Haralick features from the wavelet transform, were combined with LOSIB, obtaining improvements of 16.94% on KTH-TIPS-2a and 7.55% on Brodatz32 when classifying with SVM. Moreover, LOSIB was used together with state-of-the-art local descriptors such as LBP (Local Binary Pattern) and several of its recent variants; combined with CLBP (Complete LBP), results improved by 5.80% on KTH-TIPS-2a and 7.09% on Brodatz32. For all the tested descriptors, higher performance was achieved with both classifiers on both datasets when using some LOSIB setting.
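The abstract describes LOSIB as the mean of gray level differences along several orientations. A minimal sketch of that reading, assuming unit-radius displacements and wrap-around borders (not the authors' reference implementation):

```python
import numpy as np

def losib(image, orientations=8, radius=1):
    """Mean gray level difference along each of several orientations."""
    image = image.astype(np.float64)
    features = []
    for k in range(orientations):
        angle = 2.0 * np.pi * k / orientations
        dy = int(round(radius * np.sin(angle)))
        dx = int(round(radius * np.cos(angle)))
        # np.roll wraps around at the borders; kept for brevity
        shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        features.append(np.mean(np.abs(image - shifted)))
    return np.array(features)

# Usage: concatenate losib(img) with any base descriptor (LBP, WCF4, ...)
# before feeding the combined vector to an SVM or k-NN classifier.
```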
EURASIP Journal on Image and Video Processing | 2013
Oscar García-Olalla; Enrique Alegre; Laura Fernández-Robles; María Teresa García-Ordás; Diego García-Ordás
A new method to describe texture images using a hybrid combination of local and global texture descriptors is proposed in this paper. In this regard, a new adaptive local binary pattern (ALBP) descriptor is presented to carry out the local description. It is built by adding oriented standard deviation information to the ALBP descriptor in order to achieve a more complete representation of the images, and hence it is called adaptive local binary pattern with oriented standard deviation (ALBPS). Regarding semen vitality assessment, ALBPS outperformed previous works in the literature with an accuracy of 81.88% and also yielded higher hit rates than the LBP and ALBP baseline methods. Concerning the global description of the images, several classical texture algorithms were tested, and a descriptor based on the wavelet transform and Haralick feature extraction (wavelet co-occurrence feature 13, WCF13) obtained the best results. The local and global descriptors were combined, and classification was carried out with a support vector machine. Two data sets were evaluated: the KTH-TIPS-2a data set (textures under varying illumination, pose and scale) and a boar spermatozoa data set used to distinguish between dead and alive sperm heads. Our proposal is therefore novel in three ways. First, a new local feature extraction method, ALBPS, is introduced. Second, a hybrid method combining the proposed local ALBPS and a global descriptor is presented, outperforming our first approach and all other methods evaluated for this problem. Third, texture classification accuracy is greatly improved with the two texture descriptors presented. F-score and accuracy values were computed to measure performance. The best overall result was obtained by combining ALBPS with WCF13, reaching an F-score of 0.886 and an accuracy of 85.63% on the spermatozoa data set, and a hit rate of 84.45% on KTH-TIPS-2a.
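As a rough illustration of the ALBPS idea, the hedged sketch below concatenates a standard LBP histogram (scikit-image's implementation, not the adaptive variant from the paper) with the standard deviation of gray level differences along each orientation; all parameters are illustrative:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def albps_like(image, points=8, radius=1):
    image = image.astype(np.float64)
    # local part: uniform LBP histogram
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    # oriented standard deviation terms, one per orientation
    stds = []
    for k in range(points):
        angle = 2.0 * np.pi * k / points
        dy = int(round(radius * np.sin(angle)))
        dx = int(round(radius * np.cos(angle)))
        shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        stds.append(np.std(image - shifted))
    return np.concatenate([hist, stds])
```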
Computer Methods and Programs in Biomedicine | 2015
Oscar García-Olalla; Enrique Alegre; Laura Fernández-Robles; Patrik Malm; Ewert Bengtsson
The assessment of the state of the acrosome is a priority in artificial insemination centres, since acrosome damage is one of the main causes of function loss. In this work, boar spermatozoa present in gray scale images acquired with a phase-contrast microscope were classified as acrosome-intact or acrosome-damaged, after using fluorescent images to create the ground truth. Based on shape prior criteria combined with Otsu's thresholding, regional minima and the watershed transform, the spermatozoa heads were segmented and registered. One of the main novelties of this proposal is that, unlike what previous works stated, the obtained results show that the contour information of the spermatozoon head is important for improving description and classification. Another novelty of this work is that it confirms that combining different texture descriptors and contour descriptors yields the best classification rates for this problem to date. The classification was performed with a Support Vector Machine trained with a least squares algorithm and a linear kernel. Using the largest acrosome-intact/acrosome-damaged dataset created to date, the early fusion approach followed provides an F-score of 0.9913, outperforming all previous related works.
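A minimal sketch of the segmentation steps named above (Otsu's thresholding, regional minima and the watershed transform) using scikit-image; the shape prior criteria and the registration step of the paper are omitted:

```python
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import local_minima
from skimage.segmentation import watershed

def segment_heads(gray):
    mask = gray > threshold_otsu(gray)           # foreground by Otsu's method
    distance = ndi.distance_transform_edt(mask)  # basins from the distance map
    markers = label(local_minima(-distance))     # regional minima as seeds
    return watershed(-distance, markers, mask=mask)
```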
International Conference on Image Analysis and Recognition | 2012
Víctor González-Castro; Enrique Alegre; Oscar García-Olalla; Diego García-Ordás; María Teresa García-Ordás; Laura Fernández-Robles
The assessment of boar sperm head images according to their acrosome status is a very important task in the veterinary field. Unfortunately, it can only be performed manually, which is slow, subjective and expensive, so it is important to provide companies with an automatic and reliable method to perform this task. In this paper, a new method using texture descriptors based on the Curvelet Transform is proposed. Its performance has been compared with other texture descriptors based on the wavelet transform, and also with moment-based descriptors, as they have proved successful for this problem. Texture descriptors performed better, and the curvelet-based ones achieved the best hit rate (97%) and area under the ROC curve (0.99).
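Readily available curvelet code is scarce in Python, so this hedged sketch shows only the wavelet-based baseline the paper compares against: sub-band energies from a 2-D wavelet decomposition (PyWavelets), usable as a texture descriptor for any classifier:

```python
import numpy as np
import pywt

def wavelet_energies(image, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    feats = [np.mean(coeffs[0] ** 2)]   # approximation sub-band energy
    for detail in coeffs[1:]:           # (horizontal, vertical, diagonal) per level
        feats.extend(np.mean(band ** 2) for band in detail)
    return np.array(feats)
```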
International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems | 2010
Víctor González-Castro; Rocío Alaiz-Rodríguez; Laura Fernández-Robles; Roberto Guzmán-Martínez; Enrique Alegre
Advances in image analysis make automatic semen analysis possible in veterinary practice. The proportion of sperm cells with damaged/intact acrosome, a major aspect of this assessment, depends strongly on several factors, including animal diversity and manipulation/conservation conditions. For this reason, the class proportions have to be quantified for every future (test) semen sample. In this work, we evaluate quantification approaches based on the confusion matrix, on posterior probability estimates, and a novel proposal based on the Hellinger distance. Our information-theoretic approach to estimating the class proportions measures the similarity between several artificially generated calibration distributions and the test one at different stages: the data distributions and the classifier output distributions. Experimental results show that quantification can be conducted with a Mean Absolute Error below 0.02, which seems promising in this field.
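A hedged sketch of the Hellinger distance idea at the classifier-output stage: mixtures of the training class score distributions at candidate prevalences are compared against the test score distribution, and the closest mixture gives the estimated class proportion (the bin count and candidate grid are illustrative):

```python
import numpy as np

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def quantify(scores_pos, scores_neg, scores_test, bins=10):
    edges = np.linspace(0.0, 1.0, bins + 1)
    hist = lambda s: np.histogram(s, bins=edges)[0] / len(s)
    p_pos, p_neg, p_test = hist(scores_pos), hist(scores_neg), hist(scores_test)
    candidates = np.linspace(0.0, 1.0, 101)   # candidate positive prevalences
    dists = [hellinger(a * p_pos + (1 - a) * p_neg, p_test) for a in candidates]
    return candidates[int(np.argmin(dists))]
```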
Computer Analysis of Images and Patterns | 2015
Laura Fernández-Robles; George Azzopardi; Enrique Alegre; Nicolai Petkov
Wear evaluation of cutting tools is a key issue for prolonging their lifetime and ensuring high product quality. In this paper, we present a method for the effective localisation of the cutting edges of inserts in digital images of an edge profile milling head. We introduce a new image data set of 144 images of an edge milling head that contains 30 inserts. We use a circular Hough transform to detect the screws that fasten the inserts. In a cropped area around a detected screw, we use Canny's edge detection algorithm and the Standard Hough Transform to localise line segments that characterise insert edges. We use this information and the geometry of the insert to identify which of these line segments is the cutting edge. The output of our algorithm is a set of quadrilateral regions around the identified cutting edges. These regions can then be used as input to other algorithms for the quality assessment of the cutting edges. Our results show that the proposed method is very effective for the localisation of the cutting edges of inserts in an edge profile milling machine.
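A minimal OpenCV sketch of the localisation steps described above: a circular Hough transform to find the screws, then Canny edges and the standard Hough transform on a crop around each screw; the filename, crop size and all thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

gray = cv2.imread("milling_head.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 50,
                           param1=100, param2=30, minRadius=10, maxRadius=40)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        top, left = max(y - 3 * r, 0), max(x - 3 * r, 0)
        crop = gray[top:y + 3 * r, left:x + 3 * r]  # area around a detected screw
        edges = cv2.Canny(crop, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)
        # geometric rules on `lines` and the known insert shape would then
        # single out the cutting edge among the candidate line segments
```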
Neurocomputing | 2016
Eduardo Fidalgo; Enrique Alegre; Víctor González-Castro; Laura Fernández-Robles
The combination of SIFT descriptors with other features usually improves image classification. One example is Edge-SIFT, which extracts keypoints from an edge image obtained after applying the compass operator to a colour image. We evaluate, for the first time, how the choice of radius in the compass operator affects classification performance. We demonstrate that the value proposed in the literature, radius = 4.00, is not the optimum from an image classification point of view. We also show that, under ideal conditions, choosing an appropriate radius for each image yields accuracy values even higher than 95%. Finally, we propose a new method to estimate the best radius for the compass operator on each dataset. Using a training subset selected on the basis of a minimum dispersion criterion of edge density, we construct a richer dictionary for each dataset in our Bag of Words pipeline. From that dictionary, a radius is selected for the whole dataset that yields higher accuracy than the value proposed in the literature. Using this method, we obtained accuracy improvements of up to 24.4% on Soccer, 6.77% on COIL-RWTH-2, 4.46% on Birds, 3.82% on ImageNet_Dogs, 2.75% on ImageNet_Birds, 2.02% on Flowers and 1.75% on Caltech101. Highlights: the compass radius in Edge-SIFT affects classification; the classification performance of different radii was evaluated on eight datasets; selecting a radius for each image results in better classification; a method to automatically estimate a better radius for each dataset is proposed; the estimated radius guarantees better results than the state of the art.
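A hedged sketch of the per-dataset radius estimation loop; `compass_edges` and `train_and_score` are hypothetical placeholders standing in for the compass operator and the Bag-of-Words-plus-classifier accuracy, neither of which is a standard library call:

```python
import numpy as np

def select_radius(images, labels, radii, compass_edges, train_and_score):
    best_radius, best_acc = None, -np.inf
    for radius in radii:
        edge_maps = [compass_edges(img, radius) for img in images]
        acc = train_and_score(edge_maps, labels)  # e.g. BoW dictionary + SVM
        if acc > best_acc:
            best_radius, best_acc = radius, acc
    return best_radius

# e.g. radii = np.arange(1.0, 8.5, 0.5) would bracket the literature value of 4.00.
```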
International Conference on Pattern Recognition | 2016
George Azzopardi; Laura Fernández-Robles; Enrique Alegre; Nicolai Petkov
The recently proposed trainable COSFIRE filters are highly effective in a wide range of computer vision applications, including object recognition, image classification, contour detection and retinal vessel segmentation. A COSFIRE filter is selective for a collection of contour parts in a certain spatial arrangement. These contour parts and their spatial arrangement are determined in an automatic configuration procedure from a single user-specified pattern of interest. The traditional configuration, however, does not guarantee the selection of the most distinctive contour parts. We propose a genetic algorithm-based optimization step in the configuration of COSFIRE filters that determines the minimum subset of contour parts that best characterize the pattern of interest. We use a public dataset of images of an edge milling head machine equipped with multiple cutting tools to demonstrate the effectiveness of the proposed optimization step for the detection and localization of such tools. The optimization process that we propose yields COSFIRE filters with substantially higher generalization capability. With an average of only six COSFIRE filters we achieve high precision P and recall R rates (P = 91.99%; R = 96.22%). This outperforms the original COSFIRE filter approach (without optimization) mostly in terms of recall. The proposed optimization procedure increases the efficiency of COSFIRE filters with little effect on the selectivity.
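A generic, hedged sketch of the genetic-algorithm step: individuals are binary masks over the configured contour parts, and the caller supplies a fitness function (e.g. validation performance of the resulting COSFIRE filter, penalised by subset size). This is an assumption-laden illustration, not the authors' code:

```python
import numpy as np

def ga_subset(n_parts, fitness, pop=20, gens=50, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.integers(0, 2, (pop, n_parts))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep best half
        cuts = rng.integers(1, n_parts, pop // 2)
        children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cuts)])    # one-point crossover
        children ^= (rng.random(children.shape) < p_mut)      # bit-flip mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in population])
    return population[int(np.argmax(scores))]  # best mask of contour parts
```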
SOCO-CISIS-ICEUTE | 2017
Manuel Castejón-Limas; Hector Alaiz-Moreton; Laura Fernández-Robles; Javier Alfonso-Cendón; Camino Fernández-Llamas; Lidia Sánchez-González; Hilde Pérez
This paper explores the benefit of using the PAELLA algorithm in an innovative way. The PAELLA algorithm was originally developed in the context of outlier detection and data cleaning; as a consequence, it is usually seen as a discriminant tool that categorizes observations into two groups: core observations and outliers. A new look at the information contained in its output provides ample opportunity in the context of data-driven predictive models. The reported experiments explore the information contained in the occurrence vector, seeking how best to take advantage of it. The results obtained in each successive experiment guide the researcher to a sensible use case in which this information proves extremely useful: probabilistic sampling regression.
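A hedged sketch of the use case named above: treating the occurrence vector produced by PAELLA (assumed here to be a non-negative per-observation count) as sampling weights for repeated weighted resampling, i.e. probabilistic sampling regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def probabilistic_sampling_fit(X, y, occurrences, n_models=25, seed=0):
    rng = np.random.default_rng(seed)
    weights = occurrences / occurrences.sum()  # occurrence vector as probabilities
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=len(X), p=weights)  # weighted bootstrap
        models.append(LinearRegression().fit(X[idx], y[idx]))
    return models  # average their predictions for a smoothed estimate
```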
SOCO-CISIS-ICEUTE | 2017
Surajit Saikia; Eduardo Fidalgo; Enrique Alegre; Laura Fernández-Robles
Retrieving objects similar to a given query object from images is a critical application in the computer vision domain. Existing methods fail to return similar objects when the region of interest is not specified correctly in the query image. Furthermore, when the feature vector is large, retrieval from big collections is usually computationally expensive. In this paper, we propose an object retrieval method based on the neural codes (activations) generated by the last inner-product layer of the Faster R-CNN network, demonstrating that the network can be used not only for object detection but also for retrieval. To evaluate the method, we used a subset of ImageNet comprising images of indoor scenes. To speed up retrieval, we first process all the images in the dataset and save the information (neural codes, objects present in the image, confidence scores and bounding box coordinates) corresponding to each detected object. Then, given a query image, the system detects the objects present and retrieves their neural codes, which are used to compute the cosine similarity against the saved neural codes. We retrieved objects with high cosine similarity scores and compared them with the results obtained using confidence scores. Our approach takes only 0.534 s to retrieve all 1454 objects in our test set.
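A minimal sketch of the retrieval step only: cosine similarity between a query object's neural code and the pre-computed codes saved for the dataset (the codes themselves would come from the network's last inner-product layer, which is not reproduced here):

```python
import numpy as np

def retrieve(query_code, saved_codes, top_k=10):
    q = query_code / np.linalg.norm(query_code)
    db = saved_codes / np.linalg.norm(saved_codes, axis=1, keepdims=True)
    similarity = db @ q                          # cosine similarity per saved object
    return np.argsort(similarity)[::-1][:top_k]  # indices of the best matches
```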