
Publication


Featured research published by Marcos Martín-Fernández.


Medical Image Analysis | 2005

An approach for contour detection of human kidneys from ultrasound images using Markov random fields and active contours.

Marcos Martín-Fernández; Carlos Alberola-López

In this paper, a novel method for the boundary detection of human kidneys from three-dimensional (3D) ultrasound (US) data is proposed. The inherent difficulty of interpreting such images, even for a trained expert, makes the problem unsuitable for classical methods. The proposed method finds the kidney contour in each slice using a probabilistic Bayesian formulation. The prior defines a Markov field of deformations and imposes the restriction of contour smoothness. The likelihood function, which is also Markov, imposes a probabilistic behavior on the data conditioned on the contour position; it combines an empirical model of the distribution of the echographic data with a function of the data gradient. The model finally includes, as a volumetric extension of the prior, a term that enforces smoothness along the depth coordinate. Experiments carried out on echographies from real patients validate the proposed model, and a sensitivity analysis of the model parameters has also been carried out.
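
The interplay of a smoothness prior and a data-driven likelihood can be sketched as a contour energy to be minimized. Below is a minimal, hypothetical illustration in Python; the function name, the radial contour parameterization, and the simple gradient-based data term are all assumptions, while the paper's actual likelihood uses an empirical model of the echo data:

```python
import numpy as np

def contour_energy(radii, grad_along_contour, alpha=1.0, beta=1.0):
    """Toy posterior energy for a closed contour given as radial offsets.

    Prior: Markov smoothness term penalizing differences between
    neighboring radii. Likelihood (hypothetical stand-in): rewards high
    image-gradient magnitude at the contour points."""
    # closed contour: the last radius neighbors the first
    smoothness = np.sum(np.diff(radii, append=radii[0]) ** 2)
    data_term = -np.sum(grad_along_contour)
    return alpha * smoothness + beta * data_term

radii = np.array([10.0, 10.2, 9.9, 10.1])   # smooth candidate contour
grads = np.array([5.0, 4.5, 5.2, 4.8])      # gradient magnitude at each point
e = contour_energy(radii, grads)
```

Lower energy means a smoother contour sitting on stronger edges; a MAP search (e.g., over per-slice deformations) would minimize this quantity.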


Image and Vision Computing | 2009

Automatic noise estimation in images using local statistics. Additive and multiplicative cases

Santiago Aja-Fernández; Gonzalo Vegas-Sánchez-Ferrero; Marcos Martín-Fernández; Carlos Alberola-López

In this paper, we focus on the problem of automatic noise parameter estimation for additive and multiplicative models and propose a simple and novel method to this end. Specifically, we show that if the image to work with has a sufficiently large number of low-variability areas (which turns out to be a typical feature of most images), the variance of the noise (if additive) can be estimated as the mode of the distribution of local variances in the image, and the coefficient of variation of the noise (if multiplicative) can be estimated as the mode of the distribution of local estimates of the coefficient of variation. Additionally, a model for the sample-variance distribution of an image plus noise is proposed and studied. Experiments demonstrate the effectiveness of the proposed method, especially in recursive or iterative filtering schemes.
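
The additive case — take the mode of the histogram of local variances as the noise-variance estimate — can be sketched in a few lines of NumPy. The window size, binning, and synthetic test image below are arbitrary choices, not the paper's:

```python
import numpy as np

def estimate_noise_variance(img, win=7, bins=40):
    """Estimate the additive-noise variance as the mode of the
    distribution of local sample variances over non-overlapping windows."""
    h, w = img.shape
    local_vars = [img[i:i + win, j:j + win].var(ddof=1)
                  for i in range(0, h - win + 1, win)
                  for j in range(0, w - win + 1, win)]
    hist, edges = np.histogram(local_vars, bins=bins)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])   # center of the modal bin

rng = np.random.default_rng(0)
clean = np.zeros((280, 280))                       # all low-variability areas
noisy = clean + rng.normal(0.0, 2.0, clean.shape)  # true noise variance = 4
est = estimate_noise_variance(noisy)
```

In low-variability windows the local variance reflects only the noise, so with enough such windows the histogram peaks near the true noise variance.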


Ultrasound in Medicine and Biology | 2003

A theoretical framework to three-dimensional ultrasound reconstruction from irregularly sampled data

Raúl San José-Estépar; Marcos Martín-Fernández; P. Pablo Caballero-Martínez; Carlos Alberola-López; Juan Ruiz-Alzola

Several techniques have been described in the literature in recent years for the reconstruction of a regular volume out of a series of ultrasound (US) slices with arbitrary orientations, typically scanned by means of US freehand systems. However, a systematic approach to such a problem is still missing. This paper focuses on proposing a theoretical framework for the 3-D US volume reconstruction problem. We introduce a statistical method for the construction and trimming of the sampling grid where the reconstruction will be carried out. The results using in vivo US data demonstrate that the computed reconstruction grid that encloses the region-of-interest (ROI) is smaller than those obtained from other reconstruction methods in those cases where the scanning trajectory deviates from a pure straight line. In addition, an adaptive Gaussian interpolation technique is studied and compared with well-known interpolation methods that have been applied to the reconstruction problem in the past. We find that the proposed method numerically outperforms former proposals in several control studies; subjective visual results also support this conclusion and highlight some potential deficiencies of methods previously proposed.
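
Kernel-based reconstruction of a regular grid from scattered slice samples can be illustrated with a fixed-width Gaussian kernel; the adaptive variant studied in the paper adjusts the kernel to the local sampling density, which this sketch deliberately omits:

```python
import numpy as np

def gaussian_grid_interp(points, values, grid_x, grid_y, sigma=1.0):
    """Normalized Gaussian-kernel interpolation of irregular 2-D samples
    onto a regular grid (a simplified, isotropic stand-in for the
    adaptive scheme; 3-D US reconstruction adds a depth axis)."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    num = np.zeros_like(gx, dtype=float)
    wsum = np.zeros_like(gx, dtype=float)
    for (px, py), v in zip(points, values):
        w = np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * sigma ** 2))
        num += w * v
        wsum += w
    return num / np.maximum(wsum, 1e-12)   # avoid division by zero far from samples

pts = [(0.0, 0.0), (1.0, 1.0)]
vals = [0.0, 2.0]
grid = gaussian_grid_interp(pts, vals, np.linspace(0, 1, 3), np.linspace(0, 1, 3))
```

The grid midpoint is equidistant from both samples, so it receives their average; grid corners are pulled towards the nearer sample.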


IEEE Transactions on Medical Imaging | 2004

Comments on: A methodology for evaluation of boundary detection algorithms on medical images

Carlos Alberola-López; Marcos Martín-Fernández; Juan Ruiz-Alzola

In this paper we analyze a previously published result about a comparison between two statistical tests used for the evaluation of boundary detection algorithms on medical images. We conclude that the statement made by Chalana and Kim (1997) about the performance of the percentage test has a weak theoretical foundation and, according to our results, is not correct. In addition, we propose a one-sided hypothesis test for which the acceptance region can be determined in advance, as opposed to the two-sided confidence intervals proposed in the original paper, which change according to the estimated quantity.
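
A one-sided test with an acceptance region fixed in advance can be illustrated with a binomial model of the percentage of acceptable boundary points. The threshold construction and the example values of n and p0 below are illustrative, not the paper's exact formulation:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def acceptance_threshold(n, p0, alpha=0.05):
    """One-sided test of H0: p >= p0. Reject H0 when the observed success
    count x satisfies x <= c, where c is the largest integer with
    P(X <= c | p0) <= alpha. The region depends only on n, p0 and alpha,
    so it is fixed before seeing any data."""
    c = -1
    while binom_cdf(c + 1, n, p0) <= alpha:
        c += 1
    return c

# e.g., n boundary points, hypothesized acceptability rate p0
c = acceptance_threshold(100, 0.95, 0.05)  # reject H0 if the count is <= c
```

Because the critical count is precomputed, the decision rule does not shift with the estimated quantity, unlike a data-dependent two-sided confidence interval.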


Medical Image Analysis | 2011

Unsupervised 4D myocardium segmentation with a Markov Random Field based deformable model

Lucilio Cordero-Grande; Gonzalo Vegas-Sánchez-Ferrero; Pablo Casaseca-de-la-Higuera; J. Alberto San-Román-Calvar; Ana Revilla-Orodea; Marcos Martín-Fernández; Carlos Alberola-López

A stochastic deformable model is proposed for the segmentation of the myocardium in Magnetic Resonance Imaging. The segmentation is posed as a probabilistic optimization problem in which the optimal time-dependent surface of the myocardium is obtained in a discrete space of locations built upon simple geometric assumptions. For this purpose, first, the left ventricle is detected by a set of image analysis tools gathered from the literature. Then, the segmentation is obtained by Maximization of the Posterior Marginals for the myocardium location in a Markov Random Field framework, which optimally integrates spatio-temporal smoothness with intensity- and gradient-related features in an unsupervised way through Maximum Likelihood estimation of the field parameters. This scheme provides a flexible and robust segmentation method which has generated results comparable to manually segmented images for several derived cardiac-function parameters in a set of 43 patients affected to different degrees by acute myocardial infarction.
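
Maximization of the Posterior Marginals picks, at each site, the label with the highest posterior marginal rather than the jointly most probable configuration (MAP). On a toy chain MRF the criterion can be computed exactly by enumeration; everything below (binary labels, Gaussian data term, Potts-style smoothness) is a scalar caricature of the paper's 4-D model:

```python
import numpy as np
from itertools import product

def mpm_chain(obs, beta=1.0, sigma=1.0, levels=(0.0, 1.0)):
    """Maximizer of the Posterior Marginals on a tiny binary chain MRF,
    by brute-force enumeration (feasible only at toy scale).
    Posterior ~ exp(-sum (y_i - x_i)^2 / (2 sigma^2)
                    + beta * sum 1[x_i == x_{i+1}])."""
    n = len(obs)
    marg = np.zeros((n, len(levels)))           # unnormalized marginals
    for cfg in product(range(len(levels)), repeat=n):
        x = np.array([levels[c] for c in cfg])
        logp = -np.sum((obs - x) ** 2) / (2 * sigma**2)
        logp += beta * np.sum(np.array(cfg[:-1]) == np.array(cfg[1:]))
        p = np.exp(logp)
        for i, c in enumerate(cfg):
            marg[i, c] += p                     # accumulate per-site marginals
    return np.array([levels[i] for i in marg.argmax(axis=1)])

obs = np.array([0.1, 0.0, 0.9, 1.0, 0.2])
seg = mpm_chain(obs)
```

At realistic sizes the marginals are not enumerable, which is why the paper resorts to approximate inference in the Markov Random Field framework.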


IEEE Transactions on Dependable and Secure Computing | 2011

Anomaly Detection in Network Traffic Based on Statistical Inference and α-Stable Modeling

Federico Simmross-Wattenberg; Juan I. Asensio-Pérez; Pablo Casaseca-de-la-Higuera; Marcos Martín-Fernández; Ioannis A. Dimitriadis; Carlos Alberola-López

This paper proposes a novel method to detect anomalies in network traffic, based on a nonrestricted α-stable first-order model and statistical hypothesis testing. To this end, we give statistical evidence that the marginal distribution of real traffic is adequately modeled with α-stable functions and classify traffic patterns by means of a Generalized Likelihood Ratio Test (GLRT). The method automatically chooses the traffic windows used as a reference, against which the traffic window under test is compared, with no expert intervention needed. We focus on detecting two anomaly types, namely floods and flash crowds, which have been frequently studied in the literature. The performance of our detection method has been measured through Receiver Operating Characteristic (ROC) curves, and the results indicate that our method outperforms a closely related state-of-the-art contribution. All experiments use traffic data collected from two routers at our university (a 25,000-student institution), which provide two different levels of traffic aggregation for our tests (traffic at a particular school and at the whole university). In addition, the traffic model is tested on publicly available traffic traces. Due to the complexity of α-stable distributions, care has been taken in designing appropriate numerical algorithms to deal with the model.
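
The GLRT structure — compare the likelihood of a window under freely fitted parameters against its likelihood under the reference model — can be sketched as follows. Gaussian marginals are used here purely as a tractable stand-in; the paper's point is precisely that α-stable marginals fit real traffic better:

```python
import numpy as np

def glrt_score(window, ref_mu, ref_sigma):
    """Generalized likelihood ratio score of a traffic window against a
    reference model (Gaussian stand-in for the alpha-stable family).
    Large scores flag the window as anomalous."""
    mu, sigma = window.mean(), window.std(ddof=1)   # fitted parameters

    def loglik(x, m, s):
        return np.sum(-0.5 * np.log(2 * np.pi * s**2)
                      - (x - m) ** 2 / (2 * s**2))

    return loglik(window, mu, sigma) - loglik(window, ref_mu, ref_sigma)

rng = np.random.default_rng(1)
normal_win = rng.normal(100, 10, 500)   # traffic resembling the reference
flood_win = rng.normal(300, 10, 500)    # flood-like level shift
s_normal = glrt_score(normal_win, 100, 10)
s_flood = glrt_score(flood_win, 100, 10)
```

A window drawn from the reference model yields a small score, while a flood-like shift yields a very large one; thresholding the score gives the detector whose ROC curves the paper reports.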


international conference on image processing | 2003

A fully automatic algorithm for contour detection of bones in hand radiographs using active contours

R. de Luis-Garcia; Marcos Martín-Fernández; Juan Ignacio Arribas; Carlos Alberola-López

This paper presents an algorithm for automatically detecting bone contours from hand radiographs using active contours. Prior knowledge is first used to locate initial contours for the snakes inside each bone of interest. Next, an adaptive snake algorithm is applied so that the parameters are properly adjusted for each bone specifically. We introduce a novel truncation technique to prevent the external forces of the snake from pulling the contour outside the bone boundaries, yielding excellent results.
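
The truncation idea — cap the magnitude of the external force so that a distant strong edge cannot drag the snake across the bone boundary — reduces, in its simplest form, to a vector clamp. The function below is a hypothetical reconstruction of that idea, not the paper's exact scheme:

```python
import numpy as np

def truncate_forces(fx, fy, fmax):
    """Clamp external snake forces to a maximum magnitude fmax,
    preserving their direction."""
    mag = np.hypot(fx, fy)
    scale = np.where(mag > fmax, fmax / np.maximum(mag, 1e-12), 1.0)
    return fx * scale, fy * scale

fx, fy = np.array([3.0, 0.1]), np.array([4.0, 0.0])
tx, ty = truncate_forces(fx, fy, fmax=1.0)   # (3,4) is clamped, (0.1,0) is kept
```

The clamped force keeps its direction but has unit magnitude, so one overly strong image edge can no longer dominate the snake's evolution.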


Medical Image Analysis | 2009

Sequential Anisotropic Multichannel Wiener Filtering with Rician Bias Correction Applied to 3D Regularization of DWI Data

Marcos Martín-Fernández; Emma Muñoz-Moreno; Leila Cammoun; Jean-Philippe Thiran; Carl-Fredrik Westin; Carlos Alberola-López

It has been shown that the tensor calculation is very sensitive to the presence of noise in the acquired images, yielding very low-quality Diffusion Tensor Imaging (DTI) data. Recent investigations have shown that the noise present in the Diffusion Weighted Images (DWI) causes bias effects in the DTI data which cannot be corrected if the noise characteristics are not taken into account. One possible solution is to increase the minimum number of acquired measurements (which is 7) to several tens (or even several hundreds). This has the disadvantage of increasing the acquisition time by one (or two) orders of magnitude, making the process inconvenient in a clinical setting. We propose here a workaround for which the number of acquisitions is maintained but the DWI data are filtered prior to determining the DTI. We show a significant reduction in the DTI bias by means of a simple and fast procedure based on linear filtering; well-known drawbacks of such filters are circumvented by means of anisotropic neighborhoods and sequential application of the filter itself. Information from the first-order probability density function of the raw data, namely the Rice distribution, is also included. Results are shown for both synthetic and real datasets. Error measurements are determined in the synthetic experiments, showing how the proposed scheme is able to reduce them. It is worth noting a 50% increase in the linear component for real DTI data, meaning that the bias in the DTI is considerably reduced. A novel fiber-smoothness measure is defined to evaluate the resulting tractography for real DWI data. Our findings show that, after filtering, fibers are considerably smoother on average. Execution times are very low compared with other reported approaches, which allows for a real-time implementation.
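
The Rician bias being corrected has a simple closed form at the level of second moments: for a Rice-distributed magnitude M with underlying amplitude A and noise level σ, E[M²] = A² + 2σ². A minimal sketch of the corresponding correction, shown standalone here without the Wiener filtering stage:

```python
import numpy as np

def rician_bias_correct(magnitude, sigma):
    """Second-moment Rician bias correction for magnitude MR data:
    since E[M^2] = A^2 + 2*sigma^2, the underlying amplitude can be
    estimated as sqrt(max(M^2 - 2*sigma^2, 0))."""
    return np.sqrt(np.maximum(magnitude**2 - 2 * sigma**2, 0.0))

# Simulate Rician magnitudes: |A + complex Gaussian noise|
rng = np.random.default_rng(2)
A, sigma, n = 5.0, 1.0, 200000
m = np.abs(A + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

biased = np.mean(m**2)                           # ~ A^2 + 2*sigma^2 = 27
corrected = np.sqrt(np.mean(m**2) - 2 * sigma**2)  # ~ A = 5
```

Without the 2σ² term the squared magnitude systematically overestimates the signal, which is the bias that propagates into the tensor fit.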


IEEE Transactions on Image Processing | 2015

Anisotropic Diffusion Filter With Memory Based on Speckle Statistics for Ultrasound Images

Gabriel Ramos-Llordén; Gonzalo Vegas-Sánchez-Ferrero; Marcos Martín-Fernández; Carlos Alberola-López; Santiago Aja-Fernández

Ultrasound (US) imaging exhibits considerable difficulties for medical visual inspection and for the development of automatic analysis methods due to speckle, which negatively affects the perception of tissue boundaries and the performance of automatic segmentation methods. With the aim of alleviating the effect of speckle, many filtering techniques are usually applied as a preprocessing step prior to automatic analysis or visual inspection. Most state-of-the-art filters try to reduce the speckle effect without considering its relevance for the characterization of tissue nature. However, the speckle phenomenon is the inherent response of echo signals in tissues and can provide important features for clinical purposes. This loss of information is even magnified by the iterative nature of some speckle filters, e.g., diffusion filters, which tend to over-filter because of the progressive loss of diagnostically relevant information during the diffusion process. In this paper, we propose an anisotropic diffusion filter with a probabilistic-driven memory mechanism to overcome the over-filtering problem by following a tissue-selective philosophy. In particular, we formulate the memory mechanism as a delay differential equation for the diffusion tensor whose behavior depends on the statistics of the tissues, accelerating the diffusion process in meaningless regions and including the memory effect in regions where relevant details should be preserved. Results on both synthetic and real US images support the inclusion of the probabilistic memory mechanism for maintaining clinically relevant structures, which are removed by state-of-the-art filters.
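
The memory mechanism can be caricatured with a scalar Perona-Malik-style scheme in which the conductivity relaxes towards its instantaneous value instead of being recomputed from scratch each iteration. The paper formulates this as a delay differential equation for a diffusion tensor driven by speckle statistics; everything below is a simplified scalar stand-in:

```python
import numpy as np

def diffuse_with_memory(img, n_iter=50, kappa=0.1, dt=0.2, tau=5.0):
    """Perona-Malik-style diffusion where the conductivity carries memory:
    it relaxes towards its instantaneous (gradient-driven) value with
    time constant tau, rather than jumping to it."""
    u = img.astype(float).copy()
    c = np.ones_like(u)                         # conductivity state with memory
    for _ in range(n_iter):
        gx, gy = np.gradient(u)
        g2 = gx**2 + gy**2
        c_inst = 1.0 / (1.0 + g2 / kappa**2)    # instantaneous conductivity
        c += (dt / tau) * (c_inst - c)          # memory: relax towards c_inst
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * c * lap                       # explicit diffusion step
    return u

rng = np.random.default_rng(3)
img = 1.0 + rng.normal(0, 0.1, (64, 64))        # noisy constant region
out = diffuse_with_memory(img)
```

A large tau makes the filter respond slowly to transient gradient changes, which is the delay effect that protects relevant detail from over-filtering.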


medical image computing and computer assisted intervention | 2004

3D Bayesian Regularization of Diffusion Tensor MRI Using Multivariate Gaussian Markov Random Fields

Marcos Martín-Fernández; Carl-Fredrik Westin; Carlos Alberola-López

A 3D Bayesian regularization approach for diffusion tensor MRI is presented here. The approach uses Markov Random Field ideas and is based upon the definition of a 3D neighborhood system in which the spatial interactions of the tensors are modeled. As for the prior, we model the behavior of the tensor fields by means of a 6D multivariate Gaussian local characteristic. As for the likelihood, we model the noise process by means of conditionally independent 6D multivariate Gaussian variables. These models include inter-tensor correlations, intra-tensor correlations and colored noise. The solution tensor field is obtained by using the simulated annealing algorithm to achieve the maximum a posteriori estimate. Several experiments on both synthetic and real data are presented, and performance is assessed with a mean-square-error measure.
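
Simulated annealing towards a MAP estimate can be demonstrated on a scalar 1-D Gaussian MRF (quadratic data term plus quadratic neighbor coupling); the paper does the analogous optimization over 6-D multivariate Gaussians with inter- and intra-tensor correlations:

```python
import numpy as np

def mrf_energy(x, y, lam=1.0):
    """Posterior energy of a 1-D Gaussian MRF: data fidelity to y plus a
    quadratic smoothness prior on neighboring sites."""
    return np.sum((x - y) ** 2) + lam * np.sum(np.diff(x) ** 2)

def anneal_map(noisy, lam=1.0, n_iter=4000, t0=1.0, seed=0):
    """Simulated annealing towards the MAP estimate: random single-site
    perturbations accepted with the Metropolis rule under a 1/k cooling
    schedule (a toy version of the paper's optimization)."""
    rng = np.random.default_rng(seed)
    x = noisy.copy()
    e = mrf_energy(x, noisy, lam)
    for k in range(n_iter):
        t = t0 / (1 + k)                         # cooling schedule
        i = rng.integers(len(x))
        prop = x.copy()
        prop[i] += rng.normal(0.0, 0.1)          # single-site perturbation
        e_new = mrf_energy(prop, noisy, lam)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / max(t, 1e-12)):
            x, e = prop, e_new                   # Metropolis acceptance
    return x

rng0 = np.random.default_rng(4)
noisy = rng0.normal(0.0, 1.0, 50)
smoothed = anneal_map(noisy)
```

Early high-temperature iterations allow uphill moves that escape local minima; as the temperature falls the chain settles into a low-energy (near-MAP) configuration.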

Collaboration


Dive into Marcos Martín-Fernández's collaborations.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Carl-Fredrik Westin

Brigham and Women's Hospital


Juan Ruiz-Alzola

University of Las Palmas de Gran Canaria
