Publication


Featured research published by Rodrigo Moreno.


Medical Image Analysis | 2013

Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography

Hortense A. Kirisli; Michiel Schaap; Coert Metz; Anoeshka S. Dharampal; W. B. Meijboom; S. L. Papadopoulou; Admir Dedic; Koen Nieman; M. A. de Graaf; M. F. L. Meijs; M. J. Cramer; Alexander Broersen; Suheyla Cetin; Abouzar Eslami; Leonardo Flórez-Valencia; Kuo-Lung Lor; Bogdan J. Matuszewski; I. Melki; B. Mohr; Ilkay Oksuz; Rahil Shahzad; Chunliang Wang; Pieter H. Kitslaar; Gözde B. Ünal; Amin Katouzian; Maciej Orkisz; Chung-Ming Chen; Frédéric Precioso; Laurent Najman; S. Masood

Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter, multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/.
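
As an illustration of the kind of overlap measure used when comparing a lumen segmentation against a reference annotation, the sketch below computes the Dice coefficient between two binary masks. This is only a minimal example under assumed inputs: the masks are synthetic and the function name is hypothetical; the framework defines its own full set of metrics and reporting on the website above.

```python
import numpy as np

def dice_coefficient(segmentation, reference):
    """Dice overlap between a binary lumen segmentation and a reference mask."""
    seg = segmentation.astype(bool)
    ref = reference.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Hypothetical example: two 3D binary masks of the coronary lumen.
rng = np.random.default_rng(0)
predicted = rng.random((64, 64, 64)) > 0.5
ground_truth = rng.random((64, 64, 64)) > 0.5
print(f"Dice: {dice_coefficient(predicted, ground_truth):.3f}")
```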


International Conference on Image Processing | 2009

A new methodology for evaluation of edge detectors

Rodrigo Moreno; Domenec Puig; Carme Julià; Miguel Angel Garcia

This paper defines a new methodology for evaluating edge detectors through measurements on edginess maps instead of on binary edge maps, as previous methodologies do. These measurements avoid the possible bias introduced by the application-dependent process of generating binary edge maps from edginess maps. The features of completeness, discriminability, precision, and robustness, which a general-purpose edge detector must comply with, are introduced. The R, DS, P, and FAR measurements, in addition to PSNR applied to the edginess maps, are defined to assess the performance of edge detection. The R, DS, P, and FAR measurements can be seen as generalizations of previously proposed measurements on binary edge maps. Well-known and state-of-the-art edge detectors have been compared by means of the newly proposed metrics. Results show that it is difficult for an edge detector to comply with all the proposed features.
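
For context, PSNR over edginess maps, one of the quantities mentioned above, can be computed directly from the mean squared error between a detector's output and a reference edginess map. The sketch below is a minimal, self-contained illustration with synthetic maps; the R, DS, P, and FAR measurements defined in the paper are not reproduced here.

```python
import numpy as np

def psnr(edginess, reference, max_value=1.0):
    """Peak signal-to-noise ratio between two edginess maps valued in [0, max_value]."""
    mse = np.mean((edginess.astype(float) - reference.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Hypothetical maps: a reference edginess map and a detector's noisy output.
reference = np.clip(np.random.rand(128, 128), 0, 1)
detected = np.clip(reference + 0.05 * np.random.randn(128, 128), 0, 1)
print(f"PSNR: {psnr(detected, reference):.2f} dB")
```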


Skeletal Radiology | 2014

Trabecular bone structure parameters from 3D image processing of clinical multi-slice and cone-beam computed tomography data

Eva Klintström; Örjan Smedby; Rodrigo Moreno; Torkel B. Brismar

Objective: Bone strength depends on both mineral content and bone structure. The aim of this in vitro study was to develop a method of quantitatively assessing trabecular bone structure by applying three-dimensional image processing to data acquired with multi-slice and cone-beam computed tomography, using micro-computed tomography as a reference.
Materials and methods: Fifteen bone samples from the radius were examined. After segmentation, quantitative measures of bone volume, trabecular thickness, trabecular separation, trabecular number, trabecular nodes, and trabecular termini were obtained.
Results: The clinical machines overestimated bone volume and trabecular thickness and underestimated trabecular nodes and number, with cone-beam CT doing so to a lesser extent. Parameters obtained from cone-beam CT were strongly correlated with μCT, with correlation coefficients between 0.93 and 0.98 for all parameters except trabecular termini.
Conclusions: The high correlation between cone-beam CT and micro-CT suggests the possibility of quantifying and monitoring changes of trabecular bone microarchitecture in vivo using cone-beam CT.
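
The reported agreement between modalities is expressed as correlation coefficients. A minimal sketch of how such a per-parameter correlation might be computed from paired measurements is shown below; the numbers are made up for illustration and do not come from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-sample measurements of one structure parameter
# (e.g. trabecular thickness) from cone-beam CT and from micro-CT.
cone_beam_ct = np.array([0.21, 0.25, 0.19, 0.30, 0.27, 0.23])
micro_ct = np.array([0.14, 0.17, 0.12, 0.22, 0.19, 0.16])

# Pearson correlation between the two modalities for this parameter.
r, p_value = stats.pearsonr(cone_beam_ct, micro_ct)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```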


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

On Improving the Efficiency of Tensor Voting

Rodrigo Moreno; Miguel Angel Garcia; Domenec Puig; Luis Pizarro; Bernhard Burgeth; Joachim Weickert

This paper proposes two alternative formulations to reduce the high computational complexity of tensor voting, a robust perceptual grouping technique used to extract salient information from noisy data. The first scheme consists of numerical approximations of the votes, which have been derived from an in-depth analysis of the plate and ball voting processes. The second scheme simplifies the formulation while keeping the same perceptual meaning of the original tensor voting: The stick tensor voting and the stick component of the plate tensor voting must reinforce surfaceness, the plate components of both the plate and ball tensor voting must boost curveness, whereas junctionness must be strengthened by the ball component of the ball tensor voting. Two new parameters have been proposed for the second formulation in order to control the potentially conflicting influence of the stick component of the plate vote and the ball component of the ball vote. Results show that the proposed formulations can be used in applications where efficiency is an issue, since they have a complexity of order O(1). Moreover, the second proposed formulation has been shown to be more appropriate than the original tensor voting for estimating saliencies by appropriately setting the two new parameters.
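
The stick, plate, and ball components referred to above come from the standard eigen-decomposition of a second-order tensor, in which the differences λ1 - λ2, λ2 - λ3, and λ3 act as surfaceness, curveness, and junctionness saliencies. The sketch below shows only this decomposition step, not the voting formulations proposed in the paper; the example tensor is hypothetical.

```python
import numpy as np

def saliencies(tensor):
    """Decompose a symmetric 3x3 tensor into stick, plate and ball saliencies.

    In the tensor voting framework, lambda1 - lambda2 relates to surfaceness,
    lambda2 - lambda3 to curveness, and lambda3 to junctionness.
    """
    eigenvalues = np.linalg.eigvalsh(tensor)[::-1]  # sorted in descending order
    l1, l2, l3 = eigenvalues
    return l1 - l2, l2 - l3, l3

# Hypothetical accumulated vote at one point.
T = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.2, 0.2],
              [0.1, 0.2, 0.5]])
stick, plate, ball = saliencies(T)
print(f"surfaceness={stick:.2f}, curveness={plate:.2f}, junctionness={ball:.2f}")
```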


Medical Physics | 2012

Generalizing the mean intercept length tensor for gray-level images

Rodrigo Moreno; Magnus Borga; Örjan Smedby

Purpose: The mean intercept length tensor is the most widely used technique to estimate microstructure orientation and anisotropy of trabecular bone. This paper proposes an efficient extension of this technique to gray-scale images, based on a closed formulation of the mean intercept length tensor and a generalization using different angular convolution kernels.
Methods: First, the extended Gaussian image is computed for the binary or gray-scale image. Second, the intercepts are computed for all possible orientations through an angular convolution with the half-cosine function. Finally, the tensor is computed by means of the covariance matrix. The complexity of the method is O(n + m), in contrast with O(nm) for traditional implementations, where n is the number of voxels in the image and m is the number of orientations used in the computations. The method is generalized by applying other angular convolution kernels instead of the half-cosine function. As a result, the anisotropy of the tensor can be controlled while keeping the eigenvectors intact.
Results: The proposed extension to gray-scale yields accurate results for reliable computations of the extended Gaussian image and, unlike the traditional methodology, is not affected by artifacts generated by discretizations during the sampling of different orientations.
Conclusions: Experiments show that the computations on both binary and gray-scale images are correlated, and that computations in gray-scale are more robust, enabling the use of the mean intercept length tensor in clinical examinations of trabecular bone. The use of kernels based on the von Mises-Fisher distribution is promising, as the anisotropy can be adjusted with a parameter in order to improve its power to predict mechanical properties of trabecular bone.
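
As a rough illustration of the covariance-matrix step, the sketch below builds a second-order orientation tensor from image gradients, which serve here only as a crude stand-in for the extended Gaussian image. It does not implement the paper's closed-form mean intercept length tensor or its angular convolution kernels, and the volume is synthetic.

```python
import numpy as np

def orientation_tensor(volume):
    """Second-order orientation tensor from image gradients.

    A rough stand-in for the covariance-matrix step: gradient directions
    approximate the extended Gaussian image, and the weighted covariance of
    those unit directions yields a 3x3 tensor whose eigenvectors indicate the
    dominant structural orientation.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    gradients = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)
    weights = np.linalg.norm(gradients, axis=1)
    mask = weights > 0
    directions = gradients[mask] / weights[mask, None]
    # Weighted sum of outer products of the unit directions (no mean removal:
    # the orientation distribution is assumed approximately symmetric).
    T = (weights[mask, None, None] *
         directions[:, :, None] * directions[:, None, :]).sum(axis=0)
    return T / weights[mask].sum()

# Hypothetical gray-scale volume of trabecular bone.
volume = np.random.rand(32, 32, 32)
T = orientation_tensor(volume)
eigenvalues = np.linalg.eigvalsh(T)
print("eigenvalue ratio (largest/smallest):", eigenvalues[-1] / eigenvalues[0])
```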


Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data | 2014

Techniques for Computing Fabric Tensors: A Review

Rodrigo Moreno; Magnus Borga; Örjan Smedby

The aim of this chapter is to review different approaches that have been proposed to compute fabric tensors, with emphasis on trabecular bone research. Fabric tensors model, through tensors, both the anisotropy and the orientation of one material with respect to another. They are widely used in fields such as trabecular bone research, mechanics of materials, and geology. These tensors can be seen as semi-global measurements, since they are computed in relatively large neighborhoods, which are assumed quasi-homogeneous. Many methods have been proposed to compute fabric tensors. We propose to classify them into two categories: mechanics-based and morphology-based. The former compute fabric tensors from mechanical simulations, while the latter compute them by analyzing the morphology of the materials. In addition to pointing out advantages and drawbacks of each method, current trends and challenges in this field are also summarized.


International Conference on Image Processing | 2009

Robust color edge detection through tensor voting

Rodrigo Moreno; Miguel Angel Garcia; Domenec Puig; Carme Julià

This paper presents a new method for color edge detection based on the tensor voting framework, a robust perceptual grouping technique used to extract salient information from noisy data. The tensor voting framework is adapted to encode color information via tensors in order to propagate them into a neighborhood through a voting process specifically designed for color edge detection by taking into account perceptual color differences, region uniformity and edginess according to a set of intuitive perceptual criteria. Perceptual color differences are estimated by means of an optimized version of the CIEDE2000 formula, while uniformity and edginess are estimated by means of saliency maps obtained from the tensor voting process. Experiments show that the proposed algorithm is more robust and has a similar performance in precision when compared with the state-of-the-art.
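
Perceptual color differences of the kind used above can be computed with the standard CIEDE2000 formula, for which scikit-image provides an implementation; the paper uses an optimized variant that is not reproduced here. In the sketch below the image is synthetic and the shifted copy is only an illustrative way to compare neighboring pixels.

```python
import numpy as np
from skimage import color

# Hypothetical RGB image; a shifted copy stands in for each pixel's horizontal neighbor.
rgb = np.random.rand(64, 64, 3)
neighbor = np.roll(rgb, shift=1, axis=1)

# Convert to CIELAB and compute per-pixel CIEDE2000 differences.
lab = color.rgb2lab(rgb)
lab_neighbor = color.rgb2lab(neighbor)
delta_e = color.deltaE_ciede2000(lab, lab_neighbor)
print("mean perceptual difference:", delta_e.mean())
```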


Proceedings of SPIE | 2012

Estimation of trabecular thickness in gray-scale images through granulometric analysis

Rodrigo Moreno; Magnus Borga; Örjan Smedby

This paper extends to gray-scale the method proposed by Hildebrand and Rüegsegger for estimating the thickness of trabecular bone, the most widely used method in trabecular bone research, in which local thickness at a point is defined as the diameter of the maximum inscribed ball that includes that point. The proposed extension takes advantage of the equivalence between this method and the opening function computed for the granulometry generated by the opening operation of mathematical morphology with ball-shaped structuring elements of different diameters. The proposed extension (a) uses gray-scale instead of binary mathematical morphology, (b) uses all values of the pattern spectrum of the granulometry instead of the maximum peak as used for binary images, (c) corrects bias in local thickness estimations generated by partial volume effects, and (d) uses the gray-scale as a weighting function for global thickness estimation. The proposed extension becomes equivalent to the original method when it is applied to binary images. A new non-flat structuring element is also proposed in order to reduce the discretization errors generated by traditional flat structuring elements. Translation invariance can be attained by up-sampling the images through interpolation by a factor of two. Results for synthetic and real images show that the quality of the measurements obtained through the original method strongly depends on the binarization process, whereas the measurements obtained through the proposed extension do not. Consequently, the proposed extension is more appropriate for images with limited resolution, where binarization is not trivial.
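
The granulometric idea described above can be sketched with flat ball-shaped structuring elements and gray-scale openings of increasing radius: the image "mass" removed at each radius forms the pattern spectrum. This is only a minimal illustration under those assumptions; the paper's non-flat structuring element, partial-volume correction, and gray-scale weighting are omitted, and the volume is synthetic.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import ball

def pattern_spectrum(volume, max_radius=5):
    """Gray-scale granulometry with flat ball structuring elements.

    For each radius r, returns the amount of gray-level mass removed when
    going from the opening with radius r-1 to the opening with radius r.
    """
    volume = volume.astype(float)
    previous = volume.sum()
    spectrum = []
    for radius in range(1, max_radius + 1):
        opened = ndimage.grey_opening(volume, footprint=ball(radius))
        current = opened.sum()
        spectrum.append(previous - current)
        previous = current
    return np.array(spectrum)

# Hypothetical gray-scale volume of trabecular bone.
volume = np.random.rand(32, 32, 32)
print(pattern_spectrum(volume, max_radius=3))
```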


GbRPR'07: Proceedings of the 6th IAPR-TC-15 International Conference on Graph-Based Representations in Pattern Recognition | 2007

Graph-based perceptual segmentation of stereo vision 3D images at multiple abstraction levels

Rodrigo Moreno; Miguel Angel Garcia; Domenec Puig

This paper presents a new technique based on perceptual information for the robust segmentation of noisy 3D scenes acquired by stereo vision. A low-pass geometric filter is first applied to the given cloud of 3D points to remove noise. The tensor voting algorithm is then applied in order to extract perceptual geometric information. Finally, a graph-based segmenter is utilized for extracting the different geometric structures present in the scene through a region-growing procedure that is applied hierarchically. The proposed algorithm is evaluated on real 3D scenes acquired with a trinocular camera.
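
The pipeline above combines geometric filtering, tensor voting, and hierarchical graph-based region growing. The sketch below is only a crude stand-in for the final grouping step, under the assumption that a thresholded k-nearest-neighbor graph over the 3D points is available: connected components of that graph play the role of segments. The point cloud, parameters, and function name are hypothetical, and no tensor voting or perceptual criteria are involved.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def crude_segmentation(points, k=8, max_distance=0.1):
    """Group 3D points by thresholding a k-nearest-neighbor graph.

    Edges longer than max_distance are dropped and the remaining connected
    components are returned as segments.
    """
    tree = cKDTree(points)
    distances, indices = tree.query(points, k=k + 1)  # first neighbor is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = indices[:, 1:].ravel()
    keep = distances[:, 1:].ravel() <= max_distance
    n = len(points)
    graph = coo_matrix((np.ones(keep.sum()), (rows[keep], cols[keep])), shape=(n, n))
    n_segments, labels = connected_components(graph, directed=False)
    return n_segments, labels

# Hypothetical noisy point cloud from stereo vision.
points = np.random.rand(500, 3)
n_segments, labels = crude_segmentation(points, k=8, max_distance=0.08)
print(f"{n_segments} segments found")
```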


Computer Vision and Image Understanding | 2011

Edge-preserving color image denoising through tensor voting

Rodrigo Moreno; Miguel Angel Garcia; Domenec Puig; Carme Julià

This paper presents a new method for edge-preserving color image denoising based on the tensor voting framework, a robust perceptual grouping technique used to extract salient information from noisy data. The tensor voting framework is adapted to encode color information through tensors in order to propagate them in a neighborhood by using a specific voting process. This voting process is specifically designed for edge-preserving color image denoising by taking into account perceptual color differences, region uniformity and edginess according to a set of intuitive perceptual criteria. Perceptual color differences are estimated by means of an optimized version of the CIEDE2000 formula, while uniformity and edginess are estimated by means of saliency maps obtained from the tensor voting process. Measurements of removed noise, edge preservation and undesirable introduced artifacts, additionally to visual inspection, show that the proposed method has a better performance than the state-of-the-art image denoising algorithms for images contaminated with CCD camera noise.

Collaboration


Dive into Rodrigo Moreno's collaborations.

Top Co-Authors

Örjan Smedby, Royal Institute of Technology
Domenec Puig, Rovira i Virgili University
Miguel Angel Garcia, Autonomous University of Madrid
Torkel B. Brismar, Karolinska University Hospital
Chunliang Wang, Royal Institute of Technology
Daniel Jörgens, Royal Institute of Technology