Publication


Featured research published by Joaquim Cezar Felipe.


Computer-Based Medical Systems | 2003

Retrieval by content of medical images using texture for tissue identification

Joaquim Cezar Felipe; Agma J. M. Traina; Caetano Traina

This work aims at supporting the retrieval and indexing of medical images by extracting and organizing their intrinsic features, specifically texture attributes. A tool for obtaining the relevant textures was implemented. This tool retrieves and classifies images using the extracted values, and allows the user to issue similarity queries. The application of the proposed method to images has given encouraging results that motivate applying it as a basis for further experiments in diversified contexts. The accuracy obtained from the precision and recall plots was always over 90% for queries retrieving up to 20% of the database.
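The similarity queries described above reduce to nearest-neighbor search over texture feature vectors. A minimal sketch, assuming Euclidean distance and invented toy vectors (the feature names and values are illustrative, not the paper's):

```python
import math

def euclidean(a, b):
    # distance between two texture feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_query(database, query_vec, k):
    # return the k images whose texture vectors are closest to the query
    ranked = sorted(database.items(), key=lambda kv: euclidean(kv[1], query_vec))
    return [name for name, _ in ranked[:k]]

# toy texture vectors (e.g. contrast, homogeneity, energy); values are made up
db = {
    "mri_01": [0.80, 0.10, 0.30],
    "mri_02": [0.78, 0.12, 0.29],
    "ct_01":  [0.20, 0.70, 0.90],
}
print(knn_query(db, [0.79, 0.11, 0.29], k=2))  # the two MRI slices rank first
```

A query asking for "up to 20% of the database" simply sets k to a fifth of the collection size.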


Computers in Biology and Medicine | 2015

Computer-aided diagnosis system based on fuzzy logic for breast cancer categorization

Gisele Helena Barboni Miranda; Joaquim Cezar Felipe

BACKGROUND: Fuzzy logic can help reduce the difficulties faced by computational systems in representing and simulating the reasoning and style adopted by radiologists in medical image analysis. The study described in this paper consists of a new method that applies fuzzy logic concepts to improve the representation of features related to image description, making it semantically more consistent. Specifically, we have developed a computer-aided diagnosis tool for automatic BI-RADS categorization of breast lesions. The user provides parameters such as contour, shape and density, and the system suggests a BI-RADS classification.

METHODS: Initially, values of malignancy were defined for each image descriptor, according to the BI-RADS standard. When analyzing contour, for example, our method considers the matching of features and linguistic variables. Next, we created the fuzzy inference system. The generation of membership functions was carried out by the Fuzzy Omega algorithm, which is based on statistical analysis of the dataset and maps the distribution of the different classes in a set.

RESULTS: Images were analyzed by a group of physicians and the resulting evaluations were submitted to the Fuzzy Omega algorithm. The results were compared, achieving an accuracy of 76.67% for nodules and 83.34% for calcifications.

CONCLUSIONS: The fit of definitions and linguistic rules to numerical models provided by our method can lead to a tighter connection between the specialist and the computer system, yielding more effective and reliable results.
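The fuzzification step maps a numeric descriptor onto linguistic variables with graded membership. A minimal sketch using triangular membership functions; the variable name ("contour irregularity") and the breakpoints are illustrative assumptions, not the membership functions the Fuzzy Omega algorithm would derive from data:

```python
def tri(x, a, b, c):
    # triangular membership function: peaks at b, zero outside (a, c)
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_irregularity(x):
    # map a normalized contour-irregularity score to linguistic degrees
    # (breakpoints are invented for illustration)
    return {
        "low":    tri(x, -0.5, 0.0, 0.5),
        "medium": tri(x,  0.0, 0.5, 1.0),
        "high":   tri(x,  0.5, 1.0, 1.5),
    }

# a score of 0.7 is partly "medium", partly "high" -- exactly the graded
# judgment a crisp threshold would throw away
print(fuzzify_irregularity(0.7))
```

Fuzzy rules (e.g. "if irregularity is high then category is suspicious") then combine such degrees across descriptors to suggest a BI-RADS class.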


Mining Complex Data | 2009

Mining Statistical Association Rules to Select the Most Relevant Medical Image Features

Marcela Xavier Ribeiro; André G. R. Balan; Joaquim Cezar Felipe; Agma J. M. Traina; Caetano Traina

In this chapter we discuss how to take advantage of association rule mining to promote feature selection from low-level image features. Feature selection can significantly improve the precision of content-based queries in image databases by removing noisy and redundant features. A new algorithm named StARMiner is presented. StARMiner aims at finding association rules relating low-level image features to high-level knowledge about the images. Such rules are employed to select the most relevant features. We present a case study in order to highlight how the proposed algorithm performs in different situations, regarding its ability to select the most relevant features that properly distinguish the images. We compare the StARMiner algorithm with other well-known feature selection algorithms, showing that StARMiner reaches higher precision rates. The results obtained corroborate the assumption that association rule mining can effectively support dimensionality reduction in image databases.
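The intuition behind rules relating low-level features to image classes can be sketched as a statistical filter: keep a feature when its per-class means are well separated relative to the within-class spread. This is only a rough stand-in for StARMiner's interest measures, not the published algorithm:

```python
from statistics import mean, stdev

def select_features(samples, labels, threshold=1.0):
    # Keep feature j when the gap between its per-class means is large
    # relative to the within-class spread. A loose sketch of the idea behind
    # StARMiner's statistical association rules, not the algorithm itself.
    classes = sorted(set(labels))
    kept = []
    for j in range(len(samples[0])):
        per_class = [[s[j] for s, l in zip(samples, labels) if l == c] for c in classes]
        means = [mean(v) for v in per_class]
        spread = max(stdev(v) for v in per_class)
        if (max(means) - min(means)) / max(spread, 1e-9) > threshold:
            kept.append(j)
    return kept

# feature 0 separates the two classes; feature 1 is noise and gets dropped
print(select_features([[0.0, 5.1], [0.1, 4.9], [1.0, 5.0], [1.1, 5.2]],
                      [0, 0, 1, 1]))  # [0]
```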


BMC Bioinformatics | 2013

Computational framework to support integration of biomolecular and clinical data within a translational approach

Newton Shydeo Brandão Miyoshi; Daniel G. Pinheiro; Wilson A. Silva; Joaquim Cezar Felipe

Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make it feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information.

Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: a data level, to store the data; a semantic level, to integrate and standardize the data by the use of ontologies; an application level, to manage clinical databases, ontologies and the data integration process; and a web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with head and neck tumors. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; an Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications.

Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. We developed a framework to support translational research by integrating biomolecular information coming from different “omics” technologies with patients' clinical and socio-demographic data. Such a framework should present some features: flexibility, compression and robustness. The experiments performed on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
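The Entity-Attribute-Value model the Clinical Module builds on stores each clinical fact as a (patient, attribute, value) triple, so new variables need no schema change. A minimal sketch with SQLite; the table and column names are illustrative, not Chado's actual schema:

```python
import sqlite3

# one generic table instead of one column per clinical variable
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patient_attr (patient_id INT, attribute TEXT, value TEXT)")
con.executemany("INSERT INTO patient_attr VALUES (?, ?, ?)", [
    (1, "diagnosis", "head and neck tumor"),
    (1, "age", "57"),
    (2, "diagnosis", "head and neck tumor"),
])

# adding a brand-new clinical variable is just another row, no ALTER TABLE
con.execute("INSERT INTO patient_attr VALUES (?, ?, ?)", (2, "smoker", "yes"))

age = con.execute(
    "SELECT value FROM patient_attr WHERE patient_id = 1 AND attribute = 'age'"
).fetchone()[0]
print(age)  # 57
```

In the framework itself the attribute strings would be terms from the reference ontology, which is what standardizes data arriving from heterogeneous legacy databases.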


Computer-Based Medical Systems | 2009

Including the perceptual parameter to tune the retrieval ability of pulmonary CBIR systems

Marcelo Ponciano-Silva; Agma J. M. Traina; Paulo M. Azevedo-Marques; Joaquim Cezar Felipe; Caetano Traina

Research on Content-Based Image Retrieval (CBIR) is growing in relevance at a fast pace. Algorithms and tools for CBIR can help decision-making processes, for example by allowing the specialist to retrieve cases similar to the one under evaluation. However, the main reservation about using CBIR is the semantic gap, which is the divergence between automatic results and what the user expects. We propose the “perceptual parameter”, which allows changing the relationship between the feature extraction algorithms and the distance functions, aimed at finding the best integration of both from the specialist's point of view. This work integrates the three main elements of similarity queries: the features extracted from the images, the distance function employed to quantify the similarity, and the user's perception of similarity. These three elements allowed us to build the “similarity operators”. The experiments performed show that the new perceptual parameter can narrow the semantic gap between what the system retrieves and what the specialist expects.
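Tuning a similarity operator to user perception can be pictured as picking, among candidate distance functions, the one whose answers best recover the images the specialist marked as relevant. A loose sketch of that feedback loop; the functions, data, and candidate set are illustrative, not the paper's parameterization:

```python
import math

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tune(query_vec, database, relevant, candidates, k=1):
    # pick the distance whose top-k answer recovers most of the images the
    # specialist judged relevant to the query
    def hits(dist):
        top = sorted(database, key=lambda n: dist(database[n], query_vec))[:k]
        return len(set(top) & relevant)
    return max(candidates, key=hits)

# B spreads a small difference across both features; L2 ranks it closer than
# A, L1 ranks A closer -- the specialist's feedback decides which fits
db = {"A": [1.0, 0.0], "B": [0.7, 0.7]}
best = tune([0.0, 0.0], db, relevant={"B"}, candidates=[l1, l2], k=1)
```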


ACM Symposium on Applied Computing | 2006

Effective shape-based retrieval and classification of mammograms

Joaquim Cezar Felipe; Marcela Xavier Ribeiro; Elaine P. M. de Sousa; Agma J. M. Traina; Caetano Traina

This paper presents a new approach to support Computer-aided Diagnosis (CAD), aiming at assisting the task of classification and similarity retrieval of mammographic mass lesions based on shape content. We have tested classical algorithms for automatic segmentation of this kind of image, but usually they are not precise enough to generate contours accurate enough for lesion classification based on shape analysis. Thus, in this work, we have used Zernike moments for invariant pattern recognition within regions of interest (ROIs), without previous segmentation of the images. A new data mining algorithm that generates statistical-based association rules is used to identify representative features that discriminate the disease classes of images. In order to minimize the computational effort, an algorithm based on fractal theory is applied to reduce the dimension of the feature vectors. K-nearest neighbor retrieval was applied to a database containing images excerpted from previously classified digitized mammograms presenting breast lesions. The results reveal that our approach allows fast and effective feature extraction and is robust and suitable for analyzing this kind of image.
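Once each ROI is reduced to a feature vector (Zernike moments in the paper), classification by k-nearest neighbors is a majority vote among the closest training vectors. A minimal sketch with invented two-dimensional vectors and labels, purely for illustration:

```python
import math
from collections import Counter

def classify_knn(train, query_vec, k=3):
    # majority vote among the k nearest feature vectors; "train" holds
    # (feature_vector, class_label) pairs
    ranked = sorted(train, key=lambda fv_lbl: math.dist(fv_lbl[0], query_vec))
    votes = Counter(lbl for _, lbl in ranked[:k])
    return votes.most_common(1)[0][0]

# toy ROI feature vectors with benign/malignant labels (not the paper's data)
train = [([0.10, 0.20], "benign"),    ([0.15, 0.22], "benign"),
         ([0.80, 0.90], "malignant"), ([0.82, 0.88], "malignant")]
print(classify_knn(train, [0.12, 0.21], k=3))  # benign
```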


Journal of Digital Imaging | 2009

A New Family of Distance Functions for Perceptual Similarity Retrieval of Medical Images

Joaquim Cezar Felipe; Caetano Traina; Agma J. M. Traina

A long-standing challenge of content-based image retrieval (CBIR) systems is the definition of a suitable distance function to measure the similarity between images in an application context which complies with the human perception of similarity. In this paper, we present a new family of distance functions, called attribute concurrence influence distances (AID), which serve to retrieve images by similarity. These distances address an important aspect of the psychophysical notion of similarity in comparisons of images: the effect of concurrent variations in the values of different image attributes. The AID functions allow for comparisons of feature vectors by choosing one of two parameterized expressions: one targeting weak attribute concurrence influence and the other for strong concurrence influence. This paper presents the mathematical definition and implementation of the AID family for a two-dimensional feature space and its extension to any dimension. The composition of the AID family with Lp distance family is considered to propose a procedure to determine the best distance for a specific application. Experimental results involving several sets of medical images demonstrate that, taking as reference the perception of the specialist in the field (radiologist), the AID functions perform better than the general distance functions commonly used in CBIR.
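The AID expressions themselves are defined in the paper and are not reproduced here; the Lp family they compose with is standard, and a small sketch shows why the choice of distance matters at all: rankings can invert between members of the family.

```python
def minkowski(a, b, p):
    # Lp distance: p=1 is Manhattan, p=2 is Euclidean, large p approaches Chebyshev
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

# b spreads its difference from the query across two attributes at once
# (the "attribute concurrence" situation the AID functions target)
q, a, b = [0.0, 0.0], [1.0, 0.0], [0.7, 0.7]
closer_l1 = min(("a", "b"), key=lambda n: minkowski(q, {"a": a, "b": b}[n], 1))
closer_l2 = min(("a", "b"), key=lambda n: minkowski(q, {"a": a, "b": b}[n], 2))
# under L1 image a is closer; under L2 the ranking flips to b
```

Which ranking matches the radiologist's perception is an empirical question, which is why the paper proposes a procedure to pick the best distance per application.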


ACM Symposium on Applied Computing | 2006

A new similarity measure for histograms applied to content-based retrieval of medical images

Joaquim Cezar Felipe; Agma J. M. Traina; Caetano Traina

This paper presents a new similarity measure to compare gray-level histograms, aiming at reducing both false positive and false negative results, in the context of medical images.
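The abstract does not reproduce the proposed measure; for context, a classic baseline it would be compared against is histogram intersection, shown here as a sketch (this is NOT the measure proposed in the paper):

```python
def hist_intersection(h1, h2):
    # histogram intersection similarity on normalized histograms:
    # 1.0 for identical shapes, 0.0 for fully disjoint ones
    s1, s2 = float(sum(h1)), float(sum(h2))
    return sum(min(a / s1, b / s2) for a, b in zip(h1, h2))

# same gray-level distribution at different exposures -> maximal similarity
print(hist_intersection([1, 2, 3], [2, 4, 6]))
```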


Brazilian Symposium on Computer Graphics and Image Processing | 2012

Structural Analysis of Histological Images to Aid Diagnosis of Cervical Cancer

Gisele Helena Barboni Miranda; Junior Barrera; Edson Garcia Soares; Joaquim Cezar Felipe

The use of computational techniques in the processing of histopathological images allows the study of the structural organization of tissues and their pathological changes. The overall objective of this work includes the proposal, implementation and evaluation of a methodology for the analysis of cervical intraepithelial neoplasia (CIN) in histopathological images. For this purpose, a pipeline of morphological operators was implemented for the segmentation of cell nuclei, and Delaunay triangulation was used to represent the tissue architecture. In addition, clustering algorithms and graph morphology were used to automatically obtain the boundary between the histological layers of the epithelial tissue. Similarity criteria and adjacency relations between the triangles of the network were explored. The proposed method was evaluated with respect to detecting the presence of lesions in the tissue as well as grading their malignancy.
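Once nuclei centroids are triangulated, the triangulation's edges give an adjacency graph whose geometry carries architectural information (edges shorten where nuclei crowd together). A minimal sketch assuming the triangles are already computed by some Delaunay routine; the points and triangles below are a hand-made toy example:

```python
from itertools import combinations
from math import dist

# nuclei centroids and a triangulation of them, listed by vertex index
# (in practice both would come from segmentation + a Delaunay routine)
points = [(0, 0), (2, 0), (1, 2), (3, 2)]
triangles = [(0, 1, 2), (1, 3, 2)]

# every side of every triangle is an adjacency edge; shared sides appear once
edges = set()
for tri_verts in triangles:
    for i, j in combinations(tri_verts, 2):
        edges.add(tuple(sorted((i, j))))

# a simple architectural feature: mean edge length of the network
mean_edge = sum(dist(points[i], points[j]) for i, j in edges) / len(edges)
```

Features like this, computed per region, are the kind of structural descriptors that can separate histological layers and lesion grades.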


Biomedical Physics & Engineering Express | 2016

Two-dimensional sample entropy: assessing image texture through irregularity

Luiz Eduardo Virgilio Silva; A C S Senra Filho; Valéria Paula Sassoli Fazan; Joaquim Cezar Felipe; L O Murta Junior

Image texture analysis is a key task in computer vision. Although various methods have been applied to extract texture information, none of them is based on the principles of sample entropy, a measure of entropy rate. This paper proposes a two-dimensional sample entropy method, named SampEn2D, to measure irregularity in pixel patterns. We evaluated the proposed method in three different situations: a set of simulated images generated by a deterministic function corrupted with different levels of stochastic influence; the Brodatz public texture database; and a real biological image set of rat sural nerve. Evaluation with simulations showed SampEn2D to be a robust irregularity measure, closely following sample entropy properties. Results with the Brodatz dataset demonstrated the superiority of SampEn2D in separating different image categories compared to conventional Haralick and wavelet descriptors. SampEn2D was also capable of discriminating rat sural nerve images by age group with high accuracy (AUROC = 0.844). No significant difference was found between the SampEn2D AUROC and those obtained with the best-performing Haralick descriptors, i.e. entropy (AUROC = 0.828), uniformity (AUROC = 0.833) and homogeneity (AUROC = 0.938), and wavelet descriptors, i.e. Haar energy/entropy (AUROC = 0.932) and Daubechies energy/entropy (AUROC = 0.859). In addition, SampEn2D computation time increases with image size, reaching around 1400 s for a 600 × 600 pixel image. In conclusion, SampEn2D proved stable and robust enough to be applied as a texture feature quantifier, and irregularity, as measured by SampEn2D, seems to be an important feature for image characterization in biomedical image analysis.
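The core of the method can be sketched directly from the sample-entropy definition: count pairs of m × m windows that are similar within tolerance r (Chebyshev distance), do the same for (m + 1) × (m + 1) windows, and take minus the log of the ratio. The following is a simplified reading of that idea, not the authors' reference implementation, and its brute-force pair loop explains why computation time grows quickly with image size:

```python
from math import log

def sampen2d(img, m=1, r=0.3):
    # Simplified two-dimensional sample entropy: 0 for perfectly regular
    # images, larger for irregular pixel patterns.
    H, W = len(img), len(img[0])
    # anchor windows where an (m+1) x (m+1) patch still fits, so both
    # counts range over the same positions
    pos = [(i, j) for i in range(H - m) for j in range(W - m)]

    def matches(size):
        count = 0
        for a in range(len(pos)):
            for b in range(a + 1, len(pos)):
                (i1, j1), (i2, j2) = pos[a], pos[b]
                if all(abs(img[i1 + k][j1 + l] - img[i2 + k][j2 + l]) <= r
                       for k in range(size) for l in range(size)):
                    count += 1
        return count

    A, B = matches(m + 1), matches(m)
    return -log(A / B) if A and B else float("inf")

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(sampen2d(flat, m=1, r=0.1))  # 0.0 for a constant image
```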

Collaboration

Top co-authors of Joaquim Cezar Felipe:

Caetano Traina (University of São Paulo)

Marcela Xavier Ribeiro (Federal University of São Carlos)

Luana Peixoto Annibal (Federal University of São Carlos)

Domingos Alves (University of São Paulo)