Publication


Featured research published by Tiago Jose de Carvalho.


IEEE Transactions on Biomedical Engineering | 2012

Points of Interest and Visual Dictionaries for Automatic Retinal Lesion Detection

Anderson Rocha; Tiago Jose de Carvalho; Herbert F. Jelinek; Siome Goldenstein; Jacques Wainer

In this paper, we present an algorithm to detect the presence of diabetic retinopathy (DR)-related lesions from fundus images based on a common analytical approach that is capable of identifying both red and bright lesions without requiring specific pre- or postprocessing. Our solution constructs a visual word dictionary representing points of interest (PoIs) located within regions marked by specialists as containing lesions associated with DR, and classifies the fundus images as normal or DR-related pathology based on the presence or absence of these PoIs. The novelty of our approach lies in locating DR lesions in optic fundus images using visual words that combine feature information contained within the images, in a framework easily extensible to different types of retinal lesions or pathologies, and in building a specific projection space for each class of interest (e.g., white lesions such as exudates, or normal regions) instead of a common dictionary for all classes. The visual word dictionary was applied to classifying bright and red lesions with classical cross validation and cross-dataset validation to indicate the robustness of this approach. We obtained an area under the curve (AUC) of 95.3% for white lesion detection and an AUC of 93.3% for red lesion detection using fivefold cross validation and our own data consisting of 687 images of normal retinae, 245 images with bright lesions, 191 with red lesions, and 109 with signs of both bright and red lesions. For cross-dataset analysis, the visual dictionary also achieves compelling results using our images as the training set and the RetiDB and Messidor images as test sets. In this case, image classification resulted in an AUC of 88.1% on the RetiDB dataset and an AUC of 89.3% on the Messidor dataset, in both cases for bright lesion detection.
The results indicate the potential for training with images from different acquisition setups while maintaining a high accuracy of referral based on the presence of red lesions, bright lesions, or both. The robustness of the visual dictionary against image quality (blurring), resolution, and retinal background makes it a strong candidate for DR screening of large, diverse communities with varying cameras, settings, and levels of expertise in image capture.
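As a rough illustration of the visual-dictionary idea underlying this work, the sketch below builds a dictionary of visual words by clustering local descriptors with plain k-means and then represents an image as a histogram of word occurrences. All function names are hypothetical, k-means stands in for the authors' class-specific projection spaces, and the local descriptors (e.g., PoI features) are assumed to be given.

```python
import numpy as np

def build_dictionary(descriptors, k, iters=10, seed=0):
    """Cluster local descriptors into k 'visual words' (plain k-means)."""
    rng = np.random.default_rng(seed)
    words = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest word
        dists = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each word to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                words[j] = descriptors[labels == j].mean(axis=0)
    return words

def bovw_histogram(descriptors, words):
    """Represent one image as a normalized histogram of visual-word counts."""
    dists = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
    counts = np.bincount(dists.argmin(axis=1), minlength=len(words))
    return counts / counts.sum()
```

A classifier can then decide normal vs. DR-related pathology from these histograms.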


Journal of Visual Communication and Image Representation | 2015

Going deeper into copy-move forgery detection

Ewerton Almeida Silva; Tiago Jose de Carvalho; Anselmo Ferreira; Anderson Rocha

Highlights: We have developed a new and effective approach to detect copy-move forgeries. Our method deals with rotation, resizing, and compression simultaneously. Our method reports a small false-positive detection rate. We have built a dataset comprising 216 realistic copy-move forgeries. The method was tested against 15 state-of-the-art methods on different datasets.

This work presents a new approach toward copy-move forgery detection based on multi-scale analysis and voting processes over a digital image. Given a suspicious image, we extract interest points robust to scale and rotation and find possible correspondences among them. We cluster corresponding points into regions based on geometric constraints. Thereafter, we construct a multi-scale image representation and, for each scale, examine the generated groups using a descriptor strongly robust to rotation and scaling and partially robust to compression, which decreases the search space of duplicated regions and yields a detection map. The final decision is based on a voting process among all detection maps. We validate the method using various datasets comprising original and realistically cloned images. We compare the proposed method to 15 others from the literature and report promising results.
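The final voting step described above can be sketched as a pixel-wise vote over the per-scale detection maps. The function name and the 0.5 majority threshold are illustrative assumptions, not the paper's exact voting rule.

```python
import numpy as np

def vote_detection_maps(maps, threshold=0.5):
    """Combine per-scale binary detection maps by pixel-wise voting.

    maps: list of equal-shape boolean arrays, one per analyzed scale.
    A pixel is flagged as duplicated when at least `threshold` of the
    scales voted for it.
    """
    stacked = np.stack([m.astype(float) for m in maps])
    return stacked.mean(axis=0) >= threshold
```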


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Machine learning and pattern classification in identification of indigenous retinal pathology

Herbert F. Jelinek; Anderson Rocha; Tiago Jose de Carvalho; Siome Goldenstein; Jacques Wainer

Diabetic retinopathy (DR) is a complication of diabetes which, if untreated, leads to blindness. Early DR diagnosis and treatment improve outcomes. Automated assessment of single lesions associated with DR has been investigated for some time. To improve classification, especially across different ethnic groups, we present an approach using points of interest and a visual dictionary that contains the important features required to identify retinal pathology. Variation in images of the human retina with respect to differences in pigmentation and the presence of diverse lesions can be analyzed without the need for preprocessing or for different training sets to account for, e.g., ethnic differences.


IEEE Transactions on Information Forensics and Security | 2016

Illuminant-Based Transformed Spaces for Image Forensics

Tiago Jose de Carvalho; Fábio Augusto Faria; Helio Pedrini; Ricardo da Silva Torres; Anderson Rocha

In this paper, we explore transformed spaces, represented by image illuminant maps, to propose a methodology for selecting complementary ways of characterizing visual properties for effective and automated detection of image forgeries. We combine statistical telltales provided by different image descriptors that explore color, shape, and texture features. We focus on detecting image forgeries containing people and present a method for locating the forgery, specifically the face of a person in an image. Experiments performed on three different open-access data sets show the potential of the proposed method for pinpointing image forgeries containing people. On the first two data sets (DSO-1 and DSI-1), the proposed method achieved classification accuracies of 94% and 84%, respectively, a remarkable improvement over state-of-the-art methods. Finally, for the third data set, comprising questioned images downloaded from the Internet, we also present a detailed analysis of the target images.


PLOS ONE | 2015

Automated Multi-Lesion Detection for Referable Diabetic Retinopathy in Indigenous Health Care

Ramon Pires; Tiago Jose de Carvalho; Geoffrey Spurling; Siome Goldenstein; Jacques Wainer; Alan Luckie; Herbert F. Jelinek; Anderson Rocha

Diabetic Retinopathy (DR) is a complication of diabetes mellitus that affects more than one-quarter of the population with diabetes, and can lead to blindness if not discovered in time. An automated screening enables the identification of patients who need further medical attention. This study aimed to classify retinal images of Aboriginal and Torres Strait Islander peoples utilizing an automated computer-based multi-lesion eye screening program for diabetic retinopathy. The multi-lesion classifier was trained on 1,014 images from the São Paulo Eye Hospital and tested on retinal images containing no DR-related lesion, single lesions, or multiple types of lesions from the Inala Aboriginal and Torres Strait Islander health care centre. The automated multi-lesion classifier has the potential to enhance the efficiency of clinical practice delivering diabetic retinopathy screening. Our program does not necessitate image samples for training from any specific ethnic group or population being assessed and is independent of image pre- or post-processing to identify retinal lesions. In this Aboriginal and Torres Strait Islander population, the program achieved 100% sensitivity and 88.9% specificity in identifying bright lesions, while detection of red lesions achieved a sensitivity of 67% and specificity of 95%. When both bright and red lesions were present, 100% sensitivity with 88.9% specificity was obtained. All results obtained with this automated screening program meet WHO standards for diabetic retinopathy screening.
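The sensitivity and specificity figures reported above are the standard screening metrics. As a reminder of how they are defined, here is a minimal helper; the counts in the test are hypothetical, chosen only to mirror the red-lesion percentages.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening metrics from a confusion matrix:
    sensitivity = TP / (TP + FN)  (fraction of diseased eyes caught)
    specificity = TN / (TN + FP)  (fraction of healthy eyes cleared)
    """
    return tp / (tp + fn), tn / (tn + fp)
```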


International Conference on Image Processing | 2011

Eye specular highlights telltales for digital forensics: A machine learning approach

Priscila Saboia; Tiago Jose de Carvalho; Anderson Rocha

Among the possible forms of photographic fabrication and manipulation, there is an increasing number of composite pictures containing people. With such compositions, it is very common to see politicians depicted side by side with criminals during election campaigns, or Hollywood superstars' relationships wrecked by alleged affairs depicted in gossip magazines. Motivated by this problem, in this paper we analyze telltales obtained from the specular highlights in the eyes of each person in a picture in order to decide whether or not those people were really together at the moment the image was acquired. We validate our approach on a data set containing realistic photographic compositions as well as authentic, unchanged pictures. As a result, our proposed extension improves the classification accuracy of the state-of-the-art solution by more than 20%.


Brazilian Symposium on Computer Graphics and Image Processing | 2009

A Comparative Study among Pattern Classifiers in Interactive Image Segmentation

Thiago Vallin Spina; Javier A. Montoya-Zegarra; Fábio Andrijauskas; Fábio Augusto Faria; Carlos E. A. Zampieri; Sheila M. Pinto-Cáceres; Tiago Jose de Carvalho; Alexandre X. Falcão

Editing natural images usually requires considerable user involvement, with segmentation being one of the main challenges. This paper describes a unified graph-based framework for fast, precise, and accurate interactive image segmentation. The method divides segmentation into object recognition, enhancement, and extraction. Recognition is done by the user by selecting markers inside and outside the object. Enhancement increases the dissimilarities between object and background, and extraction separates them. Enhancement is performed by a fuzzy pixel classifier and has a great impact on the number of markers required for extraction. To minimize user involvement, we focus this paper on a comparative study among popular classifiers for enhancement, conducting experiments with several natural images and seven users.


Archive | 2018

Malicious Software Classification Using VGG16 Deep Neural Network’s Bottleneck Features

Edmar Rezende; Guilherme C. S. Ruppert; Tiago Jose de Carvalho; Antonio Theophilo; Fabio Ramos; Paulo Licio de Geus

Malicious software (malware) has been extensively employed for illegal purposes, and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes it possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using the bottleneck features of a VGG16 deep neural network. Malware samples are represented as byteplot grayscale images, and the convolutional layers of a VGG16 network pre-trained on the ImageNet dataset are used for bottleneck feature extraction. These features are then used to train an SVM classifier for the malware family classification task. Experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature that require feature engineering and considerable domain expertise.
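The byteplot representation mentioned above can be sketched as follows: each byte of the sample becomes one grayscale pixel intensity, and the byte stream is wrapped into fixed-width rows. The function name, the default width, and the zero-padding of the last row are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def byteplot(raw_bytes, width=64):
    """Render a binary sample's raw bytes as a grayscale 'byteplot' image.

    Each byte maps to one pixel value (0-255); the stream is wrapped into
    rows of `width` pixels and zero-padded to fill the last row, yielding
    the kind of image that is then fed to a pre-trained CNN.
    """
    data = np.frombuffer(raw_bytes, dtype=np.uint8)
    rows = -(-len(data) // width)  # ceiling division
    img = np.zeros(rows * width, dtype=np.uint8)
    img[:len(data)] = data
    return img.reshape(rows, width)
```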


Brazilian Symposium on Computer Graphics and Image Processing | 2017

Detecting Computer Generated Images with Deep Convolutional Neural Networks

Edmar Roberto Santana de Rezende; Guilherme C. S. Ruppert; Tiago Jose de Carvalho

Computer graphics techniques for image generation have reached an era in which, day after day, the quality of the produced content impresses even the most skeptical viewer. Although this is a great advance for industries such as games and movies, it becomes a real problem when such techniques are applied to produce fake images. In this paper we propose a new approach to computer-generated image detection using a deep convolutional neural network model based on ResNet-50 and transfer learning. Unlike state-of-the-art approaches, the proposed method classifies images as computer generated or photo generated directly from the raw image data, with no need for any pre-processing or hand-crafted feature extraction. Experiments on a public dataset comprising 9,700 images show an accuracy higher than 94%, comparable to results reported in the literature, without the drawback of a laborious, manual step of specialized feature extraction and selection.


Revista De Informática Teórica E Aplicada | 2015

Visual Computing and Machine Learning Techniques for Digital Forensics

Tiago Jose de Carvalho; Helio Pedrini; Anderson Rocha

It is impressive how fast science has advanced in so many different fields. In particular, technological advances are astonishing many people, bringing into their reality facts that were previously beyond their imagination. Inspired by methods first presented in science fiction, the computer science community has created a new research area named Digital Forensics, which aims at developing and deploying methods for fighting digital crimes such as digital image forgery. This work presents some of the main concepts associated with Digital Forensics and, complementarily, presents some recent and powerful techniques relying on Computer Graphics, Image Processing, Computer Vision, and Machine Learning for detecting forgeries in photographs. Topics addressed in this work include source attribution, spoofing detection, pornography detection, multimedia phylogeny, and forgery detection. Finally, this work highlights the challenges and open problems in Digital Image Forensics to provide readers with the myriad opportunities available for research.

Collaboration


Dive into Tiago Jose de Carvalho's collaboration.

Top Co-Authors

Anderson Rocha
State University of Campinas

Siome Goldenstein
State University of Campinas

Jacques Wainer
State University of Campinas

Helio Pedrini
State University of Campinas

Fábio Augusto Faria
State University of Campinas

Thiago Vallin Spina
State University of Campinas

Alexandre X. Falcão
State University of Campinas