Publication


Featured research published by Adilson Gonzaga.


International Conference on Medical Imaging and Augmented Reality | 2001

Segmentation and analysis of leg ulcers color images

Andres Anobile Perez; Adilson Gonzaga; José Marcos Alves

This paper presents a methodology for the segmentation and analysis of tissues in color images of leg ulcers. The segmentation is obtained through an automatic analysis of the R, G, B, S and I channels after converting the image from the RGB color space to the HSI color space. The aim is to determine which of the five channels has the characteristics that make the segmentation process most efficient. After the analysis, the selected channel is segmented and used as a mask over the original image, so that only the inner tissues of the wound are analyzed. The analysis is carried out by predetermined functions that assign each processed pixel a membership grade expressing its degree of belonging to a specified tissue class. The article also shows the results obtained from both the segmentation and the analysis of tissues using the proposed method.
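The abstract does not give the conversion formulas, but the RGB-to-HSI change of color space it relies on can be sketched as follows (a minimal NumPy version of the standard geometric conversion; the function name is ours, not the paper's):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to the HSI color space.

    Returns H in radians in [0, 2*pi), and S and I in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    # Saturation: 1 - min/mean, guarding against division by zero.
    minimum = np.minimum(np.minimum(r, g), b)
    s = np.where(i > 0, 1.0 - minimum / np.maximum(i, 1e-12), 0.0)
    # Hue from the standard geometric formula.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i
```

The five per-pixel channels the paper analyzes are then R, G, B plus the returned S and I.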


Brazilian Symposium on Computer Graphics and Image Processing | 2006

Hand Image Segmentation in Video Sequence by GMM: a comparative analysis

Hebert Luchetti Ribeiro; Adilson Gonzaga

This paper describes different approaches to a real-time GMM (Gaussian Mixture Model) background-subtraction algorithm for hand image segmentation in video sequences. In each captured image, pixels belonging to the hands are separated from the background based on background extraction and skin-color segmentation. A time-adaptive mixture of Gaussians is used to model the distribution of each pixel's color value. For an input image, every new pixel value is checked to decide whether it matches one of the existing Gaussians, based on its distance from the mean in terms of the standard deviation. The parameters of the best-matching distribution are updated and its weight is increased. It is assumed that background pixel values have low variance and large weight; pixels that do not fit this model are considered foreground and are compared against skin-color thresholds. The hands' position and other attributes are tracked frame by frame. This enables us to distinguish hand movement from the background and from other moving objects, as well as to extract motion information for dynamic hand gesture recognition.
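The per-pixel mixture update described above can be sketched as follows. This is a simplified single-channel version in the spirit of time-adaptive GMM background subtraction; the class name, parameter values, and background test are illustrative, not the paper's implementation:

```python
import numpy as np

class PixelGMM:
    """Time-adaptive mixture of Gaussians for a single pixel (sketch)."""

    def __init__(self, n_gauss=3, alpha=0.05, match_sigmas=2.5):
        self.means = np.zeros(n_gauss)        # per-Gaussian mean intensity
        self.vars = np.full(n_gauss, 30.0)    # per-Gaussian variance
        self.weights = np.full(n_gauss, 1.0 / n_gauss)
        self.alpha = alpha                    # learning rate
        self.match_sigmas = match_sigmas      # match threshold in std devs

    def update(self, value):
        """Feed one observed pixel value; return True if it looks like background."""
        d = np.abs(value - self.means)
        matches = d < self.match_sigmas * np.sqrt(self.vars)
        if not matches.any():
            # No Gaussian explains the value: replace the weakest one.
            k = int(np.argmin(self.weights))
            self.means[k], self.vars[k], self.weights[k] = value, 30.0, 0.05
            self.weights /= self.weights.sum()
            return False                      # treated as foreground
        k = int(np.argmax(matches))           # first matching Gaussian
        # Update the matched distribution and increase its weight.
        self.means[k] += self.alpha * (value - self.means[k])
        self.vars[k] += self.alpha * ((value - self.means[k]) ** 2 - self.vars[k])
        self.weights = (1 - self.alpha) * self.weights
        self.weights[k] += self.alpha
        self.weights /= self.weights.sum()
        # Background Gaussians have large weight and low variance.
        return self.weights[k] / np.sqrt(self.vars[k]) > 0.02
```

A pixel that keeps the same value over many frames is absorbed into a high-weight, low-variance Gaussian and is thereafter classified as background; a sudden hand pixel fails the match test and is passed on to the skin-color stage.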


Systems, Man and Cybernetics | 2012

Dynamic Features for Iris Recognition

R. M. da Costa; Adilson Gonzaga

The human eye is sensitive to visible light. Increasing the illumination on the eye causes the pupil to contract, while decreasing it causes the pupil to dilate. Visible light also causes specular reflections inside the iris ring. The human retina, on the other hand, is less sensitive to near-infrared (NIR) radiation in the wavelength range from 800 nm to 1400 nm, yet iris detail can still be imaged under NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping light-induced reflexes from degrading the quality of the digitized image, this paper describes a device based on the consensual reflex, the biological phenomenon by which the two pupils contract and dilate synchronously when one eye is illuminated by visible light. We propose to capture images of the pupil of one eye under NIR illumination while illuminating the other eye with a visible-light pulse. This new approach extracts iris features called “dynamic features (DFs).” The methodology extracts information about the way the human eye reacts to light and uses it for biometric recognition. The results demonstrate that these features are discriminative: even using the Euclidean distance measure, an average recognition accuracy of 99.1% was obtained. The proposed methodology also has the potential to be “fraud-proof,” because DFs can only be extracted from living irises.
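The final matching step, comparing dynamic-feature vectors with the Euclidean distance, amounts to nearest-neighbor recognition over enrolled templates. A minimal sketch (function name and data are hypothetical, not the paper's):

```python
import numpy as np

def recognize(probe, gallery):
    """Return the identity whose enrolled feature vector is closest
    to `probe` under the Euclidean distance, plus that distance.

    gallery: dict mapping identity -> 1-D feature vector (template).
    """
    best_id, best_dist = None, float("inf")
    for identity, template in gallery.items():
        dist = float(np.linalg.norm(probe - template))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id, best_dist
```

In a verification setting the same distance would instead be compared against a fixed acceptance threshold.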


Multimedia Tools and Applications | 2011

Human gait recognition using extraction and fusion of global motion features

Milene Arantes; Adilson Gonzaga

This paper proposes a novel computer vision approach that processes video sequences of people walking and then recognises those people by their gait. Human motion carries different information that can be analysed in various ways. The skeleton carries motion information about human joints, and the silhouette carries information about boundary motion of the human body. Moreover, binary and gray-level images contain different information about human movements. This work proposes to recover these different kinds of information to interpret the global motion of the human body based on four different segmented image models, using a fusion model to improve classification. Our proposed method considers the set of the segmented frames of each individual as a distinct class and each frame as an object of this class. The methodology applies background extraction using the Gaussian Mixture Model (GMM), a scale reduction based on the Wavelet Transform (WT) and feature extraction by Principal Component Analysis (PCA). We propose four new schemas for motion information capture: the Silhouette-Gray-Wavelet model (SGW) captures motion based on grey level variations; the Silhouette-Binary-Wavelet model (SBW) captures motion based on binary information; the Silhouette-Edge-Binary model (SEW) captures motion based on edge information and the Silhouette-Skeleton-Wavelet model (SSW) captures motion based on skeleton movement. The classification rates obtained separately from these four different models are then merged using a new proposed fusion technique. The results suggest excellent performance in terms of recognising people by their gait.
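The PCA feature-extraction stage can be sketched as follows, assuming frames have already been background-subtracted, wavelet-reduced, and flattened to row vectors (a minimal SVD-based PCA with our own function name, not the authors' code):

```python
import numpy as np

def pca_project(frames, n_components=2):
    """Project flattened frames onto their top principal components.

    frames: (n_samples, n_pixels) array, one flattened frame per row.
    Returns an (n_samples, n_components) array of projections.
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data: rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return centered @ components.T
```

Each of the four segmented image models (SGW, SBW, SEW, SSW) would produce its own projected feature set, and the per-model classification scores are then fused.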


Machine Vision and Applications | 2013

A new approach for color image segmentation based on color mixture

Osvaldo Severino; Adilson Gonzaga

The aim of this paper is to propose a new methodology for color image segmentation. We have developed an image processing technique based on color mixture, inspired by the way painters overlap layers of various hues of paint when creating oil paintings. We have also evaluated the distribution of cones in the human retina for the interpretation of these colors, and we propose a scheme for weighting the color mixture. The method expresses the mixture of black, blue, green, cyan, red, magenta, yellow and white quantified by the binary weight of the color that makes up the pixels of an RGB image with 8 bits per channel. The color mixture generates planes that intersect the RGB cube, defining the HSM (Hue, Saturation, Mixture) color space. The position of these planes inside the RGB cube is modeled based on the distribution of the r, g and b cones of the human retina. To demonstrate the applicability of the proposed methodology, we present the segmentation of “human skin” and “non-skin” pixels in digital color images. The performance of the color mixture was analyzed with a Gaussian distribution in the HSM, HSV and YCbCr color spaces, and the method was compared with other skin/non-skin classifiers. The results demonstrate that our approach surpassed the performance of all compared methodologies. The main contributions of this paper are a new way of interpreting color through binary bit-plane levels and its application in image processing techniques.
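The Gaussian skin/non-skin evaluation can be illustrated as follows. Since the paper's HSM planes are not reproduced here, this sketch uses the CbCr plane of YCbCr (one of the compared spaces), and the mean and covariance are made-up illustrative values, not fitted parameters:

```python
import numpy as np

SKIN_MEAN = np.array([120.0, 155.0])          # hypothetical (Cb, Cr) mean
SKIN_COV = np.array([[60.0, 10.0],
                     [10.0, 40.0]])           # hypothetical covariance

def skin_likelihood(cbcr):
    """Gaussian likelihood of a (Cb, Cr) pixel under the skin model."""
    d = np.asarray(cbcr, dtype=float) - SKIN_MEAN
    inv = np.linalg.inv(SKIN_COV)
    mahalanobis = d @ inv @ d
    norm = 2 * np.pi * np.sqrt(np.linalg.det(SKIN_COV))
    return np.exp(-0.5 * mahalanobis) / norm

def is_skin(cbcr, threshold=1e-4):
    """Classify a pixel as skin when its likelihood exceeds a threshold."""
    return skin_likelihood(cbcr) > threshold
```

The same construction applies in any of the compared spaces: fit a Gaussian to skin samples in the chosen coordinates, then threshold the per-pixel likelihood.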


International Journal of Pattern Recognition and Artificial Intelligence | 2014

Object Recognition Based on Bag of Features and a New Local Pattern Descriptor

Carolina Toledo Ferraz; Osmando Pereira Junior; Marcos Verdini Rosa; Adilson Gonzaga

Bag of Features (BoF) has gained a lot of interest in computer vision. A visual codebook based on robust appearance descriptors extracted from local image patches is an effective means of texture analysis and scene classification. This paper presents a new method for local feature description based on gray-level difference mapping, called the Mean Local Mapped Pattern (M-LMP). The proposed descriptor is robust to image scaling, rotation, illumination and partial viewpoint changes. The training set is composed of rotated and scaled images with changes in illumination and viewpoint; the test set is composed of rotated and scaled images. The proposed descriptor captures smaller differences between image pixels more effectively than similar descriptors do. In our experiments, we implemented an object recognition system based on the M-LMP and compared our results to the Center-Symmetric Local Binary Pattern (CS-LBP) and the Scale-Invariant Feature Transform (SIFT). The object classification results, analyzed in a BoF methodology, show that our descriptor performs better than these two previously published methods.
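For context, the CS-LBP baseline that M-LMP is compared against codes each pixel by thresholding the four center-symmetric pixel pairs of its 8-neighborhood, giving a 4-bit code per pixel. A minimal sketch (the threshold value is illustrative):

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Return a map of 4-bit CS-LBP codes (values 0..15) for the
    interior pixels of a 2-D grayscale image."""
    img = np.asarray(image, dtype=float)
    # The four center-symmetric pairs, as (row, col) offsets.
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[1 + r1:h - 1 + r1, 1 + c1:w - 1 + c1]
        b = img[1 + r2:h - 1 + r2, 1 + c2:w - 1 + c2]
        # Set the bit where the pair difference exceeds the threshold.
        codes |= ((a - b) > threshold).astype(int) << bit
    return codes
```

In a BoF pipeline, histograms of such codes over local patches would be quantized against a learned codebook before classification.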


IEEE Transactions on Computers | 2002

Remote device command and resource sharing over the Internet: a new approach based on a distributed layered architecture

Francisco José Monaco; Adilson Gonzaga

In addition to the remote access and computer-augmented functionality brought about by the earliest modalities of distance operation, technical advances in telematics have opened up a whole new range of applications, one of which, resource sharing, deserves special attention. Nonetheless, while distance operation through computer networks, and particularly over the Internet, has attracted a great deal of attention in recent years, there is still a noticeable lack of contributions on the systematic treatment of essential issues in this field. This paper presents an overview of current trends in this emerging interdisciplinary area and briefly comments on the fundamentals of telematics-supported distance operation. A case study is used to report on an experience involving methodological investigations in this area.


ACM Symposium on Applied Computing | 2014

Feature description based on center-symmetric local mapped patterns

Carolina Toledo Ferraz; Osmando Pereira Jr.; Adilson Gonzaga

Local feature description has gained a lot of interest in many applications, such as texture recognition, image retrieval and face recognition. This paper presents a novel method for local feature description based on gray-level difference mapping, called the Center-Symmetric Local Mapped Pattern (CS-LMP). The proposed descriptor is invariant to image scale, rotation, illumination and partial viewpoint changes, and it captures the nuances of image pixels more effectively. The training set is composed of rotated and scaled images with changes in illumination and viewpoint; the test set is composed of rotated and scaled images. In our experiments, the descriptor is compared to the Center-Symmetric Local Binary Pattern (CS-LBP). The results show that our descriptor performs favorably compared to the CS-LBP.


International Journal of Innovative Computing and Applications | 2009

Edge detection in digital images using fuzzy numbers

Inês Aparecida Gasparotto Boaventura; Adilson Gonzaga

The purpose of this paper is to introduce a new approach for edge detection in grey-shaded images, based on fuzzy number theory. The idea is to deal with the uncertainties in the grey shades making up the image and thus calculate how well each pixel fits a homogeneous region around it. Pixels not belonging to such a region are classified as border pixels. The results show that the technique is simple and computationally efficient, and that it compares well with both traditional border detectors and fuzzy edge detectors.
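The idea can be sketched with a triangular fuzzy number centered on each pixel's grey level: the mean membership of the neighborhood measures how homogeneous the region is, and low membership marks a border pixel. The spread and threshold below are illustrative, not the paper's values:

```python
import numpy as np

def triangular_membership(x, center, spread):
    """Triangular fuzzy number: 1 at `center`, 0 beyond `spread` away."""
    return np.clip(1.0 - np.abs(x - center) / spread, 0.0, 1.0)

def fuzzy_edges(image, spread=20.0, threshold=0.75):
    """Mark interior pixels whose 3x3 neighborhood is not homogeneous."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = img[r - 1:r + 2, c - 1:c + 2]
            mu = triangular_membership(window, img[r, c], spread)
            # Low mean membership => neighborhood is not homogeneous.
            edges[r, c] = mu.mean() < threshold
    return edges
```

On a step image the mask fires only at the columns straddling the intensity jump, which is the behavior the paper's border classification describes.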


Intelligent Systems Design and Applications | 2007

Border Detection in Digital Images: An Approach by Fuzzy Numbers

G. Boaventura; Adilson Gonzaga

The purpose of this paper is to introduce a new approach for edge detection in gray-shaded images, based on fuzzy number theory. The idea is to deal with the uncertainties in the gray shades making up the image and thus calculate how well each pixel fits a homogeneous region around it. Pixels not belonging to such a region are classified as border pixels. The results show that the technique is simple and computationally efficient, and that it compares well with both traditional border detectors and fuzzy edge detectors.

Collaboration

Top co-authors of Adilson Gonzaga:

Luis Carlos Trevelin, Federal University of São Carlos
Milene Arantes, University of São Paulo