Publications


Featured research published by Mas Rina Mustaffa.


2012 International Conference on Information Retrieval & Knowledge Management | 2012

Multi-resolution Joint Auto Correlograms: Determining the distance function

Mas Rina Mustaffa; Fatimah Ahmad; Ramlan Mahmod; Shyamala Doraisamy

A distance function plays an important role in content-based image retrieval, where the ideal distance function is able to close the gap between computerised image interpretation and human similarity judgment. In this paper, several distance functions related to the advancement of the Colour Auto Correlogram are studied and compared in order to determine the most suitable distance function for the proposed Multi-resolution Joint Auto Correlograms descriptor. An experiment has been conducted on the SIMPLIcity image database consisting of 1000 images, where the precision, recall, and rank of the various distance functions are measured. Retrieval results show that the L1-norm achieves a higher precision rate of 78.52% and is able to rank similar images better (a rank of 199) compared to the Generalised Tversky Index distance function.
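The comparison above hinges on how a distance function orders feature vectors. As a minimal illustration (the auto-correlogram feature values below are invented, not taken from the paper), the L1-norm between two descriptors can be computed as:

```python
import numpy as np

def l1_distance(f1, f2):
    """L1-norm (city-block) distance between two feature vectors."""
    return float(np.sum(np.abs(np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float))))

# Hypothetical auto-correlogram features for a query and two database images.
query = [0.30, 0.25, 0.20, 0.25]
img_a = [0.28, 0.27, 0.19, 0.26]  # visually similar
img_b = [0.05, 0.60, 0.10, 0.25]  # visually different

# The smaller distance identifies the more similar image.
print(l1_distance(query, img_a) < l1_distance(query, img_b))  # True
```

Retrieval then amounts to sorting the database by this distance to the query and measuring precision, recall, and rank over the sorted list.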


Multimedia Tools and Applications | 2017

Visual and semantic context modeling for scene-centric image annotation

Mohsen Zand; Shyamala Doraisamy; Alfian Abdul Halin; Mas Rina Mustaffa

Automatic image annotation enables efficient indexing and retrieval of images in large-scale image collections, where manual image labeling is an expensive and labor-intensive task. This paper proposes a novel approach to automatically annotate images with coherent semantic concepts learned from image contents. It exploits sub-visual distributions from each visually complex semantic class, disambiguates visual descriptors in a visual context space, and assigns image annotations by modeling image semantic context. The sub-visual distributions are discovered through a clustering algorithm and probabilistically associated with semantic classes using mixture models. The clustering algorithm can handle the intra-category visual diversity of the semantic concepts as well as the curse of dimensionality of the image descriptors. Hence, mixture models that formulate the sub-visual distributions assign relevant semantic classes to local descriptors. To capture non-ambiguous and visually consistent local descriptors, the visual context is learned by a probabilistic Latent Semantic Analysis (pLSA) model that links images and their visual contents. In order to maximize the annotation consistency for each image, another context model characterizes the contextual relationships between semantic concepts using a concept graph. Therefore, image labels are finally specialized for each image in a scene-centric view, where images are considered as unified entities. In this way, highly consistent annotations are probabilistically assigned to images, closely correlated with the visual contents and true semantics of the images. Experimental validation on several datasets shows that this method outperforms state-of-the-art annotation algorithms while effectively capturing consistent labels for each image.
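The scene-centric idea of scoring a whole label set (visual evidence plus contextual consistency from a concept graph) can be sketched very roughly as follows; the likelihoods and co-occurrence weights here are invented for illustration and are not the paper's learned pLSA or mixture models:

```python
import itertools

# Hypothetical visual likelihoods P(label | image content).
visual = {"sky": 0.9, "sea": 0.7, "desk": 0.6, "beach": 0.5}

# Hypothetical concept co-occurrence weights (edges of a concept graph).
cooc = {("sky", "sea"): 0.8, ("sky", "beach"): 0.7,
        ("sea", "beach"): 0.9, ("sky", "desk"): 0.1,
        ("sea", "desk"): 0.05, ("beach", "desk"): 0.05}

def edge(a, b):
    """Symmetric lookup of a co-occurrence weight."""
    return cooc.get((a, b)) or cooc.get((b, a)) or 0.0

def annotation_score(labels):
    """Visual evidence plus contextual consistency of a candidate label set."""
    vis = sum(visual[l] for l in labels)
    ctx = sum(edge(a, b) for a, b in itertools.combinations(labels, 2))
    return vis + ctx

# A coherent scene-centric label set beats a visually strong but
# contextually inconsistent one.
print(annotation_score(["sky", "sea", "beach"]) >
      annotation_score(["sky", "sea", "desk"]))  # True
```

The point of the sketch is only the structure of the decision: per-label visual scores alone would admit "desk", while the contextual term steers the final annotation toward a consistent scene.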


2010 International Conference on Information Retrieval & Knowledge Management (CAMP) | 2010

Invariant Generalised Ridgelet-Fourier for shape-based image retrieval

Mas Rina Mustaffa; Fatimah Ahmad; Ramlan Mahmod; Shyamala Doraisamy

A new shape descriptor called the Invariant Generalised Ridgelet-Fourier is defined for the application of Content-based Image Retrieval (CBIR). The proposed spectral-based method is invariant to rotation, scaling, and translation (RST) and is able to handle images of arbitrary size. The implementation of the Ridgelet transform on the ellipse containing the shape and the normalisation of the Radon transform are introduced. The 1D Wavelet transform is then applied to the Radon slices. In order to extract rotation-invariant features, the Fourier transform is applied in the Ridgelet domain. The performance of the proposed method is assessed on the standard MPEG-7 CE-1 B dataset in terms of several objective evaluation criteria. The experiments show that the proposed method provides promising results compared to several previous methods.
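The pipeline described here (Radon projections, a 1D wavelet on each slice, then a Fourier transform across angles for rotation invariance) can be sketched as below. This is a simplified illustration only: it uses a basic Haar wavelet and omits the paper's ellipse containment and normalisation steps.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles):
    """Radon transform: line-integral projections at the given angles (degrees)."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def haar_1d(x):
    """One level of the 1D Haar wavelet transform (approximation + detail)."""
    x = x[: len(x) // 2 * 2]
    s = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([s, d])

def ridgelet_fourier(image, n_angles=16):
    """Sketch of a Ridgelet-Fourier descriptor: Radon slices -> 1D wavelet
    per slice -> Fourier magnitude across the angular axis."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    slices = radon(image.astype(float), angles)
    coeffs = np.stack([haar_1d(s) for s in slices])
    # The Fourier magnitude along the angular axis is insensitive to cyclic
    # shifts of the projections, i.e. to rotations of the shape.
    return np.abs(np.fft.fft(coeffs, axis=0))

img = np.zeros((32, 32))
img[8:24, 12:20] = 1.0  # a simple rectangular shape
desc = ridgelet_fourier(img)
print(desc.shape)  # (16, 32)
```

In the actual method the Ridgelet transform is taken over the ellipse containing the shape, which is what allows arbitrary (non-square) image sizes.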


Multimedia Tools and Applications | 2018

An effective fusion model for image retrieval

Leila Mansourian; Muhamad Taufik Abdullah; Lili Nurliyana Abdullah; Azreen Azman; Mas Rina Mustaffa

In the past decade, the popular Bag of Visual Words approach has been applied to many computer vision tasks, including image classification, video search, robot localization, and texture recognition. Unfortunately, most approaches use intensity features and discard color information, an important characteristic of any image that is motivated by human vision. Moreover, when background colors dominate the foreground, the Dominant Color Descriptor (DCD) retrieves images with similar background colors rather than similar objects. On the other hand, color features alone are not sufficient to distinguish similar objects with different colors (e.g. a white dog vs. a black dog). To solve these problems, a new Salient DCD (SDCD) color descriptor is proposed, which extracts foreground color and adds semantic information to DCD based on color distances and salient-object extraction methods. In addition, a new fusion model is presented to fuse the SDCD histogram with the PHOW MSDSIFT histogram. Performance evaluation on several datasets proves that the new approach outperforms existing state-of-the-art methods.
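One simple way to realise such a fusion model (the paper's exact scheme may differ; this is a generic weighted-concatenation sketch, and the histogram values are invented stand-ins for the SDCD and PHOW descriptors):

```python
import numpy as np

def fuse_histograms(color_hist, texture_hist, w=0.5):
    """Late fusion by weighted concatenation of two L1-normalised histograms.
    w controls the relative weight of the colour channel."""
    c = np.asarray(color_hist, dtype=float)
    t = np.asarray(texture_hist, dtype=float)
    c = c / c.sum()
    t = t / t.sum()
    return np.concatenate([w * c, (1.0 - w) * t])

# Hypothetical stand-ins for an SDCD colour histogram and a PHOW histogram.
color = [4, 1, 3]
texture = [2, 2, 6, 2]
fused = fuse_histograms(color, texture, w=0.6)
print(round(fused.sum(), 6))  # 1.0
```

The fused vector can then be compared with any histogram distance (e.g. L1), with `w` tuned on a validation set to balance colour against local-feature evidence.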


International Visual Informatics Conference | 2015

BoVW Model for Animal Recognition: An Evaluation on SIFT Feature Strategies

Leila Mansourian; Muhamad Taufik Abdullah; Lili Nurliyana Abdullah; Azreen Azman; Mas Rina Mustaffa

Classifying images into categories has attracted considerable interest in both research and practice. Content-based Image Retrieval (CBIR) alone has not succeeded in solving the semantic gap problem; the Bag of Visual Words (BoVW) model was therefore created to quantize different visual features into words. The SIFT detector is invariant and robust to translation, rotation, and scaling, and partially invariant to affine distortion and illumination changes. The aim of this paper is to investigate the potential of the BoVW model for animal recognition. The most suitable SIFT feature extraction method for animal images is also identified. A performance evaluation of several SIFT feature strategies validates that MSDSIFT feature extraction gives better results.
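The core BoVW step, quantizing local descriptors against a codebook of visual words, can be sketched as follows (the 2-D toy descriptors stand in for 128-D SIFT vectors, and the codebook would normally come from k-means over training descriptors):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantise local descriptors against a visual-word codebook and
    return an L1-normalised bag-of-visual-words histogram."""
    d = np.asarray(descriptors, dtype=float)   # (n_desc, dim)
    c = np.asarray(codebook, dtype=float)      # (n_words, dim)
    # Squared Euclidean distance from every descriptor to every word.
    dists = ((d[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)               # nearest word per descriptor
    hist = np.bincount(words, minlength=len(c)).astype(float)
    return hist / hist.sum()

# Toy 2-D "descriptors" and a 3-word codebook.
codebook = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
descs = [[0.1, 0.0], [0.9, 1.1], [5.2, 4.8], [4.9, 5.1]]
print(bovw_histogram(descs, codebook).tolist())  # [0.25, 0.25, 0.5]
```

The resulting fixed-length histogram is what a classifier consumes for recognition, regardless of how many local features the image produced.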


International Conference on Advanced Computer Science Applications and Technologies | 2015

A Review on Content-Based Image Retrieval Representation and Description for Fish

Noorul Shuhadah Osman; Mas Rina Mustaffa

There is an increasing interest in the description and representation of fish species images, for which Content-based Image Retrieval (CBIR) is applied. Due to the uncontrolled deep-sea underwater environment, it is very hard to accurately estimate the similarities between fishes and retrieve them according to their species, because visual feature extraction for fish image representation is often ineffective. In this paper, CBIR for the representation and description of fish is reviewed. Shape is one of the most important features for describing fish, and this paper considers the combination of global and local shape features. Existing combinations are carefully studied, and the importance of global and local shape features is presented. Possible directions for future work are also suggested.


Asia Information Retrieval Symposium | 2014

Multi-resolution Shape-Based Image Retrieval Using Ridgelet Transform

Mas Rina Mustaffa; Fatimah Ahmad; Shyamala Doraisamy

Complicated shapes can be effectively characterized using multi-resolution descriptors. One popular method is the Ridgelet transform, which has enjoyed very little exposure in describing shapes for Content-based Image Retrieval (CBIR). Many of the existing Ridgelet transforms are only applied to images of size M×M. For M×N sized images, the images need to be segmented into M×M sub-images prior to processing. A different number of orientations and cut-off points for the Radon transform parameters also need to be utilized according to the image size. This paper presents a new shape descriptor for CBIR based on the Ridgelet transform which is able to handle images of various sizes. The utilization of an ellipse template for better image coverage and the normalization of the Ridgelet transform are introduced. For better retrieval, a template-option scheme is also introduced. Retrieval effectiveness obtained by the proposed method has been shown to be higher compared to several previous descriptors.
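The ellipse-template idea (covering an M×N image without square pre-segmentation) can be illustrated with a small sketch; this only shows the inscribed-ellipse mask, not the paper's full template-option scheme:

```python
import numpy as np

def ellipse_mask(m, n):
    """Boolean mask of the ellipse inscribed in an m x n image, usable as a
    template so that arbitrary-sized images need no square pre-segmentation."""
    ys, xs = np.mgrid[0:m, 0:n]
    cy, cx = (m - 1) / 2.0, (n - 1) / 2.0
    # Points inside (y/a)^2 + (x/b)^2 <= 1 with semi-axes a = m/2, b = n/2.
    return ((ys - cy) / (m / 2.0)) ** 2 + ((xs - cx) / (n / 2.0)) ** 2 <= 1.0

mask = ellipse_mask(20, 30)
print(mask.shape, bool(mask[10, 15]), bool(mask[0, 0]))  # (20, 30) True False
```

The Radon/Ridgelet projections are then restricted to pixels inside this mask, which adapts naturally to the aspect ratio of the image.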


International Conference on Signal and Image Processing Applications | 2009

Generalized Ridgelet-Fourier for M×N images: Determining the normalization criteria

Mas Rina Mustaffa; Fatimah Ahmad; Ramlan Mahmod; Shyamala Doraisamy

The Ridgelet transform (RT) has gained popularity due to its capability in dealing with line singularities effectively. Many existing RTs, however, are only applied to images of size M×M, or the M×N images need to be pre-segmented into M×M sub-images prior to processing. The research presented in this article is aimed at the development of a generalized RT for content-based image retrieval, so that it can be applied easily to images of various sizes. This article focuses on comparing and determining the normalization criteria for the Radon transform, which aid in achieving this aim. The sets of Radon transform normalization criteria are compared and evaluated on an image database consisting of 216 images, where precision, recall, and the Averaged Normalized Modified Retrieval Rank (ANMRR) are measured.
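The ANMRR metric used here is the average over queries of the MPEG-7 Normalized Modified Retrieval Rank. One common formulation (conventions for K and the miss penalty vary slightly between papers; the query ranks below are invented) is:

```python
def nmrr(ranks, ng, K):
    """MPEG-7 Normalized Modified Retrieval Rank for one query.
    ranks: 1-based retrieval positions of the ng ground-truth items.
    K: examined depth; items ranked beyond K get a 1.25*K penalty rank."""
    penalty = 1.25 * K
    avr = sum(r if r <= K else penalty for r in ranks) / ng  # average rank
    mrr = avr - 0.5 - ng / 2.0                               # modified rank
    return mrr / (penalty - 0.5 - ng / 2.0)                  # normalise to [0, 1]

# Hypothetical query with 4 relevant images; K = 4 * ng is a common choice.
ng, K = 4, 16
perfect = nmrr([1, 2, 3, 4], ng, K)    # all relevant ranked first
poor = nmrr([20, 25, 30, 40], ng, K)   # all relevant missed within K
print(round(perfect, 4), round(poor, 4))  # 0.0 1.0
```

ANMRR is then the mean of `nmrr` over all queries, so lower values indicate better retrieval.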


International Symposium on Information Technology | 2008

Dominant colour descriptor with spatial information for Content-based Image Retrieval

Mas Rina Mustaffa; Fatimah Ahmad; Rahmita Wirza O. K. Rahmat; Ramlan Mahmod

An important problem in colour content-based image retrieval (CBIR) is the lack of an effective way to represent both the colour and spatial information of an image. In order to solve this problem, a new dominant colour descriptor that employs the spatial information of an image is proposed. A maximum of three dominant colour regions in an image, together with the respective coordinates of their minimum-bounding rectangles (MBR), are first extracted using colour-based dominant region segmentation. An improved sub-block technique is then used to determine the location of the dominant colour regions by taking into consideration the total horizontal and vertical distances of a region at each location where it overlaps. A query-by-example CBIR system implementing the colour-spatial technique is developed, and experimental studies on an image database consisting of 900 images are conducted. The experiments show that retrieval effectiveness has significantly improved, by 85.86%.
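The first stage (finding a dominant colour and the minimum-bounding rectangle of its region) can be sketched as below; the two-colour palette and toy image are invented, and a real system would derive the palette from segmentation rather than fix it:

```python
import numpy as np

def dominant_color_mbr(img, palette):
    """Quantise pixels to a small palette and return the dominant colour index
    plus the minimum-bounding rectangle (MBR) of its region."""
    h, w, _ = img.shape
    px = img.reshape(-1, 3).astype(float)
    pal = np.asarray(palette, dtype=float)
    # Nearest palette colour for every pixel.
    labels = ((px[:, None, :] - pal[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    dominant = int(np.bincount(labels, minlength=len(pal)).argmax())
    ys, xs = np.nonzero(labels.reshape(h, w) == dominant)
    mbr = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return dominant, mbr  # MBR as (x1, y1, x2, y2)

# Toy 4x4 image: mostly "red" with a "blue" top-left corner.
palette = [[255, 0, 0], [0, 0, 255]]
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[...] = [255, 0, 0]
img[0:2, 0:2] = [0, 0, 255]
print(dominant_color_mbr(img, palette))  # (0, (0, 0, 3, 3))
```

The MBR coordinates are what the sub-block technique then uses to localise each dominant-colour region within the image grid.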


Journal of Fisheriessciences.com | 2017

Gonad Ultrasonography Image Preprocessing for Mahseer (Tor tombroides)

Nurul Asmaa Abd Razak; Hizmawati Madzin; Fatimah Khalid; Mas Rina Mustaffa

In the context of breeding and seed production of Mahseer species, understanding and controlling the gonad maturation level of Mahseer is of strong interest for scientific and commercial purposes. The possible use of ultrasonography for monitoring gonad maturation in Mahseer fish is investigated. Previous studies show that ultrasonography images of the gonad can be affected by considerable speckle noise, and subtle differences between the speckle noise and the Mahseer eggs make it difficult to identify the eggs in the gonad ultrasonography image. To eliminate this speckle noise, an image preprocessing method is required. Despeckling filters are first applied to remove the noise, and experiments are conducted to compare which despeckling technique is most suitable for gonad ultrasonography images. Based on the results, the best despeckling technique is chosen and a preprocessing framework is introduced for identifying the eggs in the gonad ultrasonography image. This noninvasive tool can then be used to monitor and improve the maturation level of Mahseer fish, so ultrasonography has great potential for use with Mahseer in both conservation and aquaculture. To our knowledge, this is the first article on the preprocessing of ultrasonography images of Mahseer or any fish species.
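A despeckling comparison of the kind described can be sketched on synthetic data; the abstract does not say which filters the paper tests, so the median filter below is just one common candidate, and the gamma-distributed multiplicative noise is a standard stand-in for speckle:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Synthetic "ultrasound" image: uniform background with one bright blob
# (an "egg"), corrupted by multiplicative speckle-like noise of unit mean.
clean = np.full((64, 64), 50.0)
clean[20:28, 20:28] = 200.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)

# A median filter is a common first-line despeckling choice: it suppresses
# impulsive noise while largely preserving the edges of the egg region.
despeckled = median_filter(speckled, size=3)

mse_before = float(((speckled - clean) ** 2).mean())
mse_after = float(((despeckled - clean) ** 2).mean())
print(mse_after < mse_before)  # True
```

A real comparison would run several filters (e.g. median, Lee, Frost, wavelet-based) over actual gonad ultrasonography images and rank them by such an error measure plus egg-detection accuracy.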

Collaboration


Top co-authors of Mas Rina Mustaffa:

Fatimah Ahmad (Universiti Putra Malaysia)
Azreen Azman (Universiti Putra Malaysia)
Ramlan Mahmod (Information Technology University)
Mohsen Zand (Universiti Putra Malaysia)
Fatimah Khalid (Universiti Putra Malaysia)