
Publication


Featured research published by Shintami Chusnul Hidayati.


OncoTargets and Therapy | 2015

Computer-aided classification of lung nodules on computed tomography images via deep learning technique

Kai-Lung Hua; Che-Hao Hsu; Shintami Chusnul Hidayati; Wen-Huang Cheng; Yu-Jen Chen

Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous one. Accordingly, tuning the classification performance of a conventional CAD scheme is complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless performance tuning. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods can achieve better discriminative results and hold promise for the CAD application domain.
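
To make the CNN side of this pipeline concrete, here is a minimal sketch of a patch-level nodule classifier. The 64x64 grayscale patch size, layer widths, and binary benign/malignant output are illustrative assumptions, not the architecture from the paper.

```python
# A minimal sketch, not the authors' exact model: a small CNN that scores
# 64x64 grayscale CT nodule patches as benign vs. malignant.
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 30
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 30 -> 13
        )
        self.classifier = nn.Linear(32 * 13 * 13, 2)  # two nodule classes (assumed)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = NoduleCNN()
logits = model(torch.randn(8, 1, 64, 64))  # a batch of 8 dummy patches
```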


Pattern Recognition Letters | 2016

A comparative study of data fusion for RGB-D based visual recognition

Jordi Sanchez-Riera; Kai-Lung Hua; Yuan-Sheng Hsiao; Tekoing Lim; Shintami Chusnul Hidayati; Wen-Huang Cheng

Data fusion from different modalities has been extensively studied for a better understanding of multimedia content. On one hand, the emergence of new devices and decreasing storage costs mean growing amounts of data are being collected. Though bigger data makes it easier to mine information, methods for big data analytics are not well investigated. On the other hand, new machine learning techniques, such as deep learning, have been shown to be key to achieving state-of-the-art inference performance in a variety of applications. Therefore, some of the old questions in data fusion need to be addressed again in light of these changes: What is the most effective way to combine data from various modalities? Does the fusion method affect performance with different classifiers? To answer these questions, we present a comparative study evaluating early and late fusion schemes with several types of SVM and deep learning classifiers on two challenging RGB-D based visual recognition tasks: hand gesture recognition and generic object recognition. The findings from this study provide useful policy and practical guidance for the development of visual recognition systems.
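
As a concrete contrast between the two schemes, the sketch below trains SVMs on synthetic stand-ins for RGB and depth descriptors; the feature dimensions, the five-class labels, and the equal late-fusion weights are assumptions for illustration.

```python
# Sketch of early vs. late fusion with scikit-learn SVMs on synthetic
# stand-ins for RGB and depth feature vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_rgb = rng.normal(size=(200, 128))    # assumed RGB descriptors
X_depth = rng.normal(size=(200, 64))   # assumed depth descriptors
y = rng.integers(0, 5, size=200)       # e.g., 5 gesture classes (assumed)

# Early fusion: concatenate modality features, train a single classifier.
early = SVC(probability=True).fit(np.hstack([X_rgb, X_depth]), y)

# Late fusion: train one classifier per modality, then combine posteriors.
clf_rgb = SVC(probability=True).fit(X_rgb, y)
clf_depth = SVC(probability=True).fit(X_depth, y)
p = 0.5 * clf_rgb.predict_proba(X_rgb) + 0.5 * clf_depth.predict_proba(X_depth)
late_pred = clf_rgb.classes_[p.argmax(axis=1)]
```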


ACM Multimedia | 2012

Clothing genre classification by exploiting the style elements

Shintami Chusnul Hidayati; Wen-Huang Cheng; Kai-Lung Hua

This paper presents a novel approach to automatically classifying the upperwear genre from a full-body input image, with no restrictions on model poses, image backgrounds, or image resolutions. Five style elements that are crucial for clothing recognition are identified based on clothing design theory. The corresponding features of each of these style elements are also designed. We illustrate the effectiveness of our approach by showing that the proposed algorithm achieved an overall precision of 92.04%, recall of 92.45%, and F-score of 92.25% on 1,077 clothing images crawled from popular online stores.
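
The overall shape of such a pipeline, under heavy assumptions, might be sketched as follows; the dummy histogram descriptors below merely stand in for the paper's five style-element features, and only the five-element structure comes from the paper.

```python
# Illustrative only: map one descriptor per style element to a genre label.
# The extractors here are dummy stand-ins, not the authors' published features.
import numpy as np
from sklearn.svm import SVC

def style_element_features(image):
    # One dummy descriptor per style element; real descriptors would differ
    # per element (collar, sleeve, etc.).
    return np.concatenate([
        np.histogram(image, bins=4 + k, range=(0, 1))[0] for k in range(5)
    ]).astype(float)

rng = np.random.default_rng(1)
images = rng.random((50, 32, 32))      # dummy full-body crops
labels = rng.integers(0, 4, size=50)   # e.g., 4 upperwear genres (assumed)
X = np.stack([style_element_features(im) for im in images])
clf = SVC().fit(X, labels)             # genre classifier over element features
```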


Conference on Multimedia Modeling | 2016

Locality Constrained Sparse Representation for Cat Recognition

Yu-Chen Chen; Shintami Chusnul Hidayati; Wen-Huang Cheng; Min Chun Hu; Kai-Lung Hua

Cats (Felis catus) play an important social role within our society and can provide considerable emotional support for their owners. Missing, swapped, stolen, and falsely insured cats have become a global problem. Reliable cat identification is thus an essential factor in the effective management of the owned cat population. Traditional cat identification by permanent (e.g., tattoos, microchips, ear tips/notches, and freeze branding), semi-permanent (e.g., identification collars and ear tags), or temporary (e.g., paint/dye and radio transmitters) procedures is not robust enough to provide an adequate level of security. Moreover, these methods might have adverse effects on the cats. Although work on animal identification based on phenotype appearance (face and coat patterns) has received much attention in recent years, none of it specifically targets cats. In this paper, we therefore propose a novel biometric method that recognizes cats by their noses, which cat professionals believe to be a unique identifier. As pioneers of this research topic, we first collected a Cat Database containing 700 cat nose images from 70 different cats. Based on this dataset, we designed a representative dictionary with a data locality constraint for cat identification. Experimental results demonstrate the effectiveness of the proposed method compared to several state-of-the-art feature-based algorithms.
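
A simplified sketch of the locality-constrained idea: encode a query nose descriptor over its K nearest dictionary atoms only, then assign the identity whose atoms reconstruct it best. The descriptor dimension, K, and the random dictionary are placeholders; only the 700-image, 70-cat split mirrors the paper's dataset.

```python
# Simplified locality-constrained coding for identification (a sketch, not
# the paper's exact formulation): restrict the code to the K nearest atoms,
# then classify by per-identity reconstruction residual.
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(128, 700))            # dictionary: one column per training image
atom_label = np.repeat(np.arange(70), 10)  # 70 cats x 10 nose images each
x = rng.normal(size=128)                   # query nose descriptor (stand-in)

K = 20
near = np.argsort(np.linalg.norm(D - x[:, None], axis=0))[:K]  # locality constraint
codes, *_ = np.linalg.lstsq(D[:, near], x, rcond=None)         # local code

residual = {}
for c in np.unique(atom_label[near]):
    idx = atom_label[near] == c
    residual[c] = np.linalg.norm(x - D[:, near][:, idx] @ codes[idx])
print(min(residual, key=residual.get))     # predicted cat identity
```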


IEEE Transactions on Systems, Man, and Cybernetics | 2018

Learning and Recognition of Clothing Genres From Full-Body Images

Shintami Chusnul Hidayati; Chuang-Wen You; Wen-Huang Cheng; Kai-Lung Hua

According to the theory of clothing design, the genres of clothes can be recognized based on a set of visually differentiable style elements, which exhibit salient features of visual appearance and reflect high-level fashion styles for better describing clothing genres. Instead of using less-discriminative low-level features or ambiguous keywords to identify clothing genres, we propose a novel approach for automatically classifying clothing genres based on these visually differentiable style elements. A set of style elements that are crucial for recognizing specific visual styles of clothing genres was identified based on clothing design theory. In addition, the corresponding salient visual features of each style element were identified and formulated as variables that can be computationally derived with various computer vision algorithms. To evaluate the performance of our algorithm, we built a dataset containing 3250 full-body shots crawled from popular online stores. Recognition results show that our proposed algorithms achieved a promising overall precision, recall, and F-score of 88.76%, 88.53%, and 88.64% for recognizing upperwear genres, and 88.21%, 88.17%, and 88.19% for recognizing lowerwear genres, respectively. The effectiveness of each style element and its visual features for recognizing clothing genres was demonstrated through a set of experiments involving different sets of style elements or features. In summary, our experimental results demonstrate the effectiveness of the proposed method in clothing genre recognition.
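
The reported F-scores are consistent with the precision and recall figures, since the F-score is the harmonic mean F = 2PR/(P + R):

```python
# Sanity check of the reported numbers: F-score as the harmonic mean of
# precision and recall.
def f_score(p, r):
    return 2 * p * r / (p + r)

print(round(f_score(88.76, 88.53), 2))  # 88.64 (upperwear)
print(round(f_score(88.21, 88.17), 2))  # 88.19 (lowerwear)
```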


ACM Multimedia | 2017

Popularity Meter: An Influence- and Aesthetics-aware Social Media Popularity Predictor

Shintami Chusnul Hidayati; Yi-Ling Chen; Chao-Lung Yang; Kai-Lung Hua

Social media websites have become an important channel for content sharing and communication between users on social networks. Images shared on these websites, even ones from the same user, tend to receive a quite diverse distribution of views. This raises the problem of image popularity prediction on social media. To address this research topic, we explore three essential components that have a considerable impact on image popularity: user profile, post metadata, and photo aesthetics. Moreover, we make use of state-of-the-art predictive modeling approaches to demonstrate the effectiveness of the proposed features in predicting image popularity. We then evaluate the proposed method on a large number of real image posts from Flickr. The experimental results show significant statistical evidence that combining the proposed features with an ensemble learning method, which combines predictions from support vector regression (SVR) and classification and regression tree (CART) models, offers satisfactory popularity prediction. By understanding the social behavior and the underlying structure of content popularity, our results can also contribute to better algorithms for applications such as content recommendation and advertisement placement.
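
A minimal sketch of the ensemble idea, assuming synthetic features, an assumed log-view-count target, and equal combination weights (the SVR + CART pairing is the paper's; everything concrete below is a stand-in):

```python
# Sketch: average predictions from a support vector regressor and a
# regression tree over user/post/aesthetics features.
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))   # user profile + metadata + aesthetics features
y = rng.normal(size=500)         # e.g., log-scaled view count (assumed target)

svr = SVR().fit(X, y)
cart = DecisionTreeRegressor(max_depth=6).fit(X, y)
popularity = 0.5 * svr.predict(X) + 0.5 * cart.predict(X)  # ensemble estimate
```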


International Conference on Signals and Systems (ICSigSys) | 2017

3D model retrieval based on deep Autoencoder neural networks

Zhao-Ming Liu; Yung-Yao Chen; Shintami Chusnul Hidayati; Shih-Che Chien; Feng-Chia Chang; Kai-Lung Hua

The rapid growth of 3D model resources for 3D printing has created an urgent need for 3D model retrieval systems. Benefiting from the evolution of hardware devices, visualized 3D models can be easily rendered using a tablet computer or handheld mobile device. In this paper, we present a novel 3D model retrieval method involving view-based features and deep learning. Because 2D images are highly distinguishable, describing a 3D model by multiple 2D views is one of the most common approaches to 3D model retrieval. Normalization is typically challenging and time-consuming for view-based retrieval methods; however, this work utilizes an unsupervised deep learning technique, the autoencoder, to refine compact view-based features. The proposed method is therefore rotation-invariant, requiring only normalization of the translation and scale of the 3D models in the dataset. For robustness, we applied Fourier descriptors and Zernike moments to represent the 2D features. Experimental results on the Princeton Shape Benchmark dataset demonstrate more accurate retrieval performance than other existing methods.
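
A minimal sketch, assuming 64-dimensional per-view descriptors and a 16-dimensional code (both arbitrary choices here): compress view-based features with an autoencoder trained on reconstruction, then retrieve models by distance in the learned latent space.

```python
# Sketch of an autoencoder over per-view descriptors (e.g., Zernike moments
# plus Fourier descriptors); dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        return self.dec(self.enc(x))

ae = ViewAutoencoder()
views = torch.randn(100, 64)                     # descriptors for 100 rendered views
loss = nn.functional.mse_loss(ae(views), views)  # reconstruction objective
codes = ae.enc(views)                            # compact features used for retrieval
```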


IEEE International Conference on Multimedia Big Data | 2016

A Spatial-Pyramid Scene Categorization Algorithm based on Locality-aware Sparse Coding

Dang Duy Thang; Shintami Chusnul Hidayati; Yung-Yao Chen; Wen-Huang Cheng; Shih-Wei Sun; Kai-Lung Hua

Scene recognition has a wide range of applications, such as object recognition and detection, content-based image indexing and retrieval, and intelligent vehicle and robot navigation. Natural scene images in particular tend to be complex and difficult to analyze due to changes in illumination and transformation. In this study, we investigate a novel model that learns and recognizes natural scenes by combining locality-constrained sparse coding (LCSP), spatial pyramid pooling, and a linear SVM in an end-to-end model. First, interest points in each training image are characterized by a collection of local features, known as codewords, obtained using a dense SIFT descriptor. Each codeword is represented as part of a topic. Then, we employ the LCSP algorithm to learn the codeword distribution of those local features from the training images. Next, a modified spatial pyramid pooling model is employed to encode the spatial distribution of the local features. Finally, a linear SVM classifies the features encoded by spatial pyramid pooling. Experimental evaluations on several benchmarks demonstrate the effectiveness and robustness of the proposed method compared to several state-of-the-art visual descriptors.
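
A rough sketch of the coding-plus-pooling stage: the hard nearest-codeword assignment below is a simplified stand-in for LCSP, and the codebook, descriptors, and class count are synthetic assumptions.

```python
# Sketch: encode local descriptors against a codebook (hard assignment as a
# stand-in for LCSP), max-pool codes over a 1x1 + 2x2 spatial pyramid, and
# classify the pooled vector with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
codebook = rng.normal(size=(256, 128))   # 256 codewords, 128-D (SIFT-like)

def pyramid_feature(descriptors, positions):
    """descriptors: (n, 128); positions: (n, 2) in [0, 1)^2."""
    assign = np.argmax(descriptors @ codebook.T, axis=1)  # hard coding step
    codes = np.eye(len(codebook))[assign]                 # one-hot codes
    feats = [codes.max(axis=0)]                           # level 0: whole image
    for ix in range(2):                                   # level 1: 2x2 cells
        for iy in range(2):
            mask = (positions[:, 0] // 0.5 == ix) & (positions[:, 1] // 0.5 == iy)
            feats.append(codes[mask].max(axis=0) if mask.any()
                         else np.zeros(len(codebook)))
    return np.concatenate(feats)

X = np.stack([pyramid_feature(rng.normal(size=(300, 128)), rng.random((300, 2)))
              for _ in range(40)])
clf = LinearSVC().fit(X, rng.integers(0, 8, size=40))  # e.g., 8 scene classes
```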


Journal of Visual Communication and Image Representation | 2016

Context-aware joint dictionary learning for color image demosaicking

Kai-Lung Hua; Shintami Chusnul Hidayati; Fang-Lin He; Chia-Po Wei; Yu-Chiang Frank Wang



Conference on Multimedia Modeling | 2014

Who's the Best Charades Player? Mining Iconic Movement of Semantic Concepts

Yung-Huan Hsieh; Shintami Chusnul Hidayati; Wen-Huang Cheng; Min Chun Hu; Kai-Lung Hua


Collaboration


Dive into Shintami Chusnul Hidayati's collaborations.

Top Co-Authors

Kai-Lung Hua (National Taiwan University of Science and Technology)
Wen-Huang Cheng (Center for Information Technology)
Min Chun Hu (National Cheng Kung University)
Shih-Wei Sun (Taipei National University of the Arts)
Che-Hao Hsu (National Taiwan University of Science and Technology)
Yung-Yao Chen (National Taipei University of Technology)
Tekoing Lim (Center for Information Technology)
Chao-Lung Yang (National Taiwan University of Science and Technology)
Cheng-Chun Hsu (National Taiwan University of Science and Technology)
Chuang-Wen You (National Taiwan University)