
Publications

Featured research published by Arthur Niswar.


IEEE Transactions on Multimedia | 2013

A Mixed Reality Virtual Clothes Try-On System

Miaolong Yuan; Ishtiaq Rasool Khan; Farzam Farbiz; Susu Yao; Arthur Niswar; Min-Hui Foo

Virtual try-on of clothes has received much attention recently due to its commercial potential. It can be used for online shopping or intelligent recommendation to narrow down the selection to a few designs and sizes. In this paper, we present a mixed reality system for 3D virtual clothes try-on that enables a user to see herself wearing virtual clothes while looking at a mirror display, without taking off her actual clothes. The user can select various virtual clothes to try on. The system physically simulates the selected virtual clothes on the user's body in real time, and the user can see the virtual clothes fitting on her mirror image from various angles as she moves. The major contribution of this paper is that we automatically customize an invisible (or partially visible) avatar based on the user's body size and skin color, and use it for proper clothes fitting, alignment and clothes simulation in our virtual try-on system. We present three scenarios: i) virtual clothes on the avatar, ii) virtual clothes on the user's image, and iii) virtual clothes on the avatar blended with the user's face image. We conducted a user study to evaluate the effectiveness of these three solutions in terms of the end users' perception of quality attributes, cognitive attributes and attitude towards using the system. The user study shows that among these three scenarios, the second is the most preferred by users, and for 50% of them the experience they had with our system was sufficient to make a purchase decision for the outfits they virtually tried on.
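
The paper does not publish code, but the second scenario (virtual clothes composited over the user's mirror image) reduces, at the pixel level, to alpha-blending a rendered clothes layer over the camera frame. A minimal sketch of that compositing step, assuming the renderer outputs an RGBA layer already aligned to the frame:

```python
import numpy as np

def composite_clothes(frame: np.ndarray, clothes_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a rendered clothes layer (H x W x 4, uint8) over a camera
    frame (H x W x 3, uint8). Assumes the render is aligned to the frame."""
    alpha = clothes_rgba[..., 3:4].astype(np.float32) / 255.0
    rgb = clothes_rgba[..., :3].astype(np.float32)
    out = alpha * rgb + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(np.uint8)
```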


Virtual Reality Continuum and Its Applications in Industry | 2011

A mixed reality system for virtual glasses try-on

Miaolong Yuan; Ishtiaq Rasool Khan; Farzam Farbiz; Arthur Niswar; Zhiyong Huang

In this paper we present an augmented reality system for automatic try-on of 3D virtual eyeglasses. The user can select from various virtual models of eyeglasses for trying-on and the system will automatically fit the selected virtual glasses on the users face. The user can see his/her face as in a mirror with the 3D virtual glasses fitted on it. We also propose a method for handling the occlusion problem, to display only those parts of the glasses that are not occluded by the face. This system can be used for online shopping, or short listing a large set of available models to a few before physical try-on at a retailers site.
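
The occlusion handling described above is commonly realized with a depth mask: an invisible face proxy supplies per-pixel depth, and glasses pixels that fall behind it are discarded. A minimal CPU-side sketch of that test; the depth maps and their alignment are assumptions, not the paper's actual pipeline:

```python
import numpy as np

def mask_occluded_glasses(glasses_rgba: np.ndarray,
                          glasses_depth: np.ndarray,
                          face_depth: np.ndarray) -> np.ndarray:
    """Zero out the alpha of glasses pixels that lie behind the face proxy.
    Depth maps hold per-pixel camera distances; smaller means closer."""
    visible = glasses_depth < face_depth
    out = glasses_rgba.copy()
    out[..., 3] = np.where(visible, out[..., 3], 0)
    return out
```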


Virtual Reality Continuum and Its Applications in Industry | 2011

Virtual try-on of eyeglasses using 3D model of the head

Arthur Niswar; Ishtiaq Rasool Khan; Farzam Farbiz

This work presents a system for virtual try-on of eyeglasses using a 3D model of the user's face and head. The 3D head model is reconstructed from only one image of the user. The 3D glasses model is then fitted onto this head model, and the user's head movement is tracked in real time to rotate the 3D head model with the glasses accordingly.
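
The abstract does not detail the tracker; one standard way to rotate a reconstructed head model with the user's head is to estimate pose from tracked 2D facial landmarks and their 3D counterparts on the model, e.g. with OpenCV's PnP solver. The landmark correspondences and the camera matrix K in this sketch are assumptions:

```python
import cv2
import numpy as np

def pose_head_model(landmarks_2d: np.ndarray,   # (N, 2) tracked image points
                    landmarks_3d: np.ndarray,   # (N, 3) matching model points
                    K: np.ndarray,              # 3x3 camera intrinsics
                    head_verts: np.ndarray) -> np.ndarray:
    """Estimate head pose with PnP and apply it to the head+glasses vertices."""
    ok, rvec, tvec = cv2.solvePnP(landmarks_3d.astype(np.float64),
                                  landmarks_2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)                  # rotation vector -> 3x3 matrix
    return head_verts @ R.T + tvec.ravel()
```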


International Conference on Mobile Web and Information Systems | 2014

SARA: Singapore’s Automated Responsive Assistant, A Multimodal Dialogue System for Touristic Information

Andreea I. Niculescu; Ridong Jiang; Seokhwan Kim; Kheng Hui Yeo; Luis Fernando D’haro; Arthur Niswar; Rafael E. Banchs

In this paper we describe SARA, a multimodal dialogue system offering touristic assistance to visitors coming to Singapore. The system is implemented as an Android mobile phone application and provides information about local attractions, restaurants, sightseeing, directions and transportation services. SARA can detect the user's location on a map using an integrated GPS module and accordingly provide real-time orientation and direction help. To communicate with SARA, users can use speech, text or scanned QR codes. Input/output modalities for SARA include natural language in the form of speech or text. A short video about the main features of our Android application can be seen at: http://vimeo.com/91620644. Currently the system supports only English, but we are working towards multilingual input/output support. For test purposes we also created a web version of SARA that can be tested with Chinese and English text input/output at: http://iris.i2r.a-star.edu.sg/StatTour/.
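
As a rough illustration of the input side of such a multimodal system (not SARA's actual architecture, which the abstract does not publish), each modality can be funneled into a single text query before it reaches the dialogue manager; `asr` and `qr_decoder` here are hypothetical callables:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserInput:
    modality: str   # "speech", "text", or "qr"
    payload: str    # audio path, raw text, or QR image path

def normalize(inp: UserInput,
              asr: Callable[[str], str],
              qr_decoder: Callable[[str], str]) -> str:
    """Reduce every input modality to plain text for the dialogue manager."""
    if inp.modality == "speech":
        return asr(inp.payload)
    if inp.modality == "qr":
        return qr_decoder(inp.payload)
    return inp.payload
```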


Virtual Reality Continuum and Its Applications in Industry | 2009

Real-time 3D talking head from a synthetic viseme dataset

Arthur Niswar; Ee Ping Ong; Hong Thai Nguyen; Zhiyong Huang

In this paper, we describe a simple and fast way to build a 3D talking head that can be used in many applications requiring an audiovisual speech animation system. The talking head is constructed from a synthetic 3D viseme dataset, which is realistic enough and can be generated with 3D modeling software. To build the talking head, the viseme dataset is first analyzed statistically to obtain the optimal linear parameters for controlling the movements of the lips and jaw of the 3D head model. These parameters correspond to some of the low-level MPEG-4 FAPs, hence our method can be used to extract the speech-relevant MPEG-4 FAPs from a dataset of phonemes/visemes. The parameterized head model is then combined with a Text-to-Speech (TTS) system to synthesize audiovisual speech from a given text. To make the talking head look more realistic, eye blinks and eye movements are also animated during speech. We implemented this work in an interactive text-to-audiovisual-speech system.
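
The "optimal linear parameters" obtained by statistical analysis are consistent with a PCA-style decomposition of the viseme meshes; the sketch below assumes that reading, plus shared mesh topology across all visemes, neither of which is confirmed by the abstract:

```python
import numpy as np

def viseme_basis(viseme_verts: np.ndarray, k: int = 5):
    """viseme_verts: (n_visemes, n_verts, 3) meshes with identical topology.
    Returns the mean shape, k linear control directions, and per-viseme
    control parameters (requires k <= n_visemes)."""
    n = viseme_verts.shape[0]
    X = viseme_verts.reshape(n, -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                    # (k, 3*n_verts) principal directions
    params = (X - mean) @ basis.T     # (n_visemes, k) control parameters
    return mean, basis, params

def pose_mouth(mean: np.ndarray, basis: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Reconstruct a mouth shape from k control parameters p."""
    return (mean + p @ basis).reshape(-1, 3)
```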


International Conference on Computer Graphics and Interactive Techniques | 2010

Pose-invariant 3D face reconstruction from a single image

Arthur Niswar; Ee Ping Ong; Zhiyong Huang

This technical sketch presents a novel method to reconstruct a 3D face model from only a single image. Unlike other methods, ours does not require the face in the image to be in a specific pose. The method deforms a generic 3D face model to fit the shape of the face in the image; the reconstructed 3D face model is then textured using the image. There are many practical applications of this method. For example, it provides a photo-editing tool to change the face pose in a picture as required. It can also be used by the police to investigate the picture of a suspect when only one picture is available and views from other poses are needed. Another possible application is entertainment: the 3D face model can be used to personalize characters in 3D games.
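
The sketch names the deformation only at a high level. One generic stand-in for "deform a generic model to fit landmarks" is inverse-distance-weighted (Shepard) propagation of sparse landmark displacements over the mesh; this is an illustrative technique, not necessarily the authors' method:

```python
import numpy as np

def idw_deform(verts: np.ndarray,        # (V, 3) generic face vertices
               anchor_pos: np.ndarray,   # (A, 3) landmark positions on the model
               anchor_disp: np.ndarray,  # (A, 3) displacements fitting the image
               p: float = 2.0, eps: float = 1e-8) -> np.ndarray:
    """Spread sparse landmark displacements over the whole mesh by
    inverse-distance weighting."""
    d = np.linalg.norm(verts[:, None, :] - anchor_pos[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)                 # nearer anchors weigh more
    w /= w.sum(axis=1, keepdims=True)        # normalize weights per vertex
    return verts + w @ anchor_disp
```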


International Conference on Computer Graphics and Interactive Techniques | 2012

Face replacement in video from a single image

Arthur Niswar; Ee Ping Ong; Zhiyong Huang

We present a novel system to replace the face in a video with the face of a different person taken from a single image, where the face is not limited to a specific pose, e.g. exactly frontal. There are existing systems that replace the face in a video with a face from another image or video, but usually the source face is restricted to certain poses. For example, [Cheng et al. 2009] developed a system to replace the face in a video with another face from a frontal and a profile image. [Dale et al. 2011] replaced the face in a video with a face from another video, where the face pose and lighting in the source and target videos must be sufficiently similar. Our system is able to replace the target face with another face in an entirely different pose and animate the new face based on the original speech in the video.
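
One common building block for making a replacement face sit naturally in each frame is gradient-domain (Poisson) blending, which OpenCV ships as seamlessClone. This is an illustrative compositing step only, not the paper's pipeline:

```python
import cv2
import numpy as np

def blend_face(frame: np.ndarray, new_face: np.ndarray,
               mask: np.ndarray, center: tuple) -> np.ndarray:
    """Poisson-blend the rendered replacement face into a video frame.
    mask is a uint8 image that is 255 inside the face region; center is the
    (x, y) position of the face region in the frame."""
    return cv2.seamlessClone(new_face, frame, mask, center, cv2.NORMAL_CLONE)
```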


International Conference on Computer Graphics and Interactive Techniques | 2012

Avatar customization based on human body measurements

Arthur Niswar; Ishtiaq Rasool Khan; Farzam Farbiz

Customization of a virtual human plays an important role in human body modeling and simulation for various applications. Rather than having to create models with different body shapes and sizes for a specific application, a customization method allows a generic model to be modified based on a few parameters. This paper presents a novel method to modify a generic model (avatar) based on standard human body measurements. Unlike [Kasap and Magnenat-Thalmann 2009] and [Seo and Magnenat-Thalmann 2003], which use Free-Form Deformation and RBF-based deformation, our method modifies the avatar by scaling its vertices globally and locally; it is therefore faster than those methods while still producing a natural-looking result.
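
A minimal sketch of the global-plus-local scaling idea, assuming vertices are labeled by body segment and that girth measurements have already been converted into per-segment radial scale factors (both assumptions of this sketch, not details from the paper):

```python
import numpy as np

def customize_avatar(verts: np.ndarray,        # (V, 3), y is the vertical axis
                     seg_labels: np.ndarray,   # (V,) segment id per vertex
                     height_scale: float,
                     girth_scales: dict) -> np.ndarray:
    """Scale the whole body for stature, then scale each labeled segment
    radially in the horizontal (x, z) plane for girth measurements."""
    v = verts * height_scale                   # global scaling
    for seg, s in girth_scales.items():        # local, per-segment scaling
        idx = np.where(seg_labels == seg)[0]
        xz = v[idx][:, [0, 2]]
        center = xz.mean(axis=0)               # segment axis through centroid
        xz = center + s * (xz - center)
        v[idx, 0] = xz[:, 0]
        v[idx, 2] = xz[:, 1]
    return v
```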


Archive | 2010

Method and System for Single View Image 3D Face Synthesis

Hong Thai Nguyen; Ee Ping Ong; Arthur Niswar; Zhiyong Huang; Susanto Rahardja


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2013

AIDA: Artificial Intelligent Dialogue Agent

Rafael E. Banchs; Ridong Jiang; Seokhwan Kim; Arthur Niswar; Kheng Hui Yeo

Collaboration

Dive into Arthur Niswar's collaborations.

Top Co-Authors

Farzam Farbiz
National University of Singapore

Seokhwan Kim
Pohang University of Science and Technology