Jose Sepulveda
Singapore Polytechnic
Publications
Featured research published by Jose Sepulveda.
Conference on Computability in Europe | 2011
Jeffrey Tzu Kwan Valino Koh; Kasun Karunanayaka; Jose Sepulveda; Mili John Tharakan; Manoj N. Krishnan; Adrian David Cheok
We present a new methodology based on ferromagnetic fluids in which the user can interact directly (input/output) through a tangible and malleable interface. Liquid Interfaces uses the physical qualities of ferromagnetic fluids in combination with capacitive multi-touch technology to produce a 3D, multi-touch interface in which actuation, representation, and self-configuration occur through the malleable ferromagnetic fluid itself. This, combined with the ability to produce sound, enables users to create musical sculptures that can be morphed in real time by interacting directly with the ferromagnetic fluid.
IEEE Transactions on Circuits and Systems for Video Technology | 2015
Tam V. Nguyen; Canyi Lu; Jose Sepulveda; Shuicheng Yan
In this paper, we present an adaptive nonparametric solution to the image parsing task, namely, annotating each image pixel with its corresponding category label. For a given test image, a locality-aware retrieval set is first extracted from the training data based on superpixel matching similarities, which are augmented with feature extraction for better differentiation of local superpixels. Then, the category of each superpixel is initialized by the majority vote of the k-nearest-neighbor superpixels in the retrieval set. Instead of fixing k as in traditional nonparametric approaches, we propose a novel adaptive nonparametric approach that determines a sample-specific k for each test image. In particular, k is adaptively set to the smallest number of nearest superpixels with which the images in the retrieval set obtain the best category prediction. Finally, the initial superpixel labels are further refined by contextual smoothing. Extensive experiments on challenging datasets demonstrate the superiority of the new solution over other state-of-the-art nonparametric solutions.
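The adaptive-k selection is the heart of this method. Below is a minimal Python sketch of the idea, assuming superpixel features and labels have already been extracted; the function names, the Euclidean distance metric, and the search range k_max are illustrative choices, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def majority_vote(labels):
    """Most common category among the neighbour labels."""
    return Counter(labels).most_common(1)[0][0]

def adaptive_k(ret_feats, ret_labels, train_feats, train_labels, k_max=50):
    """Pick the smallest k that gives the best prediction accuracy
    on the retrieval-set superpixels, whose labels are known."""
    # Pairwise Euclidean distances, retrieval set vs. training superpixels.
    dists = np.linalg.norm(ret_feats[:, None, :] - train_feats[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)        # neighbours, nearest first
    best_k, best_acc = 1, -1.0
    for k in range(1, k_max + 1):
        preds = [majority_vote(train_labels[order[i, :k]])
                 for i in range(len(ret_feats))]
        acc = np.mean(np.asarray(preds) == ret_labels)
        if acc > best_acc:                   # strict ">" keeps k as small as possible
            best_k, best_acc = k, acc
    return best_k
```

The chosen k is then used to vote the category of each test superpixel before the contextual smoothing step.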
Archive | 2013
Edy Portmann; Tam V. Nguyen; Jose Sepulveda; Adrian David Cheok
In the Social Semantic Web, an organization, a brand, the name of a high-profile executive, or a particular product can be defined as the hodgepodge of all online conversations taking place around it, and this happens regardless of whether the organization participates in the dialogue. In short, organizations are forced first of all to listen to the Social Web in order to take part in the conversation and, in this way, improve their online reputation. To support this intuitively, the FORA framework is conceptualized as a pertinent listening application. The term FORA originates from the plural of forum, the Latin word for marketplace (Portmann, Nguyen, Sepulveda, & Cheok, 2012). The framework allows organizations' communication operatives to explore reputation in online marketplaces through fuzzy methods. Listening to, and then increasing engagement within, social media is a hard task: there is a constant flow of information, and many organizations do not know how to harness this rich source of customer conversations and gain actionable insights from it. The idea behind the framework is to listen and, in doing so, automatically identify key social media elements around the clock, simplifying online reputation analysis and giving communication operatives insightful information on which they can actually act. To make this system a reality, a design science approach is pursued.
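The abstract does not detail FORA's internals, so the following is only a generic illustration of the kind of fuzzy grading such a listening application might apply: a sentiment score for a conversation is mapped to overlapping reputation grades via triangular membership functions. All grade names and membership shapes here are hypothetical.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reputation_grades(score):
    """Map a sentiment score in [-1, 1] to fuzzy reputation grades
    (hypothetical grade names and membership shapes)."""
    return {
        "negative": triangular(score, -1.5, -1.0, 0.0),
        "neutral":  triangular(score, -1.0,  0.0, 1.0),
        "positive": triangular(score,  0.0,  1.0, 1.5),
    }

# A mildly positive conversation belongs partly to two grades at once:
print(reputation_grades(0.4))   # {'negative': 0.0, 'neutral': 0.6, 'positive': 0.4}
```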
International Symposium on Mixed and Augmented Reality | 2010
Yongsoon Choi; Adrian David Cheok; Veronica Halupka; Jose Sepulveda; Roshan Peris; Jeffrey Tzu Kwan Valino Koh; Wang Xuan; Wei Jun; Abeyrathne Dilrukshi; Yamaguchi Tomoharu; Maiko Kamata; Daishi Kato; Keiji Yamada
We are currently developing a co-cooking system that helps users make similar-tasting dishes, even though they may be in remote locations and potentially cooking at different times.
Neurocomputing | 2018
Shuya Ding; Bilal Mirza; Zhiping Lin; Jiuwen Cao; Xiaoping Lai; Tam V. Nguyen; Jose Sepulveda
In this paper, we propose a weighted online sequential extreme learning machine with kernels (WOS-ELMK) for class imbalance learning (CIL). Existing online sequential extreme learning machine (OS-ELM) methods for CIL use random feature mapping; WOS-ELMK is the first OS-ELM method to use kernel mapping for online class imbalance learning. The kernel mapping avoids the non-optimal hidden node problem associated with weighted OS-ELM (WOS-ELM) and other existing OS-ELM methods for CIL. WOS-ELMK tackles both binary class and multiclass imbalance problems in one-by-one as well as chunk-by-chunk learning modes. For imbalanced big data streams, a fixed-size window scheme is also implemented for WOS-ELMK. We empirically show that WOS-ELMK generally outperforms several recently proposed CIL approaches on 17 binary class and 8 multiclass imbalanced datasets.
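As a rough illustration of the cost-sensitive kernel idea, here is a sketch of only the batch (initialization) phase of a weighted kernel ELM; the paper's one-by-one and chunk-by-chunk online updates and the fixed-size window are omitted, and all names and the RBF kernel choice are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def weighted_kernel_elm_fit(X, T, C=1.0, gamma=1.0):
    """Batch phase of a weighted kernel ELM. T holds one-hot targets;
    per-sample costs are inversely proportional to class size, so
    minority-class errors are penalised more heavily."""
    n = X.shape[0]
    labels = T.argmax(axis=1)
    counts = np.bincount(labels, minlength=T.shape[1])
    W = np.diag(1.0 / counts[labels])            # cost-sensitive weights
    K = rbf_kernel(X, X, gamma)
    # Output coefficients: solve (I/C + W K) alpha = W T.
    alpha = np.linalg.solve(np.eye(n) / C + W @ K, W @ T)
    return X, alpha, gamma

def weighted_kernel_elm_predict(model, Xq):
    X, alpha, gamma = model
    return rbf_kernel(Xq, X, gamma) @ alpha      # class scores; argmax for the label
```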
Ambient Intelligence | 2011
Mili John Tharakan; Jose Sepulveda; Wendy Thun; Adrian David Cheok
Recent research in human-computer interaction has shown that smart and efficient technology alone is not what people desire in their homes. The Interactive Carpet project aims to produce a new kind of interaction, Poetic Communication, enabling remote communication through the creation of a sense of sharing, co-presence, and connectedness. The technology connects two carpets in remote locations, enhancing communication through more meaningful aesthetic interactions.
ACM Multimedia | 2016
Tam V. Nguyen; Dorothy Tan; Bilal Mirza; Jose Sepulveda
In this work, we present a practical system that uses mobile devices for interactive manuals. The system provides two modes, namely, expert/trainer and trainee modes. In the expert/trainer editor, experts design step-by-step interactive manuals: for each step, they capture images with phones or tablets and provide visual instructions such as interest regions, text, and action animations. In the trainee mode, the system uses existing object detection and tracking algorithms to identify the step scene and retrieve the respective instruction to be displayed on the mobile device, as sketched below. The trainee follows the displayed instruction and, once each step is performed, commands the device to proceed to the next step.
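The abstract does not name the detection algorithm, so the sketch below shows one plausible trainee-mode step recognizer built on OpenCV's ORB features: the live camera frame is matched against the reference image the expert captured for each step, and the step with the most matches wins. The data layout and match threshold are assumptions.

```python
import cv2

def best_matching_step(frame, step_images, min_matches=25):
    """Return the id of the manual step whose reference image best
    matches the camera frame (8-bit grayscale images assumed)."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, frame_desc = orb.detectAndCompute(frame, None)
    if frame_desc is None:
        return None                       # featureless frame
    best_step, best_count = None, min_matches
    for step_id, ref in step_images.items():
        _, ref_desc = orb.detectAndCompute(ref, None)
        if ref_desc is None:
            continue
        matches = bf.match(frame_desc, ref_desc)
        if len(matches) > best_count:
            best_step, best_count = step_id, len(matches)
    return best_step                      # None means "no step recognized"
```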
Proceedings of the 2nd ACM International Workshop on Immersive Media Experiences | 2014
Tam V. Nguyen; Yong He Tan; Jose Sepulveda
This paper presents a lightweight game framework that provides real-time integration of human appearance and gesture-guided control within the game. It offers a new immersive experience, allowing game users to see their own appearance interacting in real time with computer graphical characters in the game. To make the system easily realizable, we address the challenges in the whole pipeline of video processing, gesture recognition, and communication. To this end, we introduce the game framework Gesture and Appearance Cutout Embedding (GACE), which runs the human appearance cutout algorithm and connects with game components through memory-mapped files, as sketched below. We also introduce gesture-based support to enhance the immersion. Extensive experiments have shown that the proposed system runs reliably and comfortably in real time on commodity hardware.
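Memory-mapped files are the one communication detail the abstract names. Below is a minimal sketch, assuming a fixed RGBA frame size and a pre-created buffer file, of how the cutout process and the game could share frames this way; a real system would add synchronization (e.g. a sequence number or lock), which is omitted here.

```python
import mmap
import numpy as np

WIDTH, HEIGHT, CHANNELS = 640, 480, 4          # assumed RGBA cutout frame
FRAME_BYTES = WIDTH * HEIGHT * CHANNELS

def create_buffer(path):
    """Pre-create the shared file at the full frame size."""
    with open(path, "wb") as f:
        f.write(b"\x00" * FRAME_BYTES)

def write_frame(path, frame):
    """Producer (cutout process): publish the latest RGBA frame."""
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), FRAME_BYTES) as mm:
        mm.seek(0)
        mm.write(frame.astype(np.uint8).tobytes())

def read_frame(path):
    """Consumer (game): read the latest frame, e.g. to upload as a texture."""
    with open(path, "rb") as f, mmap.mmap(f.fileno(), FRAME_BYTES,
                                          access=mmap.ACCESS_READ) as mm:
        buf = mm.read(FRAME_BYTES)
    return np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)
```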
International Conference on Ubiquitous Information Management and Communication | 2018
Tam V. Nguyen; Bilal Mirza; Dorothy Tan; Jose Sepulveda
In this paper, we introduce a practical system that uses mobile devices for interactive manuals. The system provides two modes, namely, expert/trainer and trainee modes. In the expert/trainer editor, experts design step-by-step interactive manuals: for each step, they capture images with phones or tablets and provide visual instructions such as interest regions, text, and action animations. Here, we integrate the region-of-interest selection into the creation of markers from the input images, as sketched below. In the trainee mode, the system uses existing object detection and tracking algorithms to identify the step scene and retrieve the respective instruction to be displayed on the mobile device. The trainee follows the displayed instruction and, once each step is performed, commands the device to proceed to the next step. Evaluation results demonstrate that our system is highly preferred by end users.
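A minimal sketch of folding the expert's region-of-interest selection into marker creation, assuming OpenCV ORB features and an (x, y, w, h) ROI drawn by the expert; the returned structure is illustrative, not the paper's actual marker format.

```python
import cv2

def create_step_marker(image, roi):
    """Crop the expert-selected ROI from the captured step image and
    precompute its ORB descriptors so the trainee app can match it
    as a marker (8-bit grayscale image assumed)."""
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(patch, None)
    return {"patch": patch, "keypoints": keypoints, "descriptors": descriptors}
```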
International Conference on Digital Signal Processing | 2016
Bilal Mirza; Stanley Kok; Zhiping Lin; Yong Kiang Yeo; Xiaoping Lai; Jiuwen Cao; Jose Sepulveda
In this paper, a multi-layer weighted extreme learning machine (ML-WELM) is proposed for high-dimensional datasets with class imbalance. The recently proposed single-hidden-layer WELM method effectively tackles class imbalance, but it may not capture high-level abstractions in image datasets. ML-WELM provides efficient representation learning for big image data using multiple hidden layers while tackling the class imbalance problem through cost-sensitive weighting. A weighted ELM auto-encoder (WELM-AE) is also proposed for layer-by-layer class-imbalance feature learning in ML-WELM, as sketched below. We used four imbalanced image datasets in our experiments; ML-WELM performs better than WELM on all of them.
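To illustrate the layer-by-layer idea, here is a sketch of a single weighted ELM auto-encoder layer under common ELM-AE conventions (random hidden mapping, regularized weighted least squares for reconstruction, features taken as X B^T); it is a plausible reading of WELM-AE, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def welm_ae_layer(X, sample_w, n_hidden=200, C=1.0, seed=0):
    """One weighted ELM auto-encoder layer: a random hidden mapping of X,
    output weights B that reconstruct X under per-sample costs, and the
    learned features X @ B.T passed on to the next layer."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = sigmoid(X @ A + b)
    W = np.diag(sample_w)                             # cost-sensitive weights
    # Regularised weighted least squares: (I/C + H^T W H) B = H^T W X.
    B = np.linalg.solve(np.eye(n_hidden) / C + H.T @ W @ H, H.T @ W @ X)
    return X @ B.T                                    # features for the next layer
```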