Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where TaeChoong Chung is active.

Publication


Featured research published by TaeChoong Chung.


Applied Intelligence | 2013

BA*: an online complete coverage algorithm for cleaning robots

Hoang Huu Viet; Viet-Hung Dang; Nasir Uddin Laskar; TaeChoong Chung

This paper presents a novel approach, BA*, to the online complete coverage task of autonomous cleaning robots in unknown workspaces, based on boustrophedon motions and the A* search algorithm. In this approach, the robot performs a single boustrophedon motion to cover an unvisited region until it reaches a critical point. To continue covering the next unvisited region, the robot detects backtracking points based on its accumulated knowledge, selects the best backtracking point as the starting point of the next boustrophedon motion, and applies an intelligent backtracking mechanism, based on the proposed A* search with path smoothing over the tiled workspace, to reach that starting point along the shortest collision-free path. The robot achieves complete coverage when no backtracking point is detected. Computer simulations and experiments in real workspaces show that the proposed BA* is efficient for the complete coverage task of cleaning robots.
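The coverage loop described in this abstract can be sketched on a 4-connected grid. This is a minimal illustration, not the authors' implementation: the north/south/east sweep pattern, the backtracking-point criterion, and the omission of path smoothing are all assumptions of this sketch.

```python
from heapq import heappush, heappop

def astar(free, start, goal):
    """Shortest collision-free path on a 4-connected grid (Manhattan heuristic)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        _, g, cur, path = heappop(frontier)
        if cur == goal:
            return path
        if best_g.get(cur, float("inf")) <= g:
            continue
        best_g[cur] = g
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free:
                heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

def boustrophedon(free, covered, pos):
    """Sweep north/south, shifting east, until a critical point is reached."""
    covered.add(pos)
    while True:
        x, y = pos
        for step in ((x, y + 1), (x, y - 1), (x + 1, y)):  # N, S, then shift E
            if step in free and step not in covered:
                pos = step
                covered.add(pos)
                break
        else:
            return pos  # critical point: the sweep pattern is blocked

def ba_star_cover(free, start):
    """Alternate boustrophedon sweeps with A* backtracking until covered."""
    covered, pos = set(), start
    while True:
        pos = boustrophedon(free, covered, pos)
        # backtracking points: uncovered free cells bordering the covered region
        backtracks = [c for c in free - covered
                      if any(n in covered for n in
                             ((c[0] + 1, c[1]), (c[0] - 1, c[1]),
                              (c[0], c[1] + 1), (c[0], c[1] - 1)))]
        if not backtracks:
            return covered  # no backtracking point left: coverage complete
        # best backtracking point = shortest A* path (assumes connected workspace)
        path = min((astar(free, pos, b) for b in backtracks), key=len)
        pos = path[-1]  # the robot would follow `path`; here we jump to its end
```

With `free` a set of grid cells, `ba_star_cover(free, start)` returns the covered set, which equals `free` on connected workspaces.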


Information Sciences | 2011

Hessian matrix distribution for Bayesian policy gradient reinforcement learning

Ngo Anh Vien; Hwanjo Yu; TaeChoong Chung

Bayesian policy gradient algorithms have recently been proposed to model the policy gradient of the performance measure in reinforcement learning as a Gaussian process. These methods are known to reduce the variance and the number of samples needed to obtain accurate gradient estimates compared with conventional Monte-Carlo policy gradient algorithms. In this paper, we propose an improvement over previous Bayesian frameworks for the policy gradient. We use the Hessian matrix distribution as a learning rate schedule to improve the performance of the Bayesian policy gradient algorithm in terms of the variance and the number of samples. As with the policy gradient distributions, the Bayesian quadrature method is used to estimate the Hessian matrix distributions. We prove that the posterior mean of the Hessian distribution estimate is symmetric, one of the important properties of the Hessian matrix. Moreover, we prove that, with an appropriate choice of kernel, the computational complexity of the Hessian distribution estimate is equal to that of the policy gradient distribution estimates. Using simulations, we show encouraging experimental results comparing the proposed algorithm with the Bayesian policy gradient and the Bayesian natural policy gradient algorithms described in Ghavamzadeh and Engel [10].
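A toy illustration, not the paper's Bayesian quadrature estimator, of why a Hessian-scaled step can replace a scalar learning rate, and of the symmetrization property, on a hand-rolled 2x2 quadratic J(t) = 0.5 tᵀHt:

```python
def grad(H, t):
    """Gradient of J(t) = 0.5 * t^T H t, i.e. H t (H symmetric)."""
    return [sum(H[i][j] * t[j] for j in range(2)) for i in range(2)]

def newton_step(H, t):
    """One Hessian-scaled step t - H^{-1} grad; 2x2 inverse written out."""
    a, b = H[0]
    c, d = H[1]
    det = a * d - b * c  # assumes H is invertible
    inv = [[d / det, -b / det], [-c / det, a / det]]
    g = grad(H, t)
    return [t[i] - sum(inv[i][j] * g[j] for j in range(2)) for i in range(2)]

def symmetrize(Hhat):
    """The paper's posterior-mean Hessian is provably symmetric; a raw
    estimate can be forced symmetric the same way: (H + H^T) / 2."""
    return [[(Hhat[i][j] + Hhat[j][i]) / 2 for j in range(2)] for i in range(2)]

H = [[100.0, 0.0], [0.0, 1.0]]  # ill-conditioned: condition number 100
t = [1.0, 1.0]
newton = newton_step(H, t)      # reaches the optimum [0, 0] in one step
fixed = [t[i] - 0.01 * g_i for i, g_i in enumerate(grad(H, t))]  # scalar lr
```

The scalar learning rate leaves the poorly scaled coordinate nearly untouched (`fixed[1]` is still 0.99), while the Hessian-scaled step lands on the optimum; this is the intuition behind using the Hessian distribution as a learning rate schedule.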


Expert Systems With Applications | 2015

Semi-supervised learning using frequent itemset and ensemble learning for SMS classification

Ishtiaq Ahmed; Rahman Ali; Donghai Guan; Young-Koo Lee; Sungyoung Lee; TaeChoong Chung

Highlights:
- We use semi-supervised learning with frequent itemsets and ensemble learning to classify SMS data into ham and spam.
- We experiment on the publicly available UCI SMS spam collection and the SMS spam collection corpus v.0.1 small and big data sets.
- We compare our results with the existing semi-supervised learning methods PEBL and SpyEM.
- We obtain good results with a very small amount of positive data and varying amounts of unlabeled data.

Short Message Service (SMS) has become one of the most important communication media due to the rapid increase in mobile users and its easy-to-use operating mechanism. This flood of SMS comes with the problem of spam SMS generated by spurious users. The detection of spam SMS has attracted increasing attention from researchers in recent times and has been treated with a number of different machine learning approaches. The supervised machine learning approaches used so far demand a large amount of labeled data, which is not always available in real applications. Traditional semi-supervised methods can alleviate this problem but may not produce good results when provided with only positive and unlabeled data. In this paper, we propose a novel semi-supervised learning method that makes use of frequent itemsets and ensemble learning (FIEL) to overcome this limitation. In this approach, the Apriori algorithm is used to find the frequent itemsets, while Multinomial Naive Bayes, Random Forest, and LibSVM serve as base learners for ensemble learning with a majority voting scheme. Our proposed approach works well with a small number of positive examples and different amounts of unlabeled data, with higher accuracy. Extensive experiments conducted on the UCI SMS spam collection data set and the SMS spam collection corpus v.0.1 small and big show significant improvements in accuracy with a very small amount of positive data. We have compared our proposed FIEL approach with the existing SpyEM and PEBL approaches, and the results show that our approach is more stable than the compared approaches with minimum support.
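A highly simplified sketch of the FIEL idea: mine frequent token itemsets from known spam, then combine the resulting rule with base learners by majority vote. The stand-in base learners and the two-pass itemset miner are assumptions of this sketch (the paper uses full Apriori with Multinomial Naive Bayes, Random Forest, and LibSVM).

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(messages, min_support):
    """1- and 2-token itemsets meeting min_support (Apriori's first passes)."""
    counts = Counter()
    for msg in messages:
        tokens = set(msg.lower().split())
        for tok in tokens:
            counts[frozenset([tok])] += 1
        for pair in combinations(sorted(tokens), 2):
            counts[frozenset(pair)] += 1
    return {s for s, c in counts.items() if c >= min_support}

def majority_vote(votes):
    """Label chosen by the most base learners."""
    return Counter(votes).most_common(1)[0][0]

def fiel_label(message, spam_itemsets, classifiers):
    """Ensemble: frequent-itemset rule + base learners, majority vote."""
    tokens = set(message.lower().split())
    rule_vote = "spam" if any(s <= tokens for s in spam_itemsets) else "ham"
    votes = [rule_vote] + [clf(message) for clf in classifiers]
    return majority_vote(votes)
```

Usage: mine itemsets from a handful of positive (spam) examples, then label incoming messages with `fiel_label(msg, itemsets, base_learners)`.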


congress on evolutionary computation | 2001

An effective dynamic weighted rule for ant colony system optimization

Seung Gwan Lee; Tae Ung Jung; TaeChoong Chung

The ant colony system (ACS) algorithm is a relatively new metaheuristic for hard combinatorial optimization problems. It is a population-based approach that exploits positive feedback as well as greedy search, and it was first proposed for tackling the well-known traveling salesman problem (TSP). We introduce a new version of ACS based on a dynamic weighted updating rule. We implement the method for the TSP, evaluate its performance under various conditions, and compare it with the original ACS. It turns out that the proposed method can compete with the original ACS in terms of solution quality and computation speed on these problems.
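A sketch of an ACS-style global pheromone update with a weight factor on the deposit. The particular `dynamic_weight` schedule below is a hypothetical example of such a rule, not the schedule proposed in the paper.

```python
def acs_global_update(tau, best_tour, best_len, rho=0.1, weight=1.0):
    """ACS global update: evaporate and deposit on the best tour's edges.
    `weight` is the (hypothetical) dynamic factor scheduled per iteration."""
    deposit = weight / best_len
    n = len(best_tour)
    for i in range(n):
        edge = frozenset((best_tour[i], best_tour[(i + 1) % n]))
        tau[edge] = (1 - rho) * tau.get(edge, 1.0) + rho * deposit
    return tau

def dynamic_weight(iteration, total):
    """One plausible schedule: weight the best tour more as search matures."""
    return 1.0 + iteration / total
```

Each iteration would call `acs_global_update(tau, best_tour, best_len, weight=dynamic_weight(it, total))` after the ants finish constructing tours.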


Applied Intelligence | 2013

Learning via human feedback in continuous state and action spaces

Ngo Anh Vien; Wolfgang Ertel; TaeChoong Chung

This paper considers the problem of extending Training an Agent Manually via Evaluative Reinforcement (TAMER) to continuous state and action spaces. The TAMER framework enables a non-technical human to train an agent through a natural form of human feedback (negative or positive). The advantages of TAMER have been shown on tasks of training agents by human feedback alone or by combining human feedback with environment rewards. However, these methods were originally designed for discrete state-action problems, or for continuous-state, discrete-action problems. This paper proposes an extension of TAMER, called ACTAMER, that allows both continuous states and actions. The new framework can utilize any general function approximation of a human trainer's feedback signal. Moreover, the combined capability of ACTAMER and reinforcement learning is also investigated and evaluated, in both the sequential and the simultaneous setting. Our experimental results demonstrate that the proposed method successfully allows a human to train an agent in two continuous state-action domains: Mountain Car and Cart-pole (balancing).
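A minimal sketch of a TAMER-style learning step with a linear function approximator of the human feedback signal. The feature map, learning rate, and sampling-based continuous action selection are assumptions of this sketch, not details taken from ACTAMER.

```python
import random

def features(state, action):
    """Hypothetical feature map for a 1-D state, 1-D action task."""
    return [1.0, state, action, state * action]

def predict(w, state, action):
    """Linear model of the human trainer's feedback H(s, a)."""
    return sum(wi * fi for wi, fi in zip(w, features(state, action)))

def tamer_update(w, state, action, human_feedback, lr=0.1):
    """SGD step pushing the prediction toward the trainer's signal."""
    err = human_feedback - predict(w, state, action)
    return [wi + lr * err * fi for wi, fi in zip(w, features(state, action))]

def select_action(w, state, n_samples=100, rng=None):
    """Continuous actions: pick the best of sampled candidates in [-1, 1]."""
    rng = rng or random
    cands = [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
    return max(cands, key=lambda a: predict(w, state, a))
```

After each human signal, the agent calls `tamer_update` and then acts greedily with `select_action`; sampling sidesteps the argmax over a continuous action space.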


Applied Intelligence | 2013

Monte-Carlo tree search for Bayesian reinforcement learning

Ngo Anh Vien; Wolfgang Ertel; Viet-Hung Dang; TaeChoong Chung

Bayesian model-based reinforcement learning can be formulated as a partially observable Markov decision process (POMDP) to provide a principled framework for optimally balancing exploitation and exploration. Then, a POMDP solver can be used to solve the problem. If the prior distribution over the environment’s dynamics is a product of Dirichlet distributions, the POMDP’s optimal value function can be represented using a set of multivariate polynomials. Unfortunately, the size of the polynomials grows exponentially with the problem horizon. In this paper, we examine the use of an online Monte-Carlo tree search (MCTS) algorithm for large POMDPs, to solve the Bayesian reinforcement learning problem online. We will show that such an algorithm successfully searches for a near-optimal policy. In addition, we examine the use of a parameter tying method to keep the model search space small, and propose the use of nested mixture of tied models to increase robustness of the method when our prior information does not allow us to specify the structure of tied models exactly. Experiments show that the proposed methods substantially improve scalability of current Bayesian reinforcement learning methods.
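At the core of such online MCTS solvers is the UCT selection rule, which trades off exploitation against exploration at each tree node. A minimal sketch; the exploration constant and the `(visits, total_value)` node representation are assumptions of this sketch, not the paper's solver.

```python
import math

def ucb1(parent_visits, child_visits, child_value, c=1.4):
    """UCT score: mean value plus an exploration bonus; unvisited first."""
    if child_visits == 0:
        return float("inf")
    return (child_value / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))

def select_child(children):
    """children: list of (visits, total_value); returns the index to expand."""
    parent_visits = sum(v for v, _ in children)
    scores = [ucb1(parent_visits, v, q) for v, q in children]
    return scores.index(max(scores))
```

In a Bayesian RL setting the "state" at each node would also carry the posterior over the environment's dynamics (e.g. Dirichlet counts), but the selection rule itself is unchanged.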


autonomic and trusted computing | 2012

Performance Analysis of LTE Smartphone-Based Vehicle-to-Infrastructure Communication

Hassan Abid; TaeChoong Chung; Sungyoung Lee; Saad B. Qaisar

With the increasing number of vehicles on the road, the applications and services offered by Intelligent Transportation Systems are growing in demand. However, the wireless links that support such services still face coverage and capacity limitations. In this work, we address a novel concept that leverages 4G networks for vehicle-to-infrastructure communication. Instead of relying on an on-board communication unit, we make a case for using smartphones as an economical alternative, given that an interface exists between the vehicle navigation system and the smartphone. The applications considered in this paper are broadband Internet access and entertainment applications (e.g., video streaming). We provide performance analysis results using an LTE simulator. Our high-level conclusion is that smartphones enriched with LTE capabilities are feasible for vehicular communications, given that fourth-generation networks are penetrating the market rapidly.


Sensors | 2016

On curating multimodal sensory data for personalized health and wellness services

Muhammad Bilal Amin; Oresti Banos; Wajahat Ali Khan; Hafiz Syed Muhammad Bilal; Jingyuk Gong; Dinh-Mao Bui; Shujaat Hussain; Taqdir Ali; Usman Akhtar; TaeChoong Chung; Sungyoung Lee

In recent years, the focus of healthcare and wellness technologies has shown a significant shift towards personal vital signs devices. The technology has evolved from smartphone-based wellness applications to fitness bands and smartwatches. The novelty of these devices is the accumulation of activity data as their users go about their daily life routine. However, these implementations are device specific and lack the ability to incorporate multimodal data sources. Data accumulated in their usage does not offer rich contextual information that is adequate for providing a holistic view of a user’s lifelog. As a result, making decisions and generating recommendations based on this data are single dimensional. In this paper, we present our Data Curation Framework (DCF) which is device independent and accumulates a user’s sensory data from multimodal data sources in real time. DCF curates the context of this accumulated data over the user’s lifelog. DCF provides rule-based anomaly detection over this context-rich lifelog in real time. To provide computation and persistence over the large volume of sensory data, DCF utilizes the distributed and ubiquitous environment of the cloud platform. DCF has been evaluated for its performance, correctness, ability to detect complex anomalies, and management support for a large volume of sensory data.
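Rule-based anomaly detection over a context-rich lifelog can be sketched as predicates applied to each multimodal reading. The rule names and thresholds below are hypothetical illustrations, not DCF's actual rules.

```python
def detect_anomalies(reading, rules):
    """Apply every (name, predicate) rule to one multimodal sensory reading
    and return the names of the rules that fire."""
    return [name for name, pred in rules if pred(reading)]

# Hypothetical wellness rules over heart-rate and activity context.
rules = [
    ("high_heart_rate", lambda r: r.get("heart_rate", 0) > 120),
    ("inactive_while_awake",
     lambda r: r.get("activity") == "sitting" and r.get("duration_min", 0) > 180),
]
```

A real deployment would evaluate such rules in stream-processing fashion over the curated lifelog; complex anomalies would combine predicates across modalities and time windows.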


advances in computer-human interaction | 2008

Obstacle Avoidance Path Planning for Mobile Robot Based on Multi Colony Ant Algorithm

Nguyen Hoang Viet; Ngo Anh Vien; SeungGwan Lee; TaeChoong Chung

The task of planning trajectories for a mobile robot has received considerable attention in the research literature. The problem involves computing a collision-free path between a start point and a target point in an environment of known obstacles. In this paper, we study an obstacle avoidance path planning problem using a multi-colony ant system, in which several colonies of ants cooperate in finding good solutions by exchanging useful information. In simulation, we experimentally investigate the behaviour of the multi-colony ant algorithm with different kinds of information exchanged among the colonies. Finally, we compare different numbers of colonies against a multi-start single-colony ant algorithm to demonstrate the improvement.
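One common form of inter-colony cooperation is a periodic ring exchange of best-so-far solutions. This is a plausible sketch of such an exchange, not necessarily the scheme evaluated in the paper.

```python
def exchange_best(colonies):
    """Ring exchange: each colony receives its left neighbor's best tour
    and keeps whichever is shorter. colonies: list of (tour, length)."""
    n = len(colonies)
    return [min(colonies[i], colonies[(i - 1) % n], key=lambda c: c[1])
            for i in range(n)]
```

Called every few iterations, the exchange lets a good path found by one colony bias the pheromone updates of its neighbors, which is the cooperation mechanism the abstract describes.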


pacific rim conference on multimedia | 2003

Modified ant colony system for coloring graphs

SangHyuck Ahn; SeungGwan Lee; TaeChoong Chung

The ant colony system (ACS) algorithm is a metaheuristic for hard combinatorial optimization problems. It is a population-based approach that uses exploitation of positive feedback as well as greedy search. Recently, various methods have been proposed for the graph coloring problem, which assigns different colors to adjacent nodes (vi, vj). This paper introduces the ANTCOL algorithm, which applies the ant colony system, not previously well known as a solution method for the graph coloring problem. After introducing the ACS algorithm and the assignment-type problem (ATP), we show how to apply ACS to solve the ATP. We propose the ANT_XRLF method, which uses XRLF as the construction method within ANTCOL. The coloring results and execution time of our method are compared with the existing construction functions (ANT_Random, ANT_LF, ANT_SL, ANT_DSATUR, and ANT_RLF). We also compare these with ANT_XRLF_R, a variant of ANT_XRLF with re-search added.
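DSATUR, one of the construction heuristics compared against in this paper, greedily colors the most color-constrained vertex first. A minimal sketch of the heuristic itself (standalone, outside any ant colony framework):

```python
def dsatur(adj):
    """DSATUR greedy coloring. adj: dict node -> set of adjacent nodes.
    Picks the uncolored vertex with the most distinct neighbor colors
    (saturation), breaking ties by degree, and gives it the lowest
    color unused by its neighbors."""
    colors = {}
    while len(colors) < len(adj):
        v = max((u for u in adj if u not in colors),
                key=lambda u: (len({colors[w] for w in adj[u] if w in colors}),
                               len(adj[u])))
        used = {colors[w] for w in adj[v] if w in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors
```

In ANTCOL-style algorithms, a construction function like this is randomized and biased by pheromone values; the deterministic version above shows only the underlying greedy order.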

Collaboration


Dive into TaeChoong Chung's collaborations.

Top Co-Authors


Ngo Anh Vien

University of Stuttgart
