Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Ronghua Luo is active.

Publication


Featured research published by Ronghua Luo.


IEEE Transactions on Cognitive and Developmental Systems | 2016

Affordance Research in Developmental Robotics: A Survey

Huaqing Min; Chang'an Yi; Ronghua Luo; Jinhui Zhu; Sheng Bi

Affordances capture the relationships between a robot and the environment in terms of the actions that the robot is able to perform. The notable characteristic of affordance-based perception is that an object is perceived by what it affords (e.g., graspable and rollable), instead of identities (e.g., name, color, and shape). Affordances play an important role in basic robot capabilities such as recognition, planning, and prediction. The key challenges in affordance research are: (1) how to automatically discover the distinctive features that specify an affordance in an online and incremental manner and (2) how to generalize these features to novel environments. This survey provides an entry point for interested researchers, including: (1) a general overview; (2) classification and critical analysis of existing work; (3) discussion of how affordances are useful in developmental robotics; (4) some open questions about how to use the affordance concept; and (5) a few promising research directions.


IEEE Transactions on Knowledge and Data Engineering | 2017

A Unified Framework for Metric Transfer Learning

Yonghui Xu; Sinno Jialin Pan; Hui Xiong; Qingyao Wu; Ronghua Luo; Huaqing Min; Hengjie Song

Transfer learning has proven effective for problems where training data from a source domain and test data from a target domain are drawn from different distributions. To reduce the distribution divergence between the source and target domains, many previous studies have focused on designing and optimizing objective functions that use the Euclidean distance to measure dissimilarity between instances. However, in some real-world applications the Euclidean distance may be inappropriate for capturing the intrinsic similarity or dissimilarity between instances. To deal with this issue, in this paper we propose a metric transfer learning framework (MTLF) that encodes metric learning within transfer learning. In MTLF, instance weights are learned and exploited to bridge the distributions of different domains, while a Mahalanobis distance is learned simultaneously to minimize the intra-class distances and maximize the inter-class distances for the target domain. Unlike previous work, where instance weights and the Mahalanobis distance are trained in a pipelined framework that can propagate errors across components, MTLF learns the instance weights and the Mahalanobis distance in parallel, making knowledge transfer across domains more effective. Furthermore, we develop general solutions to both classification and regression problems on top of MTLF. We conduct extensive experiments on several real-world datasets for object recognition, handwriting recognition, and WiFi localization to verify the effectiveness of MTLF compared with a number of state-of-the-art methods.
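As a rough illustration of the two quantities MTLF learns jointly, the following minimal sketch computes a Mahalanobis distance under a learned linear map and an instance-weighted source loss. The helper names (`mahalanobis`, `weighted_loss`) and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of MTLF's two ingredients: a Mahalanobis metric M = L^T L for
# the target domain, and per-instance weights w that reweight the
# source-domain loss. Names and shapes are made up for illustration.

def mahalanobis(x, y, L):
    """d_M(x, y) = ||L (x - y)|| = sqrt((x - y)^T L^T L (x - y))."""
    d = L @ (x - y)
    return float(np.sqrt(d @ d))

def weighted_loss(errors, w):
    """Instance-weighted source loss: sum_i w_i * err_i / sum_i w_i."""
    w = np.asarray(w, dtype=float)
    return float(np.dot(w, errors) / w.sum())

# With L = I the metric reduces to the Euclidean distance.
d = mahalanobis(np.array([0.0, 0.0]), np.array([3.0, 4.0]), np.eye(2))

# Upweighting source instances that resemble the target domain
# shrinks the contribution of mismatched (high-error) instances.
loss = weighted_loss(np.array([1.0, 0.0, 1.0]), [0.1, 1.0, 0.1])
```

In the actual framework, the weights and the matrix L would be optimized jointly rather than fixed by hand as here.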


Neurocomputing | 2016

Laplacian regularized locality-constrained coding for image classification

Huaqing Min; Mingjie Liang; Ronghua Luo; Jinhui Zhu

Feature coding, which encodes local features extracted from an image with a codebook and generates a set of codes for efficient image representation, has shown very promising results in image classification. Vector quantization is the simplest and most widely used method for feature coding. However, it suffers from large quantization errors and produces dissimilar codes for similar features. To alleviate these problems, we propose Laplacian Regularized Locality-constrained Coding (LapLLC), in which a locality constraint is used to favor nearby bases for encoding, and Laplacian regularization is integrated to preserve the code consistency of similar features. By incorporating a set of template features, the objective function used by LapLLC can be decomposed, and each feature is encoded by solving a linear system. Additionally, a k-nearest-neighbor technique is employed to construct a much smaller linear system, so that fast approximate coding can be achieved. LapLLC therefore provides a novel way to perform efficient feature coding. Our experiments on a variety of image classification tasks demonstrate the effectiveness of the proposed approach.
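The locality-constrained coding step that LapLLC builds on can be sketched as a small per-feature linear system; the Laplacian consistency term over similar features is omitted here for brevity, and the parameter names are illustrative.

```python
import numpy as np

# Minimal sketch of locality-constrained coding: encode a feature over a
# codebook by solving a small linear system, so that bases close to the
# feature receive the largest weights. Illustrative only.

def llc_code(x, B, lam=1e-4):
    """Encode feature x (d,) over codebook B (K, d)."""
    K = B.shape[0]
    z = B - x                            # shift bases to the feature
    C = z @ z.T                          # local covariance, (K, K)
    C += lam * np.trace(C) * np.eye(K)   # regularize for stability
    c = np.linalg.solve(C, np.ones(K))
    return c / c.sum()                   # codes sum to one

B = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # 3 bases in 2-D
x = np.array([0.1, 0.0])                            # feature near basis 0
codes = llc_code(x, B)                              # weight concentrates on basis 0
```

Restricting the system to the k nearest bases, as the abstract describes, shrinks the matrix from K x K to k x k and gives the fast approximate coding.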


Industrial Robot: An International Journal | 2016

Goal-directed affordance prediction at the subtask level

Huaqing Min; Chang'an Yi; Ronghua Luo; Jinhui Zhu

Purpose – This paper presents a hybrid control approach that combines learning-based reactive control with affordance-based deliberate control for autonomous mobile robot navigation. Unlike many current navigation approaches that rely only on learning-based paradigms, the authors focus on combining machine learning methods for reactive control with the affordance knowledge inherent in natural environments, gaining the advantages of both local and global optimization.

Design/methodology/approach – The idea is to decompose the complex, large-scale robot navigation task into multiple sub-tasks using the well-studied hierarchical reinforcement learning (HRL) algorithm, and to learn a grid-topological map of the environment. An affordance-based deliberate controller inspects the affordance knowledge of the obstacles in the environment. The hybrid control arch...


Robotics and Biomimetics | 2015

Affordance matching from the shared information in multi-robot

Chang'an Yi; Huaqing Min; Ronghua Luo

Affordances encode the relationships between a robot and its environment in terms of the actions the robot is able to perform. The essence is that each object is perceived by its affordances (graspable, movable, etc.) instead of its properties (color, size, etc.). Previous work mainly associates an affordance with a primitive or reactive action of a single robot, and the whole task is finished after that action has been executed. However, robots often need to cooperate in tasks such as rescue and exploration, where perceived information can be shared to match affordances between robots and objects. To the best of our knowledge, this paper is the first to carry out affordance research in multi-robot systems. We propose a new definition and formalization of affordance for multi-robot settings, and describe each robot in terms of its capabilities, which are more affordance-oriented. In a simulation experiment, two robots with different capabilities share perceived information to match affordances, and the resulting work efficiency is higher than that of a non-affordance approach.
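The matching idea can be sketched as checking whether a robot's capability set covers the capabilities an object's affordance requires. The capability names and the subset test below are illustrative assumptions, not the paper's formalization.

```python
# Sketch of capability-based affordance matching between robots and
# objects in a multi-robot setting. All names are made up for illustration.

robots = {
    "r1": {"grasp", "move"},   # each robot advertises its capabilities
    "r2": {"push", "move"},
}
objects = {
    "ball": {"push"},          # capabilities an object's affordance requires
    "cup": {"grasp"},
}

def match(robots, objects):
    """Map each object to the robots whose capabilities afford it."""
    return {
        obj: sorted(r for r, caps in robots.items() if req <= caps)
        for obj, req in objects.items()
    }

pairs = match(robots, objects)   # {"ball": ["r2"], "cup": ["r1"]}
```

Sharing perceived information would then let a robot hand off an object it cannot afford to a teammate whose capability set matches.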


Robotics and Biomimetics | 2012

A novel formalization for robot cognition based on Affordance model

Chang'an Yi; Huaqing Min; Ronghua Luo; Zhipeng Zhong; Xiaowen Shen

Affordance encodes the latent “action possibilities” available to a given robot interacting with its environment. In this paper, we first present a 4-tuple formalization that describes the robot and environment systematically, in which a precondition and postcondition enable each action to take place in a measurable way. Analysis functions extract functional information from the environment and form the basis of our formalization. We then address the key problem of affordance learning based on these analysis functions, and present the robot control architecture. In a simulation experiment, the robot performed its task effectively under our framework.
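One way to picture the 4-tuple with measurable pre- and postconditions is a record holding an action, an entity, and two predicates over the robot's state. The four field names below are an assumption for illustration; the paper's actual tuple components may differ.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]

# Illustrative sketch of a 4-tuple affordance record in the spirit of
# the paper's formalization; field names are assumptions.

@dataclass
class Affordance:
    action: str
    entity: str
    precondition: Callable[[State], bool]   # must hold before acting
    postcondition: Callable[[State], bool]  # verifies the action's effect

    def applicable(self, state: State) -> bool:
        return self.precondition(state)

# A "pushable" affordance: the box must be within reach before pushing,
# and afterwards its position should have changed.
pushable = Affordance(
    action="push",
    entity="box",
    precondition=lambda s: s["distance_to_box"] < 0.5,
    postcondition=lambda s: s["box_moved"] > 0.0,
)

ok = pushable.applicable({"distance_to_box": 0.3, "box_moved": 0.0})
```

Checking the postcondition after execution gives the measurable success signal that an affordance-learning loop could train on.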


International Conference on Machine Learning and Cybernetics | 2012

Label transfer for joint recognition and segmentation of 3D object

Yonghui Xu; Ronghua Luo; Huaqing Min

Using information from labeled RGB images, an unsupervised method based on label transfer is proposed for 3D object recognition and segmentation in RGB-D images. We first use scale-invariant features extracted from the color space to retrieve a set of nearest neighbors of the input image from the labeled image database. Based on the projection matrix between each labeled image and the input image, the pixel labels of the labeled image are transferred to the input image. A segmentation model and a clustering algorithm based on geometric characteristics are then designed to obtain spatially and semantically consistent object regions in the RGB-D images. Compared with supervised object recognition, our method does not need to train a classifier on a large set of training images.
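The retrieval-and-transfer idea can be sketched in two steps: find the nearest labeled images by a global descriptor, then vote per pixel over their label maps. Descriptor shapes and the majority-vote rule are illustrative assumptions; the paper transfers labels through a projection matrix rather than a direct vote.

```python
import numpy as np

# Sketch of label transfer: retrieve nearest labeled images, then take a
# per-pixel majority vote over their label maps. Illustrative only.

def retrieve_neighbors(query_desc, db_descs, k=3):
    """Indices of the k labeled images closest to the query descriptor."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

def transfer_labels(neighbor_label_maps):
    """Per-pixel majority vote over the neighbors' label maps."""
    stack = np.stack(neighbor_label_maps)   # (k, H, W)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stack)

db = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]])
idx = retrieve_neighbors(np.array([0.05, 0.0]), db, k=2)   # two closest images

maps = [np.array([[0, 1], [1, 1]]), np.array([[0, 1], [0, 1]])]
label_map = transfer_labels(maps)
```

The geometric clustering step from the abstract would then refine these transferred labels into spatially consistent object regions.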


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Simultaneous Recognition and Modeling for Learning 3-D Object Models From Everyday Scenes

Mingjie Liang; Huaqing Min; Ronghua Luo; Jinhui Zhu

Object recognition and modeling have classically been studied separately, but in practice they are two closely correlated problems. In this paper, by exploring their interrelations, we propose a framework that addresses both at the same time, which we call simultaneous recognition and modeling. Differing from the traditional recognition process, which consists of off-line object model learning and on-line recognition procedures, our method is solely online. Starting with an empty object database, we incrementally build up object models while at the same time using these models to identify newly observed object views. In the proposed framework, objects are modeled as view graphs and a probabilistic observation model is presented. Both the appearance and the spatial structure of each object are examined, and a formulation based on maximum likelihood estimation is developed. Joint object recognition and modeling are achieved by solving the resulting optimization problem. To evaluate the framework, we developed a method for simultaneously learning multiple 3-D object models directly from cluttered indoor environments and tested it on several everyday scenes. Experimental results demonstrate that the framework copes well with recognition and modeling together.


International Conference on Machine Learning and Cybernetics | 2012

Coupled hidden semi-Markov conditional random fields based context model for semantic map building

Ronghua Luo; Huaqing Min; Yonghui Xu; Jun-Bo Li

A semantic map is the foundation for a mobile robot to understand its environment. By considering semantic mapping as a semi-Markov process, a new hierarchical semi-Markov random field is proposed for this task. The proposed model, called coupled hidden semi-Markov conditional random fields (CHSM-CRFs), can use multiple kinds of contextual information to label the places and objects in a map, and can partition the observations into spatially and semantically consistent sub-sequences, each of which corresponds to a place. Based on the structure of CHSM-CRFs, a piecewise learning algorithm and an approximate online inference algorithm based on Monte Carlo sampling are proposed. Experimental results with a mobile robot show that the proposed method labels places and objects with high precision during semantic mapping.


International Conference on Machine Learning and Cybernetics | 2012

Action recognition using human pose

Cong Chen; Huaqing Min; Ronghua Luo

In this paper, we present a novel method for recognizing human actions in videos. The method applies human-pose-based features to describe actions and models the conditional probability relationship between feature sequences and actions using a hidden conditional random field (HCRF). Given a video, limb masks are extracted by clustering image features in the human region. Limb masks help reduce interference from the background and partially address the “double-counting” problem during pose estimation. The extracted pose sequence is then smoothed with a Kalman smoother to remove noise and make the sequence consistent. Multiple kinds of pose-based feature sequences are extracted to describe the action from different views. We train one HCRF for each feature sequence and combine the confidences from the different HCRFs to improve recognition accuracy. Experiments on a benchmark dataset show that different features have their own advantages in action recognition and that combining them achieves good results.
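The final fusion step can be sketched as averaging the per-class confidences produced by one sequence model per feature type and picking the best class. The averaging rule is an illustrative assumption; the paper does not specify how the HCRF confidences are combined.

```python
import numpy as np

# Sketch of confidence fusion across sequence models trained on
# different feature types. The averaging rule is illustrative.

def fuse_confidences(per_model_scores):
    """per_model_scores: list of (n_classes,) probability vectors.
    Returns the predicted class index and the fused distribution."""
    avg = np.mean(np.stack(per_model_scores), axis=0)
    return int(avg.argmax()), avg

scores_pose = np.array([0.6, 0.3, 0.1])    # model on pose features
scores_motion = np.array([0.3, 0.5, 0.2])  # model on motion features
pred, fused = fuse_confidences([scores_pose, scores_motion])
```

Here the pose model's confidence outweighs the motion model's disagreement, so the fused prediction follows the pose features.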

Collaboration

Ronghua Luo's top co-authors (all at South China University of Technology):

Huaqing Min
Chang'an Yi
Jinhui Zhu
Mingjie Liang
Sheng Bi
Xiaowen Shen
Yonghui Xu
Zhipeng Zhong
Cong Chen
Guofei Zheng