Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where M. A. Viraj J. Muthugala is active.

Publication


Featured research published by M. A. Viraj J. Muthugala.


International Conference on Robotics and Automation | 2016

Enhancing human-robot interaction by interpreting uncertain information in navigational commands based on experience and environment

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Assistive robots can support the activities of elderly people to improve their standard of living. Assistive robots should be able to interact with human peers in a human-friendly manner because these systems are intended to be used by non-experts. Humans prefer to use voice instructions that include uncertain information and lexical symbols. Hence, the ability to understand uncertain information is mandatory for developing natural interaction capabilities in robots. This paper proposes a method to understand uncertain information such as “close”, “near” and “far” in navigational user commands based on the current environment and the experience of the robot. A robot experience model (REM) has been introduced to understand the lexical representations in user commands and to adapt the robot's perception of uncertain information in heterogeneous domestic environments. The user commands are not bound by a strict grammar model, which enables users to operate the robot in a more natural way. The proposed method has been implemented on an assistive robot platform. Experiments have been carried out in an artificially created domestic environment, and the results have been analyzed to identify the behaviors of the proposed concept.
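The abstract does not give the REM's actual formulation. As a minimal illustrative sketch, one way to make the perception of terms such as “close”, “near” and “far” environment-dependent is to scale a set of fuzzy membership functions by the span of the current room relative to a reference room; all set parameters and the scaling rule below are hypothetical, not the paper's method:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership; a == b (or b == c) gives a shoulder."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

def distance_membership(term, distance_m, room_span_m, reference_span_m=5.0):
    """Scale the fuzzy sets for a distance term by the span of the
    current room, relative to the span in which they were tuned."""
    scale = room_span_m / reference_span_m
    base_sets = {
        "close": (0.0, 0.0, 1.0),   # shoulder: full membership at zero
        "near":  (0.5, 1.5, 3.0),
        "far":   (2.0, 4.0, 1e9),   # effectively unbounded to the right
    }
    a, b, c = (v * scale for v in base_sets[term])
    return triangular(distance_m, a, b, c)
```

With this sketch, 1.5 m is fully “near” in a 5 m room but only weakly “near” in a 10 m room, mimicking the experience-based adaptation the abstract describes.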


IEEE International Conference on Fuzzy Systems | 2017

Deictic gesture enhanced fuzzy spatial relation grounding in natural language

P. H. D. Arjuna S. Srimal; M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

In the recent past, domestic service robots have come under close scrutiny among researchers. When collaborating with humans, robots should be able to clearly understand the instructions conveyed by the human users. Voice interfaces are frequently used as a means of interaction between users and robots, as they require a minimal amount of overhead from the users. However, the information conveyed through voice instructions is often ambiguous and cumbersome due to the inclusion of imprecise information. Voice instructions are often accompanied by gestures, especially when referring to objects, locations, directions, etc. in the environment. However, the information conveyed solely through these gestures is also imprecise. Therefore, it is more effective to consider a multimodal interface rather than a unimodal interface in order to understand the user instructions. Moreover, the information conveyed through the gestures can be used to improve the understanding of user instructions related to object placements. This paper proposes a method to enhance the interpretation of user instructions related to object placements by interpreting the information conveyed through voice and gestures. Furthermore, the proposed system is capable of adapting its understanding according to the spatial arrangement of the workspace of the robot. A fuzzy logic system is proposed to evaluate the information conveyed through these two modalities while considering the arrangement of the workspace. Experiments have been carried out to evaluate the performance of the proposed system. The experimental results validate the performance gain of the proposed multimodal system over the unimodal systems.
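A hedged sketch of the kind of two-modality fusion the abstract describes, assuming Gaussian memberships over candidate placement directions — a broad set for the imprecise speech term and a narrower one for the pointing gesture (the widths, weights, and angles are illustrative, not the paper's values):

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian fuzzy membership function."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def fused_score(angle_deg, voice_mu, gesture_mu, w_gesture=0.6):
    """Weighted fusion of a broad speech-derived membership and a
    narrower gesture-derived membership over placement directions."""
    voice = gaussian(angle_deg, voice_mu, 45.0)      # speech: imprecise
    gesture = gaussian(angle_deg, gesture_mu, 15.0)  # pointing: sharper
    return (1.0 - w_gesture) * voice + w_gesture * gesture

# "place it to the left" (roughly 90 deg) while pointing at roughly 60 deg:
candidates = [0.0, 45.0, 90.0, 135.0]
best = max(candidates, key=lambda a: fused_score(a, voice_mu=90.0, gesture_mu=60.0))
```

The fused maximum lands between the two cues, leaning toward the sharper gesture — the multimodal grounding effect the abstract argues for.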


Intelligent Robots and Systems | 2016

Interpretation of uncertain information in mobile service robots by analyzing surrounding spatial arrangement based on occupied density variation

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Service robots are being developed as a supportive aid for elderly people. These robots are operated by non-expert users in heterogeneous domestic environments. Hence, the ability of a robot to be operated in a more natural, human-friendly manner enhances the overall satisfaction of the user. Humans prefer to use voice in order to convey instructions. Those voice instructions often include uncertain terms such as “little” and “far”. Therefore, robotic assistants should possess the competency to appropriately interpret the quantitative meanings of such terms. The quantitative meaning of uncertain terms related to spatial information depends on the spatial arrangement of the environment. This paper proposes a method to evaluate the uncertain information in user commands by replicating the natural tendencies of humans regarding the spatial arrangement of the environment. A module called the Occupied Density Analyzer has been deployed to analyze the occupied density distribution. A function has been defined to estimate the perceptive distance based on the occupied density distribution. The perception of the uncertain terms is adjusted according to the perceptive distance of that particular scenario. The rationale behind the proposed method is explained with due attention to natural human tendencies. Experiments have been carried out to evaluate the performance and the behaviors of the proposed system.
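The abstract names a perceptive-distance function over the occupied density distribution but does not state its form. A minimal illustrative sketch, assuming an occupancy-grid density and a linear mapping (both the linear form and the 0.8 fraction for “far” are assumptions):

```python
def perceptive_distance(occupied_cells, total_cells,
                        max_range_m=5.0, min_range_m=1.0):
    """Cluttered (dense) surroundings shrink the distance range over
    which terms such as "little" and "far" are interpreted."""
    density = occupied_cells / total_cells
    return max_range_m - (max_range_m - min_range_m) * density

def interpret_far(perceptive_range_m, fraction=0.8):
    """Quantitative reading of "far" as a fixed fraction of the
    perceptive range (the fraction is an illustrative assumption)."""
    return fraction * perceptive_range_m
```

In an empty room “far” resolves to a larger distance than in a furniture-packed one, which matches the environment-dependent interpretation the abstract describes.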


IEEE International Conference on Fuzzy Systems | 2017

Interpretation of interaction demanding of a user based on nonverbal behavior in a domestic environment

H. P. C. Sirithunge; M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara; D. P. Chandima

Human-robot interaction mechanisms are being developed to cater to a growing elderly and disabled population. There are still voids in achieving human-likeness before the initiation of an interaction. An interaction scenario can be made interesting and effective by engraving basic cognitive skills into the robot's intelligence. Skills related to human-like interaction depend on cognitive skills and the interpretation of the existing situation. Most robot users encounter a common problem: the robot tries to interact with the user while he is engaged. From the robot's perspective, it is not fully capable of deciding when to interact with the user. This paper presents a model to decide when to interact with the user, minimizing such failures. The proposed model has separate functional units for decision making on a user's nonverbal interaction demand. The user's availability for interaction is deduced through extracted information. The system observes a user's bodily movements and behavior for a specified time duration. The extracted information is analyzed and then put through a module called the Interaction Demanding Pose Identifier to interpret the interaction demand of the user. The identified pose and other calculated parameters are fed into the Fuzzy Interaction Decision Making Module in order to interpret the degree of interaction demand of the user. The interaction demand is taken into consideration before going for direct interaction with the user. This method has been implemented and tested in a simulated domestic environment with users across a broad age range. The implementation of the method and the results of the experiment are presented.
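The abstract's pipeline (observed pose and behavior in, a fuzzy degree of interaction demand out) could be caricatured as below; the input scores, the minimum t-norm, and the 0.5 threshold are all illustrative assumptions, not the paper's actual module design:

```python
def interaction_demand(pose_score, gaze_score, motion_level):
    """Toy fuzzy-style aggregation: a demanding pose AND gaze toward
    the robot raise the demand; busy motion suppresses it."""
    demand = min(pose_score, gaze_score)   # fuzzy AND (minimum t-norm)
    engaged = min(1.0, motion_level)       # busy user -> do not interrupt
    return demand * (1.0 - engaged)

def should_interact(pose_score, gaze_score, motion_level, threshold=0.5):
    """Only initiate interaction when the demand exceeds a threshold."""
    return interaction_demand(pose_score, gaze_score, motion_level) >= threshold
```

A user facing the robot while idle triggers an approach; the same pose during vigorous activity does not, which is the failure mode the paper aims to avoid.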


IEEE International Conference on Fuzzy Systems | 2017

Interpreting fuzzy directional information in navigational commands based on arrangement of the surrounding environment

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Human-friendly service robots should possess human-like interaction and reasoning capabilities. Humans prefer to use voice instructions in order to communicate with peers. Those voice instructions often include linguistic notions and descriptors that are fuzzy in nature. Therefore, human-friendly robots should be capable of understanding the fuzzy information in user instructions. This paper proposes a method to interpret directional information in navigational user commands by considering the environment-dependent fuzziness associated with directional linguistic notions. A module called the Direction Interpreter has been introduced for handling the fuzzy nature of directional linguistic notions. The module has been implemented with a fuzzy logic system that is capable of modifying the perception of the robot about the directional information according to the surrounding environment of the robot. This modification is done by weighting the output membership function with the distribution of the free space around the robot. According to the experimental results, the proposed system is capable of replicating the natural, environment-dependent directional perception of humans to a greater extent than existing approaches.
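The weighting step the abstract mentions — multiplying the output membership function by the free-space distribution before defuzzifying — can be sketched as follows; the discretized angles, the fuzzy set for “left”, and the free-space values are illustrative assumptions:

```python
def weighted_centroid(angles_deg, membership, free_space):
    """Weight the output membership function by the free-space
    distribution, then defuzzify by the centroid method."""
    weighted = [m * f for m, f in zip(membership, free_space)]
    num = sum(a * w for a, w in zip(angles_deg, weighted))
    den = sum(weighted)
    return num / den if den else None

angles = [60.0, 75.0, 90.0, 105.0, 120.0]  # candidate headings for "left"
mu     = [0.2, 0.6, 1.0, 0.6, 0.2]         # fuzzy set for "left", peak at 90 deg
free   = [1.0, 0.3, 0.2, 1.0, 1.0]         # obstacles block 75-90 deg
```

With uniform free space the centroid is 90 degrees; with the obstacle the result shifts toward the open side, illustrating how the environment reshapes the interpreted direction.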


Moratuwa Engineering Research Conference | 2016

MIRob: An intelligent service robot that learns from interactive discussions while handling uncertain information in user instructions

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara


Moratuwa Engineering Research Conference | 2015

Interpreting fuzzy linguistic information in user commands by analyzing movement restrictions in the surrounding environment

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara


Moratuwa Engineering Research Conference | 2018

Design and Development of an Interactive Service Robot as a Conversational Companion for Elderly People

G. W. Malith Manuhara; M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara


International Conference on Robotics and Automation | 2018

Enhancing Overall Object Placement by Understanding Uncertain Spatial and Qualitative Distance Information in User Commands

M. M. S. N. Edirisinghe; M. A. Viraj J. Muthugala; H. P. Chapa Sirithunge; A. G. Buddhika P. Jayasekara


IEEE Access | 2018

A Review of Service Robots Coping With Uncertain Information in Natural Language Instructions

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Collaboration


Dive into M. A. Viraj J. Muthugala's collaboration.
