
Publications


Featured research published by A. G. Buddhika P. Jayasekara.


IEEE International Conference on Robotics and Automation (ICRA) | 2016

Enhancing human-robot interaction by interpreting uncertain information in navigational commands based on experience and environment

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Assistive robots can support the activities of elderly people and thereby uplift their standard of living. Such robots should be able to interact with their human peers in a human-friendly manner, because these systems are intended to be used by non-experts. Humans prefer to use voice instructions, which often include uncertain information and lexical symbols. Hence, the ability to understand uncertain information is essential for developing natural interaction capabilities in robots. This paper proposes a method to understand uncertain information such as “close”, “near” and “far” in navigational user commands based on the current environment and the experience of the robot. A robot experience model (REM) has been introduced to understand the lexical representations in user commands and to adapt the robot's perception of uncertain information in heterogeneous domestic environments. The user commands are not bound by a strict grammar model, which enables users to operate the robot in a more natural way. The proposed method has been implemented on an assistive robot platform. Experiments have been carried out in an artificially created domestic environment and the results have been analyzed to identify the behaviors of the proposed concept.
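
As a rough illustration of the idea of environment-adapted interpretation (not the paper's REM itself), the sketch below scales fuzzy sets for “close”, “near” and “far” by the size of the current room and defuzzifies them to a crisp distance; the membership shapes, scaling fractions and centroid defuzzification are all assumed for illustration.

# Environment-scaled fuzzy sets for uncertain distance terms; all
# shapes, fractions and the defuzzification step are assumptions.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function over distance x."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def interpret_term(term, room_size_m, step=0.05):
    """Defuzzify an uncertain term into metres, with the fuzzy set
    stretched to the size of the current room."""
    # Break points as fractions of the room size (assumed values);
    # experience feedback could shift these fractions over time.
    shapes = {"close": (0.0, 0.1, 0.3),
              "near":  (0.1, 0.3, 0.6),
              "far":   (0.4, 0.8, 1.0)}
    a, b, c = (f * room_size_m for f in shapes[term])
    xs = [i * step for i in range(int(room_size_m / step) + 1)]
    num = sum(x * triangular(x, a, b, c) for x in xs)
    den = sum(triangular(x, a, b, c) for x in xs)
    return num / den  # centroid of the scaled fuzzy set

# The same word resolves to different distances in different rooms:
print(round(interpret_term("near", 4.0), 2))   # "near" in a small room
print(round(interpret_term("near", 10.0), 2))  # "near" in a large room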


6th National Conference on Technology and Management (NCTM) | 2017

A multi-modal approach for enhancing object placement

P. H. D. Arjuna S. Srimal; A. G. Buddhika P. Jayasekara

Voice commands have been the basic method of interaction between humans and robots over the years. Voice interaction is natural and requires no additional technical knowledge, but voice commands frequently contain uncertain information. In the case of object manipulation on a table, frequently used uncertain terms include “Left”, “Right”, “Middle” and “Front”. These terms fail to specify an exact location on the table, and their interpretation is governed by the robot's point of view. Depending solely on vocal cues is not ideal, as it requires the users to describe the exact location with more words and phrases, making the interaction cumbersome and less human-like. Using hand gestures to pinpoint a location is as natural as using voice commands and is frequently done when manipulating items on a surface. Compared to voice commands, hand gestures are a more direct and less cumbersome approach. However, when used alone, hand gestures can introduce errors in extracting the pointed location, leaving the user dissatisfied. This paper proposes a multi-modal interaction method that combines hand gestures with voice commands to interpret uncertain information when placing an object on a table. Two fuzzy inference systems have been used to interpret the uncertain terms related to the two axes of the table. The proposed system has been implemented on an assistive robot platform. Experiments have been conducted to analyze the behaviour of the system.
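
A minimal sketch of the fusion idea, under assumed models: a Gaussian confidence around the pointed coordinate is combined (here by product) with a fuzzy membership for the uncertain term along each table axis. The paper's actual design uses two fuzzy inference systems; the term centers, spreads and fusion rule below are illustrative.

# Fusing a pointing gesture with an uncertain verbal term to pick a
# placement point on a table. All numeric models are assumptions.

import math

TABLE_W, TABLE_D = 1.0, 0.6  # table width (x) and depth (y) in metres

def term_membership(term, x, length):
    """Fuzzy membership of a coordinate for an uncertain term,
    normalised to the table dimension."""
    centers = {"left": 0.15, "middle": 0.5, "right": 0.85,
               "front": 0.2, "back": 0.8}
    u = x / length
    return math.exp(-((u - centers[term]) ** 2) / (2 * 0.15 ** 2))

def gesture_membership(x, pointed, sigma=0.08):
    """Confidence that the user pointed at coordinate x (metres)."""
    return math.exp(-((x - pointed) ** 2) / (2 * sigma ** 2))

def fuse_axis(term, pointed, length, steps=200):
    """Pick the coordinate that best satisfies both modalities."""
    best_x, best_mu = 0.0, -1.0
    for i in range(steps + 1):
        x = length * i / steps
        mu = term_membership(term, x, length) * gesture_membership(x, pointed)
        if mu > best_mu:
            best_x, best_mu = x, mu
    return best_x

# "Place it on the left, at the front" while pointing at (0.30, 0.25) m:
x = fuse_axis("left", pointed=0.30, length=TABLE_W)
y = fuse_axis("front", pointed=0.25, length=TABLE_D)
print(round(x, 3), round(y, 3))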


IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) | 2017

Deictic gesture enhanced fuzzy spatial relation grounding in natural language

P. H. D. Arjuna S. Srimal; M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

In the recent past, domestic service robots have come under close scrutiny among researchers. When collaborating with humans, robots should be able to clearly understand the instructions conveyed by the human users. Voice interfaces are frequently used as a means of interaction between users and robots, as they require minimal overhead from the users. However, the information conveyed through voice instructions is often ambiguous and cumbersome due to the inclusion of imprecise information. Voice instructions are often accompanied by gestures, especially when referring to objects, locations and directions in the environment; the information conveyed solely through these gestures, however, is also imprecise. Therefore, it is more effective to consider a multimodal interface rather than a unimodal interface in order to understand the user instructions. Moreover, the information conveyed through the gestures can be used to improve the understanding of user instructions related to object placement. This paper proposes a method to enhance the interpretation of user instructions related to object placement by interpreting the information conveyed through voice and gestures. Furthermore, the proposed system is capable of adapting its understanding according to the spatial arrangement of the robot's workspace. A fuzzy logic system is proposed to evaluate the information conveyed through these two modalities while considering the arrangement of the workspace. Experiments have been carried out to evaluate the performance of the proposed system, and the experimental results validate the performance gain of the proposed multimodal system over the unimodal systems.
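
The sketch below illustrates one way such grounding could work, assuming a simple cosine model for “left of”, a Gaussian gesture confidence, and a workspace occupancy mask so that occupied cells are never selected; none of these specific models are taken from the paper.

# Grounding a fuzzy spatial relation ("left of" a reference object),
# refined by a deictic gesture and masked by workspace occupancy.

import math

def left_of(dx, dy):
    """Membership for 'left of' a reference object at the origin:
    high when the candidate lies in the -x direction."""
    angle = math.atan2(dy, -dx)            # 0 rad = exactly to the left
    return max(0.0, math.cos(angle))

def ground(reference, pointed, occupied, size=10, cell=0.1):
    """Score every free cell by relation x gesture agreement and
    return the best one as (row, col)."""
    best, best_mu = None, -1.0
    for r in range(size):
        for c in range(size):
            if (r, c) in occupied:
                continue                    # workspace-arrangement mask
            x, y = c * cell, r * cell
            rel = left_of(x - reference[0], y - reference[1])
            d = math.hypot(x - pointed[0], y - pointed[1])
            ges = math.exp(-(d ** 2) / (2 * 0.15 ** 2))
            mu = min(rel, ges)              # both cues must agree
            if mu > best_mu:
                best, best_mu = (r, c), mu
    return best

occupied = {(4, 3), (4, 4), (5, 3)}         # cells already holding items
# Picks a free cell that is both left of the reference and near the gesture:
print(ground(reference=(0.6, 0.4), pointed=(0.3, 0.4), occupied=occupied))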


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2016

Interpretation of uncertain information in mobile service robots by analyzing surrounding spatial arrangement based on occupied density variation

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Service robots are being developed as a supportive aid for elderly people. These robots are operated by non-expert users in heterogeneous domestic environments. Hence, the ability of a robot to be operated in a natural, human-friendly manner enhances the overall satisfaction of the user. Humans prefer to use voice to convey instructions, and those voice instructions often include uncertain terms such as “little” and “far”. Therefore, robotic assistants should possess the competency to appropriately interpret the quantitative meanings of such terms. The quantitative meaning of uncertain terms related to spatial information depends on the spatial arrangement of the environment. This paper proposes a method to evaluate the uncertain information in user commands by replicating the natural tendencies of humans with respect to the spatial arrangement of the environment. A module called the Occupied Density Analyzer has been deployed to analyze the occupied density distribution, and a function has been defined to estimate the perceptive distance based on that distribution. The perception of the uncertain terms is adjusted according to the perceptive distance of the particular scenario. The rationale behind the proposed method is explained with due attention to natural human tendencies. Experiments have been carried out to evaluate the performance and the behaviors of the proposed system.
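
A minimal sketch of the perceptive-distance idea under assumed definitions: grow a disc around the robot until the fraction of occupied cells exceeds a threshold, then use that radius to scale uncertain terms. The grid, threshold and linear scaling are illustrative, not the paper's formulation.

# Deriving a "perceptive distance" from the occupied density around
# the robot on a 0/1 occupancy grid. Parameters are assumptions.

import math

def occupied_density(grid, robot, radius, cell=0.1):
    """Fraction of cells within `radius` metres of the robot that are
    occupied (grid is a 2-D list of 0/1)."""
    inside = occ = 0
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if math.hypot((r - robot[0]) * cell, (c - robot[1]) * cell) <= radius:
                inside += 1
                occ += v
    return occ / inside if inside else 0.0

def perceptive_distance(grid, robot, max_r=5.0, threshold=0.3, step=0.1):
    """Grow a disc around the robot until the occupied density exceeds
    the threshold; that radius bounds how far 'far' can mean here."""
    r = step
    while r < max_r and occupied_density(grid, robot, r) < threshold:
        r += step
    return r

# 20x20 grid: open on one side, cluttered on the other.
grid = [[1 if c >= 12 else 0 for c in range(20)] for _ in range(20)]
d = perceptive_distance(grid, robot=(10, 10))
print(round(d, 1), "m ->", round(0.8 * d, 2), "m for 'far'")  # assumed 0.8 scaling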


IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) | 2017

Interpreting fuzzy directional information in navigational commands based on arrangement of the surrounding environment

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara

Human-friendly service robots should possess human-like interaction and reasoning capabilities. Humans prefer to use voice instructions to communicate with peers, and those instructions often include linguistic notions and descriptors that are fuzzy in nature. Therefore, human-friendly robots should be capable of understanding the fuzzy information in user instructions. This paper proposes a method to interpret directional information in navigational user commands by considering the environment-dependent fuzziness associated with directional linguistic notions. A module called the Direction Interpreter has been introduced for handling the fuzzy nature of directional linguistic notions. The module has been implemented with a fuzzy logic system that modifies the robot's perception of directional information according to its surrounding environment; this modification is done by weighting the output membership function with the distribution of the free space around the robot. According to the experimental results, the proposed system replicates the environment-dependent directional perception of humans to a greater extent than existing approaches.
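
As a hedged sketch of the weighting idea: below, a triangular membership for a directional term is multiplied by the free-space range measured at each bearing, so the resolved heading bends away from obstacles. The membership shape and scan model are assumptions, not the paper's exact design.

# Bending a fuzzy directional command ("go left") toward free space
# by weighting its membership with the free-space distribution.

def direction_membership(bearing_deg, center_deg, spread=45.0):
    """Fuzzy membership of a bearing for a directional term whose
    prototype is `center_deg` (e.g. 90 for 'left')."""
    diff = (bearing_deg - center_deg + 180) % 360 - 180
    return max(0.0, 1.0 - abs(diff) / spread)

def interpret_direction(term_center_deg, free_space):
    """free_space[i] = clear range (m) at bearing i degrees.
    Returns the bearing maximising membership x free space."""
    best_b, best_w = 0, -1.0
    for b, rng in enumerate(free_space):
        w = direction_membership(b, term_center_deg) * rng
        if w > best_w:
            best_b, best_w = b, w
    return best_b

# A wall blocks 60-100 degrees, so "left" (90) resolves past the wall:
free_space = [4.0] * 360
for b in range(60, 101):
    free_space[b] = 0.5
print(interpret_direction(90, free_space))  # -> 101, just past the blocked sector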


IEEE International Conference on Information and Automation (ICIA) | 2014

Sound localization: Human vs. machine

W. G. Nuwan Jayaweera; A. G. Buddhika P. Jayasekara; A. M. Harsha S. Abeykoon

Humans show a remarkable capability in localizing a sound source and navigating towards it. In the context of robotic applications, computational models have been developed for sound source localization. However, the accuracy of human sound source localization at different frequencies and distances in the free field has not yet been quantified. Thus, the aim of this paper is to estimate the error of human sound source localization at different frequencies and distances on the horizontal plane, and to present the characteristics of the ear by taking each individual's localization ability into consideration. An experiment is conducted to investigate the individual ability to predict the sound incident direction. Ten young Asian adults in the 20-30 age group take part in the experiment and their responses to localization cues are recorded. The experiment covers sound source distances of 1 m, 2 m and 3 m and sound source frequencies of 1 kHz and 5 kHz. The results show that the direction-prediction responses are unique to each individual. The average percentage errors for direction prediction with the 1 kHz signal are 0.20, 0.93 and 5.20 for the 1 m, 2 m and 3 m distances respectively, while with the 5 kHz signal they are 3.59, 1.68 and 0.52 respectively.
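
The abstract does not spell out how the percentage error is defined; the sketch below shows one plausible reading, treating the mean absolute angular error as a fraction of the full 360-degree range. The formula and sample data are illustrative assumptions only.

# One possible definition of the average percentage error for
# direction prediction; the paper's exact formula is not given here.

def avg_percentage_error(actual_deg, predicted_deg):
    """Mean absolute angular error as a percentage of 360 degrees."""
    errors = []
    for a, p in zip(actual_deg, predicted_deg):
        diff = abs((p - a + 180) % 360 - 180)   # wrapped angular error
        errors.append(100.0 * diff / 360.0)
    return sum(errors) / len(errors)

# Ten hypothetical subjects pointing at a source actually at 45 degrees:
actual = [45] * 10
predicted = [44, 47, 45, 43, 46, 45, 48, 44, 45, 46]
print(round(avg_percentage_error(actual, predicted), 2))  # ~0.31 for this sample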


Moratuwa Engineering Research Conference (MERCon) | 2016

MIRob: An intelligent service robot that learns from interactive discussions while handling uncertain information in user instructions

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara


Moratuwa Engineering Research Conference (MERCon) | 2015

Interpreting fuzzy linguistic information in user commands by analyzing movement restrictions in the surrounding environment

M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara


Moratuwa Engineering Research Conference (MERCon) | 2016

Potential for improving green roof performance through artificial irrigation

H. K. C. B. Heendeniya; R. M. M. Ruwanthika; A. G. Buddhika P. Jayasekara


Moratuwa Engineering Research Conference (MERCon) | 2018

Design and Development of an Interactive Service Robot as a Conversational Companion for Elderly People

G. W. Malith Manuhara; M. A. Viraj J. Muthugala; A. G. Buddhika P. Jayasekara
