Koliya Pulasinghe
Saga University
Publications
Featured research published by Koliya Pulasinghe.
IEEE Transactions on Industrial Electronics | 2005
Amitava Chatterjee; Koliya Pulasinghe; Keigo Watanabe; Kiyotaka Izumi
This paper shows the possible development of particle swarm optimization (PSO)-based fuzzy-neural networks (FNNs) that can be employed as an important building block in real robot systems, controlled by voice-based commands. The PSO is employed to train the FNNs that can accurately output the crisp control signals for the robot systems, based on fuzzy linguistic spoken language commands, issued by a user. The FNN is also trained to capture the user-spoken directive in the context of the present performance of the robot system. Hidden Markov model (HMM)-based automatic speech recognizers (ASRs) are developed, as part of the entire system, so that the system can identify important user directives from the running utterances. The system has been successfully employed in two real-life situations, namely: 1) for navigation of a mobile robot; and 2) for motion control of a redundant manipulator.
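The abstract above describes training fuzzy-neural networks with particle swarm optimization. As a minimal, hedged sketch of the PSO technique itself (not the paper's actual implementation), the loop below minimizes a toy objective; in the paper's setting the particle positions would be the FNN weight vectors and the objective a control-error measure, both hypothetical here.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize `objective` over [lo, hi]^dim with a basic PSO loop."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_f = [objective(p) for p in pos]            # personal best scores
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global best so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive (personal) + social (global) pulls
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy usage: minimize the 2-D sphere function x^2 + y^2.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

Population-based search like this needs no gradient, which is why it suits networks whose error surface is awkward to differentiate through.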
Artificial Life and Robotics | 2003
Koliya Pulasinghe; Keigo Watanabe; Kazuo Kiguchi; K. Izumi
In this article, a fuzzy neural network (FNN)-based approach is presented to interpret imprecise natural language (NL) commands for controlling a machine. This system (1) interprets fuzzy linguistic information in NL commands for machines, (2) introduces a methodology to implement the contextual meaning of NL commands, and (3) recognizes machine-sensitive words from running utterances which consist of both in-vocabulary and out-of-vocabulary words. The system achieves these capabilities through an FNN, which is used to interpret fuzzy linguistic information; a hidden Markov model-based keyword-spotting system, which is used to identify machine-sensitive words among unrestricted user utterances; and a framework for inserting the contextual meaning of words into the knowledge base employed in the fuzzy reasoning process. The result is a complete integrated system that converts imprecise NL command inputs into the corresponding output signals used to control a machine. The performance of the system is examined by navigating a mobile robot in real time with unconstrained speech utterances.
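To make the idea of "interpreting fuzzy linguistic information" concrete, here is a hedged, hand-coded sketch of mapping linguistic speed terms to a crisp value via triangular membership functions and centroid defuzzification. The terms, their shapes, and the normalized speed universe are all hypothetical; the paper learns this mapping with a trained FNN rather than fixed rules.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic speed terms over a normalized universe [0, 1].
TERMS = {
    "slowly":     lambda s: tri(s, -0.5, 0.0, 0.5),
    "moderately": lambda s: tri(s, 0.2, 0.5, 0.8),
    "fast":       lambda s: tri(s, 0.5, 1.0, 1.5),
}

def defuzzify(word, steps=100):
    """Centroid defuzzification of a single linguistic term's membership curve."""
    mu = TERMS[word]
    xs = [i / steps for i in range(steps + 1)]
    num = sum(x * mu(x) for x in xs)   # weighted area
    den = sum(mu(x) for x in xs)       # total area
    return num / den if den else 0.0

# "go moderately" would yield a mid-range crisp speed command.
speed = defuzzify("moderately")
```

The crisp value would then be scaled to the actuator's real command range; the ordering of the terms, not the exact numbers, is what the fuzzification is meant to preserve.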
Society of Instrument and Control Engineers of Japan | 2001
Koliya Pulasinghe; Keigo Watanabe; Kazuo Kiguchi; Kiyotaka Izumi
A method of interpreting imprecise natural language commands in a machine-understandable manner is presented in this paper. The proposed method eases man-machine interaction by combining the theoretical foundations of artificial neural networks and fuzzy logic, two fields widely used to mimic human behavior across areas of artificial intelligence. The proposed system aims to understand the natural language command rather than merely recognize it. The strengths of artificial neural networks in pattern recognition and classification are merged with the ability of fuzzy systems to manipulate imprecise data, first to recognize the machine-sensitive words in the natural language command and then to convey them to the machine in a machine-identifiable manner. The modular design breaks the complete task into manageable parts, each of which is vital to bridging the so-called man-machine gap.
Archive | 2002
Koliya Pulasinghe; Keigo Watanabe; Kazuo Kiguchi; Kiyotaka Izumi
This paper investigates the credibility of voice (especially natural language commands) as a communication medium for sharing the human's advanced sensory capacity and knowledge with a robot in order to perform a cooperative task. Identification of the machine-sensitive words in the unconstrained speech signal and interpretation of the imprecise natural language commands for the machine are considered. The system constituents include a hidden Markov model (HMM)-based continuous automatic speech recognizer (ASR) to identify the lexical content of the user's speech signal, a fuzzy neural network (FNN) to comprehend the natural language (NL) contained in the identified lexical content, an artificial neural network (ANN) to activate the desired functional ability, and control modules to generate output signals for the actuators of the machine. These characteristic features have been tested experimentally by using them to navigate a Khepera® robot in real time, with the user's visual information transferred by speech signals.
Systems, Man and Cybernetics | 2004
Koliya Pulasinghe; Keigo Watanabe; Kiyotaka Izumi; Kazuo Kiguchi
Institute of Control, Robotics and Systems International Conference Proceedings | 2001
Koliya Pulasinghe; Keigo Watanabe; Kazuo Kiguchi; Kiyotaka Izumi
Society of Instrument and Control Engineers of Japan | 2003
Koliya Pulasinghe; Keigo Watanabe; Kiyotaka Izumi; Kazuo Kiguchi
Institute of Control, Robotics and Systems International Conference Proceedings | 2003
Keigo Watanabe; Amitava Chatterjee; Koliya Pulasinghe; Sangho Jin; Kiyotaka Izumi; Kazuo Kiguchi
Korean Institute of Intelligent Systems International Conference Proceedings | 2003
Keigo Watanabe; Amitava Chatterjee; Koliya Pulasinghe; Kiyotaka Izumi; Kazuo Kiguchi
Institute of Control, Robotics and Systems International Conference Proceedings | 2002
Koliya Pulasinghe; Keigo Watanabe; Kiyotaka Izumi; Kazuo Kiguchi