
Publication


Featured research published by Dushyantha Jayatilake.


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2014

Robot Assisted Physiotherapy to Support Rehabilitation of Facial Paralysis

Dushyantha Jayatilake; Takashi Isezaki; Yohei Teramoto; Kiyoshi Eguchi; Kenji Suzuki

We have been developing the Robot Mask, which uses shape memory alloy (SMA) based actuators to manipulate the skin through a minimally obtrusive pulling mechanism of wires, transparent strips, and tapes, enhancing the expressiveness of the face. To achieve natural-looking facial expressions that take advantage of the skin's specific characteristics, the Robot Mask follows human-anatomy-based criteria when selecting the manipulation points and directions. In this paper, we describe a case study of using the Robot Mask to assist the physiotherapy of a patient with hemifacial paralysis. The significant differences in the shape and size of the human head between individuals demand proper customization of the Robot Mask, and this paper briefly describes the adjustment and customization stages employed from the design level to the implementation level. We also introduce a depth-image-sensor-based analysis that can remotely evaluate the dynamic characteristics of facial expressions in a continuous manner, and we investigate the effectiveness of the Robot Mask by analyzing the range sensor data. From the case study, we found that the Robot Mask could automate the physiotherapy tasks in the rehabilitation of facial paralysis. We also verify that, while providing quick responses, the Robot Mask can reduce the asymmetry of a smiling face and manipulate the facial skin into formations similar to natural facial expressions.


International Journal of Mechatronics and Automation | 2011

Robot assisted facial expressions with segmented shape memory alloy actuators

Dushyantha Jayatilake; Kenji Suzuki

This paper introduces a robotic supporting device, the Robot Mask, to enhance facial expressiveness and support physiotherapy for persons with facial paralysis. The wearable device, which consists of shape memory alloy (SMA) based linear actuators, functions by pulling the facial skin in anatomically selected directions. Since facial expressions are silent, SMAs were selected over electric motors for their quiet actuation. This paper introduces a compact and fully controllable actuation unit with position feedback, together with a novel control scheme that uses selective hybrid actuation of bidirectional, multi-segment SMA wires connected in series to pull the wires. When designing the actuators, a biomechanical analysis was conducted to find the anatomical parameters of natural smiles, and the Robot Mask was evaluated for its suitability as a facial expression supporter.


International Conference on Information and Automation | 2008

An Analysis of Facial Morphology for the Robot Assisted Smile Recovery

Dushyantha Jayatilake; Anna Gruebler; Kenji Suzuki

Expression-based non-verbal communication accounts for a significant amount of the information exchange in human communication. To recreate facial expressions artificially and convey an exact message, it is necessary to make them as natural as possible, since human beings possess a remarkable ability to recognize the emotion behind an expression. In this paper we analyze facial expressions in terms of facial skin displacements from an anatomical point of view. We have been developing a wearable device, the robot mask, to support and recreate human facial expressions by using artificial muscles. We conducted several experiments comparing the skin displacement along the facial muscles for natural expressions and artificially generated expressions. The experimental results contribute significantly to the design criteria of the robot mask. This paper also explains how shape memory alloy based artificial muscles can be used to generate facial expressions artificially.


Intelligent Robots and Systems | 2008

A soft actuator based expressive mask for facial paralyzed patients

Dushyantha Jayatilake; Kenji Suzuki

The face is such a salient feature of a person that it plays a crucial role in one's physical, psychological, and emotional makeup. Hence facial paralysis, the loss of voluntary muscle movement on one or both sides of the face, can be an alarming and depressing event in one's life, apart from the physical disturbances it causes. Although therapeutic and surgical treatment methods are available, they work only on temporary paralysis; facial uplifts performed on permanently paralyzed patients merely attempt to reduce the facial asymmetry of a neutral face. Although facial paralysis is fairly common, over the years only a very limited amount of effort has been put into the development of supporting systems for patients with facial paralysis, and virtually none in the robotics field. This paper discusses a novel method: an expressive mask based on robotic technology with strong grounding in human anatomy. The developed robot mask uses artificial muscles based on soft actuators, as they can closely model natural muscles while producing silent actuation. Tests were done with six artificial muscles on one side of the face of a healthy subject. We also demonstrate the suitability of the proposed artificial muscles through preliminary experiments and conclude by underlining their worth as a candidate for this application.


IEEE Journal of Translational Engineering in Health and Medicine | 2015

Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound

Dushyantha Jayatilake; Tomoyuki Ueno; Yohei Teramoto; Kei Nakai; Kikue Hidaka; Satoshi Ayuzawa; Kiyoshi Eguchi; Akira Matsumura; Kenji Suzuki

Dysphagia poses serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that can cause pneumonia and even death. The videofluoroscopic swallow study (VFSS), considered the gold standard for the diagnosis of dysphagia, is expensive, not widely available, and exposes patients to radiation. The screening tests used for dysphagia must be carried out by trained staff, and the evaluations are usually non-quantifiable. This paper investigates the development of the Swallowscope, a smartphone-based device, and a feasible real-time swallowing-sound-processing algorithm for the automatic screening, quantitative evaluation, and visualization of swallowing ability. The device can be used during activities of daily life with minimal intervention, making it potentially more capable of capturing aspirations and risky swallow patterns through continuous monitoring. It also includes a cloud-based system for server-side analysis and automatic sharing of the swallowing sound. The real-time algorithm we developed for the detection of dry and water swallows is based on a template-matching approach. To establish the parameters of the template, we analyzed the wavelet-transform-based spectral characteristics and the temporal characteristics of simultaneous, synchronized VFSS and swallowing-sound recordings of 3-ml water swallows mixed with 25% barium from 70 subjects, together with the dry (saliva) swallowing sounds of 15 healthy subjects. With this algorithm, we achieved an overall detection accuracy of 79.3% (standard error: 4.2%) for the 92 water swallows, and a precision of 83.7% (range: 66.6%-100%) and a recall of 93.9% (range: 72.7%-100%) for the 71 episodes of dry swallows.
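The abstract describes a template-matching approach to swallow detection. As a rough illustration of that general idea (not the paper's actual algorithm: the feature set, template, hop size, and threshold below are all hypothetical), a sliding window of the sound signal can be scored against a reference template with normalized cross-correlation:

```python
# Minimal sketch of template-matching event detection (illustrative only;
# the template, hop, and threshold here are assumptions, not the paper's).
import numpy as np

def normalized_xcorr(window: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between a signal window and a template."""
    w = (window - window.mean()) / (window.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.dot(w, t) / len(t))

def detect_swallows(signal: np.ndarray, template: np.ndarray,
                    hop: int = 64, threshold: float = 0.6) -> list:
    """Slide the template over the signal; report start indices of matches."""
    n = len(template)
    matches = []
    for start in range(0, len(signal) - n + 1, hop):
        if normalized_xcorr(signal[start:start + n], template) >= threshold:
            matches.append(start)
    return matches

# Toy usage: embed a known burst in noise and detect its position.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 256)) * np.hanning(256)
noise = rng.normal(0, 0.1, 4096)
noise[1024:1280] += template
print(detect_swallows(noise, template))
```

In the paper the matching is done on wavelet-based spectral and temporal features rather than the raw waveform, and on a phone in real time; this sketch only shows the correlation-and-threshold core of such a detector.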


International Conference on Robotics and Automation | 2010

A multiple SMA hybrid actuator to generate expressions on the face

Dushyantha Jayatilake; Kenji Suzuki

This paper presents part of an ongoing project to design a wearable supportive device to enhance facial expressiveness, in particular for patients with facial paralysis. Earlier, we introduced the SMA-actuator-based Robot Mask, which enhances the expressiveness of the face. The basic concept of that design was pulling the skin through wires attached to the face, and we explained the human-anatomy-based criteria for selecting the pulling points and directions. The major reason to use SMA instead of traditional actuators such as motors was their silent operation. However, because their dynamic properties are governed by thermal energy and their mechanical properties are affected by metallurgical hysteresis, SMA-based actuators tend to perform poorly while cooling. This paper introduces a novel control scheme that actuates only a limited number of the SMA wires connected in series on either side of a slider to control the direction and amount of the slider's movement. The advantage of this method is that, by keeping some SMA wires at low temperature, a high actuation speed can be achieved even when the direction of motion changes. This paper also investigates the actuation rates required to generate natural-looking smiles and attempts to recreate them using the proposed actuation unit.
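The segment-selection idea in this abstract, heating only a subset of serial SMA wires so the rest stay cool and ready for a fast reversal, can be sketched with a simple kinematic model. Everything below is a hypothetical illustration (the per-segment contraction, segment count, and selection rule are assumptions, not the paper's design):

```python
# Illustrative model of serial multi-segment SMA actuation (hypothetical
# parameters). Each side of a slider carries several SMA segments in series;
# heating n segments on one side contracts only those segments, so the
# direction and magnitude of the slider's motion follow from which side is
# activated and how many of its segments are heated.

CONTRACTION_PER_SEGMENT = 1.5  # mm of pull per heated segment (assumed)

def select_segments(target_mm: float, segments_per_side: int = 4):
    """Return (side, n_heated) approximating the target displacement."""
    side = "left" if target_mm < 0 else "right"
    n = min(segments_per_side,
            round(abs(target_mm) / CONTRACTION_PER_SEGMENT))
    return side, n

def resulting_displacement(side: str, n_heated: int) -> float:
    """Signed slider displacement produced by n heated segments."""
    sign = -1.0 if side == "left" else 1.0
    return sign * n_heated * CONTRACTION_PER_SEGMENT

side, n = select_segments(+3.0)
print(side, n, resulting_displacement(side, n))  # right 2 3.0
```

The point of the scheme is visible in the model: reversing direction only requires heating segments on the opposite side, which are still cool, instead of waiting for the previously heated wires to cool down.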


Archive | 2010

Robot Assisted Smile Recovery

Dushyantha Jayatilake; Anna Gruebler; Kenji Suzuki

1.1 Facial Expressions

Facial expressions play a significant role in social information exchange and in the physical and psychological makeup of a person because they are an essential method of non-verbal communication. Through facial expressions, human beings can show emotions, moods, and information about their character. Happiness, sadness, fear, surprise, disgust, and anger are typically identified by psychologists as basic emotions with corresponding characteristic facial expressions (Wang and Ahuja, 2003). Further, Batty and Taylor (2003) reported that humans have a very fast processing speed when identifying these six expressions, and noted that positive expressions (e.g. happiness, surprise) are identified faster than negative expressions (e.g. sadness, disgust). Human beings share common characteristics in the way they express emotions through facial expressions, independent of nationality, ethnicity, age, or sex. It has been recorded that the ability to recognize the corresponding emotion in a facial expression is innate and is present very early, possibly from birth (Mandler et al., 1997). However, there is also evidence that universal expressions may be modified in social situations to create the impression of culture-specific facial expressions of emotion. For example, Ekman (1992) noted that, when an authority figure was present, the Japanese masked negative expressions with the semblance of a smile more than the Americans did. If, because of accident or illness, a person loses the ability to make facial expressions, the face seems emotionless, leading to physical and psychological hardships.


Intelligent Robots and Systems | 2009

An assistive mask with biorobotic control to enhance facial expressiveness

Dushyantha Jayatilake; Keisuke Takahashi; Kenji Suzuki

This paper presents part of an ongoing project to design a wearable supportive device to enhance facial expressiveness, in particular for patients with facial paralysis. As various complications can result in facial disfigurement and loss of functionality in the facial muscles, it is necessary to develop a supporting device for people with such conditions. The previously proposed robot mask, which consists of a head supporter and motor units, attempts to recreate facial expressions artificially by pulling the facial skin through cables attached to the skin. Since a facial expression is the result of the full or partial activation of a combination of facial muscles, it is necessary to control the amount of displacement of the artificially created skin movement. Furthermore, to facilitate the interpersonal timing of facial expressions, it is necessary to read the nerve signals and process them in real time. This paper presents a compact and fully controllable actuation unit for the earlier proposed robot mask and analyzes the relationship between the displacement of specifically selected areas of the face and the actuation by the control unit. It also presents a bioelectrical-signal-based real-time signal processing system to determine when an artificial expression is required.


International Conference on Persuasive Technology | 2013

Enhanced reach: assisting social interaction based on geometric relationships

Asaki Miura; Dushyantha Jayatilake; Kenji Suzuki

Social interaction among children plays a significant role in their social development. Some children, however, find it difficult to initiate interaction, and only a few tools exist that can create opportunities for children to interact with others. This study presents a small wireless device that can measure and visualize geometric relationships in a gymnasium or playground. Geometric relationships are estimated from the signal strength of the wireless communication, bodily orientation, and statistical geometric consistency. A light-emitting method visualizes the geometric relationships among devices in real time. Several wearable interfaces were developed around the wireless device to facilitate the communication and social interaction of children. Experiments were conducted with typically developing children and children with pervasive developmental disorders (PDD) to evaluate the proposed technology.
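The abstract mentions estimating geometric relationships from wireless signal strength and bodily orientation. A standard way to turn received signal strength (RSSI) into a distance estimate is the log-distance path-loss model; the sketch below uses that model with illustrative calibration constants and an assumed interaction rule (neither is from the paper):

```python
# Minimal sketch of RSSI-based proximity estimation via the log-distance
# path-loss model. The calibration constants and the 2 m interaction
# threshold are illustrative assumptions, not the paper's values.
import math

RSSI_AT_1M = -45.0   # dBm measured at 1 m distance (assumed calibration)
PATH_LOSS_EXP = 2.0  # path-loss exponent; 2.0 is free space, higher indoors

def rssi_to_distance(rssi_dbm: float) -> float:
    """Invert the model: rssi = RSSI_AT_1M - 10 * n * log10(d)."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * PATH_LOSS_EXP))

def are_interacting(rssi_dbm: float, facing_each_other: bool,
                    threshold_m: float = 2.0) -> bool:
    """Treat two wearers as interacting if close and mutually oriented."""
    return facing_each_other and rssi_to_distance(rssi_dbm) <= threshold_m

print(round(rssi_to_distance(-45.0), 2))  # 1.0
print(round(rssi_to_distance(-65.0), 2))  # 10.0
```

In practice RSSI is noisy, which is presumably why the paper combines it with bodily orientation and statistical geometric consistency rather than relying on distance estimates alone.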


Biomedical and Health Informatics | 2014

Swallowscope: A smartphone based device for the assessment of swallowing ability

Dushyantha Jayatilake; Kenji Suzuki; Yohei Teramoto; Tomoyuki Ueno; Kei Nakai; Kikue Hidaka; Satoshi Ayuzawa; Kiyoshi Eguchi; Akira Matsumura

Dysphagia poses serious challenges to both physical and mental health. Aspiration due to dysphagia is a major health risk that can cause pneumonia and even death; as a result, monitoring and managing dysphagia is of utmost importance. This study investigates the development of a smartphone-based device and a feasible real-time swallowing-sound-processing algorithm for the automatic screening of swallowing ability. The videofluoroscopic swallow study (VFSS), considered the gold standard for the diagnosis of dysphagia, is expensive, not widely available, and exposes patients to radiation. The screening tests used for dysphagia must be carried out by trained staff, and the evaluations are often non-quantifiable. The Swallowscope we developed is a wearable, mobile-health-based device that uses the swallowing sound to quantitatively evaluate swallowing ability. Because the swallowing sound can be captured continuously during activities of daily life with minimal intervention, it is an ideal signal for monitoring swallowing activity, and continuous monitoring has a better probability of capturing aspirations and risky swallow patterns. This paper describes the real-time smartphone-based algorithm and the application we developed to monitor swallowing activities, and evaluates their recognition accuracy against VFSS evidence.

Collaboration


Dive into Dushyantha Jayatilake's collaborations.

Top Co-Authors

Kei Nakai

University of Tsukuba
