Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Georgios Galatas is active.

Publication


Featured research published by Georgios Galatas.


Pervasive Technologies Related to Assistive Environments | 2011

eyeDog: an assistive-guide robot for the visually impaired

Georgios Galatas; Christopher McMurrough; Gian Luca Mariottini; Fillia Makedon

Visually impaired people can navigate unfamiliar areas by relying on the assistance of other people, canes, or specially trained guide dogs. Guide dogs provide the impaired person with the highest degree of mobility and independence, but require expensive training and selective breeding. In this paper we describe the design and development of a prototype assistive-guide robot (eyeDog) that provides the visually impaired person with autonomous vision-based navigation and laser-based obstacle avoidance capabilities. This kind of assistive-guide robot has several advantages, such as robust performance and reduced cost and maintenance. The main components of our system are the Create robotic platform (from iRobot), a netbook, an on-board USB webcam and a LIDAR unit. The camera is used as the primary exteroceptive sensor for the navigation task; the frames captured by the camera are processed to robustly estimate the position of the vanishing point associated with the road/corridor along which the eyeDog needs to move. The controller then steers the robot until the vanishing point and the image center coincide, which ensures that the robot moves parallel to the direction of the road/corridor. While moving, the robot uses the LIDAR for obstacle avoidance.
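
The vanishing-point steering loop can be sketched roughly as follows. This is an illustrative reconstruction with OpenCV, not the eyeDog source code; the Hough thresholds and the controller gain are assumed values.

```python
# Illustrative vanishing-point steering sketch (not the original eyeDog code).
# Requires OpenCV (cv2) and NumPy; thresholds and the gain are assumed values.
import cv2
import numpy as np

def estimate_vanishing_point(frame):
    """Estimate the vanishing point as the least-squares intersection of Hough lines."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return None
    A, b = [], []
    for rho, theta in lines[:, 0]:
        # Each line satisfies x*cos(theta) + y*sin(theta) = rho; skip lines that
        # are nearly horizontal or vertical, since corridor edges appear oblique.
        if abs(np.sin(2 * theta)) < 0.2:
            continue
        A.append([np.cos(theta), np.sin(theta)])
        b.append(rho)
    if len(A) < 2:
        return None
    vp, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return float(vp[0]), float(vp[1])

def steering_command(frame, gain=0.005):
    """Proportional steering: turn until the vanishing point meets the image center."""
    vp = estimate_vanishing_point(frame)
    if vp is None:
        return 0.0                          # no estimate: hold current heading
    center_x = frame.shape[1] / 2.0
    return gain * (center_x - vp[0])        # sign convention is a free choice
```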


Pervasive Technologies Related to Assistive Environments | 2010

Web based medicine intake tracking application

Jyothi K. Vinjumur; Eric Becker; Shahina Ferdous; Georgios Galatas; Fillia Makedon

One of the key issues in healthcare and medical information systems is the reduction of medical errors to ensure patient safety. Inside an assistive environment, we apply RFID tags to monitor drug-taking patterns and report the findings to the caregiver. This paper describes an application that tracks the medicine intake pattern of the elderly using RFID readers and tags, motion sensors, and a wireless sensor mote. With the adoption of this ambient assistive technology in healthcare systems, managing heterogeneous sensor data becomes an issue. In this paper, a web-based caregiver module makes the process of monitoring medicine intake for elderly people living alone simpler and easier. We also propose an energy-efficient technique that uses multiple sensor devices employing a sequence of in-network data fusion as needed.
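
A hypothetical sketch of the reporting step is shown below: an intake event is logged only when an RFID read coincides with motion-sensor activity and is then posted to the caregiver module. The endpoint URL and the JSON field names are invented for illustration; they are not part of the published system.

```python
# Hypothetical fusion-and-report step; the URL and JSON fields are invented.
import json
import time
import urllib.request

def report_intake(rfid_tag, motion_active, caregiver_url="http://localhost:8080/intake"):
    """Log a medicine-intake event and notify the web-based caregiver module."""
    if not motion_active:
        return None                      # bottle read with no motion: not an intake
    event = {"tag": rfid_tag, "timestamp": time.time()}
    request = urllib.request.Request(
        caregiver_url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status           # caregiver module acknowledges the event
```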


International Journal of Advanced Research in Artificial Intelligence | 2013

Multi-modal Person Localization And Emergency Detection Using The Kinect

Georgios Galatas; Shahina Ferdous; Fillia Makedon

Person localization is of paramount importance in an ambient intelligence environment, since it is the first step towards context-awareness. In this work, we present the development of a novel system for multi-modal person localization and emergency detection in an assistive ambient intelligence environment for the elderly. Our system is based on the depth sensors and microphone arrays of two Kinect devices. We use skeletal tracking conducted on the depth images and sound source localization conducted on the captured audio signal to estimate the location of a person. In conjunction with the location information, automatic speech recognition is used as a natural and intuitive means of communication in order to detect emergencies and accidents, such as falls. Our system attained high accuracy for both the localization and speech recognition tasks, verifying its effectiveness.

Keywords: localization; multi-modal; Kinect; speech recognition; context-awareness; 3-D interaction
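
As a rough picture of combining the two modalities, a confidence-weighted average of the skeleton-tracking and sound-source positions could look like the sketch below. The coordinates, confidence values, and the fusion rule are illustrative assumptions, not the paper's implementation.

```python
# Illustrative confidence-weighted fusion of two location estimates.
from dataclasses import dataclass

@dataclass
class Estimate:
    x: float           # metres, in an assumed room coordinate frame
    y: float
    confidence: float  # 0..1, assumed to be reported by each modality

def fuse(depth: Estimate, audio: Estimate):
    """Confidence-weighted average of the skeleton and sound-source estimates."""
    total = depth.confidence + audio.confidence
    if total == 0:
        raise ValueError("no usable estimate from either modality")
    return (
        (depth.x * depth.confidence + audio.x * audio.confidence) / total,
        (depth.y * depth.confidence + audio.y * audio.confidence) / total,
    )

# Skeleton tracking is typically more precise than sound source localization,
# so it gets the larger confidence in this toy example.
print(fuse(Estimate(2.1, 3.0, 0.8), Estimate(2.5, 2.6, 0.3)))
```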


Pervasive Technologies Related to Assistive Environments | 2011

Recognition of sleep patterns using a bed pressure mat

Vangelis Metsis; Georgios Galatas; Alexandros Papangelis; Dimitrios I. Kosmopoulos; Fillia Makedon

The monitoring of sleep patterns is of major importance for various reasons, such as the detection and treatment of sleep disorders, the assessment of the effect of different medical conditions or medications on sleep quality, and the assessment of mortality risks associated with sleeping patterns in adults and children. Sleep monitoring by itself is a difficult problem due to both privacy and technical considerations. The proposed system uses a bed pressure mat to assess and report sleep patterns. To evaluate our system we used real data collected in the Heracleia Lab's assistive living apartment. Our method is non-invasive, as it does not disrupt the user's usual sleeping behavior, and it can be used both at the clinic and at home with minimal cost.
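
A minimal sketch of the classification step might look like the following, assuming flattened pressure-mat frames and an off-the-shelf SVM from scikit-learn. The grid size, posture labels, and random placeholder data are invented just to make the sketch run; they do not reproduce the paper's features or models.

```python
# Minimal posture-classification sketch with placeholder data (not the paper's model).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend each mat frame is a flattened 16x16 grid of pressure readings.
X_train = rng.random((40, 16 * 16))
y_train = rng.choice(["supine", "left side", "right side"], size=40)

clf = SVC(kernel="rbf").fit(X_train, y_train)  # trained on random stand-in data

def classify_frame(frame_16x16):
    """Predict a sleeping-posture label for one pressure-mat frame."""
    return clf.predict(frame_16x16.reshape(1, -1))[0]

print(classify_frame(rng.random((16, 16))))
```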


Pervasive Technologies Related to Assistive Environments | 2012

Audio-visual speech recognition using depth information from the Kinect in noisy video conditions

Georgios Galatas; Gerasimos Potamianos; Fillia Makedon

In this paper we build on our recent work, where we successfully incorporated facial depth data of a speaker captured by the Microsoft Kinect device as a third data stream in an audio-visual automatic speech recognizer. In particular, we focus our interest on whether the depth stream provides sufficient speech information to improve system robustness to noisy audio-visual conditions, thus studying system operation beyond the traditional scenarios, where noise is applied to the audio signal alone. For this purpose, we consider four realistic visual modality degradations at various noise levels, and we conduct small-vocabulary recognition experiments on an appropriate, previously collected, audio-visual database. Our results demonstrate improved system performance due to the depth modality, as well as a considerable accuracy increase when using both the visual and depth modalities over audio-only speech recognition.
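
The stream-combination idea can be illustrated with a simplified late-fusion scheme over per-word log-likelihoods. The actual recognizer uses multi-stream decoding; the words, scores, and weights below are made up for the example.

```python
# Simplified three-stream (audio, visual, depth) late-fusion sketch.
def fuse_word_scores(stream_scores, weights):
    """Combine per-stream log-likelihoods with stream weights and pick the best word.

    stream_scores: {word: {"audio": logp, "visual": logp, "depth": logp}}
    weights:       {"audio": w_a, "visual": w_v, "depth": w_d}, summing to 1.
    """
    fused = {
        word: sum(weights[s] * logp for s, logp in scores.items())
        for word, scores in stream_scores.items()
    }
    return max(fused, key=fused.get)

# Toy example: under heavy video noise, the audio and depth weights would be raised.
scores = {
    "one":  {"audio": -12.0, "visual": -30.0, "depth": -15.0},
    "nine": {"audio": -14.0, "visual": -22.0, "depth": -21.0},
}
print(fuse_word_scores(scores, {"audio": 0.5, "visual": 0.2, "depth": 0.3}))
```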


Pervasive Technologies Related to Assistive Environments | 2014

Safety challenges in using AR.Drone to collaborate with humans in indoor environments

Alexandros Lioulemes; Georgios Galatas; Vangelis Metsis; Gian Luca Mariottini; Fillia Makedon

This paper presents an Unmanned Aerial Vehicle (UAV), based on the AR.Drone platform, that can navigate autonomously in indoor (e.g. corridor, hallway) and industrial environments (e.g. production line). It is also able to avoid pedestrians while they are working or walking in the vicinity of the robot. The only sensor in our system is the front camera. For navigation, our system relies on the vanishing point algorithm, the Hough transform for wall detection and avoidance, and HOG descriptors with an SVM classifier for pedestrian detection. Our experiments show that our vision-based navigation procedures are reliable and enable the aerial vehicle to fly without human intervention and to share the same workspace with people. We detect human motion in a corridor with a high confidence of 85%, and 80% of our flight experiments completed successfully.
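
The pedestrian-detection component can be approximated with OpenCV's built-in HOG descriptor and its default people detector (a pre-trained linear SVM). This is only a stand-in: the drone control side is omitted and the confidence threshold is an assumption.

```python
# HOG + linear-SVM people detection with OpenCV's default detector (illustrative).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame, min_weight=0.5):
    """Return bounding boxes (x, y, w, h) of people detected in a BGR frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [box for box, w in zip(boxes, weights) if float(w) > min_weight]

# Example: grab one frame from a camera (index 0 stands in for the drone feed).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok and detect_pedestrians(frame):
    print("pedestrian ahead: hover / avoid")
cap.release()
```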


Pervasive Technologies Related to Assistive Environments | 2011

Audio visual speech recognition in noisy visual environments

Georgios Galatas; Gerasimos Potamianos; Alexandros Papangelis; Fillia Makedon

Speech recognition is a natural means of interaction for a human with a smart assistive environment. In order for this interaction to be effective, such a system should attain a high recognition rate even under adverse conditions. Audio-visual speech recognition (AVSR) can be of help in such environments, especially in the presence of audio noise. However, the impact of visual noise on its performance has not been studied sufficiently in the literature. In this paper, we examine the effects of visual noise on AVSR, reporting experiments on the relatively simple task of connected-digit recognition under moderate acoustic noise and a variety of types of visual noise. The latter can be caused by either faulty sensors or video signal transmission problems that can be found in smart assistive environments. Our AVSR system exhibits higher accuracy in comparison to an audio-only recognizer and robust performance in most cases of noisy video signals considered.
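
Visual degradations of this kind can be simulated offline; the sketch below applies Gaussian blur and salt-and-pepper noise to a frame with OpenCV, with arbitrary levels standing in for the paper's actual noise conditions.

```python
# Sketch of simulating visual degradations (blur + salt-and-pepper) on a frame.
import cv2
import numpy as np

def degrade(frame, blur_ksize=9, sp_fraction=0.02):
    """Return a blurred copy of the frame with salt-and-pepper noise added."""
    noisy = cv2.GaussianBlur(frame, (blur_ksize, blur_ksize), 0)
    mask = np.random.rand(*noisy.shape[:2])
    noisy[mask < sp_fraction / 2] = 0            # "pepper" pixels
    noisy[mask > 1 - sp_fraction / 2] = 255      # "salt" pixels
    return noisy

frame = np.full((120, 160, 3), 128, dtype=np.uint8)   # grey stand-in for a video frame
cv2.imwrite("degraded.png", degrade(frame))
```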


Pervasive Technologies Related to Assistive Environments | 2011

A recommender system for assistive environments

Alexandros Papangelis; Georgios Galatas; Fillia Makedon

In this paper we propose a novel framework for Recommender Systems that uses weighted tagging and Natural Language Processing techniques to tag, rate, cluster and recommend items. The system is able to cluster items in a dynamic hierarchical fashion, allowing for on-the-fly, user-tailored clustering of items. It is also able to automatically extract tags and ratings from item descriptions. It is inherently context-aware, since it uses Natural Language Processing techniques, and is targeted at assistive environments, whether as part of a companion (a dialogue system whose purpose is to accompany the user) or as a standalone system that recommends books, movies, activities, medication and more in an easy-to-use, intuitive way.
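
A much-reduced stand-in for the weighted-tagging idea is sketched below: TF-IDF turns item descriptions into weighted term vectors and cosine similarity ranks recommendations. The example items, their descriptions, and the use of scikit-learn are assumptions for illustration, not the paper's pipeline.

```python
# Reduced tag-extraction-and-recommendation sketch (TF-IDF + cosine similarity).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {  # invented example catalogue
    "walking aid manual": "adjustable walker for indoor mobility and balance support",
    "pill organizer": "weekly medication organizer with morning and evening slots",
    "audiobook player": "simple one-button player for audiobooks and radio",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(items.values())   # rows: items, columns: weighted tags
names = list(items)

def recommend(liked_item, top_n=1):
    """Rank the remaining items by cosine similarity to the liked item's tag vector."""
    sims = cosine_similarity(matrix[names.index(liked_item)], matrix).ravel()
    ranked = sorted((n for n in names if n != liked_item),
                    key=lambda n: sims[names.index(n)], reverse=True)
    return ranked[:top_n]

print(recommend("pill organizer"))
```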


International Journal of Advanced Computer Science and Applications | 2013

A System for Multimodal Context-Awareness

Georgios Galatas; Fillia Makedon

In this paper we present the improvement of our novel localization system by introducing radio-frequency identification (RFID), which adds person identification capabilities and increases multi-person localization robustness. Our system aims at achieving multi-modal context-awareness in an assistive, ambient intelligence environment. The unintrusive devices used are RFID and 3-D audio-visual information from two Kinect sensors deployed at various locations of a simulated apartment to continuously track and identify its occupants, thus enabling activity monitoring. More specifically, we use skeletal tracking conducted on the depth images and sound source localization conducted on the audio signals captured by the Kinect sensors to accurately localize and track multiple people. RFID information is used mainly for identification purposes but also for rough location estimation, enabling the mapping of location information from the Kinect sensors to the identification events of the RFID. Our system was evaluated in a real-world scenario and attained promising results, exhibiting high accuracy and therefore showing the great prospect of using RFID and Kinect sensors jointly to solve the simultaneous identification and localization problem.
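
One way to picture the mapping between anonymous Kinect observations and RFID identification events is nearest-in-time matching, sketched below. The timestamps, identities, and time threshold are hypothetical, and the paper's fusion logic is richer than this.

```python
# Illustrative nearest-in-time association of a Kinect observation with RFID reads.
from bisect import bisect_left

def label_observation(obs_time, rfid_events, max_gap=2.0):
    """Return the identity of the RFID read closest in time to a Kinect observation.

    rfid_events: list of (timestamp_seconds, person_id) tuples, sorted by time.
    """
    times = [t for t, _ in rfid_events]
    i = bisect_left(times, obs_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(rfid_events)]
    if not candidates:
        return None
    best = min(candidates, key=lambda j: abs(times[j] - obs_time))
    return rfid_events[best][1] if abs(times[best] - obs_time) <= max_gap else None

events = [(10.0, "resident_a"), (14.5, "resident_b")]
print(label_observation(14.2, events))   # -> "resident_b"
```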


International Conference on Universal Access in Human-Computer Interaction | 2014

A Dialogue System for Ensuring Safe Rehabilitation

Alexandros Papangelis; Georgios Galatas; Konstantinos Tsiakas; Alexandros Lioulemes; Dimitrios Zikos; Fillia Makedon

Dialogue Systems (DS) are intelligent user interfaces, able to provide intuitive and natural interaction with their users through a variety of modalities. We present here a DS whose purpose is to ensure that patients are consistently and correctly performing rehabilitative exercises in a tele-rehabilitation scenario. More specifically, our DS operates in collaboration with a remote rehabilitation system, where users suffering from injuries, degenerative disorders and other conditions perform exercises at home under the remote supervision of a therapist. The DS interacts with the users and makes sure that they perform their prescribed exercises correctly and according to the protocol specified by the therapist. To this end, various sensors are utilized, such as Microsoft's Kinect, the Wi-Patch and others.

Collaboration


Dive into Georgios Galatas's collaborations.

Top Co-Authors

Fillia Makedon, University of Texas at Arlington
Alexandros Papangelis, University of Texas at Arlington
Shahina Ferdous, University of Texas at Arlington
Gerasimos Potamianos, University of Texas at Arlington
Dimitrios Zikos, University of Texas at Arlington
Alexandros Lioulemes, University of Texas at Arlington
Christopher McMurrough, University of Texas at Arlington
Dimitrios I. Kosmopoulos, University of Texas at Arlington
Eric Becker, University of Texas at Arlington