Publication


Featured research published by Alexiei Dingli.


International Journal of Ambient Computing and Intelligence | 2012

Turning Homes into Low-Cost Ambient Assisted Living Environments

Alexiei Dingli; Daniel Attard; Ruben Mamo

Today, motion recognition has become increasingly popular in areas like health care. In real-time environments, the amount of information and data required to compute the user's motion is substantial, while the time to collect and process this information is a crucial parameter in the performance of a motion recognition system. The nature of the data determines the design of the system. One important aspect of such a system is reducing the delay between sensing and recognising a motion while achieving acceptable levels of accuracy. The detection of humans in images is a challenging problem. In this paper, the authors present a solution using the Kinect, a motion-sensing input device by Microsoft designed for the Xbox 360 console, to create an Ambient Assisted Living (AAL) application which monitors a person's position, labels objects around a room, takes voice input, and raises alerts in case of falls. The authors present a number of modules, such as converting Kinect skeletal data to allow mouse control via hand movement, building a Finite State Machine (FSM), obtaining pose information, handling voice commands to allow interaction with the application, and performing face detection and recognition. The authors use different algorithms to achieve the required outcome.
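A minimal sketch of the fall-alert idea described above, assuming head-joint heights streamed from skeletal frames; the state names, thresholds and data structure are our own illustration, not the authors' implementation:

from dataclasses import dataclass
from enum import Enum, auto


class State(Enum):
    STANDING = auto()
    FALLING = auto()
    ON_FLOOR = auto()


@dataclass
class SkeletonFrame:
    head_height: float   # height of the head joint above the floor (metres)
    timestamp: float     # seconds


class FallDetector:
    """Tracks head height over time and flags a likely fall (illustrative FSM)."""

    def __init__(self, floor_threshold=0.5, drop_rate=1.0):
        self.state = State.STANDING
        self.prev = None
        self.floor_threshold = floor_threshold  # head close to the floor
        self.drop_rate = drop_rate              # metres per second

    def update(self, frame: SkeletonFrame) -> bool:
        """Returns True when a fall alert should be raised."""
        alert = False
        if self.prev is not None:
            dt = frame.timestamp - self.prev.timestamp
            velocity = (self.prev.head_height - frame.head_height) / max(dt, 1e-6)
            if self.state == State.STANDING and velocity > self.drop_rate:
                self.state = State.FALLING
            elif self.state == State.FALLING and frame.head_height < self.floor_threshold:
                self.state = State.ON_FLOOR
                alert = True
            elif frame.head_height > self.floor_threshold * 2:
                self.state = State.STANDING
        self.prev = frame
        return alert


# Example: a person drops from 1.7 m to 0.3 m within a fraction of a second.
detector = FallDetector()
for t, h in [(0.0, 1.70), (0.2, 1.65), (0.4, 0.90), (0.6, 0.30)]:
    if detector.update(SkeletonFrame(head_height=h, timestamp=t)):
        print(f"Fall alert at t={t}s")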


International Conference on Information and Communication Technologies | 2015

Event detection using social sensors

Alexiei Dingli; Loui Mercieca; Ronald Spina; Marco Galea

Social media, such as Facebook and Twitter, have received much attention recently, especially due to their real-time nature. For example, when an earthquake occurs, people immediately post information related to the earthquake, which enables prompt detection of the earthquake simply by observing these posts. In this paper, we investigate the real-time interaction of events such as earthquakes on Twitter, Facebook and other social media, and propose an algorithm to monitor tweets and to detect a target event. We devised a data filter based on features such as the keywords, the number of times they are present, and their context. We consider each user feed as a sensor, and the collection of such sensors creates a system which can be used to promptly warn registered users. The posts triggering the detections also provided very short first-impression narratives from people who experienced the shaking. We also show that the validity of such a process is not bound to a particular context or language but can be applied to a variety of other subjects.
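As a rough illustration of the social-sensor idea (not the paper's algorithm), the sketch below counts keyword-matching posts in a sliding time window and raises an alert when the count exceeds a threshold; the keywords, window size and threshold are assumptions for the example:

from collections import deque
import re

KEYWORDS = {"earthquake", "shaking", "tremor"}
WINDOW_SECONDS = 60
ALERT_THRESHOLD = 3


class EventDetector:
    def __init__(self):
        self.hits = deque()  # timestamps of keyword-matching posts

    def observe(self, text: str, timestamp: float) -> bool:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        if tokens & KEYWORDS:
            self.hits.append(timestamp)
        # discard hits that have fallen out of the time window
        while self.hits and timestamp - self.hits[0] > WINDOW_SECONDS:
            self.hits.popleft()
        return len(self.hits) >= ALERT_THRESHOLD


detector = EventDetector()
posts = [
    ("Just felt some shaking here!", 0),
    ("Anyone else feel that earthquake?", 10),
    ("Huge tremor, things fell off the shelf", 20),
]
for text, ts in posts:
    if detector.observe(text, ts):
        print(f"Possible event detected at t={ts}s")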


Text, Speech and Dialogue | 2013

Building a Hybrid: Chatterbot – Dialog System

Alexiei Dingli; Darren Scerri

Generic conversational agents often use hard-coded stimulus-response data to generate responses, with little to no effort devoted to actually understanding and comprehending the input. The limitation of these types of systems is obvious: the general and linguistic knowledge of the system is limited to what the developer of the system explicitly defined. Therefore, a system which analyses user input at a deeper level of abstraction and backs its knowledge with common-sense information will be capable of providing more adequate responses, which in turn results in a better overall user experience.


The Visual Computer | 2017

Webcam-based detection of emotional states

Alexiei Dingli; Andreas Giordimaina

Game designers have to deal with the complex task of monitoring the emotional state of players in games. There are different elements within a game which affect the player's emotional state. Since the game-play experience occurs almost unconsciously, traditional methods such as think-aloud protocols may disrupt the playing experience, thus skewing the results obtained. Other methods include fitting cables and electrodes to the player to monitor biological information. Although such devices can offer significantly accurate results, they are not commonly found and may cause discomfort while playing games. Because of this, we propose a webcam-based heart rate monitoring method that can be used to predict the player's emotional state. We first analyzed the change in heart rate with respect to the player's emotional state. This allowed us to find a correlation between heart rate and emotional states such as frustration, fun, challenge, and boredom. The second objective was to create a webcam-based method to monitor the heart rate. This was performed by extracting the RGB channels from the face region and then retrieving the underlying components using a dimensionality-reduction method. The results obtained from the webcam-based method were far from perfect, but this was expected, since we were performing the tests under realistic conditions. The last objective was to predict the player's emotional state using the heart rate obtained from the webcam-based method. The accuracy of the prediction was up to 76%, which exceeded our initial aim. This system will be implemented in Unity 3D to make its integration and adoption easier.
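A rough sketch of the webcam heart-rate step described above, assuming per-frame mean R, G and B values from a detected face region; FastICA is one plausible choice for the dimensionality-reduction stage, and the synthetic traces below stand in for real webcam data:

import numpy as np
from sklearn.decomposition import FastICA

FPS = 30.0
N_FRAMES = 512
t = np.arange(N_FRAMES) / FPS

# Synthetic RGB traces: a 1.2 Hz (~72 bpm) pulse buried in noise.
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([
    0.5 + pulse + 0.01 * np.random.randn(N_FRAMES),
    0.4 + 0.8 * pulse + 0.01 * np.random.randn(N_FRAMES),
    0.3 + 0.3 * pulse + 0.01 * np.random.randn(N_FRAMES),
], axis=1)

# Separate underlying components from the three colour traces.
sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

# Pick the component whose spectrum peaks inside the 0.75-4 Hz (45-240 bpm) band.
freqs = np.fft.rfftfreq(N_FRAMES, d=1.0 / FPS)
band = (freqs >= 0.75) & (freqs <= 4.0)
best_bpm, best_power = 0.0, -1.0
for comp in sources.T:
    spectrum = np.abs(np.fft.rfft(comp - comp.mean()))
    peak = np.argmax(spectrum * band)
    if band[peak] and spectrum[peak] > best_power:
        best_power, best_bpm = spectrum[peak], freqs[peak] * 60.0

print(f"Estimated heart rate: {best_bpm:.1f} bpm")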


Archive | 2017

Using Holograms to Increase Interaction in Museums

Alexiei Dingli; Nicholas Mifsud

Holographic technology has made huge strides over the past few years. The range of applications is practically endless, and we envisage major investments in the coming years. The main aim of this project was to create virtual 3D agents capable of behaving in a believable manner and to display them within a real 3D model of a megalithic temple called "Hagar Qim" (http://heritagemalta.org/museums-sites/hagar-qim-temples/). These holographic humans are not only visually appealing with clear animations but must also behave in a psychologically sound and autonomous manner, meaning that they are their own beings, not controlled by a user, and their actions relate to the context of the world they are situated in. In order to achieve a high degree of autonomy and believability, the holographic humans developed in this work are self-determined, with their own reactive plan of actions to organise their Neolithic daily routines, just like our ancestors did. In order to produce such believable behaviour, computational motivation models based on human psychological theories were explored. Each holographic human is also self-aware and adheres to its own biological needs. Furthermore, visitors are able to interact and communicate with the holographic humans via a mobile device. The system was tested by a number of people in order to assess the subjective concept of believability of the system as a whole. On the whole, we were extremely satisfied with the positive feedback obtained, whereby 96% of respondents found the exhibit believable. There was also 90% agreement that this platform would be suitable in a museum context, since it would immerse visitors within this context whilst helping them learn in a fun and interactive way.
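As a rough illustration of need-driven action selection of the kind such motivation models suggest, the sketch below has an agent track a few biological needs and pick the action that best reduces the most pressing one; the needs, actions and rates are invented, not taken from the project:

import random


class Villager:
    # Each action's effect on each need (negative values reduce the need).
    ACTIONS = {
        "eat":   {"hunger": -0.6, "energy": -0.05},
        "sleep": {"energy": -0.7, "hunger": +0.1},
        "work":  {"hunger": +0.2, "energy": +0.2},
    }

    def __init__(self, name):
        self.name = name
        self.needs = {"hunger": random.random(), "energy": random.random()}

    def choose_action(self):
        # Urgency-weighted score: prefer the action that most reduces pressing needs.
        def score(effects):
            return sum(self.needs[n] * -delta for n, delta in effects.items())
        return max(self.ACTIONS, key=lambda a: score(self.ACTIONS[a]))

    def step(self):
        action = self.choose_action()
        for need, delta in self.ACTIONS[action].items():
            self.needs[need] = min(1.0, max(0.0, self.needs[need] + delta))
        return action


villager = Villager("temple dweller")
for _ in range(5):
    print(villager.step(), villager.needs)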


Computational Intelligence and Games | 2016

Platformer level design for player believability

Elizabeth Camilleri; Georgios N. Yannakakis; Alexiei Dingli

Player believability is often defined as the ability of a game-playing character to convince an observer that it is being controlled by a human. The agent's behavior is often assumed to be the main contributor to the character's believability. In this paper we reframe this core assumption and instead focus on the impact of the game environment and aspects of game design (such as level design) on the believability of the game character. To investigate the relationship between game content and believability, we crowdsource rank-based annotations from subjects that view playthrough videos of various AI- and human-controlled agents in platformer levels with dissimilar characteristics. For this initial study we use a variant of the well-known Super Mario Bros game. We build support vector machine models of reported believability based on gameplay and level features extracted from the videos. The highest-performing model predicts the perceived believability of a character with an accuracy of 73.31% on average, and implies a direct relationship between level features and player believability.
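A small sketch of the modelling step, assuming numeric gameplay and level features per video; the feature names, synthetic labels and RBF-kernel SVM below are illustrative stand-ins for the paper's crowdsourced annotations and models:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Columns (assumed): [gap_count, enemy_count, jumps_per_second, deaths, idle_time_ratio]
X = rng.random((200, 5))
# Toy rule standing in for human annotations: fewer idle pauses and moderate
# jumping read as "more believable".
y = ((X[:, 4] < 0.5) & (X[:, 2] > 0.3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")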


International Conference on Human-Computer Interaction | 2013

Dialog Systems and Their Inputs

Darren Scerri; Alexiei Dingli

One of the main limitations of existing domain-independent conversational agents is that their general and linguistic knowledge is limited to what the agents' developers explicitly defined. Therefore, a system which analyses user input at a deeper level of abstraction and backs its knowledge with common-sense information will be capable of providing more adequate responses, which in turn results in a better overall user experience. From this premise, a framework was proposed, and a working prototype was implemented upon it. The prototype makes use of various natural language processing tools, online and offline knowledge bases, and other information sources, enabling it to comprehend and construct relevant responses.
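A toy sketch of the pipeline shape described above (shallow analysis, knowledge lookup, response construction); the tiny in-memory dictionary stands in for the online and offline knowledge bases the prototype actually consults:

import re

KNOWLEDGE_BASE = {
    "dog": "a domesticated animal often kept as a pet",
    "python": "a programming language and also a large snake",
}


def analyse(utterance: str) -> list[str]:
    """Very shallow 'understanding': extract candidate topic words."""
    return re.findall(r"[a-z]+", utterance.lower())


def respond(utterance: str) -> str:
    # Look up each topic word in the knowledge base and build a reply from it.
    for token in analyse(utterance):
        if token in KNOWLEDGE_BASE:
            return f"A {token} is {KNOWLEDGE_BASE[token]}. What would you like to know about it?"
    return "Tell me more about that."


print(respond("What do you think about my dog?"))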


ICMMI | 2011

Home butler: creating a virtual home assistant

Alexiei Dingli; Stefan Lia

Virtual butlers, or virtual companions, try to imitate the behaviour of human beings in a believable way. They interact with the user through speech, understand spoken requests, are able to converse with the user, and show some form of emotion and personality. Virtual companions are also able to remember past conversations, and build some sensible knowledge about the user. One major problem with virtual companions is the need to manually create dialogues. We shall introduce a system which automatically creates dialogues using television series scripts.
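As an illustration of mining dialogue from scripts, the sketch below pairs consecutive lines by different speakers as stimulus and response and replies with the response whose stimulus best overlaps the user's input; the script excerpt and matching scheme are invented for the example:

SCRIPT = [
    ("ALICE", "Good morning, did you sleep well?"),
    ("BOB", "I did, thank you. What's for breakfast?"),
    ("ALICE", "There are eggs and toast in the kitchen."),
]

# Build stimulus -> response pairs from consecutive turns by different speakers.
pairs = [
    (SCRIPT[i][1], SCRIPT[i + 1][1])
    for i in range(len(SCRIPT) - 1)
    if SCRIPT[i][0] != SCRIPT[i + 1][0]
]


def reply(user_input: str) -> str:
    words = set(user_input.lower().split())

    def overlap(pair):
        # Number of words shared between the user's input and a stimulus line.
        return len(words & set(pair[0].lower().split()))

    stimulus, response = max(pairs, key=overlap)
    return response if overlap((stimulus, response)) > 0 else "I'm not sure what to say."


print(reply("Did you sleep well?"))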


International Cross-Domain Conference for Machine Learning and Knowledge Extraction | 2018

Recognition of Handwritten Characters Using Google Fonts and Freeman Chain Codes

Alexiei Dingli; Mark Bugeja; Dylan Seychell; Simon Mercieca

In this study, a unique dataset of a scanned seventeenth-century manuscript is presented which up to now has never been read or analysed. The aim of this research is to transcribe this dataset into machine-readable text. The approach used in this study is able to convert the document image without any prior knowledge of the text. In fact, the training set used in this study is a synthetic dataset built on the Google Fonts database. A feed-forward deep neural network is trained on a set of different features extracted from the Google Fonts character images. Well-established features such as the ratio of character width to height, pixel count, and the Freeman Chain Code are used, with the latter being normalised using Fast Fourier normalisation, which has yielded excellent results in other areas but has never been used in handwritten character recognition. The final results show that this particular Freeman Chain Code feature normalisation yielded the best results, achieving an accuracy of 55.1%, which is three times higher than that of the standard Freeman Chain Code normalisation method.
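A sketch of the Freeman Chain Code feature with a Fourier-based normalisation, in the spirit of the description above rather than the paper's exact procedure: boundary moves are encoded as directions 0-7, and the magnitudes of a fixed number of FFT coefficients give a descriptor whose length no longer depends on the contour length. The direction ordering and toy boundary are assumptions:

import numpy as np

# 8-connected move (dx, dy) -> Freeman code (the ordering here is an assumption).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}


def freeman_chain_code(boundary):
    """Boundary: ordered list of (x, y) pixels traced around the character."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes


def fft_normalise(codes, n_coeffs=4):
    """Fixed-length descriptor from the chain code via FFT magnitudes."""
    spectrum = np.fft.fft(np.asarray(codes, dtype=float))
    mags = np.abs(spectrum)[1:n_coeffs + 1]         # drop the DC term
    return mags / (np.abs(spectrum[0]) + 1e-9)      # normalise by the DC magnitude


# Toy boundary: a small square traced clockwise.
boundary = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
codes = freeman_chain_code(boundary)
print("chain code:", codes)
print("normalised descriptor:", fft_normalise(codes).round(3))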


Archive | 2016

Multimedia Interfaces for People Visually Impaired

Alexiei Dingli; Isaac Mercieca

In our society, there is a substantial number of visually impaired individuals. However, many social mechanisms are not designed with these people in mind, making the development of electronic assistive tools essential for performing basic day-to-day activities. Due to the penetration and capabilities of mobile devices, such devices have become ideal candidates for designing solutions to aid the visually impaired. The objective of this research is to develop a multimedia user interface whose scope is to aid the visually challenged. We propose and design a product recognition system utilizing computer vision and machine learning techniques. Our system allows visually impaired individuals to identify products in grocery stores and supermarkets without any additional assistance, encouraging them to perform daily activities on their own and further promoting their independence within society. Our approach is composed of two main modules: one classifies grocery products using the unsupervised feature extraction offered by deep learning techniques, while the other recognizes products in an image using traditional handcrafted feature extraction algorithms. We considered multiple robust approaches to identify the one most suited to our task. Through evaluation, we determined that the best approach for classification is to fine-tune a convolutional neural network pre-trained on a larger dataset. We were successful in not only surpassing our baseline accuracy but also obtaining an accuracy of 63%.
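An illustrative sketch of the fine-tuning pattern that performed best above: a CNN pre-trained on a large dataset (torchvision's ResNet-18 here, as a stand-in for whatever backbone was used) gets a new classification head sized for a placeholder number of product classes, and only that head is trained in this toy step; the class count and random batch are assumptions, and the pre-trained weights are downloaded by torchvision:

import torch
import torch.nn as nn
from torchvision import models

NUM_PRODUCT_CLASSES = 25  # placeholder

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():          # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_PRODUCT_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for real images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_PRODUCT_CLASSES, (8,))
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"toy batch loss: {loss.item():.3f}")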
