Publication


Featured research published by Aleksandar Jevtic.


IEEE Transactions on Human-Machine Systems | 2015

Comparison of Interaction Modalities for Mobile Indoor Robot Guidance: Direct Physical Interaction, Person Following, and Pointing Control

Aleksandar Jevtic; Guillaume Doisy; Yisrael Parmet; Yael Edan

Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction, requiring the human user to push the robot in order to displace it. The second and third interaction modalities exploit 3-D vision-based human-skeleton tracking, allowing the user to guide the robot by either walking in front of it or by pointing toward a desired location. In the first task, the participants were asked to guide the robot between different rooms in a simulated physical apartment, requiring rough movement of the robot through designated areas. The second task evaluated robot guidance in the same environment through a set of waypoints, which required accurate movements. The three interaction modalities were implemented on a generic differential-drive mobile platform equipped with a pan-tilt system and a Kinect camera. Task completion time and accuracy were used as metrics to assess the users' performance, while the NASA-TLX questionnaire was used to evaluate the users' workload. A study with 24 participants indicated that the choice of interaction modality had a significant effect on completion time (F(2, 61) = 84.874, p < 0.001), accuracy (F(2, 29) = 4.937, p = 0.016), and workload (F(2, 68) = 11.948, p < 0.001). Direct physical interaction required less time, provided greater accuracy, and imposed a lower workload than the two contactless interaction modalities. Between the two contactless interaction modalities, the person-following modality was systematically better than the pointing-control one: the participants completed the tasks faster and with less workload.
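
The reported F-statistics come from an analysis of variance over the three modalities. As a rough illustration of that kind of test, here is a minimal one-way ANOVA sketch in Python; the completion-time arrays are invented placeholders, not the study's data.

```python
# Minimal sketch: one-way ANOVA over completion times for the three
# interaction modalities. The arrays below are hypothetical placeholders.
from scipy import stats

physical  = [41.2, 38.5, 44.1, 39.8, 42.3]   # direct physical interaction (s)
following = [55.7, 60.1, 58.4, 57.9, 61.2]   # person following (s)
pointing  = [72.3, 69.8, 75.1, 70.4, 74.6]   # pointing control (s)

f_stat, p_value = stats.f_oneway(physical, following, pointing)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```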


Human-Robot Interaction | 2013

Spatially unconstrained, gesture-based human-robot interaction

Guillaume Doisy; Aleksandar Jevtic; Sasa Bodiroza

For a human-robot interaction to take place, a robot needs to perceive humans. The space in which a robot can perceive humans is constrained by the limitations of its sensors. These restrictions can be circumvented by the use of external sensors, as in intelligent environments; otherwise, humans have to ensure that they can be perceived. With the robotic platform presented here, the roles are reversed and the robot autonomously ensures that the human is within its perceived area. This is achieved by a combination of hardware and algorithms capable of autonomously tracking the person, estimating their position, and following them, while recognizing their gestures and navigating through the environment.
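
As an illustration of the person-following behavior described above, here is a minimal proportional-control sketch; the gains, target distance, and tracker interface are assumptions, not the platform's actual controller.

```python
import math

# Minimal person-following sketch driven by a skeleton tracker's output.
# Gains, target distance, and the interface are hypothetical assumptions.
TARGET_DIST = 1.2        # desired robot-person distance (m)
K_LIN, K_ANG = 0.6, 1.5  # proportional gains

def follow_step(person_x, person_y):
    """Return (linear, angular) velocity commands from the person's
    position in the robot frame (x forward, y left, metres)."""
    distance = math.hypot(person_x, person_y)
    bearing = math.atan2(person_y, person_x)
    linear = K_LIN * (distance - TARGET_DIST)  # close the distance gap
    angular = K_ANG * bearing                  # turn toward the person
    return max(0.0, linear), angular           # never drive backwards

print(follow_step(2.0, 0.4))
```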


Frontiers in Robotics and AI | 2017

A Quantitative Analysis of Dressing Dynamics for Robotic Dressing Assistance

Greg Chance; Aleksandar Jevtic; Praminda Caleb-Solly; Sanja Dogramadzi

Assistive robots have great potential to address issues related to an ageing population and an increased demand for caregiving. Successful deployment of robots working in close proximity with people requires consideration of both safety and human-robot interaction. One of the established activities of daily living where robots could play an assistive role is dressing. Using the correct force profile for robot control will be essential in this application of human-robot interaction, requiring careful exploration of factors related to the user's pose and the type of garments involved. In this paper, a Baxter robot was used to dress a jacket onto a mannequin and onto human participants, considering several combinations of user pose and clothing type (base layers), whilst recording dynamic data from the robot, a load cell, and an IMU. We also report on the suitability of these sensors for identifying dressing errors, e.g., fabric snagging. Data were analyzed by comparing the overlap of confidence intervals to determine the sensors' sensitivity to dressing. We expand the analysis to include classification techniques such as decision trees and support vector machines using k-fold cross-validation. The 6-axis load cell successfully discriminated between clothing types, with predictive model accuracies between 72% and 97%. Used independently, the IMU and Baxter sensors were insufficient to discriminate garment types, with the IMU showing 40-72% accuracy, but when used in combination this pair of sensors achieved an accuracy similar to the more expensive load cell (98%). When observing dressing errors (snagging), Baxter's sensors and the IMU data demonstrated poor sensitivity, but applying machine learning methods resulted in models with high predictive accuracy and low false-negative rates (≤5%). The results show that the load cell could be used independently for this application with good accuracy, but a combination of the lower-cost sensors could also be used without a significant loss in precision; this will be a key element in the robot control architecture for safe human-robot interaction.
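
To make the classification pipeline concrete, the sketch below runs an SVM with k-fold cross-validation, as named in the abstract; the feature matrix and labels are synthetic stand-ins, and scikit-learn is an assumed tooling choice.

```python
# Minimal sketch of garment-type classification: an SVM evaluated with
# k-fold cross-validation. The features and labels are synthetic
# stand-ins, not the study's sensor recordings.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))      # e.g. 6-axis load-cell features
y = rng.integers(0, 3, size=120)   # three hypothetical garment classes

scores = cross_val_score(SVC(kernel="rbf"), X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```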


Human-Robot Interaction | 2014

Human-robot interaction through 3D vision and force control

Aleksandar Jevtic; Guillaume Doisy; Saša Bodiroža; Yael Edan; Verena V. Hafner

The video shows the interaction with a customized Kompai robot. The robot consists of Robosoft's robuLAB10 platform, a tablet PC, and a Microsoft Kinect camera mounted on a pan-tilt system. A visual control algorithm provides continuous person tracking. The newly developed robot features include gesture recognition, person following, navigation with pointing, and force control, which were integrated with Robosoft's robuBOX SDK and the Karto SLAM algorithms. The video demonstrates all the features and puts the robot to use in an everyday home scenario.


Sensors | 2018

A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation

Itshak Tkach; Aleksandar Jevtic; Shimon Y. Nof; Yael Edan

Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks whose locations and priorities are unknown a priori is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA), developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm-intelligence approach that minimizes task detection times. Sensors are allocated to tasks based on the sensors' performance, the tasks' priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to the Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based, which were fitted for the specific task. Simulation analyses revealed that the MDBA achieved a statistically significant performance improvement of 7% over the DBA, the second-best algorithm, and of 19% over the greedy algorithm, the worst, indicating its suitability for heterogeneous multi-sensor systems.
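
The abstract describes a decentralized, probabilistic allocation rule weighing performance, priority, and distance. A minimal sketch of such a rule follows; the utility form and exponents are assumptions, not the published MDBA definition.

```python
import random

# Minimal sketch of a decentralized probabilistic allocation rule: each
# sensor picks a task with probability that grows with its performance
# on that task and the task's priority, and decays with distance. The
# exact utility form and exponents are assumptions.
ALPHA, BETA = 1.0, 1.0

def choose_task(sensor, tasks):
    """sensor: dict with 'location' and per-task 'performance';
    tasks: list of dicts with 'priority' and 'location'."""
    def utility(i, t):
        dist = max(1e-6, abs(sensor["location"] - t["location"]))
        return (sensor["performance"][i] * t["priority"]) ** ALPHA / dist ** BETA

    weights = [utility(i, t) for i, t in enumerate(tasks)]
    total = sum(weights)
    return random.choices(range(len(tasks)), [w / total for w in weights])[0]

sensor = {"location": 0.0, "performance": [0.9, 0.6]}
tasks = [{"priority": 2.0, "location": 3.0}, {"priority": 1.0, "location": 1.0}]
print(choose_task(sensor, tasks))
```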


International Conference on Agents and Artificial Intelligence | 2017

On interaction quality in human-robot interaction

Suna Bensch; Aleksandar Jevtic; Thomas Hellström

In many complex robotics systems, interaction takes place in all directions between human, robot, and environment. Performance of such a system depends on this interaction, and a proper evaluation ...


International Conference on Social Robotics | 2016

User Evaluation of an Interactive Learning Framework for Single-Arm and Dual-Arm Robots

Aleksandar Jevtic; Adrià Colomé; Guillem Alenyà; Carme Torras

Social robots are expected to adapt to their users and, like their human counterparts, learn from the interaction. In our previous work, we proposed an interactive learning framework that enables a user to intervene and modify a segment of the robot arm trajectory. The framework uses gesture teleoperation and reinforcement learning to learn new motions. In the current work, we compared the user experience with the proposed framework implemented on single-arm and dual-arm Barrett 7-DOF WAM robots equipped with a Microsoft Kinect camera for user tracking and gesture recognition. User performance and workload were measured in a series of trials with two groups of six participants, who used the two robot settings in different orders for counterbalancing. The experimental results showed that, for the same task, users required less time and produced shorter robot trajectories with the single-arm robot than with the dual-arm robot. The results also showed that the users who performed the task with the single-arm robot first experienced considerably less workload in performing the task with the dual-arm robot, while achieving a higher task success rate in a shorter time.
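
The framework's reinforcement-learning step is not detailed in the abstract; as one plausible reading, the sketch below applies a PI2-style reward-weighted average to a trajectory segment. The update rule, shapes, and rewards are all assumptions.

```python
import numpy as np

# Hypothetical sketch: candidate variations of a trajectory segment are
# averaged, weighted by reward (PI2-style exponentiated weighting is
# assumed here; the framework's actual update rule may differ).
def update_segment(rollouts, rewards, temperature=1.0):
    """rollouts: (K, T, dof) perturbed segment copies; rewards: (K,)
    scores, e.g. from user feedback or task success."""
    w = np.exp((rewards - rewards.max()) / temperature)
    w /= w.sum()
    return np.tensordot(w, rollouts, axes=1)   # reward-weighted average

T, dof, K = 50, 7, 8                           # 7-DOF arm, 8 rollouts
rollouts = np.random.default_rng(1).normal(0, 0.05, (K, T, dof))
rewards = np.random.default_rng(2).uniform(size=K)
print(update_segment(rollouts, rewards).shape)  # (50, 7)
```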


Systems, Man and Cybernetics | 2013

Automatic Multi-sensor Task Allocation Using Modified Distributed Bees Algorithm

Itshak Tkach; Yael Edan; Aleksandar Jevtic; Shimon Y. Nof

In this paper, we propose a Modified Distributed Bees Algorithm (MDBA) for multi-sensor task allocation in a supply chain security scenario. The MDBA assigns sensors to upcoming tasks using a decentralized, probabilistic approach to maximize information gain while minimizing costs. Tasks are allocated based on the sensors' performance, the tasks' priorities, and the mutual sensor-task distances. A simulation analysis compared different algorithms and indicated a 15% performance improvement of the MDBA with respect to the second-best algorithm.


Human-Robot Interaction | 2018

Adaptable Multimodal Interaction Framework for Robot-Assisted Cognitive Training

Aleksandar Taranović; Aleksandar Jevtic; Carme Torras

The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior that may be crucial for wider robot acceptance. The focus of this work is on robot-assisted cognitive training of patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way the robot's actions are executed, according to the user input. The actions can be performed using any of the following modalities: speech, gesture, and display, or their combination. The choice of modalities depends on the availability of the required resources. The memory state of the user was implemented as a hidden Markov model and used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.
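
To illustrate the hidden-Markov-model component, the sketch below runs a forward-filter update over a two-state memory model and maps the belief to an assistance level; the states, matrices, and threshold are hypothetical, not the paper's parameters.

```python
import numpy as np

# Hypothetical two-state HMM over the user's memory state, with a
# forward-filter update and a threshold mapping belief to assistance.
STATES = ["remembers", "forgets"]
A = np.array([[0.9, 0.1],      # transition probabilities
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],      # P(observation | state);
              [0.3, 0.7]])     # obs 0 = correct move, 1 = error

def forward_step(belief, obs):
    """One HMM forward-filter update of the belief over memory states."""
    predicted = belief @ A
    updated = predicted * B[:, obs]
    return updated / updated.sum()

belief = np.array([0.5, 0.5])
for obs in [0, 1, 1]:          # one correct move, then two errors
    belief = forward_step(belief, obs)

assistance = "high" if belief[1] > 0.6 else "low"
print(belief, assistance)
```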


Human-Robot Interaction | 2018

Resource-Based Modality Selection in Robot-Assisted Cognitive Training

Aleksandar Taranović; Aleksandar Jevtic; Joan Hernández-Farigola; Natalia Tantinya; Carla Abdelnour; Carme Torras

The majority of socially assistive robots interact with their users using multiple modalities. Multimodality is an important feature that can enable them to adapt to the user's behavior and the environment. In this work, we propose a resource-based modality-selection algorithm that adjusts the robot's use of interaction modalities according to the available resources, keeping the interaction with the user comfortable and safe. For example, the robot should not enter the board space while the user is occupying it, or speak while the user is speaking. We performed a pilot study in which the robot acted as a caregiver in cognitive training. We compared a system with the proposed algorithm to a baseline system that uses all modalities for all actions unconditionally. The results of the study suggest that the reduced complexity of interaction does not significantly affect the user experience, and may improve task performance.
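
A minimal sketch of such a resource-based selection rule, assuming a hypothetical mapping from modalities to the resources they occupy (the paper's actual resource model may differ):

```python
# Hypothetical mapping from interaction modalities to the resources
# each one occupies; names are illustrative assumptions.
MODALITY_RESOURCES = {
    "speech":  {"audio_channel"},
    "gesture": {"board_space", "robot_arm"},
    "display": {"screen"},
}

def select_modalities(requested, occupied):
    """Keep each requested modality only if none of its resources are
    occupied (e.g. the user is speaking or reaching over the board)."""
    return [m for m in requested
            if MODALITY_RESOURCES[m].isdisjoint(occupied)]

# User is speaking and occupying the board space:
print(select_modalities(["speech", "gesture", "display"],
                        occupied={"audio_channel", "board_space"}))
# -> ['display']
```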

Collaboration


Dive into Aleksandar Jevtic's collaborations.

Top Co-Authors

Carme Torras (Spanish National Research Council)
Yael Edan (Ben-Gurion University of the Negev)
Guillem Alenyà (Spanish National Research Council)
Greg Chance (University of the West of England)
Praminda Caleb-Solly (University of the West of England)
Sanja Dogramadzi (University of the West of England)
Guillaume Doisy (Ben-Gurion University of the Negev)
Itshak Tkach (Ben-Gurion University of the Negev)
Adrià Colomé (Spanish National Research Council)
Aleksandar Taranović (Spanish National Research Council)