
Publication


Featured research published by Andrey Kiselev.


Remote Sensing | 2016

Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

Martin Längkvist; Andrey Kiselev; Marjan Alirezaie; Amy Loutfi

The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.
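The paper's exact CNN architecture is not reproduced here. As a rough illustration of the per-pixel classification setup it describes (stacking multispectral bands with the DSM and predicting one of five land-cover classes per pixel), a minimal numpy sketch might look as follows; the linear per-pixel classifier stands in for the CNN, and all array sizes and weights are assumptions, not the authors' configuration:

```python
import numpy as np

CLASSES = ["vegetation", "ground", "roads", "buildings", "water"]

def classify_pixels(bands, dsm, weights, bias):
    """Toy per-pixel classifier: stack spectral bands with the DSM,
    score each pixel with a linear layer, and take the arg-max class.
    bands: (H, W, B) multispectral image, dsm: (H, W) surface model,
    weights: (B + 1, n_classes), bias: (n_classes,)."""
    features = np.concatenate([bands, dsm[..., None]], axis=-1)  # (H, W, B+1)
    scores = features @ weights + bias                           # (H, W, n_classes)
    return scores.argmax(axis=-1)                                # (H, W) label map

# Tiny synthetic example: a 2x2 image with 3 spectral bands.
rng = np.random.default_rng(0)
bands = rng.random((2, 2, 3))
dsm = rng.random((2, 2))
w = rng.random((4, len(CLASSES)))
b = np.zeros(len(CLASSES))
labels = classify_pixels(bands, dsm, w, b)
print(labels.shape)  # (2, 2)
```

In the paper, the CNN replaces the linear scoring step and the predicted low-level labels additionally feed a segmentation stage.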


Human-Robot Interaction | 2014

The effect of field of view on social interaction in mobile robotic telepresence systems

Andrey Kiselev; Annica Kristoffersson; Amy Loutfi

One goal of mobile robotic telepresence for social interaction is to design robotic units that are easy to operate for novice users and promote good interaction between people. This paper presents an exploratory study on the effect of camera orientation and field of view on the interaction between a remote and a local user. Our findings suggest that limiting the width of the field of view can lead to better interaction quality, as it encourages remote users to orient the robot towards local users.


Paladyn: Journal of Behavioral Robotics | 2013

Using augmented reality to improve usability of the user interface for driving a telepresence robot

Giovanni Mosiello; Andrey Kiselev; Amy Loutfi

Mobile Robotic Telepresence (MRP) helps people to communicate in natural ways despite being physically located in different parts of the world. User interfaces of such systems are as critical as the design and functionality of the robot itself for creating the conditions for natural interaction. This article presents an exploratory study analysing different robot teleoperation interfaces. The goals of this paper are to investigate the possible effect of using augmented reality as the means to drive a robot, to identify key factors of the user interface in order to improve the user experience through a driving interface, and to minimize interface familiarization time for non-experienced users. The study involved 23 participants whose robot driving attempts via different user interfaces were analysed. The results show that a user interface incorporating augmented reality resulted in a better driving experience.


Presence: Teleoperators & Virtual Environments | 2016

ExCITE project: A review of forty-two months of robotic telepresence technology evolution

Andrea Orlandini; Annica Kristoffersson; Lena Almquist; Patrik Björkman; Amedeo Cesta; Gabriella Cortellessa; Cipriano Galindo; Javier Gonzalez-Jimenez; Kalle Gustafsson; Andrey Kiselev; Amy Loutfi; Francisco Melendez; Malin Nilsson; Lasse Odens Hedman; Eleni Odontidou; J.R. Ruiz-Sarmiento; Mårten Scherlund; Lorenza Tiberio; Stephen Von Rump; Silvia Coradeschi

This article reports on the EU project ExCITE, with specific focus on the technical development of the telepresence platform over a period of 42 months. The aim of the project was to assess the robustness and validity of the mobile robotic telepresence (MRP) system Giraff as a means to support elderly people and to foster their social interaction and participation. Embracing the idea of user-centered product refinement, the robot was tested over long periods of time in real homes. As such, the system development was driven by a strong involvement of elderly people and their caregivers, but also by technical challenges associated with deploying the robot in real-world contexts. The result of the 42-month-long evaluation is a system suitable for use in homes rather than a generic system suited, for example, to office environments.


Robotics, Automation and Mechatronics | 2015

Evaluation of using semi-autonomy features in mobile robotic telepresence systems

Andrey Kiselev; Annica Kristoffersson; Francisco Melendez; Cipriano Galindo; Amy Loutfi; Javier Gonzalez-Jimenez; Silvia Coradeschi

Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be put on users if they are unfamiliar with using robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization, navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper we describe the implementation of these autonomous features along with a user evaluation study. The evaluation scenario focused on novice users' first experience with the system; importantly, it assumed that participants had as little prior information about the system as possible. Four different use cases were identified from the user behaviour analysis.
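The automated operations described above (navigating to named points in the home, docking at the charger) amount to mapping user-level commands to navigation goals. A minimal sketch of such a dispatcher, with an entirely hypothetical command syntax and waypoint map:

```python
# Hypothetical command set mirroring the semi-autonomous operations the
# paper describes: go to a named point in the home, or dock for charging.
WAYPOINTS = {"kitchen": (3.0, 1.5), "living_room": (0.5, 4.0)}  # assumed map
DOCK = (0.0, 0.0)  # assumed charging-station pose

def plan(command):
    """Map a user-level command to a 2-D navigation goal, sparing the
    remote driver from manual steering; unknown commands fall back to
    manual driving (None)."""
    if command == "dock":
        return DOCK
    if command.startswith("goto "):
        name = command.split(" ", 1)[1]
        if name in WAYPOINTS:
            return WAYPOINTS[name]
    return None

print(plan("goto kitchen"))  # (3.0, 1.5)
print(plan("dock"))          # (0.0, 0.0)
```

In a real system the returned goal would be handed to the robot's localization and path-planning stack rather than printed.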


AI & Society | 2011

Toward incorporating emotions with rationality into a communicative virtual agent

Andrey Kiselev; Benjamin Alexander Hacker; Thomas Wankerl; Niyaz Abdikeev; Toyoaki Nishida

This paper addresses the problem of human–computer interaction when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed, and a model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of information processing. The SAIBA framework is extended, and the model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking, behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented: the first uses an implicit implementation of emotions; the second has an explicit realization of a three-layered model of emotions, which is highly interconnected with the other components of the system. Details of the model with the implicit implementation of emotional behavior are shown, along with the evaluation methodology and results. A discussion of the extended agent model concludes the paper.


Human-Robot Interaction | 2014

Semi-autonomous cooperative driving for mobile robotic telepresence systems

Andrey Kiselev; Giovanni Mosiello; Annica Kristoffersson; Amy Loutfi

Mobile robotic telepresence (MRP) has been introduced to allow communication from remote locations. Modern MRP systems offer rich capabilities for human-human interaction. However, simply driving a telepresence robot can become a burden, especially for novice users, leaving no room for interaction at all. In this video we introduce a project which aims to incorporate advanced robotic algorithms into manned telepresence robots in a natural way, allowing human-robot cooperation for safe driving. It also shows a first implementation of cooperative driving based on extracting a safe drivable area in real time from the image stream received from the robot.


European Conference on Computer Vision | 2014

Combining Semi-autonomous Navigation with Manned Behaviour in a Cooperative Driving System for Mobile Robotic Telepresence

Andrey Kiselev; Annica Kristoffersson; Amy Loutfi

This paper presents an image-based cooperative driving system for a telepresence robot which allows safe operation in indoor environments and is meant to minimize the burden on novice users operating the robot. The paper focuses on one emerging class of telepresence robot, namely mobile remote presence systems for social interaction. Such systems bring new opportunities for applications in healthcare and elderly care by allowing caregivers to communicate with patients and the elderly from remote locations. However, using such systems can be a difficult task, particularly for caregivers without proper training. The paper presents a first implementation of a vision-based cooperative driving enhancement to a telepresence robot. A preliminary evaluation in a laboratory environment is presented.
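The papers describe extracting a safe drivable area from the robot's camera stream and letting it veto unsafe manual commands. The authors' vision pipeline is not public; a deliberately simplified numpy sketch of the idea (brightness thresholding as a stand-in for floor detection, and a safety check on the commanded forward path) might look like this. All thresholds and image sizes are assumptions:

```python
import numpy as np

def drivable_mask(gray, floor_threshold=0.5):
    """Toy drivable-area extraction: treat bright pixels in the lower
    image half (where the floor appears) as drivable."""
    h = gray.shape[0]
    mask = np.zeros_like(gray, dtype=bool)
    mask[h // 2:] = gray[h // 2:] > floor_threshold
    return mask

def forward_is_safe(mask, min_fraction=0.8):
    """Allow forward motion only if most of the central image column
    in the lower half is classified as drivable."""
    h, w = mask.shape
    column = mask[h // 2:, w // 2]
    return column.mean() >= min_fraction

frame = np.full((8, 8), 0.9)   # bright synthetic floor everywhere
frame[6:, 3:5] = 0.1           # dark obstacle straight ahead
mask = drivable_mask(frame)
print(forward_is_safe(mask))   # obstacle blocks the path -> False
```

A real implementation would use learned or geometric floor segmentation and take the robot's footprint and turning commands into account, but the cooperative-driving principle is the same: the operator steers, and the vision system constrains motion to the extracted safe area.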


Archive | 2010

Integrating the Emotional Intelligence into the Virtual Technical Support Engineer

Andrey Kiselev; Benjamin Alexander Hacker; Thomas Wankerl; Yashimasa Ohmoto; Niyaz Abdikeev; Toyoaki Nishida

This chapter addresses the problem of modeling emotions, which is important for integrating emotional intelligence into virtual agents. In this chapter we present a virtual agent which plays the role of a technical support engineer and interacts with a human user to answer questions about a complex device. We give some theoretical background and basic models for formalizing emotions, along with comments about their usability in our work. We present a multi-layered system for processing emotional information, developed according to modern theories, which allows us to realize concurrent and mutually causal processing of different types of emotional information. We also describe the evaluation methodology we use.


Human-Robot Interaction | 2018

Enhancing Social Human-Robot Interaction with Deep Reinforcement Learning

Neziha Akalin; Andrey Kiselev; Annica Kristoffersson; Amy Loutfi

This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve ...
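The abstract is truncated, so the authors' actual learning setup is unknown; as a generic illustration of the reinforcement-learning principle it names (the robot adapts its behavior from an interaction reward), here is a minimal tabular Q-learning sketch. The behavior set, reward signal, and hyperparameters are all invented for the example:

```python
import random

# Toy Q-learning sketch: the robot picks among social behaviors and
# updates its value estimates from a scalar engagement reward.
ACTIONS = ["greet", "tell_story", "wait"]  # hypothetical behavior set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # assumed hyperparameters

q = {a: 0.0 for a in ACTIONS}  # single-state case, for brevity

def choose_action():
    if random.random() < EPSILON:
        return random.choice(ACTIONS)  # explore
    return max(q, key=q.get)           # exploit

def update(action, reward):
    # Q(a) += alpha * (r + gamma * max_a' Q(a') - Q(a))
    best_next = max(q.values())
    q[action] += ALPHA * (reward + GAMMA * best_next - q[action])

random.seed(1)
for _ in range(200):
    a = choose_action()
    reward = 1.0 if a == "greet" else 0.0  # simulated engagement signal
    update(a, reward)
print(max(q, key=q.get))  # learning favors "greet"
```

The published work uses deep reinforcement learning, i.e. the table `q` would be replaced by a neural network over a richer interaction state; this sketch only shows the underlying update rule.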
