Oskar Palinko
Istituto Italiano di Tecnologia
Publication
Featured research published by Oskar Palinko.
Human-Robot Interaction | 2015
Alessandra Sciutti; Lars Schillingmann; Oskar Palinko; Yukie Nagai; Giulio Sandini
In this paper we describe a human-robot interaction scenario designed to evaluate the role of gaze as an implicit signal for turn-taking in a robotic teaching context. In particular, we propose a protocol to assess the impact of different timing strategies in a common teaching task (English dictation). The task is designed to compare the effects of a teaching behavior whose timing depends on the student's gaze with the more standard fixed-timing approach. An initial validation indicates that this scenario could represent a functional tool for investigating the positive and negative impacts that personalized timing might have on different subjects.
Intelligent Robots and Systems | 2016
Oskar Palinko; Francesco Rea; Giulio Sandini; Alessandra Sciutti
Robots are poised to become our everyday companions in the near future. Still, many hurdles need to be cleared to achieve this goal. One of them is that robots are still unable to perceive some important communication cues naturally used by humans, e.g. gaze. In the recent past, eye gaze in robot perception was substituted by its proxy, head orientation, and this approach is still adopted in many applications today. In this paper we introduce performance improvements to an eye tracking system we previously developed and use it to explore whether this approximation is appropriate. More precisely, we compare the impact of eye- and head-based gaze estimation in a human-robot interaction experiment with the iCub robot and naïve subjects. We find that the possibility to exploit the richer information carried by eye gaze has a significant impact on the interaction. As a result, our eye tracking system allows for more efficient human-robot collaboration than a comparable head tracking approach, according to both quantitative measures and the subjective evaluation of the human participants.
Robot and Human Interactive Communication | 2015
Oskar Palinko; Alessandra Sciutti; Lars Schillingmann; Francesco Rea; Yukie Nagai; Giulio Sandini
It is generally accepted that a robot should exhibit contingent behavior, adaptable to the needs of each individual user, to achieve a more natural and pleasant interaction. In this paper we have evaluated whether this general rule also applies when the robot plays a leading role and needs to motivate the human partner to keep a certain pace, as during training or teaching. Among humans too, in schools or factories, structured interaction is often guided by a predefined rhythm, which facilitates the coordination of the partners involved and is thought to maximize their efficiency. On the other hand, a pre-established timing forces all participants to adjust their natural speed to the external, sometimes inappropriate, timing requirement. Where does the optimal trade-off between these two paradigms lie? We have addressed this question in a dictation scenario where the humanoid robot iCub plays the role of a teacher and dictates brief English or Italian sentences to the participants. In particular, we compare a condition in which the dictation is performed at a fixed timing with a condition in which iCub monitors subjects' gaze to adjust its dictation speed. The results are discussed both in terms of participants' subjective evaluation and their objective performance, highlighting the advantages and drawbacks of contingent robot behavior.
IEEE-RAS International Conference on Humanoid Robots | 2015
Oskar Palinko; Francesco Rea; Giulio Sandini; Alessandra Sciutti
Humans use eye gaze in their daily interaction with other humans. Humanoid robots, on the other hand, have not yet taken full advantage of this form of implicit communication. In this paper we present a passive monocular gaze tracking system implemented on the iCub humanoid robot. The validation of the system proved that it is a viable low-cost, calibration-free gaze tracking solution for humanoid platforms, with a mean absolute error of about 5 degrees on horizontal angle estimates. We also demonstrated the applicability of our system to human-robot collaborative tasks, showing that the eye gaze reading ability can enable successful implicit communication between humans and the robot. Finally, in the conclusion we give generic guidelines on how to improve our system and discuss some potential applications of gaze estimation for humanoid robots.
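To make the idea of monocular gaze tracking concrete, here is a minimal sketch of one common simplification: estimating the horizontal gaze angle from the pupil's normalized position between the two eye corners in the image. This linear pupil-offset model, the function name, and the angular range are illustrative assumptions, not the method actually used in the paper.

```python
def horizontal_gaze_angle(pupil_x, inner_corner_x, outer_corner_x,
                          max_angle_deg=45.0):
    """Estimate horizontal gaze angle in degrees from image-space
    eye landmarks. A linear pupil-offset model is a simplification;
    the paper's actual geometry is not reproduced here."""
    eye_width = outer_corner_x - inner_corner_x
    if eye_width == 0:
        raise ValueError("degenerate eye region: corners coincide")
    # t in [0, 1]: 0 = pupil at inner corner, 1 = at outer corner
    t = (pupil_x - inner_corner_x) / eye_width
    # Map [0, 1] linearly to [-max_angle_deg, +max_angle_deg];
    # t = 0.5 (pupil centered) means looking straight ahead.
    return (t - 0.5) * 2.0 * max_angle_deg
```

With a ~5 degree mean absolute error, as reported, even such a coarse horizontal estimate can suffice to distinguish which of a few objects on a table a person is fixating.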
IEEE-RAS International Conference on Humanoid Robots | 2014
Oskar Palinko; Alessandra Sciutti; Laura Patanè; Francesco Rea; Francesco Nori; Giulio Sandini
Passing an object to someone else is one of the simplest collaborative actions. However, it entails a high degree of coordination between the two partners. The efficiency of the result relies heavily on the non-verbal communication associated with the passer's motion. The kinematic properties of the movement convey to the receiver implicit information about when, where and what is going to be passed. In this paper we focus on the what, proposing a simple architecture which allows a humanoid robot to autonomously plan lifting movements which implicitly inform the human partner of the weight of the lifted object. We implemented the system on the humanoid robot iCub in a robot waiter scenario and experimentally verified the readability of the robot motion. We suggest that the implementation of such human-aware motion planning could ensure seamless and natural interaction with non-expert users, leading in turn to safer and more efficient object passing.
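The abstract above (and the related developmental study below) rests on the observation that lifting speed can implicitly signal object weight: heavier loads are lifted more slowly. A minimal sketch of such a weight-to-duration mapping follows; the linear form, the function name, and all constants are illustrative assumptions, not the parameters used on iCub.

```python
def lift_duration(weight_kg, base_duration_s=1.0, slowdown_s_per_kg=0.8,
                  min_weight_kg=0.0, max_weight_kg=2.0):
    """Return a lift duration that grows with object weight, so an
    observer can infer the load from the motion's speed. Linear
    mapping and constants are illustrative, not from the paper."""
    # Clamp to the range of weights the robot is expected to handle.
    w = min(max(weight_kg, min_weight_kg), max_weight_kg)
    # Heavier object -> longer (slower) lift.
    return base_duration_s + slowdown_s_per_kg * w
```

A motion planner could then stretch a fixed lifting trajectory over the returned duration, so that the same path is traversed more slowly for heavier objects.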
IEEE-RAS International Conference on Humanoid Robots | 2016
Oskar Palinko; Alessandra Sciutti; Yujin Wakita; Yoshio Matsumoto; Giulio Sandini
Gaze plays an important role in everyday communication between humans. Eyes are not only used to perceive information during interaction, but also to control it. Humanoid robots, on the other hand, are not yet very proficient in understanding and using gaze. In our study we enabled two humanoid robots to perceive and exert gaze actions. We then performed a pilot experiment with the two robots playing the "Wink Murder" game with human players. We demonstrate that the designed framework allows the robots to complete the game successfully, validating the efficacy of our gaze tracking system. Moreover, human participants exhibited a rich variety of natural behaviors in the game, suggesting that it could represent a valid scenario for a more in-depth investigation of human-humanoid interaction.
International Conference on Development and Learning | 2014
Alessandra Sciutti; Laura Patanè; Oskar Palinko; Francesco Nori; Giulio Sandini
Already from the first years of life, humans develop the ability to understand the actions and intentions of others, and they naturally use this skill to help others [1]. It would be important for the future of human-robot collaboration if children could easily generalize this understanding to robotic agents. In this paper we have investigated whether this is possible, at least in the context of inferring object weight from the observation of a humanoid action. Our results show that children of different ages need a different degree of human-likeness in robot motion to be able to infer which weight is being lifted. Indeed, from 10 years of age on, even non-humanlike trajectories can communicate the lifted load, if the lifting speed is appropriately varied as a function of weight. Conversely, younger children are significantly better at judging weight only in the presence of a human-like trajectory. Hence, robots should adapt even the basic properties of their motion to their users, taking into account that children's perception progressively changes with age.
Human-Agent Interaction | 2014
Oskar Palinko; Alessandra Sciutti; Francesco Rea; Giulio Sandini
Passing an object between two humans is a very natural and seamless operation, mainly thanks to non-verbal cues which facilitate the process. Just from action observation, humans can easily anticipate where and when a passing movement will end and how heavy the transported object is. But how could this natural understanding be ported to non-human agents? We introduce a simple robotic architecture that enables the iCub humanoid robot to visually recognize the weight of an object and select a lift-to-pass motion which implicitly communicates this information to the action partner. In this work we mainly focus on building and training the procedural memory module needed to store the association between the mass of an object and its visual appearance, and we propose how such a model can be used to subsequently select communicative lifting motions.
Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016
Oskar Palinko; Francesco Rea; Giulio Sandini; Alessandra Sciutti
Humans use eye gaze in their daily interaction with other humans. Humanoid robots, on the other hand, have not yet taken full advantage of this form of implicit communication. We designed a passive monocular gaze tracking system implemented on the iCub humanoid robot [Metta et al. 2008]. The validation of the system proved that it is a viable low-cost, calibration-free gaze tracking solution for humanoid platforms, with a mean absolute error of about 5 degrees on horizontal angle estimates. We also demonstrated the applicability of our system to human-robot collaborative tasks, showing that the eye gaze reading ability can enable successful implicit communication between humans and the robot.
Human-Agent Interaction | 2014
Oskar Palinko; Alessandra Sciutti; Francesco Rea; Giulio Sandini
Knowing where a person is looking is an important parameter of every human-human interaction. Detecting a person's gaze could significantly improve the interaction capabilities of today's robotic agents. But many robots' visual systems are limited by data bandwidth and optical hardware. We propose a low-cost, high-definition pan/tilt/zoom active vision system that could significantly improve a robot's eye tracking capabilities. We tested the proposed system for improving mutual gaze detection in a human-robot interaction scenario and found significant improvements compared to systems without zoom capability.
Collaboration
Dive into Oskar Palinko's collaborations.
National Institute of Advanced Industrial Science and Technology