
Publication


Featured research published by Frank Broz.


Robot and Human Interactive Communication | 2012

Mutual gaze, personality, and familiarity: Dual eye-tracking during conversation

Frank Broz; Hagen Lehmann; Chrystopher L. Nehaniv; Kerstin Dautenhahn

Mutual gaze is an important aspect of face-to-face communication that arises from the interaction of the gaze behavior of two individuals. In this dual eye-tracking study, gaze data was collected from human conversational pairs with the goal of gaining insight into which characteristics of the conversation partners influence this behavior. We investigate the link between personality, familiarity, and mutual gaze. The results indicate that mutual gaze behavior depends on the characteristics of both partners rather than on either individual considered in isolation. We discuss the implications of these findings for the design of socially appropriate gaze controllers for robots that interact with people.


Robot and Human Interactive Communication | 2011

Designing POMDP models of socially situated tasks

Frank Broz; Illah R. Nourbakhsh; Reid G. Simmons

In this paper, a modelling approach is described that represents human-robot social interactions as partially observable Markov decision processes (POMDPs). In these POMDPs, the intention of the human is represented as an unobservable part of the state space, and the robot's own intentions are expressed through the rewards. The state transition structure for the models is created using action rules that capture the effects of the robot's actions, relate the human's behavior to their intentions, and describe the changing state of the environment. State transitions are modified using data from humans interacting with other humans. The policies obtained by solving these models are used to control a robot in a socially situated task with a human partner. These interactions are compared to those of human pairs performing the same task, demonstrating that this approach produces policies that exhibit natural and socially appropriate behavior.
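The hidden-intention idea above can be sketched minimally. The code below is an illustrative toy, not the authors' actual model: the two intents, the observation model, the rewards, and the myopic one-step policy are all invented assumptions (the paper solves a full POMDP rather than acting greedily).

```python
# Toy sketch of a POMDP-style socially situated task: the human's intention
# is hidden, the robot tracks a belief over it with a Bayes filter, and its
# own intentions are encoded as rewards. All numbers are illustrative.

HUMAN_INTENTS = ["wants_help", "wants_space"]

# P(observation | intent): observed human behavior relates to the hidden intent.
OBS_MODEL = {
    "approaches": {"wants_help": 0.8, "wants_space": 0.2},
    "retreats":   {"wants_help": 0.1, "wants_space": 0.9},
}

def belief_update(belief, observation):
    """Bayes filter over the hidden human intention (intent assumed static)."""
    unnormalized = {s: belief[s] * OBS_MODEL[observation][s] for s in HUMAN_INTENTS}
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

# The robot's intentions expressed as rewards over (action, true intent) pairs.
REWARD = {
    ("offer_help", "wants_help"):  1.0, ("offer_help", "wants_space"): -0.5,
    ("stay_back",  "wants_help"): -0.2, ("stay_back",  "wants_space"):  0.5,
}

def greedy_action(belief):
    """One-step (myopic) policy: maximize expected reward under the belief."""
    actions = ["offer_help", "stay_back"]
    return max(actions, key=lambda a: sum(belief[s] * REWARD[(a, s)] for s in HUMAN_INTENTS))

belief = {"wants_help": 0.5, "wants_space": 0.5}  # uniform prior
belief = belief_update(belief, "approaches")      # posterior: 0.8 / 0.2
action = greedy_action(belief)                    # "offer_help"
```

A real POMDP solver would instead optimize the long-run discounted reward over belief space; the Bayes-filter belief over the unobservable intent is the part this sketch shares with the approach described above.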


Systems, Man and Cybernetics | 2014

A web based multi-modal interface for elderly users of the robot-era multi-robot services

Alessandro G. Di Nuovo; Frank Broz; Tony Belpaeme; Angelo Cangelosi; Filippo Cavallo; Raffaele Esposito; Paolo Dario

In this paper we present the design and technical implementation of a web-based Multi-Modal User Interface (MMUI) tailored for elderly users of the robotic services developed by the EU FP7 Large-Scale Integration Project Robot-Era. The project partners are working to significantly enhance the performance and acceptability of technological services for ageing well by delivering a fully realized system based on the cooperation of multiple heterogeneous robots, supported by an Ambient Assisted Living environment. To this end, elderly users were involved in the definition of the services and in the design of the hardware and software of the robotic platforms from the first stages of the development process, as well as in real experimentation at two test sites. In particular, here we detail the interface software system for multi-modal elderly-robot interaction. The MMUI is designed to run on any device, including the touch-screen mobiles and tablets preferred by the elderly; this is achieved by integrating web-based solutions with the Robot-Era middleware and planner. Finally, we present some preliminary results of ongoing experiments, showing a successful usability evaluation by potential users, and discuss future directions for improving the proposed MMUI software system.


Intelligent Service Robotics | 2018

The multi-modal interface of Robot-Era multi-robot services tailored for the elderly

Alessandro G. Di Nuovo; Frank Broz; Ning Wang; Tony Belpaeme; Angelo Cangelosi; Ray Jones; Raffaele Esposito; Filippo Cavallo; Paolo Dario

Socially assistive robotic platforms are now a realistic option for the long-term care of ageing populations. Elderly users may benefit from many services provided by robots operating in different environments, such as assistance inside apartments, service in the shared facilities of buildings, or guidance outdoors. In this paper, we present the experience gained within the EU FP7 ROBOT-ERA project towards the objective of implementing an easy-to-use and acceptable service robotic system for the elderly. In particular, we detail the user-centred design and the experimental evaluation, in realistic environments, of a web-based multi-modal user interface tailored for elderly users of near-future multi-robot services. Experimental results demonstrate a positive evaluation of usability and willingness to use by elderly users, especially those less experienced with technological devices, who could benefit most from the adoption of robotic services. Further analyses showed how multi-modal interaction supports more flexible and natural elderly-robot interaction and makes its benefits clear to users, thereby increasing acceptability. Finally, we provide insights and lessons learned from the extensive experimentation, which, to the best of our knowledge, is one of the largest experimental evaluations of a multi-robot multi-service system to date.


Recent Advances in Nonlinear Speech Processing | 2016

A User-Centric Design of Service Robots Speech Interface for the Elderly

Ning Wang; Frank Broz; Alessandro G. Di Nuovo; Tony Belpaeme; Angelo Cangelosi

The elderly population in Europe has increased quickly and will keep growing in the coming years. To face the care challenges posed by the number of seniors living alone in their own homes, great efforts have been made to develop advanced robotic systems that can operate in intelligent environments, work in real conditions, and cooperate with elderly end-users to support independent living. In this paper, we describe the design and implementation of a user-centric speech interface tailored for the elderly. The speech user interface, incorporating state-of-the-art speech technologies, is fully integrated into the application contexts and facilitates the realization of the robotic services in different scenarios. Contextual information is taken into account in the speech recognition to reduce system complexity and to improve the recognition success rate. Within the framework of the EU FP7 Robot-Era Project, the usability of the speech user interface on a multi-robot service platform was evaluated by elderly users recruited in Italy and Sweden through questionnaire interviews. The quantitative analysis shows that the majority of end-users strongly agree that the speech interaction experienced during the Robot-Era services is acceptable.
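The context-dependent recognition step described above could be sketched as follows. This is an assumed illustration, not the Robot-Era implementation: the service names and command phrases are invented, and a simple string-similarity match stands in for a real speech recognizer. The point it shows is that restricting the active vocabulary to the current service context shrinks the search space and reduces the chance of confusion.

```python
# Sketch: per-service command grammars; a noisy transcription is resolved
# only against the phrases valid in the active service context.
import difflib

CONTEXT_GRAMMARS = {  # hypothetical services and phrases
    "shopping":  ["add milk to the list", "read my shopping list", "delete the list"],
    "reminding": ["remind me to take my pills", "cancel the reminder"],
}

def recognize(hypothesis_text, context):
    """Map a (possibly noisy) transcription to the closest in-context command."""
    candidates = CONTEXT_GRAMMARS[context]
    matches = difflib.get_close_matches(hypothesis_text, candidates, n=1, cutoff=0.5)
    return matches[0] if matches else None  # None: reject out-of-grammar input

command = recognize("add milk too the list", "shopping")  # "add milk to the list"
```

A production system would apply the same idea inside the recognizer itself, e.g. by loading a context-specific language model or grammar, rather than post-matching transcriptions.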


Human-Robot Interaction | 2012

Gaze in HRI: from modeling to communication

Frank Broz; Hagen Lehmann; Yukiko I. Nakano; Bilge Mutlu

The purpose of this half-day workshop is to explore the role of social gaze in human-robot interaction: both how to measure social gaze behavior in humans and how to implement it in robots that interact with them. Gaze directed at an interaction partner has become a subject of increased attention in human-robot interaction research. While traditional robotics research has focused robot gaze solely on the identification and manipulation of objects, researchers in HRI have come to recognize that gaze is a social behavior in addition to a way of sensing the world. This workshop will approach the problem of understanding the role of social gaze in human-robot interaction from the dual perspectives of investigating human-human gaze for design principles to apply to robots and of experimentally evaluating human-robot gaze interaction in order to assess how humans engage in gaze behavior with robots. Computational modeling of human gaze behavior is useful for human-robot interaction in a number of different ways. Such models can enable a robot to perceive information about the state of the human in the interaction and adjust its behavior accordingly. Additionally, more humanlike gaze behavior may make a person more comfortable and engaged during an interaction. It is known that the gaze pattern of a social interaction partner has a strong impact on one's own interaction behavior; therefore, the experimental verification of robot gaze policies is extremely important. Appropriate gaze behaviour is critical for establishing joint attention, which enables humans to engage in collaborative activities and gives structure to social interactions. There is still much to be learned about which properties of human-human gaze should be transferred to human-robot gaze and how to model human-robot gaze for autonomous robots. The goal of the workshop is to exchange ideas and to develop and improve methodologies for this growing area of research.


Archive | 2013

Automated Analysis of Mutual Gaze in Human Conversational Pairs

Frank Broz; Hagen Lehmann; Chrystopher L. Nehaniv; Kerstin Dautenhahn

Mutual gaze arises from the interaction of the gaze behavior of two individuals. It is an important part of all face-to-face social interactions, including verbal exchanges. In order for humanoid robots to interact more naturally with people, they need internal models that allow them to produce realistic social gaze behavior. The approach taken in this work is to collect data from human conversational pairs with the goal of learning a controller for robot gaze directly from human data. In a small initial data collection experiment, mutual gaze between pairs of people is detected and recorded in real time during conversational interaction. A Markov model representation of human gaze data is produced in order to demonstrate how this data could be used to create a controller. We also discuss how an algebraic analysis of the state transition structure of such models may reveal interesting properties of human gaze interaction. Results are also presented from a second, larger experiment in which mutual gaze is detected offline using recorded video data for greater accuracy. Trends in behavior linking gaze and speech in this data set are also discussed.
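The Markov-model construction described above can be sketched as transition-count estimation over joint gaze states. The state labels and the short example sequence below are invented for illustration; they are not data from the study.

```python
# Sketch: estimate a Markov model of dyadic gaze from an annotated state
# sequence by counting transitions and normalizing into probabilities.
from collections import Counter, defaultdict

# Joint gaze state per time step (illustrative annotation scheme):
# "M" = mutual gaze, "A" = only person A looks, "B" = only B, "N" = neither.
sequence = ["N", "A", "M", "M", "B", "N", "A", "M", "A", "N"]

counts = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    counts[prev][nxt] += 1

transition = {}
for state, successors in counts.items():
    total = sum(successors.values())
    transition[state] = {t: c / total for t, c in successors.items()}
# e.g. transition["A"] == {"M": 2/3, "N": 1/3} for this toy sequence
```

Sampling successor states from such a table yields a simple generative gaze controller, and the transition structure itself is what an algebraic analysis of the kind mentioned above would operate on.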


Artificial Life | 2013

Interaction and experience in enactive intelligence and humanoid robotics

Chrystopher L. Nehaniv; Frank Förster; Joe Saunders; Frank Broz; Elena Antonova; Hatice Kose; Caroline Lyon; Hagen Lehmann; Yo Sato; Kerstin Dautenhahn

We give an overview of how sensorimotor experience can be operationalized for interaction scenarios in which humanoid robots acquire skills and linguistic behaviours by enacting a "form-of-life" in interaction games (following Wittgenstein) with humans. The enactive paradigm is introduced, which provides a powerful framework for the construction of complex adaptive systems based on interaction, habit, and experience. The enactive cognitive architectures we have developed (following insights of Varela, Thompson and Rosch) support social learning and robot ontogeny by harnessing information-theoretic methods and raw, uninterpreted sensorimotor experience to scaffold the acquisition of behaviours. The success criterion here is validation by the robot engaging in ongoing human-robot interaction with naive participants who, over the course of iterated interactions, shape the robot's behavioural and linguistic development. Engagement in such interaction, exhibiting aspects of purposeful, habitual recurring structure, evidences the developed capability of the humanoid to enact language and interaction games as a successful participant.


Topics in Cognitive Science | 2014

The ITALK project: A developmental robotics approach to the study of individual, social, and linguistic learning

Frank Broz; Chrystopher L. Nehaniv; Tony Belpaeme; Ambra Bisio; Kerstin Dautenhahn; Luciano Fadiga; Tomassino Ferrauto; Kerstin Fischer; Frank Förster; Onofrio Gigliotta; Sascha S. Griffiths; Hagen Lehmann; Katrin Solveig Lohan; Caroline Lyon; Davide Marocco; Gianluca Massera; Giorgio Metta; Vishwanathan Mohan; Anthony F. Morse; Stefano Nolfi; Francesco Nori; Martin Peniak; Karola Pitsch; Katharina J. Rohlfing; Gerhard Sagerer; Yo Sato; Joe Saunders; Lars Schillingmann; Alessandra Sciutti; Vadim Tikhanoff

This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.


Conference Towards Autonomous Robotic Systems | 2016

Experimental evaluation of a multi-modal user interface for a robotic service

Alessandro G. Di Nuovo; Ning Wang; Frank Broz; Tony Belpaeme; Ray Jones; Angelo Cangelosi

This paper reports the experimental evaluation of a Multi-Modal User Interface (MMUI) designed to enhance the user experience in terms of service usability and to increase the acceptability of assistive robot systems by elderly users. The MMUI system offers users two main modalities for sending commands: a graphical user interface (GUI), usually running on the tablet attached to the robot, and a speech user interface (SUI) with a wearable microphone on the user. The study involved fifteen participants, aged between 70 and 89, who were invited to interact with a robotic platform customized for providing everyday care and services to the elderly. The experimental task for the participants was to order a meal from three different menus using any interaction modality they liked. Quantitative and qualitative data analyses demonstrate a positive evaluation by users and show that multi-modal means of interaction can help make elderly-robot interaction more flexible and natural.

Collaboration


Dive into Frank Broz's collaborations.

Top Co-Authors

Hagen Lehmann
Istituto Italiano di Tecnologia

Tony Belpaeme
University of Plymouth

Ruth Aylett
Heriot-Watt University

Ayan Ghosh
Heriot-Watt University

Kerstin Dautenhahn
University of Hertfordshire