Publications


Featured research published by Eduardo Velloso.


User Interface Software and Technology | 2015

Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements

Augusto Esteves; Eduardo Velloso; Andreas Bulling; Hans Gellersen

We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes and detect whether, and at which control, the user is looking. In Orbits, controls include targets that move in a circular trajectory on the face of the watch, and can be selected by following the desired one for a short amount of time. We conducted two user studies to assess the technique's recognition accuracy and robustness, which demonstrated that Orbits is robust against false positives triggered by natural eye movements and that it offers a hands-free, high-accuracy way of interacting with smart watches using off-the-shelf devices. Finally, we developed three example interfaces built with Orbits: a music player, a notifications face plate and a missed call menu. Despite relying on moving controls, which are very unusual in current HCI interfaces, these were generally well received by participants in a third and final study.
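
A concrete way to read the detection step above: over a short window, the gaze trace is compared against each orbiting target's trajectory and the best-matching target, if any, is selected. The sketch below illustrates this with a per-axis Pearson correlation; the function name, window length, and threshold are illustrative assumptions rather than details of the Orbits implementation.

```python
# A minimal sketch of smooth-pursuit matching: correlate a window of gaze
# samples with each orbiting target's trajectory and pick the best match.
# Window length and threshold are illustrative values, not from the paper.
import numpy as np

def pursuit_match(gaze, targets, threshold=0.8):
    """gaze: (N, 2) array of gaze samples; targets: dict name -> (N, 2) array
    of target positions over the same window. Returns the selected target
    name, or None if no target correlates strongly enough on both axes."""
    best_name, best_score = None, threshold
    for name, traj in targets.items():
        # Pearson correlation computed separately for x and y coordinates
        rx = np.corrcoef(gaze[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze[:, 1], traj[:, 1])[0, 1]
        score = min(rx, ry)          # both axes must follow the target
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy usage: one target orbiting the watch face centre, and gaze that follows it.
t = np.linspace(0, 2 * np.pi, 60)                        # ~1 s of samples
orbit = np.stack([np.cos(t), np.sin(t)], axis=1)
gaze = orbit + np.random.normal(scale=0.05, size=orbit.shape)
print(pursuit_match(gaze, {"play": orbit, "skip": -orbit}))   # -> "play"
```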


Brazilian Symposium on Artificial Intelligence | 2012

Wearable computing: accelerometers' data classification of body postures and movements

Wallace Ugulino; Débora Cardador; Katia Vega; Eduardo Velloso; Ruy Luiz Milidiú; Hugo Fuks

During the last 5 years, research on Human Activity Recognition (HAR) has reported systems showing good overall recognition performance. As a consequence, HAR has been considered a potential technology for e-health systems. Here, we propose a machine learning-based HAR classifier. We also provide a full experimental description that covers the setup of the wearable HAR devices and a public domain dataset comprising 165,633 samples. We consider 5 activity classes, gathered from 4 subjects wearing accelerometers mounted on their waist, left thigh, right arm, and right ankle. As basic input features to our classifier we use 12 attributes derived from a time window of 150 ms. Finally, the classifier uses AdaBoost to combine a committee of ten decision trees. The observed classifier accuracy is 99.4%.
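
As a rough illustration of the classifier described above, an AdaBoost committee of ten decision trees over 12 per-window features, the following scikit-learn sketch shows the general shape of such a pipeline. The synthetic data, random labels, and tree depth are placeholders, not the paper's dataset or exact configuration.

```python
# A minimal sketch of an AdaBoost committee of ten decision trees over
# per-window accelerometer features. Data and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))            # 12 features per 150 ms window
y = rng.integers(0, 5, size=1000)          # 5 posture/movement classes

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # weak learner
    n_estimators=10,                                # committee of ten trees
)  # note: older scikit-learn versions use the parameter name base_estimator
clf.fit(X, y)
print(clf.predict(X[:3]))                  # predicted class per window
```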


Augmented Human International Conference | 2013

Qualitative activity recognition of weight lifting exercises

Eduardo Velloso; Andreas Bulling; Hans Gellersen; Wallace Ugulino; Hugo Fuks

Research on activity recognition has traditionally focused on discriminating between different activities, i.e., predicting which activity was performed at a specific point in time. The quality of executing an activity, the how (well), has received little attention so far, even though it potentially provides useful information for a large variety of applications. In this work we define quality of execution and investigate three aspects that pertain to qualitative activity recognition: specifying correct execution, detecting execution mistakes, and providing feedback to the user. We illustrate our approach on the example problem of qualitatively assessing and providing feedback on weight lifting exercises. In two user studies we tried out a sensor-based and a model-based approach to qualitative activity recognition. Our results underline the potential of model-based assessment and the positive impact of real-time user feedback on the quality of execution.
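
One way to picture the model-based approach mentioned above is as a specification of acceptable ranges for features derived from the sensors, with feedback generated whenever a repetition violates the specification. The sketch below is a hypothetical illustration of that idea; the feature names, ranges, and messages are invented for the example and are not taken from the paper.

```python
# A minimal sketch of model-based mistake detection: correct execution is
# specified as acceptable ranges for derived features (here, hypothetical
# joint angles), and a repetition is flagged when a feature falls outside
# its range. Feature names, ranges, and messages are illustrative only.
SPEC = {                                   # hypothetical execution specification
    "elbow_angle_deg": (70.0, 110.0),
    "hip_flexion_deg": (0.0, 20.0),
}

def detect_mistakes(features):
    """features: dict of feature name -> value measured over one repetition.
    Returns feedback strings for every feature outside its specified range."""
    feedback = []
    for name, (lo, hi) in SPEC.items():
        value = features.get(name)
        if value is not None and not (lo <= value <= hi):
            feedback.append(f"{name} out of range: {value:.1f} (expected {lo}-{hi})")
    return feedback

print(detect_mistakes({"elbow_angle_deg": 130.0, "hip_flexion_deg": 10.0}))
# -> ['elbow_angle_deg out of range: 130.0 (expected 70.0-110.0)']
```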


ACM Computing Surveys | 2015

The Feet in Human-Computer Interaction: A Survey of Foot-Based Interaction

Eduardo Velloso; Dominik Schmidt; Jason Alexander; Hans Gellersen; Andreas Bulling

Foot-operated computer interfaces have been studied since the inception of human-computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is increasing interest in exploring this alternative input modality, but no comprehensive overview of its research landscape exists. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in terms of how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.


Designing Interactive Systems | 2016

AmbiGaze: Direct Control of Ambient Devices by Gaze

Eduardo Velloso; Markus Wirth; Christian Weichel; Augusto Esteves; Hans-Werner Gellersen

Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that uses animated targets to give users direct, gaze-only control of devices through smooth pursuit tracking. We propose a design space of ways to expose functionality through movement and illustrate the concept through four prototypes. We evaluated the system in a user study and found that AmbiGaze enables robust gaze-only interaction with many devices, from multiple positions in the environment, in a spontaneous and comfortable manner.


Ubiquitous Computing | 2016

TraceMatch: a computer vision technique for user input by tracing of animated controls

Christopher Clarke; Alessio Bellino; Augusto Esteves; Eduardo Velloso; Hans-Werner Gellersen

Recent work has explored the concept of movement correlation interfaces, in which moving objects can be selected by matching the movement of the input device to that of the desired object. Previous techniques relied on a single modality (e.g. gaze or mid-air gestures) and specific hardware to issue commands. TraceMatch is a computer vision technique that enables input by movement correlation while abstracting from any particular input modality. The technique relies only on a conventional webcam and enables users to produce matching gestures with any given body part, even whilst holding objects. We describe an implementation of the technique for the acquisition of orbiting targets, evaluate algorithm performance for different target sizes and frequencies, and demonstrate use of the technique for remote control of graphical as well as physical objects with different body parts.
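
A plausible front end for such a webcam-based pipeline is to track feature points with optical flow and accumulate their trajectories, which can then be correlated against the motion of an orbiting control (for instance with a matcher like the pursuit_match sketch above). The OpenCV-based sketch below illustrates this; the parameters and overall structure are assumptions for illustration, not TraceMatch's actual implementation.

```python
# A sketch of a webcam front end for movement matching: track feature points
# with Lucas-Kanade optical flow and accumulate their trajectories, which can
# then be correlated against an orbiting control. Assumes a working webcam.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # conventional webcam
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.01, minDistance=10)
trails = [[tuple(p.ravel())] for p in points]  # one motion history per point

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, winSize=(21, 21), maxLevel=2)
    for trail, p, s in zip(trails, new_points, status.ravel()):
        if s:                                  # keep successfully tracked points
            trail.append(tuple(p.ravel()))
    points, prev_gray = new_points, gray
    # Each trail is now a candidate gesture trajectory: correlate it with the
    # control's orbit over a sliding window to decide whether it matches.
```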


Annual Symposium on Computer-Human Interaction in Play | 2016

The Emergence of EyePlay: A Survey of Eye Interaction in Games

Eduardo Velloso; Marcus Carter

As eye trackers become cheaper, smaller, more robust, and more available, they are finally leaving research labs and entering the home environment. In this context, gaming emerges as a promising application domain for eye interaction. The goal of this survey is to categorise the different ways in which the eyes can be incorporated into games and play in general, as a resource for future design. We reviewed the literature on the topic, as well as other game prototypes that employ the eyes. We compiled a list of eye-enabled game mechanics and derived a taxonomy that classifies them according to the eye movements they involve, the input type they provide, and the game mechanics they implement. Based on our findings, we articulate the value of gaming for future HCI gaze research and outline a research program around eye interaction in gaming.


ACM Transactions on Computer-Human Interaction | 2017

Motion Correlation: Selecting Objects by Matching Their Movement

Eduardo Velloso; Marcus Carter; Joshua Newn; Augusto Esteves; Christopher Clarke; Hans Gellersen

Selection is a canonical task in user interfaces, commonly supported by presenting objects for acquisition by pointing. In this article, we consider motion correlation as an alternative for selection. The principle is to represent available objects by motion in the interface, have users identify a target by mimicking its specific motion, and use the correlation between the system's output and the user's input to determine the selection. The resulting interaction has compelling properties, as users are guided by motion feedback and only need to copy a presented motion. Motion correlation has been explored in earlier work but has only recently begun to feature in holistic interface designs. We provide a first comprehensive review of the principle and present an analysis of five previously published works in which motion correlation underpinned the design of novel gaze and gesture interfaces for diverse application contexts. We derive guidelines for motion correlation algorithms, motion feedback, choice of modalities, and the overall design of motion correlation interfaces, and we identify opportunities and challenges for future research and design.
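
To make the selection principle concrete, one common design is to require the input/output correlation to stay above a threshold for several consecutive windows before a target is triggered, which guards against accidental activation. The sketch below illustrates such a trigger; the threshold, window handling, and dwell count are illustrative assumptions, not guidelines from the article.

```python
# A sketch of a motion-correlation trigger: a target is selected only when its
# input/output correlation stays above a threshold for several consecutive
# windows. Threshold and dwell count are illustrative, not from the article.
from collections import defaultdict
import numpy as np

class CorrelationSelector:
    def __init__(self, threshold=0.8, dwell_windows=5):
        self.threshold = threshold
        self.dwell_windows = dwell_windows
        self.streak = defaultdict(int)   # consecutive matching windows per target

    def update(self, user_window, target_windows):
        """user_window: (N, 2) input samples; target_windows: dict name -> (N, 2)
        displayed target motion over the same window. Returns the selected
        target name once its correlation has persisted long enough, else None."""
        for name, traj in target_windows.items():
            rx = np.corrcoef(user_window[:, 0], traj[:, 0])[0, 1]
            ry = np.corrcoef(user_window[:, 1], traj[:, 1])[0, 1]
            if min(rx, ry) > self.threshold:
                self.streak[name] += 1
                if self.streak[name] >= self.dwell_windows:
                    return name          # sustained match: trigger selection
            else:
                self.streak[name] = 0    # reset on any non-matching window
        return None
```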


Annual Symposium on Computer-Human Interaction in Play | 2014

EyePlay: applications for gaze in games

Jayson Turner; Eduardo Velloso; Hans Gellersen; Veronica Sundstedt

What new challenges does the combination of games and eye tracking present? The EyePlay workshop brings together researchers and industry specialists from the fields of eye tracking and games to address this question. Eye tracking has been investigated extensively in a variety of domains in human-computer interaction, but little attention has been given to its application in gaming. As eye-tracking technology is now an affordable commodity, its appeal as a sensing technology for games is set to become the driving force for novel methods of player-computer interaction and games evaluation. This workshop presents a forum for eye-based gaming research, with a focus on identifying the opportunities that eye tracking brings to games design and research, on plotting the landscape of the work in this area, and on formalising a research agenda for EyePlay as a field. Possible topics include, but are not limited to, novel interaction techniques and game mechanics, usability and evaluation, accessibility, learning, and serious games contexts.


International Symposium on Wearable Computers | 2015

Orbits: enabling gaze interaction in smart watches using moving targets

Augusto Esteves; Eduardo Velloso; Andreas Bulling; Hans-Werner Gellersen

In this paper we demonstrate Orbits, a novel gaze interaction technique that accounts for both the reduced size of smart watch displays and the hands-free nature of conventional watches. Orbits combines graphical controls that display one or multiple targets moving on a circular path, with input that is provided by users as they follow any of the targets briefly with their eyes. This gaze input triggers the functionality associated with the followed target -- be it answering a call, playing a song or managing multiple notifications.

Collaboration


Dive into Eduardo Velloso's collaborations.

Top Co-Authors

Joshua Newn (University of Melbourne)
Frank Vetere (University of Melbourne)
Hugo Fuks (Pontifical Catholic University of Rio de Janeiro)
Augusto Esteves (Edinburgh Napier University)
Wallace Ugulino (Pontifical Catholic University of Rio de Janeiro)