Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Oyewole Oyekoya is active.

Publication


Featured research published by Oyewole Oyekoya.


IEEE Virtual Reality Conference | 2009

Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together

David J. Roberts; Robin Wolff; John Rae; Anthony Steed; Rob Aspin; Moira McIntyre; Adriana Pena; Oyewole Oyekoya; William Steptoe

Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. When we attempt to support tele-presence between people, there are two main technologies that can be used today: video-conferencing (VC) and collaborative virtual environments (CVEs). In VC, one can observe eye-gaze behaviour, but in practice the targets of eye-gaze are only correct if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC allow people to discern being looked at and what else is looked at, when someone gazes into their space from another location, ICVE alone can continue to do this as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE and co-location are compared. The impact of factors of alignment, lighting, resolution, and perspective distortion is minimised through a set of pilot experiments, before a formal experiment records results for optimal settings. Results show that both VC and ICVE support eye-gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the mis-judgements that are made and discuss how our findings might inform research into supporting eye-gaze through interpolated free-viewpoint video-based methods.


Human Factors in Computing Systems | 2012

SphereAvatar: a situated display to represent a remote collaborator

Oyewole Oyekoya; William Steptoe; Anthony Steed

An emerging form of telecollaboration utilizes situated or mobile displays at a physical destination to virtually represent remote visitors. An example is a personal telepresence robot, which acts as a physical proxy for a remote visitor, and uses cameras and microphones to capture its surroundings, which are transmitted back to the visitor. We propose the use of spherical displays to represent telepresent visitors at a destination. We suggest that the use of such 360-degree displays in a telepresence system has two key advantages: it is possible to understand the identity of the visitor from any viewpoint; and, with a suitable graphical representation, it is possible to tell where the visitor is looking from any viewpoint. In this paper, we investigate how to optimally represent a visitor as an avatar on a spherical display by evaluating how accurately varying representations convey head gaze.
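The core geometric idea, conveying where the visitor is looking from any observer viewpoint, can be pictured with a small sketch. The helper below is hypothetical (the paper evaluates avatar representations but does not publish such a mapping): it converts a tracked head yaw/pitch into the point on the spherical display surface where the avatar's face would be rendered.

```python
import math

def head_gaze_to_sphere_point(yaw_deg, pitch_deg, radius=0.3):
    """Map a remote visitor's head yaw/pitch (degrees) to the point on a
    spherical display of the given radius (metres) where the avatar's face
    should appear, so head gaze reads consistently from any viewpoint.
    Hypothetical illustration; not code from the paper."""
    yaw = math.radians(yaw_deg)      # left/right head rotation
    pitch = math.radians(pitch_deg)  # up/down head rotation
    # Unit direction of the head gaze in the display's coordinate frame.
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    # Point on the sphere surface along that direction.
    return (radius * x, radius * y, radius * z)

# Example: the visitor looks 45 degrees to their right and slightly upward.
print(head_gaze_to_sphere_point(45.0, 10.0))
```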


Frontiers in Neurology | 2012

A Fully Immersive Set-Up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership

Daniel Perez-Marcos; Massimiliano Solazzi; William Steptoe; Oyewole Oyekoya; Antonio Frisoli; Tim Weyrich; Anthony Steed; Franco Tecchia; Mel Slater; Maria V. Sanchez-Vives

Although telerehabilitation systems represent one of the most technologically appealing clinical solutions for the immediate future, they still present limitations that prevent their standardization. Here we propose an integrated approach that includes three key and novel factors: (a) fully immersive virtual environments, including virtual body representation and ownership; (b) multimodal interaction with remote people and virtual objects including haptic interaction; and (c) a physical representation of the patient at the hospital through embodiment agents (e.g., as a physical robot). The importance of secure and rapid communication between the nodes is also stressed and an example implemented solution is described. Finally, we discuss the proposed approach with reference to the existing literature and systems.


Computer Animation and Virtual Worlds | 2010

Eyelid kinematics for virtual characters

William Steptoe; Oyewole Oyekoya; Anthony Steed

Realistic character animation requires elaborate rigging built on top of high-quality 3D models. Sophisticated anatomically based rigs are often the choice of visual effects studios, where life-like animation of CG characters is the primary objective. However, rigging a character with a muscular-skeletal system is a very involved and time-consuming process, even for professionals. Although there have been recent research efforts to automate all or some parts of the rigging process, the complexity of anatomically based rigging nonetheless opens up new research challenges. We propose a new method to automate anatomically based rigging that transfers an existing rig from one character to another. The method is based on data interpolation in the surface and volume domains, where various rigging elements can be transferred between different models. As it only requires a small number of corresponding input feature points, users can produce highly detailed rigs for a variety of desired characters with ease.


Eye Tracking Research & Applications | 2006

An eye tracking interface for image search

Oyewole Oyekoya; Fred Stentiford

Eye tracking presents an adaptive approach that can capture the user's current needs and tailor the retrieval accordingly. Applying eye tracking to image retrieval requires that new strategies be devised that can use visual and algorithmic data to obtain natural and rapid retrieval of images. Recent work showed that the eye is faster than the mouse as a source of visual input in a target image identification task [Oyekoya and Stentiford 2005]. We explore the viability of using the eye to drive an image retrieval interface. In a visual search task, users are asked to find a target image in a database and the number of steps to the target image is counted. It is reasonable to believe that users will look at the objects in which they are interested during a search [Oyekoya and Stentiford 2004], and this provides the machine with the necessary information to retrieve a succession of plausible candidate images for the user.
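The retrieval loop the abstract describes can be sketched as implicit relevance feedback driven by fixations. The sketch below is illustrative only: `get_fixations` stands in for eye-tracker input and `similarity` for the visual-similarity measure, neither of which is specified in the abstract.

```python
import random

def similarity(a, b):
    """Placeholder visual-similarity score between two image ids."""
    return -abs(hash(a) % 997 - hash(b) % 997)

def next_candidates(database, fixated, shown, k=9):
    """Rank images not yet shown by their best similarity to any fixated image."""
    unseen = [img for img in database if img not in shown]
    best = lambda img: max(similarity(img, f) for f in fixated)
    return sorted(unseen, key=best, reverse=True)[:k]

def search(database, target, get_fixations, max_steps=20):
    """Show screens of candidates until the target is fixated; return the
    number of steps taken, the quantity counted in the study."""
    shown = set()
    screen = random.sample(database, min(9, len(database)))
    for step in range(1, max_steps + 1):
        if not screen:
            return None
        shown.update(screen)
        fixated = get_fixations(screen) or random.sample(screen, 1)
        if target in fixated:
            return step
        screen = next_candidates(database, fixated, shown)
    return None
```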


Virtual Reality Software and Technology | 2013

Supporting interoperability and presence awareness in collaborative mixed reality environments

Oyewole Oyekoya; Ran Stone; William Steptoe; Laith Alkurdi; Stefan Klare; Angelika Peer; Tim Weyrich; Benjamin Cohen; Franco Tecchia; Anthony Steed

In the BEAMING project we have been extending the scope of collaborative mixed reality to include the representation of users in multiple modalities, including augmented reality, situated displays and robots. A single user (a visitor) uses a high-end virtual reality system (the transporter) to be virtually teleported to a real remote location (the destination). The visitor may be tracked in several ways, including emotion and motion capture. We reconstruct the destination and the people within it (the locals). In achieving this scenario, BEAMING has integrated many heterogeneous systems. In this paper, we describe the design and key implementation choices in the Beaming Scene Service (BSS), which allows the various processes to coordinate their behaviour. The core of the system is a lightweight shared object repository that allows loose coupling between processes with very different requirements (e.g., embedded control systems through to mobile apps). The system was also extended to support the notion of presence awareness. We demonstrate two complex applications built with the BSS.
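The "lightweight shared object repository" at the core of the BSS can be pictured as a named-object store with change notifications. The class below is a minimal in-process sketch under that assumption, not the real networked service, and the object names in the example are made up.

```python
from collections import defaultdict
from threading import Lock

class SharedObjectRepository:
    """Minimal sketch of a shared object repository: clients read and write
    named objects and subscribe to change notifications, keeping very
    different processes (trackers, renderers, robot controllers) loosely
    coupled. Illustrative only; the actual BSS is a networked service."""

    def __init__(self):
        self._objects = {}
        self._subscribers = defaultdict(list)
        self._lock = Lock()

    def set(self, name, value):
        with self._lock:
            self._objects[name] = value
            callbacks = list(self._subscribers[name])
        for cb in callbacks:  # notify subscribers outside the lock
            cb(name, value)

    def get(self, name, default=None):
        with self._lock:
            return self._objects.get(name, default)

    def subscribe(self, name, callback):
        with self._lock:
            self._subscribers[name].append(callback)

# Example: a renderer reacts whenever the visitor's head pose is updated.
repo = SharedObjectRepository()
repo.subscribe("visitor/head_pose", lambda name, pose: print("render", name, pose))
repo.set("visitor/head_pose", {"yaw": 12.0, "pitch": -3.5})
```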


Presence: Teleoperators & Virtual Environments | 2015

A surround video capture and presentation system for preservation of eye-gaze in teleconferencing applications

Ye Pan; Oyewole Oyekoya; Anthony Steed

We propose a new video conferencing system that uses an array of cameras to capture a remote user and then shows the video of that person on a spherical display. This telepresence system has two key advantages: (i) it can capture a near-correct image for any potential observer viewing direction, because the cameras surround the user horizontally; and (ii) with view-dependent graphical representation on the spherical display, it is possible to tell where the remote user is looking from any viewpoint, whereas flat displays are visible only from the front. As a result, the display can more faithfully represent the gaze of the remote user. We evaluate this system by measuring the ability of observers to accurately judge which targets the actor is gazing at in two experiments. Results from the first experiment demonstrate the effectiveness of the camera array and spherical display system, in that it allows observers at multiple observing positions to accurately tell which targets the remote user is looking at. The second experiment further compared a spherical display with a planar display and provided detailed reasons for the improvement of our system in conveying gaze. We found two linear models for predicting the distortion introduced by misalignment between the capturing cameras and the observer's viewing angle in video conferencing systems. These models may enable correction of this distortion in future display configurations.
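As an illustration of the kind of linear model the abstract mentions, the sketch below fits perceived gaze error against camera-to-observer misalignment by ordinary least squares. The numbers are invented for the example; the paper's actual models and measurements are in the article.

```python
import numpy as np

# Invented sample data: angular misalignment between the capturing camera and
# the observer's viewing direction (degrees) vs. observed gaze-judgement error.
misalignment_deg = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
observed_error_deg = np.array([0.4, 2.1, 3.8, 6.2, 7.9, 10.3, 12.0])

# Fit error ~ slope * misalignment + intercept by ordinary least squares.
A = np.column_stack([misalignment_deg, np.ones_like(misalignment_deg)])
(slope, intercept), *_ = np.linalg.lstsq(A, observed_error_deg, rcond=None)

def predicted_error(misalignment):
    """Predicted gaze distortion (degrees) for a given misalignment."""
    return slope * misalignment + intercept

def corrected_judgement(judged_angle, misalignment):
    """Subtract the predicted distortion, as a future display configuration might."""
    return judged_angle - predicted_error(misalignment)

print(f"error = {slope:.2f} * misalignment + {intercept:.2f} degrees")
```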


IEEE Virtual Reality Conference | 2009

Eye Tracking for Avatar Eye Gaze Control During Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments

William Steptoe; Oyewole Oyekoya; Alessio Murgia; Robin Wolff; John Rae; Estefania Guimaraes; David J. Roberts; Anthony Steed


International Conference on Pattern Recognition | 2004

Exploring human eye behaviour using a model of visual attention

Oyewole Oyekoya; Fred Stentiford


BT Technology Journal | 2004

Eye Tracking as a New Interface for Image Retrieval

Oyewole Oyekoya; Fred Stentiford

Collaboration


Dive into Oyewole Oyekoya's collaboration.

Top Co-Authors

Anthony Steed | University College London

Fred Stentiford | University College London

William Steptoe | University College London

Tim Weyrich | University College London

Franco Tecchia | Sant'Anna School of Advanced Studies

John Rae | University of Roehampton

Xueni Pan | University College London

Mel Slater | University of Barcelona