Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robert Hardy is active.

Publication


Featured research published by Robert Hardy.


IEEE Transactions on Systems, Man, and Cybernetics | 2016

Intent Inference for Hand Pointing Gesture-Based Interactions in Vehicles

Bashar I. Ahmad; James K. Murphy; Patrick Langdon; Simon J. Godsill; Robert Hardy; Lee Skrypchuk

Using interactive displays, such as a touchscreen, in vehicles typically requires dedicating a considerable amount of visual as well as cognitive capacity and undertaking a hand pointing gesture to select the intended item on the interface. This can act as a distraction from the primary task of driving and consequently can have serious safety implications. Due to road and driving conditions, the user input can also be highly perturbed, resulting in erroneous selections that compromise system usability. In this paper, we propose intent-aware displays that utilize a pointing gesture tracker in conjunction with suitable Bayesian destination inference algorithms to determine the item the user intends to select, which can be achieved with high confidence remarkably early in the pointing gesture. This can drastically reduce the time and effort required to successfully complete an in-vehicle selection task. In the proposed probabilistic inference framework, the likelihood of each nominal destination is sequentially calculated by modeling the hand pointing gesture movements as a destination-reverting process. This leads to a Kalman-filter-type implementation of the prediction routine that requires minimal parameter training and has a low computational burden; it is also amenable to parallelization. The substantial gains obtained using an intent-aware display are demonstrated using data collected in an instrumented vehicle driven under various road conditions.
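The destination-reverting idea in the abstract can be sketched as follows: run one linear-Gaussian (Kalman-type) filter per nominal on-screen target, with the predicted state mean reverting toward that target, and accumulate each filter's innovation likelihood to obtain a sequential posterior over destinations. All parameters here (reversion rate, noise variances, the uniform prior) are illustrative assumptions, not the paper's trained values.

```python
import numpy as np

def destination_posterior(observations, destinations, revert=0.3,
                          q_var=1e-4, r_var=1e-4):
    """Sequential posterior over candidate destinations under a
    destination-reverting linear-Gaussian model (hypothetical parameters).

    observations : (T, dim) array of tracked fingertip positions
    destinations : (N, dim) array of nominal on-screen targets
    """
    T, dim = observations.shape
    N = destinations.shape[0]
    log_lik = np.zeros(N)
    for i, d in enumerate(destinations):
        # One Kalman filter per candidate destination d.
        m = observations[0].copy()        # state mean
        P = np.eye(dim) * r_var           # state covariance
        for y in observations[1:]:
            # Predict: the mean reverts toward destination d.
            m = m + revert * (d - m)
            a = 1.0 - revert
            P = a * P * a + np.eye(dim) * q_var
            # Update: accumulate the innovation log-likelihood.
            S = P + np.eye(dim) * r_var
            innov = y - m
            log_lik[i] += (-0.5 * innov @ np.linalg.solve(S, innov)
                           - 0.5 * np.log(np.linalg.det(2.0 * np.pi * S)))
            K = P @ np.linalg.inv(S)
            m = m + K @ innov
            P = (np.eye(dim) - K) @ P
    # Uniform prior over destinations; normalise in log space for stability.
    log_post = log_lik - log_lik.max()
    post = np.exp(log_post)
    return post / post.sum()
```

Because each candidate destination gets its own independent filter, the per-destination loop parallelises trivially, which is the property the abstract highlights.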


Automotive User Interfaces and Interactive Vehicular Applications | 2015

Touchscreen usability and input performance in vehicles under different road conditions: an evaluative study

Bashar I. Ahmad; Patrick Langdon; Simon J. Godsill; Robert Hardy; Lee Skrypchuk; Richard Donkor

With the proliferation of touchscreen technology, interactive displays are becoming an integrated part of the modern vehicle environment. However, due to road and driving conditions, the user input on such displays can be perturbed, resulting in erroneous selections. This paper describes an evaluative study of the usability and input performance of in-vehicle touchscreens. The analysis is based on data collected in instrumented cars driven under various road/driving conditions. We assess the frequency of failed selection attempts, the distances by which users miss the intended on-screen target and the durations of the free hand pointing gestures undertaken to accomplish the selection tasks. It is shown that the road/driving conditions can notably undermine the usability of an interactive display when the user input is perturbed, e.g. due to the vibrations and lateral accelerations experienced in the vehicle. The distance between the location of an erroneous on-screen selection and the intended endpoint on the display is closely related to the level of in-vehicle noise present. The conducted study can inform graphical user interface design for the vehicle environment, where the user's free hand pointing gestures can be subject to varying levels of perturbation.
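The usability metrics the study assesses (failed selection attempts and miss distances) can be computed from logged touch data along these lines; the function name, the data layout, and the fixed circular-target success criterion are illustrative assumptions, not the study's protocol.

```python
import numpy as np

def selection_metrics(touch_points, targets, target_radius):
    """Failure rate and mean miss distance for logged touchscreen selections.

    touch_points  : (N, 2) array of recorded touch locations (screen units)
    targets       : (N, 2) array of intended target centres
    target_radius : a selection counts as successful within this distance
    """
    # Euclidean distance between each touch and its intended target.
    miss = np.linalg.norm(touch_points - targets, axis=1)
    failed = miss > target_radius
    return {
        "failure_rate": failed.mean(),
        # Average miss distance over the failed attempts only.
        "mean_miss_distance": miss[failed].mean() if failed.any() else 0.0,
    }
```

Grouping such metrics by road/driving condition would reproduce the kind of comparison the study reports.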


Automotive User Interfaces and Interactive Vehicular Applications | 2014

Interactive Displays in Vehicles: Improving Usability with a Pointing Gesture Tracker and Bayesian Intent Predictors

Bashar I. Ahmad; Patrick Langdon; Simon J. Godsill; Robert Hardy; Eduardo Dias; Lee Skrypchuk

Interactive displays are becoming an integrated part of the modern vehicle environment. Their use typically entails dedicating a considerable amount of attention and undertaking a pointing gesture to select an interface item/icon displayed on a touchscreen. This can have serious safety implications for the driver. The pointing gesture can also be highly perturbed due to the road and driving conditions, resulting in erroneous selections. In this paper, we propose a probabilistic intent prediction approach that facilitates establishing the targeted icon on the interface early in the pointing gesture. It employs a 3D vision sensory device to continuously track the pointing hand/finger in conjunction with suitable Bayesian prediction algorithms. The introduced technique can significantly reduce the pointing task completion time and the associated visual, cognitive and movement efforts, as well as enhance the selection accuracy. The substantial gains furnished and the pointing gesture characteristics are demonstrated using data collected in an instrumented vehicle.


IEEE Signal Processing Magazine | 2017

Intelligent Interactive Displays in Vehicles with Intent Prediction: A Bayesian framework

Bashar I. Ahmad; James K. Murphy; Simon J. Godsill; Patrick Langdon; Robert Hardy

Using an in-vehicle interactive display, such as a touch screen, typically entails undertaking a freehand pointing gesture and dedicating a considerable amount of attention that could otherwise be devoted to driving, with potential safety implications. Due to road and driving conditions, the user's input can also be subject to high levels of perturbation, resulting in erroneous selections. In this article, we give an overview of the novel concept of an intelligent predictive display in vehicles. It can infer, notably early in the pointing task and with high confidence, the item the user intends to select on the display from the tracked freehand pointing gesture and possibly other available sensory data. Accordingly, it simplifies and expedites the target acquisition (pointing and selection), thereby substantially reducing the time and effort required to interact with an in-vehicle display. As well as briefly addressing the various signal processing and human factors challenges posed by predictive displays in the automotive environment, the fundamental problem of intent inference is discussed, and a Bayesian formulation is introduced. Empirical evidence from data collected in instrumented cars is shown to demonstrate the usefulness and effectiveness of this solution.


International Conference on Acoustics, Speech, and Signal Processing | 2015

Destination inference using bridging distributions

Bashar I. Ahmad; James K. Murphy; Patrick Langdon; Robert Hardy; Simon J. Godsill

We propose a novel probabilistic inference approach that permits predicting, well in advance, the intended destination of a pointing gesture aimed at selecting an icon on an in-vehicle interactive display. It models the partial 3D pointing track as a Markov bridge terminating at a nominal destination. The solution introduced leads to a low-complexity Kalman-filter-type implementation and is applicable in other areas in which early detection of the destination of a tracked object is beneficial. Data collected in an instrumented vehicle illustrate that the proposed technique can infer the intent notably early in the pointing gesture. This can drastically reduce the pointing task time and visual-cognitive-manual attention required.
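The bridging idea, a tracked trajectory modelled as a process pinned to terminate at a candidate destination, can be illustrated with a simple Brownian-bridge stand-in: score a partial track under each destination's bridged transition density and compare. The discrete time step, assumed total gesture duration, and noise level are all hypothetical parameters, and this is a simplification of the paper's Markov-bridge formulation.

```python
import numpy as np

def bridge_log_likelihood(track, dest, t_total, sigma2=1e-3):
    """Log-likelihood of a partial pointing track under a Brownian bridge
    pinned at `dest` after an assumed total of `t_total` unit time steps.

    track : (T, dim) array of tracked positions, T < t_total
    dest  : (dim,) candidate destination
    """
    ll = 0.0
    dim = len(dest)
    for t in range(len(track) - 1):
        remain = t_total - t                          # steps left until the pin
        # Bridged one-step transition: the mean drifts toward the destination,
        # and the variance shrinks as the terminal time approaches.
        mean = track[t] + (dest - track[t]) / remain
        var = sigma2 * (remain - 1.0) / remain
        diff = track[t + 1] - mean
        ll += -0.5 * diff @ diff / var - 0.5 * dim * np.log(2.0 * np.pi * var)
    return ll
```

Evaluating this for every nominal destination and normalising the resulting likelihoods (with a prior) gives the early destination posterior the paper exploits.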


Automotive User Interfaces and Interactive Vehicular Applications | 2015

Intelligent in-vehicle touchscreen aware of the user intent for reducing distractions: a pilot study

Bashar I. Ahmad; Simon J. Godsill; Lee Skrypchuk; Patrick Langdon; Robert Hardy

Intent-aware displays aim to simplify and expedite the task of selecting an icon displayed on an in-vehicle touchscreen via a free hand pointing gesture, thus minimising the incurred effort and/or distractions. This is achieved by determining the user intent, with high confidence, notably early in the pointing gesture. This paper describes a pilot evaluative study of the benefits of employing the intent-aware display solution by assessing the workload associated with using an in-vehicle interactive display and the time required to accomplish the undertaken pointing tasks, with and without the intent prediction capability. The presented results are based on data collected in an instrumented car for 18 participants. They demonstrate that an intent-aware display significantly reduces the workload/effort of using an in-vehicle touchscreen (it is halved) and the duration of a pointing task (it is reduced by over 20%).


international conference on universal access in human-computer interaction | 2015

Intelligent Intent-Aware Touchscreen Systems Using Gesture Tracking with Endpoint Prediction

Bashar I. Ahmad; Patrick Langdon; Robert Hardy; Simon J. Godsill

Using an interactive display, such as a touchscreen, entails undertaking a pointing gesture and dedicating a considerable amount of attention to execute a selection task. In this paper, we give an overview of the concept of intent-aware interactive displays that can determine, early in the free hand pointing gesture, the icon/item the user intends to select on the touchscreen. This can notably reduce the pointing time, aid implementing effective selection facilitation routines and enhance the overall system accuracy as well as the user experience. Intent-aware displays employ a gesture tracking sensor in conjunction with novel probabilistic intent inference algorithms to predict the endpoint of a free hand pointing gesture. Real 3D pointing data is used to illustrate the usefulness and effectiveness of the proposed approach.


Archive | 2015

Control Apparatus and Related Method

Eduardo Dias; Robert Hardy; Sebastian Paszkowicz; Anna Gaszczak; Thomas Popham; George Alexander


Archive | 2015

Dynamic lighting apparatus and method

Sebastian Paszkowicz; George Alexander; Robert Hardy; Eduardo Dias; Anna Gaszczak; Thomas Popham


Archive | 2018

An Apparatus and a Method for Controlling a Head-Up Display of a Vehicle

Stuart White; Lee Skrypchuk; Claudia Krehl; Robert Hardy; Jim Braithwaite

Collaboration


Dive into Robert Hardy's collaboration.
