Lee Skrypchuk
Jaguar Land Rover
Publications
Featured research published by Lee Skrypchuk.
IEEE Transactions on Systems, Man, and Cybernetics | 2016
Bashar I. Ahmad; James K. Murphy; Patrick Langdon; Simon J. Godsill; Robert Hardy; Lee Skrypchuk
Using interactive displays, such as a touchscreen, in vehicles typically requires dedicating a considerable amount of visual as well as cognitive capacity and undertaking a hand pointing gesture to select the intended item on the interface. This can act as a distractor from the primary task of driving and consequently can have serious safety implications. Due to road and driving conditions, the user input can also be highly perturbed resulting in erroneous selections compromising the system usability. In this paper, we propose intent-aware displays that utilize a pointing gesture tracker in conjunction with suitable Bayesian destination inference algorithms to determine the item the user intends to select, which can be achieved with high confidence remarkably early in the pointing gesture. This can drastically reduce the time and effort required to successfully complete an in-vehicle selection task. In the proposed probabilistic inference framework, the likelihood of all the nominal destinations is sequentially calculated by modeling the hand pointing gesture movements as a destination-reverting process. This leads to a Kalman filter-type implementation of the prediction routine that requires minimal parameter training and has low computational burden; it is also amenable to parallelization. The substantial gains obtained using an intent-aware display are demonstrated using data collected in an instrumented vehicle driven under various road conditions.
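The abstract above describes modelling the pointing gesture as a destination-reverting process and scoring each nominal destination sequentially. The sketch below is a simplified, illustrative version of that idea (fixed noise scale, no filtering covariance, so not the paper's full Kalman-filter implementation): each candidate destination induces a reverting motion model, and per-step Gaussian innovation log-likelihoods are accumulated into a posterior. The parameter values are assumptions for illustration.

```python
import numpy as np

def destination_posteriors(trajectory, destinations, alpha=0.3, sigma=0.05):
    """Sequentially score candidate destinations for a pointing trajectory.

    Each destination d induces a simple destination-reverting model:
        x[t+1] = x[t] + alpha * (d - x[t]) + noise,  noise ~ N(0, sigma^2 I)
    Per-step Gaussian innovation log-likelihoods (constant terms dropped,
    since they are identical across destinations) are accumulated and
    normalised into a posterior over the nominal destinations.
    """
    log_post = np.zeros(len(destinations))          # uniform prior
    for t in range(len(trajectory) - 1):
        x, x_next = trajectory[t], trajectory[t + 1]
        for i, d in enumerate(destinations):
            pred = x + alpha * (d - x)              # reverting prediction
            innov = x_next - pred                   # innovation
            log_post[i] += -0.5 * np.sum(innov**2) / sigma**2
    log_post -= log_post.max()                      # numerical stability
    post = np.exp(log_post)
    return post / post.sum()
```

Because the likelihoods are updated one tracker frame at a time, the posterior can be read out continuously, which is what allows a confident prediction well before the finger reaches the screen.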
automotive user interfaces and interactive vehicular applications | 2015
Bashar I. Ahmad; Patrick Langdon; Simon J. Godsill; Robert Hardy; Lee Skrypchuk; Richard Donkor
With the proliferation of touchscreen technology, interactive displays are becoming an integrated part of the modern vehicle environment. However, due to road and driving conditions, the user input on such displays can be perturbed, resulting in erroneous selections. This paper describes an evaluative study of the usability and input performance of in-vehicle touchscreens. The analysis is based on data collected in instrumented cars driven under various road/driving conditions. We assess the frequency of failed selection attempts, the distances by which users miss the intended on-screen target, and the durations of the free-hand pointing gestures undertaken to accomplish the selection tasks. It is shown that the road/driving conditions can notably undermine the usability of an interactive display when the user input is perturbed, e.g. due to the vibrations and lateral accelerations experienced in the vehicle. The distance between the location of an erroneous on-screen selection and the intended endpoint on the display is closely related to the level of in-vehicle noise present. The study can inform graphical user interface design for the vehicle environment, where free-hand pointing gestures can be subject to varying levels of perturbation.
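The metrics assessed in this study (failed-selection rate, miss distance, gesture duration) can be sketched as a small summary function. This is an illustrative reconstruction, not the paper's analysis code; the acceptance-radius selection criterion is an assumption.

```python
import math

def selection_metrics(touches, target, radius):
    """Summarise touchscreen selection attempts against one intended target.

    touches : list of (x, y, duration_s) pointing-gesture endpoints
    target  : (x, y) centre of the intended on-screen item
    radius  : acceptance radius; a touch landing outside it counts as a miss
    Returns (miss_rate, mean_miss_distance, mean_duration).
    """
    misses, dists, durations = 0, [], []
    for x, y, dur in touches:
        d = math.hypot(x - target[0], y - target[1])  # endpoint error
        dists.append(d)
        durations.append(dur)
        if d > radius:
            misses += 1
    n = len(touches)
    return misses / n, sum(dists) / n, sum(durations) / n
```

Correlating the mean miss distance against a per-trial vibration or acceleration measure would then expose the noise/usability relationship the study reports.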
IEEE Transactions on Intelligent Transportation Systems | 2014
Genaro Rebolledo-Mendez; Angélica Reyes; Sebastian Paszkowicz; Mari Carmen Domingo; Lee Skrypchuk
Emerging applications using body sensor networks (BSNs) constitute a new trend in car safety. However, the integration of heterogeneous body sensors with vehicular ad hoc networks (VANETs) poses a challenge, particularly for the detection of human behavioral states that may impair driving. This paper proposes a detector of human emotions, of which tiredness and stress (tension) could be related to traffic accidents. We present an exploratory study demonstrating the feasibility of detecting one emotional state in real time using a BSN. Based on these results, we propose a middleware architecture that is able to detect emotions, which can be communicated via the onboard unit of a vehicle to city emergency services, VANETs, and roadside units, aimed at improving the driver's experience and at guaranteeing better security measures for the car driver.
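As a flavour of what real-time stress detection from a body sensor might look like, the sketch below applies a standard heart-rate-variability measure (RMSSD over RR intervals) with a coarse threshold rule. This is a minimal illustration under assumed values, not the detector or thresholds used in the paper.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms).

    A standard short-term heart-rate-variability measure; lower values
    are commonly associated with higher physiological stress.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_state(rr_intervals_ms, threshold_ms=20.0):
    """Very coarse rule: low HRV (RMSSD below threshold) -> 'stressed'.

    The 20 ms threshold is a placeholder; a deployed detector would be
    calibrated per driver and fuse multiple BSN channels.
    """
    return "stressed" if rmssd(rr_intervals_ms) < threshold_ms else "calm"
```

In the proposed architecture, the output of such a classifier is what the onboard unit would forward to emergency services or roadside units.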
Advances in Human-computer Interaction | 2012
Matthew J. Pitts; Lee Skrypchuk; Tom Wellings; Alex Attridge; Mark A. Williams
Touchscreen interfaces are widely used in modern technology, from mobile devices to in-car infotainment systems. However, touchscreens impose significant visual workload demands on the user which have safety implications for use in cars. Previous studies indicate that the application of haptic feedback can improve both performance of and affective response to user interfaces. This paper reports on and extends the findings of a 2009 study conducted to evaluate the effects of different combinations of touchscreen visual, audible, and haptic feedback on driving and task performance, affective response, and subjective workload; the initial findings of which were originally published in (M. J. Pitts et al., 2009). A total of 48 non-expert users completed the study. A dual-task approach was applied, using the Lane Change Test as the driving task and realistic automotive use case touchscreen tasks. Results indicated that, while feedback type had no effect on driving or task performance, preference was expressed for multimodal feedback over visual alone. Issues relating to workload and cross-modal interaction were also identified.
automotive user interfaces and interactive vehicular applications | 2014
Bashar I. Ahmad; Patrick Langdon; Simon J. Godsill; Robert Hardy; Eduardo Dias; Lee Skrypchuk
Interactive displays are becoming an integrated part of the modern vehicle environment. Their use typically entails dedicating a considerable amount of attention and undertaking a pointing gesture to select an interface item/icon displayed on a touchscreen. This can have serious safety implications for the driver. The pointing gesture can also be highly perturbed due to the road and driving conditions, resulting in erroneous selections. In this paper, we propose a probabilistic intent prediction approach that establishes the targeted icon on the interface early in the pointing gesture. It employs a 3D vision sensory device to continuously track the pointing hand/finger, in conjunction with suitable Bayesian prediction algorithms. The introduced technique can significantly reduce the pointing task completion time and the associated visual, cognitive and movement effort, as well as enhance the selection accuracy. The substantial gains furnished by this approach and the characteristics of the pointing gesture are demonstrated using data collected in an instrumented vehicle.
automotive user interfaces and interactive vehicular applications | 2013
Gary Burnett; Elizabeth Crundall; David R. Large; Glyn Lawson; Lee Skrypchuk
Touch screens are increasingly used within modern vehicles, providing the potential for a range of gestures to facilitate interaction under divided-attention conditions. This paper describes a study aiming to understand how drivers naturally make swipe gestures in a vehicle context when compared with a stationary setting. Twenty experienced drivers were asked to make a swipe gesture on a touch screen in a manner they felt was appropriate for a wide range of activate/deactivate, increase/decrease and next/previous tasks. Participants undertook the tasks either while driving a right-hand-drive, medium-fidelity simulator or while sitting stationary. Consensus emerged in the direction of swipes made for a relatively small number of increase/decrease and next/previous tasks, particularly those related to playing music. The physical action of a swipe made in different directions was found to affect the length and speed of the gesture. Finally, swipes were typically made more slowly in the driving situation, reflecting the reduced resources available in this context and/or the handedness of the participants. Conclusions are drawn regarding the future design of swipe gestures for interacting with in-vehicle touch screens.
automotive user interfaces and interactive vehicular applications | 2016
Bashar I. Ahmad; Patrick Langdon; Simon J. Godsill; Richard Donkor; Rebecca Wilde; Lee Skrypchuk
In this paper, we first give an overview of the predictive display concept, which aims to minimise the demand associated with interacting with in-vehicle displays, such as touchscreens, via free hand pointing gestures. It determines the item the user intends to select, early in the pointing gesture, and accordingly simplifies and expedites the target acquisition. A study to evaluate the impact of using a predictive touchscreen in a car is then presented. A mid-air pointing facilitation scheme is applied, such that the user does not have to physically touch the interactive surface. Instead, the predictive display auto-selects the predicted interface icon on behalf of the user, once the required level of inference certainty is achieved. The study results, which are based on data collected from 20 participants under various driving-road conditions, demonstrate that a predictive display can significantly reduce the workload, effort and duration of completing on-screen selection tasks in vehicles.
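The auto-select behaviour described above (fire the selection once a candidate's inference certainty crosses a threshold) can be sketched as a scan over per-frame posterior vectors. The threshold value and data shapes are assumptions for illustration.

```python
def auto_select(posterior_stream, items, threshold=0.95):
    """Mid-air auto-selection on reaching an inference-certainty threshold.

    posterior_stream : iterable of probability vectors, one per tracker
                       frame, each summing to 1 over the on-screen items
    items            : the selectable interface icons
    Returns (selected_item, frame_index) at the first frame where any
    item's probability reaches the threshold, or (None, None) if the
    gesture ends without sufficient certainty.
    """
    for t, post in enumerate(posterior_stream):
        best = max(range(len(items)), key=lambda i: post[i])
        if post[best] >= threshold:
            return items[best], t       # select without physical touch
    return None, None
```

Firing at the first threshold crossing is what shortens the selection task: the earlier in the gesture the certainty is reached, the larger the saving in time and effort.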
International Journal of Human-computer Interaction | 2018
David R. Large; Gary Burnett; Elizabeth Crundall; Editha van Loon; Ayse Leyla Eren; Lee Skrypchuk
Touchscreen human–machine interfaces (HMIs) are commonly employed as the primary control interface and touch-point of vehicles. However, there has been very little theoretical work to model the demand associated with such devices in the automotive domain. Instead, touchscreen HMIs intended for deployment within vehicles tend to undergo time-consuming and expensive empirical testing and user trials, typically requiring fully functioning prototypes, test rigs, and extensive experimental protocols. While such testing is invaluable and must remain within the normal design/development cycle, there are clear benefits, both fiscal and practical, to the theoretical modeling of human performance. We describe the development of a preliminary model of human performance that makes a priori predictions of the visual demand (total glance time, number of glances, and mean glance duration) elicited by in-vehicle touchscreen HMI designs, when used concurrently with driving. The model incorporates information theoretic components based on Hick–Hyman Law decision/search time and Fitts’ Law pointing time and considers anticipation afforded by structuring and repeated exposure to an interface. Encouraging validation results, obtained by applying the model to a real-world prototype touchscreen HMI, suggest that it may provide an effective design and evaluation tool, capable of making valuable predictions regarding the limits of visual demand/performance associated with in-vehicle HMIs, much earlier in the design cycle than traditional design evaluation techniques. Further validation work is required to explore the behavior associated with more complex tasks requiring multiple screen interactions, as well as other HMI design elements and interaction techniques. Results are discussed in the context of facilitating the design of in-vehicle touchscreen HMI to minimize visual demand.
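The information-theoretic core of such a model, a Hick–Hyman decision/search term plus a Fitts' Law pointing term, can be sketched as follows. The coefficient values here are illustrative placeholders, not the fitted parameters from the paper, and the sketch omits the anticipation/repeated-exposure component.

```python
import math

def task_time_estimate(n_items, dist_mm, width_mm,
                       hick=(0.2, 0.15), fitts=(0.1, 0.12)):
    """Predict selection-task time (s) for an in-vehicle touchscreen menu.

    Hick-Hyman Law: decision/search time grows with log2 of the number of
    equally likely alternatives.  Fitts' Law: pointing time grows with the
    index of difficulty log2(D/W + 1) for target distance D and width W.
    hick and fitts are (intercept, slope) pairs; the defaults are
    hypothetical and would be fitted from empirical glance/selection data.
    """
    a_h, b_h = hick
    a_f, b_f = fitts
    decision = a_h + b_h * math.log2(n_items)                  # Hick-Hyman
    pointing = a_f + b_f * math.log2(dist_mm / width_mm + 1)   # Fitts' Law
    return decision + pointing
```

Mapping the predicted task time onto glance metrics (e.g. splitting it into glances bounded by an off-road glance limit) is the step that turns this into the visual-demand prediction the paper validates.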
automotive user interfaces and interactive vehicular applications | 2015
Bashar I. Ahmad; Simon J. Godsill; Lee Skrypchuk; Patrick Langdon; Robert Hardy
Intent-aware displays aim to simplify and expedite the task of selecting an icon displayed on an in-vehicle touchscreen via a free hand pointing gesture, thus minimising the incurred effort and/or distraction. This is achieved by determining the user's intent, with high confidence, notably early in the pointing gesture. This paper describes a pilot evaluative study of the benefits of employing the intent-aware display solution by assessing the workload associated with using an in-vehicle interactive display and the time required to accomplish the undertaken pointing tasks, with and without the intent prediction capability. The presented results are based on data collected in an instrumented car from 18 participants. They demonstrate that an intent-aware display significantly reduces the workload/effort of using an in-vehicle touchscreen (it is halved) and the duration of a pointing task (it is reduced by over 20%).
SAE 2015 World Congress & Exhibition | 2015
Matthew J. Pitts; Elvir Hasedžić; Lee Skrypchuk; Alex Attridge; Mark A. Williams
The advent of 3D displays offers Human-Machine Interface (HMI) designers and engineers new opportunities to shape the user's experience of information within the vehicle. However, the application of 3D displays to the in-vehicle environment introduces a number of new parameters that must be carefully considered in order to optimise the user experience. In addition, there is potential for 3D displays to increase driver inattention, either by diverting the driver's attention away from the road or by increasing the time taken to assimilate information. Manufacturers must therefore take great care in establishing the 'dos' and 'don'ts' of 3D interface design for the automotive context, providing a sound basis upon which HMI designers can innovate. This paper describes the approach and findings of a three-part investigation into the use of 3D displays in the instrument cluster of a road car, the overall aim of which was to define the boundaries of the 3D HMI design space. A total of 73 participants were engaged over three studies. Findings indicate that users can identify depth more quickly and accurately when rendered in 3D, indicating potential for future applications using the depth dimension to relay information. Image quality was found to degrade with increasing parallax, and indications of a fatigue effect emerged with continued exposure. Finally, a relationship between minimum 3D offset, parallax position and object type was identified.