Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pavlo Bazilinskyy is active.

Publication


Featured research published by Pavlo Bazilinskyy.


Applied Ergonomics | 2017

Analyzing crowdsourced ratings of speech-based take-over requests for automated driving

Pavlo Bazilinskyy; J.C.F. de Winter

Take-over requests in automated driving should fit the urgency of the traffic situation. The robustness of various published research findings on the valuations of speech-based warning messages is unclear. This research aimed to establish how people value speech-based take-over requests as a function of speech rate, background noise, spoken phrase, and the speaker's gender and emotional tone. By means of crowdsourcing, 2669 participants from 95 countries listened to a random 10 out of 140 take-over requests, and rated each take-over request on urgency, commandingness, pleasantness, and ease of understanding. Our results replicate several published findings, in particular that an increase in speech rate results in a monotonic increase of perceived urgency. The female voice was easier to understand than a male voice when there was a high level of background noise, a finding that contradicts the literature. Moreover, a take-over request spoken with an Indian accent was found to be easier to understand by participants from India than by participants from other countries. Our results replicate effects in the literature regarding speech-based warnings, and shed new light on effects of background noise, gender, and nationality. The results may have implications for the selection of appropriate take-over requests in automated driving. Additionally, our study demonstrates the promise of crowdsourcing for testing human factors and ergonomics theories with large sample sizes.


Human Factors | 2018

Crowdsourced Measurement of Reaction Times to Audiovisual Stimuli With Various Degrees of Asynchrony

Pavlo Bazilinskyy; Joost C. F. de Winter

Objective: This study was designed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA) using a large sample of crowdsourcing respondents. Background: Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously and that SOA causes an increase in reaction time, this increase being dependent on stimulus intensity. Research on audiovisual SOA has been conducted with small numbers of participants. Method: Participants (N = 1,823) each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of US$0.20 per participant. Results were verified with a local Web-in-lab study (N = 34). Results: The results replicated past research, with a V shape of mean reaction time as a function of SOA, the V shape being stronger for lower-intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% was hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the Web-in-lab study. Conclusion: Crowdsourcing is a promising medium for reaction time research that involves small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism and also by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention. Application: The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
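The conclusion above attributes the V-shaped pattern of mean reaction time across SOA levels to an independent-channels (race) mechanism. A minimal simulation sketch of that idea follows; the gamma-distributed channel processing times and all numeric parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated trials per SOA level

def channel_rt(shift, shape, scale, size):
    # Hypothetical shifted-gamma processing time for one channel (ms).
    return shift + rng.gamma(shape, scale, size)

# SOA = auditory onset minus visual onset (ms); time 0 = first stimulus onset.
for soa in np.arange(-500, 501, 100):
    t_aud = max(soa, 0) + channel_rt(100, 4.0, 25.0, n)   # auditory finish times
    t_vis = max(-soa, 0) + channel_rt(150, 4.0, 30.0, n)  # visual finish times
    # Independent-channels (race) rule: respond to whichever channel
    # finishes first, measured from the onset of the first stimulus.
    mean_rt = np.minimum(t_aud, t_vis).mean()
    print(f"SOA {soa:+5d} ms -> mean RT {mean_rt:6.1f} ms")
```

At SOA = 0 the minimum over two finishing times produces statistical facilitation, so the simulated mean RT is fastest there and rises toward the single-channel means at large positive or negative SOAs, reproducing the V shape described in the abstract.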


International Conference on Applied Human Factors and Ergonomics | 2017

Blind Driving by Means of a Steering-Based Predictor Algorithm

Pavlo Bazilinskyy; Charles Beaumont; Xander van der Geest; Reinier de Jonge; Koen van der Kroft; Joost C. F. de Winter

The aim of this work was to develop and empirically test different algorithms for a lane-keeping assistance system that supports drivers by means of a tone when the car is about to deviate from its lane. These auditory assistance systems were tested in a driving simulator with its screens shut down, so that the participants used auditory feedback only. Five participants drove with a previously published algorithm that predicted the future position of the car based on the current velocity vector, and three new algorithms that predicted the future position based on the momentary speed and steering angle. Results of a total of 5 h of driving across participants showed that, with extensive practice and knowledge of the system, it is possible to drive on a track with sharp curves for 5 min without leaving the road. Future research should aim to improve the intuitiveness of the auditory feedback.
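The published baseline extrapolates the car's position along its current velocity vector, whereas the new algorithms also use the momentary steering angle. A minimal sketch of the two prediction strategies, assuming a kinematic bicycle model and hypothetical constants (the paper does not specify its exact formulas):

```python
import math

WHEELBASE = 2.7         # m, assumed vehicle wheelbase
LOOKAHEAD = 1.5         # s, assumed prediction horizon
LANE_HALF_WIDTH = 1.75  # m, assumed half lane width

def predict_velocity_vector(x, y, heading, speed, t=LOOKAHEAD):
    """Baseline: extrapolate straight along the current velocity vector."""
    return (x + speed * t * math.cos(heading),
            y + speed * t * math.sin(heading))

def predict_steering_based(x, y, heading, speed, steer_angle, t=LOOKAHEAD):
    """Steering-based predictor: integrate a kinematic bicycle model so the
    predicted path curves with the momentary steering angle."""
    yaw_rate = speed * math.tan(steer_angle) / WHEELBASE
    if abs(yaw_rate) < 1e-6:           # effectively driving straight
        return predict_velocity_vector(x, y, heading, speed, t)
    radius = speed / yaw_rate          # signed turn radius
    new_heading = heading + yaw_rate * t
    return (x + radius * (math.sin(new_heading) - math.sin(heading)),
            y - radius * (math.cos(new_heading) - math.cos(heading)))

def lane_departure_tone(predicted_lateral_offset):
    """Sound the warning tone when the predicted position leaves the lane."""
    return abs(predicted_lateral_offset) > LANE_HALF_WIDTH
```

Because the steering-based predictor bends the predicted path with the current steering input, it can anticipate departures in curves that a straight velocity-vector extrapolation would miss.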


Systems, Man and Cybernetics | 2016

Object-alignment performance in a head-mounted display versus a monitor

Pavlo Bazilinskyy; Natalia Kovacsova; Amir Al Jawahiri; Pieter Kapel; Joppe Mulckhuyse; Sjors Wagenaar; Joost C. F. de Winter

Head-mounted displays (HMDs) offer immersion and binocular disparity. This study investigated whether an HMD yields better object-alignment performance than a conventional monitor in virtual environments that are rich in pictorial depth cues. To determine the effects of immersion and disparity separately, three hardware setups were compared: 1) a conventional computer monitor, yielding low immersion, 2) an HMD with binocular-vision settings (HMD stereo), and 3) an HMD with the same image presented to both eyes (HMD mono). Two virtual environments were used: a street environment in which two cars had to be aligned (target distance of about 15 m) and an office environment in which two books had to be aligned (target distance of about 0.7 m, at which binocular depth cues were expected to be important). Twenty males (mean age = 21.2, SD age = 1.6) each completed 10 object-alignment trials for each of the six conditions. The results revealed no statistically significant differences in object-alignment performance between the three hardware setups. A self-report questionnaire showed that participants felt more involved in the virtual environment and experienced more oculomotor discomfort with the HMD than with the monitor.


PeerJ | 2015

Auditory interfaces in automated driving: an international survey

Pavlo Bazilinskyy; Joost C. F. de Winter


Procedia Manufacturing | 2015

An International Crowdsourcing Study into People's Statements on Fully Automated Driving

Pavlo Bazilinskyy; Miltos Kyriakidis; Joost C. F. de Winter


Applied Ergonomics | 2017

Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop

Sebastiaan M. Petermeijer; Pavlo Bazilinskyy; Klaus Bengler; Joost C. F. de Winter


Transportation Research Part F-traffic Psychology and Behaviour | 2018

Take-over requests in highly automated driving: A crowdsourcing survey on auditory, vibrotactile, and visual displays

Pavlo Bazilinskyy; S.M. Petermeijer; V. Petrovych; Dimitra Dodou; J.C.F. de Winter


IFAC-PapersOnLine | 2016

Blind driving by means of auditory feedback

Pavlo Bazilinskyy; Lars van der Geest; Stefan van Leeuwen; Bart Numan; Joris Pijnacker; Joost C. F. de Winter


Archive | 2018

Graded auditory feedback based on headway: An on-road pilot study

Pavlo Bazilinskyy; J.C.J. Stapel; C.L.A. de Koning; H. Lingmont; T.S. de Lint; T.C. van der Sijs; F.C. van den Ouden; F. Anema; J.C.F. de Winter

Collaboration


Dive into Pavlo Bazilinskyy's collaborations.

Top Co-Authors

Joost C. F. de Winter (Delft University of Technology)
Amir Al Jawahiri (Delft University of Technology)
Bart Numan (Delft University of Technology)
Charles Beaumont (Delft University of Technology)
Coen Berssenbrugge (Delft University of Technology)
Dimitra Dodou (Delft University of Technology)
Hashim Quraishi (Delft University of Technology)
Jasper Binda (Delft University of Technology)