Publications


Featured research published by Jonathan Dobres.


Ergonomics | 2016

Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

Bruce Mehler; David G. Kidd; Bryan Reimer; Ian J. Reagan; Jonathan Dobres; Anne Taylor McCartt

One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or on how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed tasks to be completed with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with greater reductions for drivers in the Equinox. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) needs to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice interfaces can reduce visual demand but do not eliminate it, and all types of demand need to be taken into account in a comprehensive evaluation.


Ergonomics | 2016

Multi-modal demands of a smartphone used to place calls and enter addresses during highway driving relative to two embedded systems

Bryan Reimer; Bruce Mehler; Ian J. Reagan; David G. Kidd; Jonathan Dobres

There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands, resulting in shorter eyes-off-road time than with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to ‘just driving’, but were not significantly different between the smartphone and the embedded systems. Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system design can significantly affect not only the demands placed on drivers, but also the successful completion of tasks.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Comparing the Demands of Destination Entry using Google Glass and the Samsung Galaxy S4

Niek Beckers; Sam Schreiner; Pierre Bertrand; Bryan Reimer; Bruce Mehler; Daniel Munger; Jonathan Dobres

A driving simulation study assessed the impact of vocally entering an alphanumeric destination into Google Glass relative to voice- and touch-entry methods using a handheld Samsung Galaxy S4 smartphone. Driving performance (standard deviation of lateral lane position and of longitudinal velocity) and responses to a light-based detection response task (DRT) were recorded for a gender-balanced sample of 24 young adult drivers. Task completion time and subjective workload ratings were also measured. Destination entry with Google Glass had a statistically higher miss rate than the Samsung Galaxy S4 voice interface, but took less time to complete, and the two methods received comparable workload ratings from participants. In agreement with previous work, both voice interfaces performed significantly better than touch entry; this was seen in workload ratings, task duration, lateral lane control, and DRT metrics. Finally, irrespective of device or modality, destination entry significantly decreased responsiveness to events in the forward scene (as measured by DRT reaction time) compared with baseline driving.
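
As a rough illustration of the measures referenced above, the sketch below computes the standard deviation of lateral lane position (SDLP) and summarises detection response task (DRT) misses and reaction times from simulator logs. The input formats and the 2.5 s response window are assumptions loosely following common DRT practice, not parameters reported in the paper.

```python
import numpy as np

def sdlp(lane_position_m):
    """Standard deviation of lateral lane position (SDLP), in metres.

    `lane_position_m` is assumed to be a regularly sampled series of
    lateral offsets from the lane centre logged by the simulator.
    """
    return float(np.std(lane_position_m, ddof=1))

def drt_summary(stimulus_onsets_s, response_times_s, max_rt_s=2.5):
    """Miss rate and mean reaction time for a detection response task.

    A stimulus counts as detected if the first response at or after its
    onset arrives within `max_rt_s` seconds; otherwise it is a miss.
    The 2.5 s window is an assumed convention, not taken from the paper.
    """
    responses = np.sort(np.asarray(response_times_s, dtype=float))
    rts, misses = [], 0
    for onset in stimulus_onsets_s:
        i = np.searchsorted(responses, onset)   # first response at/after onset
        if i < len(responses) and responses[i] - onset <= max_rt_s:
            rts.append(responses[i] - onset)
        else:
            misses += 1
    n = len(stimulus_onsets_s)
    return {"miss_rate": misses / n if n else float("nan"),
            "mean_rt_s": float(np.mean(rts)) if rts else float("nan")}

# Example: three DRT stimuli, one of which goes unanswered within the window
print(drt_summary([10.0, 25.0, 40.0], [10.6, 41.1]))  # miss_rate = 1/3
```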


Ergonomics | 2014

Assessing the impact of typeface design in a text-rich automotive user interface

Bryan Reimer; Bruce Mehler; Jonathan Dobres; Joseph F. Coughlin; Steve Matteson; David Gould; Nadine Chahine; Vladimir Levantovsky

Text-rich driver–vehicle interfaces are increasingly common in new vehicles, yet the effects of different typeface characteristics on task performance in this context of brief off-road glances remain sparsely examined. Subjects completed menu selection tasks in a driving simulator. Menu text was set either in a ‘humanist’ or a ‘square grotesque’ typeface. Among men, use of the humanist typeface resulted in a 10.6% reduction in total glance time compared to the square grotesque typeface. Total response time and number of glances showed similar reductions. The impact of typeface was either more modest or not apparent for women. Error rates for both males and females were 3.1% lower for the humanist typeface. This research suggests that optimised typefaces may mitigate some interface demands. Future work will need to assess whether other typeface characteristics can be optimised to further reduce demand, improve legibility, increase usability and help meet new governmental distraction guidelines. Practitioner Summary: Text-rich in-vehicle interfaces are increasingly common, but the effects of typeface on task performance remain sparsely studied. We show that among male drivers, menu selection tasks are completed with 10.6% less visual glance time when text is displayed in a ‘humanist’ typeface, as compared to a ‘square grotesque’.


Ergonomics | 2016

Utilising psychophysical techniques to investigate the effects of age, typeface design, size and display polarity on glance legibility

Jonathan Dobres; Nadine Chahine; Bryan Reimer; David Gould; Bruce Mehler; Joseph F. Coughlin

Psychophysical research on text legibility has historically investigated factors such as size, colour and contrast, but there has been relatively little direct empirical evaluation of typographic design itself, particularly in the emerging context of glance reading. In the present study, participants performed a lexical decision task controlled by an adaptive staircase method. Two typefaces, a ‘humanist’ and ‘square grotesque’ style, were tested. Study I examined positive and negative polarities, while Study II examined two text sizes. Stimulus duration thresholds were sensitive to differences between typefaces, polarities and sizes. Typeface also interacted significantly with age, particularly for conditions with higher legibility thresholds. These results are consistent with previous research assessing the impact of the same typefaces on interface demand in a simulated driving environment. This simplified methodology of assessing legibility differences can be adapted to investigate a wide array of questions relevant to typographic and interface designs. Practitioner Summary: A method is described for rapidly investigating relative legibility of different typographical features. Results indicate that during glance-like reading induced by the psychophysical technique and under the lighting conditions considered, humanist-style type is significantly more legible than a square grotesque style, and that black-on-white text is significantly more legible than white-on-black.
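
For readers unfamiliar with adaptive staircases, the sketch below shows one common variant: a 3-down/1-up rule that adjusts stimulus presentation time after each yes/no lexical decision trial and estimates a duration threshold from the staircase reversals. The rule, step size, starting value, and reversal-based averaging are generic illustrations and are not claimed to match the exact procedure used in these studies.

```python
import math
import random

def run_staircase(respond_correctly, start_ms=500.0, step_ms=25.0,
                  floor_ms=50.0, max_reversals=12):
    """Minimal 3-down/1-up staircase over stimulus presentation time.

    `respond_correctly(duration_ms)` should run one lexical decision
    trial at the given duration and return True for a correct answer.
    Three correct answers in a row shorten the duration (harder); any
    error lengthens it (easier). The threshold is estimated as the mean
    duration at the last few reversals.
    """
    duration = start_ms
    run_of_correct = 0
    last_direction = 0              # +1 = last change made it harder, -1 = easier
    reversals = []

    while len(reversals) < max_reversals:
        if respond_correctly(duration):
            run_of_correct += 1
            if run_of_correct == 3:             # step down: shorter duration
                run_of_correct = 0
                if last_direction == -1:
                    reversals.append(duration)  # direction changed: a reversal
                last_direction = +1
                duration = max(floor_ms, duration - step_ms)
        else:
            run_of_correct = 0                  # step up: longer duration
            if last_direction == +1:
                reversals.append(duration)
            last_direction = -1
            duration += step_ms

    return sum(reversals[-6:]) / 6.0

# Simulated observer whose accuracy rises smoothly with duration (threshold near 180 ms)
observer = lambda d: random.random() < 0.5 + 0.5 / (1.0 + math.exp(-(d - 180.0) / 30.0))
print(run_staircase(observer))
```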


Automotive User Interfaces and Interactive Vehicular Applications | 2014

A Pilot Study Measuring the Relative Legibility of Five Simplified Chinese Typefaces Using Psychophysical Methods

Jonathan Dobres; Bryan Reimer; Bruce Mehler; Nadine Chahine; David Gould

In-vehicle user interfaces increasingly rely on screens filled with digital text to display information to the driver. As these interfaces have the potential to increase the demands placed upon the driver, it is important to design them in a way that minimizes attention time to the device and thus keeps the driver focused on the road. Previous research has shown that even relatively subtle differences in the design of the on-screen typeface can influence to-device glance time in a measurable and meaningful way. Here we outline a methodology for rapidly and flexibly investigating the legibility of typefaces in glance-like contexts, and apply this method to a comparison of five Simplified Chinese typefaces. We find that the legibility of the typefaces, measured as the minimum presentation time needed to read character strings and respond to a yes/no lexical decision task, is sensitive to differences in the typefaces’ design characteristics. The most legible typeface under study could be read 33.1% faster than the least legible typeface in this glance-induced context. Benefits and limitations of the methodology are discussed.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Comparing the Relative Impact of Smartwatch and Smartphone Use While Driving on Workload, Attention, and Driving Performance

Aubrey Samost; David Perlman; August G. Domel; Bryan Reimer; Bruce Mehler; Alea Mehler; Jonathan Dobres; Thomas McWilliams

A simulator study evaluated the extent to which the use of a smartwatch to initiate phone calls while driving impacts driver workload, attention, and performance, relative to visual-manual (VM) and auditory-vocal (AV) calling methods on a smartphone. Participants completed four calling tasks using each method while driving in a simulator and completing a remote detection response task (R-DRT). Among the 36 participants evaluated, R-DRT miss rates and reaction time were comparable between AV calling on the smartwatch and smartphone, but significantly higher using VM calling on the smartphone. Participants also exhibited more erratic driving behavior (lane position deviation and major steering wheel reversals) with smartphone VM calling compared to both AV calling methods. Finally, participants rated AV calling on both devices as entailing lower workload than VM calling on the smartphone. Overall, few differences emerged for the metrics reported between voice calling on a smartphone versus a smartwatch while driving.
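
The ‘major steering wheel reversals’ measure mentioned above can be approximated by counting direction changes in the steering-angle trace that exceed a minimum amplitude. The sketch below uses a 6-degree gap and no filtering; both are assumptions based on common practice rather than the study's exact definition.

```python
def major_steering_reversals(angle_deg, gap_deg=6.0):
    """Count steering reversals whose amplitude exceeds `gap_deg`.

    `angle_deg` is a time series of steering wheel angles in degrees.
    A reversal is counted each time the wheel changes direction after
    travelling at least `gap_deg` degrees in the previous direction
    (simple hysteresis on the raw signal).
    """
    reversals = 0
    direction = 0            # +1 tracking an upward swing, -1 downward, 0 unknown
    extreme = angle_deg[0]   # most extreme angle reached in the current swing

    for a in angle_deg[1:]:
        if direction >= 0:
            if a > extreme:
                extreme, direction = a, +1
            elif direction == +1 and extreme - a >= gap_deg:
                reversals += 1                  # swing reversed downward
                extreme, direction = a, -1
        if direction <= 0:
            if a < extreme:
                extreme, direction = a, -1
            elif direction == -1 and a - extreme >= gap_deg:
                reversals += 1                  # swing reversed upward
                extreme, direction = a, +1
    return reversals

# Example: a large swing up, then down, then up again -> 2 reversals counted
print(major_steering_reversals([0, 4, 8, 7.5, 8, 2, 1, 1.5, 9, 10]))
```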


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

A Driving Simulation Study Examining Destination Entry with iOS 5 Google Maps and a Garmin Portable GPS System

Celena Dopart; Anders Häggman; Cameron Thornberry; Bruce Mehler; Jonathan Dobres; Bryan Reimer

A simulation study compared 23 young adult drivers’ task completion time, mean glance time, number of glances, and percentage of long glances while performing a navigation entry task with a Garmin portable GPS system and a mobile navigation application (iOS 5 Google Maps) on an iPod Touch. We compared participants’ performance on these tasks against the National Highway Traffic Safety Administration (NHTSA) eye-glance acceptance criteria. We found that, irrespective of the device used, no participant was able to complete the task within the recommended total time window of 12 seconds. When entering a destination into the iOS interface, only 73.9% of the drivers met the NHTSA criterion for long-duration glances; with the Garmin system, 91.3% of the participants met this criterion. All participants were able to maintain adequate mean off-road glance durations. Finally, we compared the NHTSA-recommended method of assessing all off-road glances with more traditional methods of assessing glances only to the task interface. Differences between the two methods are discussed.
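
For context, the per-participant acceptance test described above can be sketched as a simple check of one task's off-road glance durations. The 2.0 s long-glance limit, the 15% long-glance share, and the 12.0 s total eyes-off-road time are the commonly cited thresholds from NHTSA's visual-manual distraction guidelines, assumed here rather than quoted from the paper; the full guidelines also require that a minimum proportion of tested participants pass.

```python
def meets_glance_criteria(offroad_glances_s,
                          long_glance_s=2.0,
                          max_long_share=0.15,
                          max_teort_s=12.0):
    """Check one participant's off-road glance durations for one task.

    Criteria (assumed thresholds, see lead-in):
      - no more than 15% of off-road glances longer than 2.0 s
      - mean off-road glance duration of at most 2.0 s
      - total eyes-off-road time (TEORT) of at most 12.0 s
    """
    n = len(offroad_glances_s)
    if n == 0:
        return True  # no off-road glances at all trivially passes
    long_share = sum(d > long_glance_s for d in offroad_glances_s) / n
    mean_glance = sum(offroad_glances_s) / n
    teort = sum(offroad_glances_s)
    return (long_share <= max_long_share
            and mean_glance <= long_glance_s
            and teort <= max_teort_s)

# One of six glances exceeds 2.0 s (~16.7% > 15%), so this task attempt fails
print(meets_glance_criteria([1.2, 0.8, 2.4, 1.0, 1.5, 0.9]))  # False
```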


Transportation Research Record | 2017

Empirical Assessment of the Legibility of the Highway Gothic and Clearview Signage Fonts

Jonathan Dobres; Susan T. Chrysler; Benjamin Wolfe; Nadine Chahine; Bryan Reimer

Older drivers represent the fastest-growing segment of the driving population. Aging is associated with well-known declines in reaction time and visual processing, and, as such, future roadway infrastructure and related design considerations will need to accommodate this population. One potential area of concern is the legibility of highway signage. FHWA recently revoked an interim approval that allowed optional use of the Clearview typeface in place of the traditional Highway Gothic typeface for signage. The legibility of the two fonts was assessed with color combinations that maximized the contrast (positive or negative) or approximated a color configuration used in highway signage. Psychophysical techniques were used to establish thresholds for the time needed to decide accurately, under glance-like reading conditions, whether a string of letters was a word, as a proxy for legibility. These thresholds were lower for Clearview (indicating superior legibility) than for Highway Gothic across all conditions. Legibility thresholds were lowest for negative-contrast conditions and highest for positive-contrast conditions, with colored highway signs falling roughly between the two extremes. These thresholds also increased significantly across the age range studied. The method used to investigate the legibility of signage fonts adds methodological diversity to the literature, along with evidence supporting the superior legibility of the Clearview font over Highway Gothic. The results do not suggest that the Clearview typeface is the optimal solution for all signage, but they do indicate that additional scientific evaluations of signage legibility are warranted in different operating contexts.


Automotive User Interfaces and Interactive Vehicular Applications | 2016

The Effect of Font Weight and Rendering System on Glance-Based Text Legibility

Jonathan Dobres; Bryan Reimer; Nadine Chahine

In-vehicle user interfaces increasingly rely on digital text to display information to the driver. Led by Apple's iOS, thin, lightweight typography has become increasingly popular in cutting-edge HMI designs. The legibility trade-offs of lightweight typography are sparsely studied, particularly in the glance-like reading scenarios necessitated by driving. Previous research has shown that even relatively subtle differences in the design of the on-screen typeface can influence to-device glance time in a measurable and meaningful way. Here we investigate the relative legibility of four different weights (line thicknesses) of type under two different rendering systems (suboptimal and optimal rendering). Results indicate that under suboptimal rendering, the lightest-weight typeface renders poorly and is associated with markedly degraded legibility. Under optimal rendering, lighter-weight typefaces show enhanced legibility compared to heavier typefaces. The reasons for this pattern of results, and its implications for design considerations in modern HMIs, are discussed.

Collaboration


Dive into Jonathan Dobres's collaborations.

Top Co-Authors

Bryan Reimer (Massachusetts Institute of Technology)
Bruce Mehler (Massachusetts Institute of Technology)
Nadine Chahine (Massachusetts Institute of Technology)
Joseph F. Coughlin (Massachusetts Institute of Technology)
Benjamin Wolfe (University of California)
Thomas McWilliams (Massachusetts Institute of Technology)
Alea Mehler (Massachusetts Institute of Technology)
Daniel Munger (Massachusetts Institute of Technology)
Hale McAnulty (Massachusetts Institute of Technology)
Ruth Rosenholtz (Massachusetts Institute of Technology)