
Publication


Featured research published by Myounghoon Jeon.


Human Factors | 2013

Spearcons (Speech-Based Earcons) Improve Navigation Performance in Advanced Auditory Menus

Bruce N. Walker; Jeffrey Lindsay; Amanda Nance; Yoko Nakano; Dianne K. Palladino; Tilman Dingler; Myounghoon Jeon

Objective: The goal of this project is to evaluate a new auditory cue, which the authors call spearcons, in comparison to other auditory cues, with the aim of improving auditory menu navigation. Background: With the shrinking displays of mobile devices and increasing technology use by visually impaired users, it becomes important to improve the usability of non-graphical interfaces such as auditory menus. Nonspeech sounds such as auditory icons (i.e., representative real-world sounds of objects or events) and earcons (i.e., brief musical melody patterns) have been proposed to enhance menu navigation. To compensate for the weaknesses of traditional nonspeech auditory cues, the authors developed spearcons by speeding up a spoken phrase, even to the point where it is no longer recognized as speech. Method: The authors conducted five empirical experiments. In Experiments 1 and 2, they measured menu navigation efficiency and accuracy across cue types. In Experiments 3 and 4, they evaluated the learning rates of the cues and of speech itself. In Experiment 5, they assessed spearcon enhancements compared with plain TTS (text-to-speech rendering of the written menu items) in a two-dimensional auditory menu. Results: Spearcons outperformed traditional and newer hybrid auditory cues in navigation efficiency, accuracy, and learning rate. Moreover, spearcons showed learnability comparable to that of normal speech and led to better performance than speech-only auditory cues in two-dimensional menu navigation. Conclusion: These results show that spearcons can be more effective than previous auditory cues in menu-based interfaces. Application: Spearcons broaden the taxonomy of nonspeech auditory cues, and users can benefit from their application in real devices.
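For concreteness, here is a minimal sketch of how a spearcon-like cue could be generated from a recorded spoken menu item by time compression, assuming Python with the librosa and soundfile libraries; the file names and the compression rate are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: produce a spearcon-like cue by time-compressing
# a recorded spoken menu item. File names and the compression rate are
# hypothetical examples, not values from the paper.
import librosa
import soundfile as sf

def make_spearcon(tts_wav_path: str, out_path: str, rate: float = 2.5) -> None:
    """Speed up a spoken phrase without changing its pitch.

    A rate well above 1.0 shortens the audio to the point where it is no
    longer intelligible as speech, which is the core idea behind spearcons.
    """
    y, sr = librosa.load(tts_wav_path, sr=None)           # load the spoken phrase
    y_fast = librosa.effects.time_stretch(y, rate=rate)   # phase-vocoder time compression
    sf.write(out_path, y_fast, sr)                        # save the resulting cue

# Hypothetical usage:
# make_spearcon("menu_item_settings.wav", "spearcon_settings.wav", rate=2.5)
```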


Automotive User Interfaces and Interactive Vehicular Applications | 2009

Enhanced auditory menu cues improve dual task performance and are preferred with in-vehicle technologies

Myounghoon Jeon; Benjamin K. Davison; Michael A. Nees; Jeff Wilson; Bruce N. Walker

Auditory display research for driving has mainly focused on collision warning signals, and recent studies on auditory in-vehicle information presentation have examined only a limited range of tasks (e.g., cell phone operation or verbal tasks such as reading digit strings). The present study used a dual task paradigm to evaluate a plausible scenario in which users navigated a song list. We applied enhanced auditory menu navigation cues, including spearcons (i.e., compressed speech) and a spindex (i.e., a speech index that uses brief audio cues to communicate the user's position in a long menu list). Twenty-four undergraduates navigated through an alphabetized list of 150 song titles, rendered as an auditory menu, while they concurrently played a simple perceptual-motor ball-catching game. The menu was presented with text-to-speech (TTS) alone, TTS plus one of three types of enhanced auditory cues, or no sound at all. Performance on both the primary task (success rate in the game) and the secondary task (menu search time) was better with the auditory menus than with no sound. Subjective workload scores (NASA TLX) and user preferences favored the enhanced auditory cue types. Results are discussed in terms of multiple resources theory and practical IVT design applications.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2009

“Spindex”: Accelerated Initial Speech Sounds Improve Navigation Performance in Auditory Menus

Myounghoon Jeon; Bruce N. Walker

Users interact with mobile devices through menus, which can include many items. Auditory menus can supplement or even replace visual menus. Unfortunately, little research has been devoted to enhancing the usability of large auditory menus. We evaluated a novel auditory menu enhancement called a “spindex” (i.e., speech index), in which brief audio cues inform users of their position in a long menu. In the current implementation, each item in a menu is preceded by a sound based on the item's initial letter. Twenty-five undergraduates navigated through an alphabetized contact list of 50 or 150 names. The menu was presented with text-to-speech (TTS) alone or TTS plus spindex, and with the visual menu displayed or not. Search time was faster with the spindex-enhanced menu, especially for long lists. Subjective ratings also favored the spindex. Results are discussed in terms of theory and practical applications.
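As a rough illustration of the spindex concept described in this abstract, the sketch below precedes each spoken menu item with a brief cue derived from its initial letter; the audio functions are hypothetical placeholders rather than any published implementation.

```python
# Minimal sketch of the spindex idea: play a brief cue based on the item's
# initial letter before speaking the item itself. The cue and TTS functions
# are hypothetical placeholders, not a real audio API.

def play_initial_letter_cue(letter: str) -> None:
    """Placeholder: play a short pre-recorded sound for this letter."""
    print(f"[cue: {letter}]")

def speak_item(item: str) -> None:
    """Placeholder: render the full item name with text-to-speech."""
    print(f"TTS: {item}")

def present_spindex_menu(items: list[str]) -> None:
    """Present an alphabetized menu, preceding each item with its spindex cue."""
    for item in sorted(items, key=str.casefold):
        play_initial_letter_cue(item[0].upper())  # brief positional cue ("A", "B", ...)
        speak_item(item)                          # then the spoken menu item

present_spindex_menu(["Brown Eyed Girl", "All My Loving", "Across the Universe"])
```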


ACM Transactions on Accessible Computing | 2011

Spindex (Speech Index) Improves Auditory Menu Acceptance and Navigation Performance

Myounghoon Jeon; Bruce N. Walker

Users interact with mobile devices through menus, which can include many items. Auditory menus have the potential to make those devices more accessible to a wide range of users. However, auditory menus are a relatively new concept, and there are few guidelines that describe how to design them. In this paper, we detail how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, we examine how to optimize the designs of a new contextual cue, called “spindex” (i.e., speech index). We developed and evaluated various design alternatives for spindex and iteratively refined the design with sighted users and visually impaired users. As a result, the “attenuated” spindex was the best in terms of preference as well as performance, across user groups. Nevertheless, sighted and visually impaired participants showed slightly different responses and feedback. Results are discussed in terms of acoustical theory, practical display design, and assistive technology design.


International Journal of Human-Computer Interaction | 2015

Menu Navigation With In-Vehicle Technologies: Auditory Menu Cues Improve Dual Task Performance, Preference, and Workload

Myounghoon Jeon; Thomas M. Gable; Benjamin K. Davison; Michael A. Nees; Jeff Wilson; Bruce N. Walker

Auditory display research for driving has mainly examined a limited range of tasks (e.g., collision warnings, cell phone tasks). In contrast, the goal of this project was to evaluate the effectiveness of enhanced auditory menu cues in a simulated driving context. The advanced auditory cues of “spearcons” (compressed speech cues) and “spindex” (a speech-based index cue) were predicted to improve both menu navigation and driving. Two experiments used a dual task paradigm in which users selected songs on the vehicle's infotainment system. In Experiment 1, 24 undergraduates played a simple perceptual-motor ball-catching game (the primary task; a surrogate for driving) and, as a secondary task, navigated through an alphabetized list of 150 song titles rendered as an auditory menu. The menu was presented either in the typical visual-only manner, enhanced with text-to-speech (TTS), or with TTS plus one of three types of additional auditory cues. In Experiment 2, 34 undergraduates performed the same secondary task while driving in a simulator. In both experiments, performance on both the primary task (success rate in the game or driving performance) and the secondary task (menu search time) was better with the auditory menus than with no sound. Perceived workload scores as well as user preferences favored the enhanced auditory cue types. These results show that adding audio, and enhanced auditory cues in particular, can allow a driver to operate the menus of in-vehicle technologies more efficiently while driving more safely. Results are discussed in terms of multiple resources theory.


Automotive User Interfaces and Interactive Vehicular Applications | 2011

An angry driver is not the same as a fearful driver: effects of specific negative emotions on risk perception, driving performance, and workload

Myounghoon Jeon; Jung-Bin Yim; Bruce N. Walker

Most emotion detection research starts with a valence dimension (positive versus negative states). However, these approaches have not discriminated among the effects of distinct emotions of the same valence. Recent psychological findings suggest that different emotions may have different impacts even when they share the same valence. The current study consists of a simulated driving experiment with two induced affective states that are important in driving contexts, to investigate how anger and fear differently influence driving-related risk perception, driving performance, and perceived workload. Twenty-four undergraduates drove under three different road conditions with either induced anger or induced fear. Anger led to more errors than fear, regardless of difficulty level and error type. Also, participants with induced fear reported greater workload than participants with induced anger. Results are discussed in terms of the cognitive appraisal mechanism and design directions for an in-vehicle emotion detection and regulation system.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

What to detect? Analyzing Factor Structures of Affect in Driving Contexts for an Emotion Detection and Regulation System

Myounghoon Jeon; Bruce N. Walker

This research is a part of the IVAT (In-Vehicle Assistive Technology) project, an in-dash interface design project to help drivers who have various disabilities, including deficits in emotion regulation. While there have been several studies on emotion detection for drivers, few studies have seriously addressed what to detect and why. Those are crucial issues to consider when implementing an effective affect management system. Phase 1 of our study gathered a total of 33 different driving situations that can induce emotions and 56 plausible affective keywords to describe such emotions. Phase 2 analyzed factor structures of affect for driving contexts through user ratings and Factor Analysis, and obtained nine factors: fearful, happy, angry, depressed, curious, embarrassed, urgent, bored, and relieved. These factors accounted for 65.1% of the total variance. Results are discussed in terms of designing the IVAT emotion detection and regulation system for driving contexts.


Human Factors in Computing Systems | 2012

Listen2dRoom: helping blind individuals understand room layouts

Myounghoon Jeon; N. Nazneen; Ozum Akanser; Abner Ayala-Acevedo; Bruce N. Walker

Over half a million Americans are legally blind. Despite much effort in assistive technology, blindness remains a major challenge to accessibility. For individuals who are blind, there has been considerable research on indoor and outdoor wayfinding, but little research on conveying room layout information. The purpose of the current research is to support blind individuals in understanding the layout of an unfamiliar room. We identified important applications for this type of assistive technology, such as improving safety and making furniture and home appliances easier to use. To this end, we identified user needs and variables with blind participants, designed and evaluated prototype systems, and iteratively improved the system. The overall process, findings, and ongoing and future work are discussed. This effort is expected to enhance independence for persons who are blind.


ACM Transactions on Computer-Human Interaction | 2012

“Spindex” (Speech Index) Enhances Menus on Touch Screen Devices with Tapping, Wheeling, and Flicking

Myounghoon Jeon; Bruce N. Walker; Abhishek Srivastava

Users interact with many electronic devices via menus, which may be visual or auditory. Auditory menus can either complement or replace visual menus. We investigated how advanced auditory cues enhance auditory menus on a smartphone with tapping, wheeling, and flicking input gestures. The study evaluated a spindex (speech index), in which audio cues inform users where they are in a menu; 122 undergraduates navigated through a menu of 150 songs. Study variables included auditory cue type (text-to-speech alone or TTS plus spindex), visual display mode (on or off), and input gesture (tapping, wheeling, or flicking). Target search time and subjective workload were lower with the spindex than without it for all input gestures, regardless of visual display mode. The spindex condition was also rated subjectively higher than plain speech. The effects of input method and display mode on navigation behaviors were analyzed with the two-stage navigation strategy model. Results are discussed in relation to attention theories and in terms of practical applications.


Software Technologies for Embedded and Ubiquitous Systems | 2004

Implementation of new services to support ubiquitous computing for campus life

Tack-Don Han; Cheol-Ho Cheong; Jae-Won Ann; Jong-Young Kim; Hyung-Min Yoon; Chang-Su Lee; Hyon-Gu Shin; Young-Jin Lee; Hyoung-Min Yook; Myounghoon Jeon; Jung Soo Choi; Joo Hyeon Lee; Young-Woo Sohn; Yoon Su Baek; Sang-Yong Lee; Eun-Dong Shin; Woo-Shik Kang; Seong-Woon Kim

The various services that make up the ubiquitous computing concept have developed alongside the rapid growth of wireless Internet environments and mobile devices. The major purpose of this research is to provide a context-aware U-Town environment to users by utilizing sensors and mobile devices currently available in the public sector and the marketplace. In this paper, we introduce the U-Campus service (ubiquitous computing for the campus), the first step of the UTOPIA (Ubiquitous computing TOwn Project: Intelligent context Awareness) project. The paper also introduces services such as the U-Restaurant, U-Museum, and U-Theme Park, which constitute the second step of the U-Town project. In addition, we introduce the MoCE (Mobile Context Explorer) architecture, the context-aware recognition technology used to realize the U-Town project services.

Collaboration


Dive into Myounghoon Jeon's collaborations.

Top Co-Authors

Bruce N. Walker (Georgia Institute of Technology)
Chung Hyuk Park (George Washington University)
Steven Landry (Michigan Technological University)
Andreas Riener (Johannes Kepler University of Linz)
Ayanna M. Howard (Georgia Institute of Technology)
Jason Sterkenburg (Michigan Technological University)
Maryam FakhrHosseini (Michigan Technological University)
Jaclyn Barnes (Michigan Technological University)
Eric Vasey (Michigan Technological University)