Kyla McMullen
University of Florida
Publications
Featured research published by Kyla McMullen.
International Conference on Internationalization, Design and Global Development | 2009
Wanda Eugene; Leshell Hatley; Kyla McMullen; Quincy Brown; Yolanda A. Rankin; Sheena Lewis
The goal of this paper is to bridge the gap between existing frameworks for the design of culturally relevant educational technology. Models and guidelines that provide potential frameworks for designing culturally authentic learning environments are explained and transposed into one comprehensive design framework, with the understanding that integrating culture into the design of educational technology promotes learning and a more authentic user experience. This framework establishes principles that promote a holistic approach to design.
Human Factors in Computing Systems | 2014
Alireza Zare; Kyla McMullen; Christina Gardner-McCune
Many people with visual impairments actively play soccer; however, making the game accessible presents significant challenges, including the need to talk constantly to signal one's location and the difficulty of detecting the positions of silent objects on the field. Our work aims to discover methods that help persons with visual impairments play soccer more efficiently and safely. The proposed system uses headphone-rendered spatial audio, an on-person computer, and sensors to create 3D sound that represents the objects on the field in real time. This depiction of the field will help players more accurately detect the locations of objects and people on the field. The present work describes the design of such a system and discusses its perceptual challenges. Broadly, our work aims to discover ways to enable people with visual impairments to detect the positions of moving objects, which will allow them to feel empowered in their personal lives and give them the confidence to navigate more independently.
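The core mapping such a system performs can be sketched in a few lines. The following is a hypothetical illustration, not the authors' implementation: a field object's 2D position is converted into an azimuth and distance relative to the player's facing direction, and a crude equal-power stereo pan stands in for full HRTF-based headphone rendering.

```python
import math

def object_to_cues(player_xy, heading_deg, obj_xy):
    """Convert a field object's position into (azimuth_deg, distance_m)
    relative to the player's facing direction (0 deg = straight ahead)."""
    dx = obj_xy[0] - player_xy[0]
    dy = obj_xy[1] - player_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))          # world-frame bearing
    azimuth = (bearing - heading_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return azimuth, distance

def pan_gains(azimuth_deg):
    """Equal-power stereo panning as a stand-in for HRTF rendering."""
    theta = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    left = math.cos((theta + math.pi / 2) / 2)
    right = math.sin((theta + math.pi / 2) / 2)
    return left, right
```

A real system would replace `pan_gains` with per-listener HRTF filtering and update the cues continuously from sensor data.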
2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE) | 2014
Kyla McMullen
Digital sounds can be processed such that auditory cues are created that convey spatial location within a virtual auditory environment (VAE). Only in recent years has technology advanced such that audio can be processed in real time as a user navigates an environment. Before we can realize its full potential, we must first consider the perceptual challenges faced by 3D sound rendering. Now more than ever, large quantities of data are created and collected at an increasing rate. Research in human perception has demonstrated that humans are capable of differentiating among many sounds. One potential application is to create an auditory virtual world in which data is represented as various sounds. Such a representation could aid data analysts in detecting patterns in data, decreasing cognitive load, and performing their jobs faster. Although this is one application, the full extent of the manner in which 3D sounds can be used to augment virtual environments has yet to be discovered.
Symposium on 3D User Interfaces | 2014
Kyla McMullen; Gregory H. Wakefield
Virtual auditory environments (VAEs) are created by processing digital sounds such that they convey a 3D location to the listener. This technology has the potential to augment systems in which an operator tracks the positions of targets. Prior work has established that listeners can locate sounds in VAEs; however, less is known about listener memory for virtual sounds. In this study, three experimental tasks assessed listener recall of sound positions and identities, using free and cued recall, with one or more delays. Overall, accuracy degrades as listeners recall the environment; however, listeners exhibited less degradation when using free recall.
Technical Symposium on Computer Science Education | 2018
Colleen M. Lewis; Catherine Ashcraft; Kyla McMullen
Many SIGCSE attendees are committed to inclusive teaching practices and creating an inclusive culture within their classrooms; yet, advocating for and sustaining these initiatives may require having difficult conversations with our colleagues and students. Understandably, many faculty are unsure about how to talk about sensitive topics such as race and gender with their colleagues and students. Research suggests that practicing some of these difficult conversations is essential to achieve the goals of inclusive teaching and culture. Most SIGCSE attendees probably use active learning throughout their teaching, but we rarely see active learning at SIGCSE, so let's try it! In this interactive session, attendees will learn strategies for responding to bias in academic settings. Attendees will then practice those strategies in small groups. This will be facilitated by playing two rounds of a research-based game-learning approach developed by the NSF project CSTeachingTips.org (#1339404), which has been tested with a group of 200 teaching assistants. This is the fifth iteration of the game-learning approach, and all attendees will receive a printed copy of the game and a link to download and print more copies.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017
Terek R. Arce; Henry Fuchs; Kyla McMullen
Currently available augmented reality systems have a narrow field of view, giving users only a small window to look through to find holograms in the environment. The challenge for developers is to direct users’ attention to holograms outside this window. To alleviate this field of view constraint, most research has focused on hardware improvements to the head mounted display. However, incorporating 3D audio cues into programs could also aid users in this localization task. This paper investigates the effectiveness of 3D audio on hologram localization. A comparison of 3D audio, visual, and mixed-mode stimuli shows that users are able to localize holograms significantly faster under conditions that include 3D audio. To our knowledge, this is the first study to explore the use of 3D audio in localization tasks using augmented reality systems. The results provide a basis for the incorporation of 3D audio in augmented reality applications.
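The triggering logic for such an audio cue can be sketched simply. The following is a hypothetical illustration with assumed numbers (the ~30° x 17.5° field of view is a stand-in for a typical narrow-FOV headset, not a value from the paper): compute the hologram's bearing relative to the user's gaze, and emit a 3D audio cue whenever it falls outside the display frustum.

```python
def relative_azimuth(head_yaw_deg, target_bearing_deg):
    """Signed angle from the user's gaze direction to the hologram,
    wrapped to [-180, 180)."""
    return (target_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def needs_audio_cue(rel_az_deg, rel_el_deg, h_fov_deg=30.0, v_fov_deg=17.5):
    """True when the hologram lies outside the (assumed) display frustum,
    so a 3D audio cue should steer the user's attention toward it."""
    return abs(rel_az_deg) > h_fov_deg / 2 or abs(rel_el_deg) > v_fov_deg / 2
```

In practice the relative angles would come from the headset's head-tracking pose, and the cue itself would be spatialized at the hologram's position.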
Human Factors and Ergonomics Society International Annual Meeting (HFES 2017) | 2017
Kyla McMullen; Gregory H. Wakefield
Although static localization performance in auditory displays is known to improve substantially as a listener spends more time in the environment, the impact of real-time interactive movement on these tasks is not yet well understood. Accordingly, a training procedure was developed and evaluated to address this question. In a set of experiments, listeners searched for and marked the locations of five virtually spatialized sound sources. The task was performed with and without training. Finally, the listeners performed a second search-and-mark task to assess the impact of training. The results indicate that the training procedure maintained or significantly improved localization accuracy. In addition, localization performance did not improve for listeners who did not complete the training procedure.
ACM SIGACCESS Accessibility and Computing | 2017
Terek R. Arce; Kyla McMullen
Accurately perceiving the structure of biochemical molecules is key to understanding their function in biological systems. Visualization software has given the scientific and medical communities a means to study these structures in great detail; however, these tools lack an intuitive means of conveying this information to persons with visual impairment. Advances in spatial audio technology have allowed sound to be perceived in three-dimensional space when played over headphones. This work presents the development of a novel computational tool that uses spatial audio to convey the three-dimensional structure of biochemical molecules.
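One plausible core of such a tool is a mapping from each atom's 3D coordinates and element type to a spatial audio cue. The sketch below is my own hypothetical illustration, not the paper's design: the element-to-pitch table and listener-at-origin convention are assumptions, and the spherical-coordinate conversion gives the azimuth, elevation, and distance a spatial renderer would need.

```python
import math

# Hypothetical element-to-pitch mapping: each atom type gets a distinct tone.
ELEMENT_PITCH_HZ = {"C": 262.0, "N": 330.0, "O": 392.0, "H": 523.0}

def atom_to_spatial_cue(atom, listener=(0.0, 0.0, 0.0)):
    """Map an (element, x, y, z) atom to (pitch_hz, azimuth_deg,
    elevation_deg, distance) relative to the listener."""
    elem, x, y, z = atom
    dx, dy, dz = x - listener[0], y - listener[1], z - listener[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dx, dy))            # 0 deg = ahead (+y)
    elevation = math.degrees(math.asin(dz / distance)) if distance else 0.0
    return ELEMENT_PITCH_HZ[elem], azimuth, elevation, distance
```

Iterating this mapping over a molecule's atom list would yield one spatialized tone per atom, letting a listener explore the structure by ear.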
International Conference on Auditory Display | 2016
Ziqi Fan; Yunhao Wan; Kyla McMullen
As 3D audio becomes more commonplace as a way to enhance auditory environments, designers face the challenge of choosing head-related transfer functions (HRTFs) that provide listeners with proper audio cues. Subjective selection is a low-cost alternative to expensive HRTF measurement; however, little is known about whether the preferred HRTFs are similar or whether users exhibit random behavior in this task. In addition, principal component analysis (PCA) can be used to decompose HRTFs into representative features, yet little is known about whether those features have a relevant perceptual basis. Twelve listeners completed a subjective selection experiment in which they judged the perceptual quality of 14 HRTFs in terms of elevation and front-back distinction. PCA was used to decompose the HRTFs and create an HRTF similarity metric. The preferred HRTFs were significantly more similar to each other, the preferred and non-preferred HRTFs were significantly less similar to each other, and, in the case of front-back distinction, the non-preferred HRTFs were significantly more similar to each other.
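A PCA-based HRTF similarity metric of the kind described above can be sketched as follows. This is a minimal illustration under my own assumptions (synthetic magnitude spectra stand in for measured HRTFs, and negative Euclidean distance in component space serves as the similarity score); the paper's actual metric may differ.

```python
import numpy as np

def pca_features(hrtf_mags, n_components=5):
    """Project HRTF magnitude spectra onto their top principal components.
    hrtf_mags: (n_hrtfs, n_freq_bins) array of magnitude responses."""
    X = hrtf_mags - hrtf_mags.mean(axis=0)   # center each frequency bin
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T            # per-HRTF feature vectors

def similarity(feat_a, feat_b):
    """Similarity as negative Euclidean distance in PCA feature space."""
    return -float(np.linalg.norm(feat_a - feat_b))
```

With such a metric, one can test whether a listener's preferred HRTFs cluster more tightly in feature space than the non-preferred ones.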
Symposium on Spatial User Interaction | 2015
Shashank Ranjan; Kyla McMullen
With current advancements in computer-vision depth-sensing technologies, gestures provide a new means of computer interaction. 3D audio research has made significant progress in accurately localizing sound in 3D space, but little work has addressed modes of user interaction in such applications. In this paper, gestures are used as a more natural way of interacting with 3D spatial audio applications, specifically for the localization and manipulation of sound sources.