Leilah Lyons
University of Illinois at Chicago
Publications
Featured research published by Leilah Lyons.
Tangible and Embedded Interaction | 2012
Leilah Lyons; Brian Slattery; Priscilla Jimenez; Brenda Lopez; Thomas G. Moher
This paper describes a frequently-overlooked aspect of embodied interaction design: physical effort. Although exertion is the direct goal of many embodied activities (e.g., exergames), and is used indirectly to discourage certain user interactions (as with affordances), exertion has not been used to support direct expressive interaction with an embodied system. Situating exertion in both psychological and physiological literature, this paper suggests guidelines for employing exertion as more than just an incidental component of proprioception in embodied interaction designs. Specifically, the linkages between exertion, affect, and recall are reviewed and analyzed for their potential to support embodied learning activities, and literature concerning human perceptions of effort is reviewed to help designers understand how to incorporate effort more directly and intentionally in embodied interaction designs. Also presented is an illustration of how these guidelines affected the design of an educational embodied interaction experience for an informal learning setting.
Human Factors in Computing Systems | 2013
Francesco Cafaro; Alessandro Panella; Leilah Lyons; Jessica Roberts; Joshua Radinsky
Museums are increasingly embracing technologies that provide highly individualized and highly interactive experiences to visitors. With embodied interaction experiences, increased localization accuracy supports greater nuance in interaction design, but there is usually a tradeoff between fast, accurate tracking and the ability to preserve the identity of users. Customization of the experience, however, relies on the ability to detect visitors' identities. We present a method that combines fine-grained indoor tracking with robust preservation of the unique identities of multiple users. Our model merges input from an RFID reader with input from a commercial camera-based tracking system. We developed a probabilistic Bayesian model to infer at run-time the correct identification of the subjects in the camera's field of view. This method, tested in a lab and at a local museum, requires minimal modification to the exhibition space, while addressing several identity-preservation problems for which many indoor tracking systems do not have robust solutions.
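As a concrete illustration of the kind of sensor fusion this abstract describes, the sketch below maintains a probability distribution over which RFID identity each camera track belongs to and applies a Bayesian update whenever a tag is read. The class name, the distance-based likelihood, the baseline constant, and the single-antenna setup are assumptions made for illustration; the paper's actual probabilistic model is not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of fusing RFID reads with
# camera tracks via a Bayesian update. All names and the likelihood model are
# illustrative assumptions.
import math

class IdentityFuser:
    def __init__(self, tag_ids, antenna_pos, sigma=1.0):
        self.tag_ids = tag_ids          # RFID tag IDs worn by visitors
        self.antenna_pos = antenna_pos  # (x, y) of the RFID reader antenna
        self.sigma = sigma              # spread of the read-likelihood model
        # belief[track][tag] = P(camera track belongs to that tag's wearer)
        self.belief = {}

    def add_track(self, track_id):
        n = len(self.tag_ids)
        self.belief[track_id] = {t: 1.0 / n for t in self.tag_ids}  # uniform prior

    def on_rfid_read(self, tag_id, track_positions, baseline=0.1):
        """A read of `tag_id` makes camera tracks near the antenna more likely
        to be that tag's wearer; tracks far from the antenna become less likely."""
        for track_id, (x, y) in track_positions.items():
            d = math.dist((x, y), self.antenna_pos)
            likelihood = math.exp(-(d ** 2) / (2 * self.sigma ** 2))  # near antenna => high
            b = self.belief[track_id]
            for t in b:
                # Bayes update: the read tag scales with proximity; other tags
                # scale with a constant baseline (their wearers could be anywhere).
                b[t] *= likelihood if t == tag_id else baseline
            total = sum(b.values())
            for t in b:
                b[t] /= total  # renormalize to a valid distribution

    def most_likely_identity(self, track_id):
        b = self.belief[track_id]
        return max(b, key=b.get)

# Example: two visitors, one antenna near the exhibit entrance.
fuser = IdentityFuser(["tag_A", "tag_B"], antenna_pos=(0.0, 0.0))
fuser.add_track("track_1")
fuser.add_track("track_2")
fuser.on_rfid_read("tag_A", {"track_1": (0.3, 0.2), "track_2": (4.0, 5.0)})
print(fuser.most_likely_identity("track_1"))  # -> "tag_A"
```

Over repeated reads, each track's distribution concentrates on one identity while the camera system continues to supply fine-grained positions, which is the division of labor the abstract suggests.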
Human Factors in Computing Systems | 2011
Tia Shelley; Leilah Lyons; Moira Zellner; Emily S. Minor
Many claims have been made regarding the potential benefits of Tangible User Interfaces (TUIs). Presented here is an experiment assessing the usability, problem solving, and collaboration benefits of a TUI for direct placement tasks in spatially explicit simulations for environmental science education. To create a low-cost deployment for single-computer classrooms, the TUI uses a webcam and computer vision to recognize the placement of paper symbols on a map. An authentic green infrastructure urban planning problem served as the task for a within-subjects experiment with rotation, conducted with 20 pairs of participants. Because no prior experimental study had isolated the influence of the embodied nature of the TUI on usability, problem solving, and collaboration, a control condition was designed to highlight the impact of embodiment. While this study did not establish the usability benefits suggested by prior research, certain problem solving and collaboration advantages were measured.
Human Factors in Computing Systems | 2011
Priscilla Jimenez Pazmino; Leilah Lyons
New mobile device features and the growing proportion of visitors carrying mobiles expand the range of possible museum exhibit designs. In particular, we see opportunities for using mobiles to help exhibits scale up to support variable-sized groups of visitors, and to support collaborative visitor-visitor interactions. Because exhibit use is generally one-time-only, any interfaces created for these purposes must be easily learnable, or visitors may not use the exhibit at all. To guide the design of learnable mobile interfaces, we chose to employ the Consistency design principle. Consistency was originally applied to desktop UIs, so we extended the definition to cover three new categories of consistency relevant to ubiquitous computing: Within-Device Consistency, Across-Device Consistency, and Within-Context Consistency. We experimentally contrasted designs created from these categories. The results show small differences in learnability, but illustrate that even for one-off situations learnability may not be as important as usability.
Journal of Environmental Planning and Management | 2017
Joshua Radinsky; Dan Milz; Moira Zellner; K. Pudlock; C. Witek; Charles Hoch; Leilah Lyons
Planning researchers traditionally conceptualize learning as cognitive changes in individuals. In this tradition, scholars assess learning with pre- and post-measures of understandings or beliefs. While valuable for documenting individual change, such methods leave unexamined the social processes in which planners think, act, and learn in groups, processes which often involve the use of technical tools. The present interdisciplinary research program used Learning Sciences research methods, including conversation analysis, interaction analysis, and visualization of discourse codes, to understand how tools like agent-based models and geographic information systems mediate learning in planning groups. The objective was to understand how the use of these tools in participatory planning can help stakeholders learn about complex environmental problems and make more informed judgments about the future. The paper provides three cases that illustrate the capacity of such research methods to provide insights into planning groups' learning processes and the mediating roles of planning tools.
Planning Theory & Practice | 2015
Charles Hoch; Moira Zellner; Dan Milz; Josh Radinsky; Leilah Lyons
Planners making groundwater plans often use scientific hydrological forecasts to estimate the long-term risk of water depletion. We study a group of Chicago planners and stakeholders who learned to use, and helped develop, agent-based models (ABMs) of coupled land-use change and groundwater flow, to explore the effects of resource use and policy on future groundwater availability. Using discourse analysis, we found planners learned to play with the ABMs to judge complex interaction effects. The simulation results challenged prior policy commitments, but instead of reconsidering those commitments to achieve sustainability, participants set aside the ABMs and the lessons learned with them. Visualizing patterns of objections and agreements in the dialogue enabled us to chart how clusters of participants gradually learned to grasp and interpret the simulated effects of individual and policy decisions even as they struggled to incorporate them into their deliberations.
Ubiquitous Computing | 2010
Tia Shelley; Leilah Lyons; Jingmin Shi; Emily S. Minor; Moira Zellner
We present a new low-cost paper-based user interface strategy (Paper-to-Parameters) for making interaction with simulations of complex systems pragmatic within an Environmental Science curriculum. Students specify initial simulation conditions by sticking pieces of paper to a wall, and can experiment with the simulation by repositioning the pieces of paper. Computer vision recognizes the paper-based symbols and converts them into parameters used by the simulation. This tangible input approach contrasts with current slider- and programming-based approaches for interacting with simulations. We hypothesize that the affordances of this interaction strategy better support the manipulation of spatial simulation parameters. We report here on the initial prototype of the system, and present plans for future work.
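To make the Paper-to-Parameters pipeline concrete, the sketch below shows one way the position of each recognized paper symbol could be translated into cells of a spatially explicit simulation grid. The symbol names, grid size, frame resolution, and output format are illustrative assumptions; the detection step itself (which the paper performs with a webcam and computer vision) is omitted.

```python
# A minimal sketch, under stated assumptions, of turning detected paper-symbol
# positions into initial conditions for a grid-based simulation.
def detections_to_parameters(detections, frame_w, frame_h, grid_w=20, grid_h=20):
    """detections: list of (symbol_type, pixel_x, pixel_y) tuples from the camera.
    Returns {(col, row): symbol_type}, i.e. initial conditions for the simulation."""
    cell_assignments = {}
    for symbol_type, px, py in detections:
        col = min(int(px / frame_w * grid_w), grid_w - 1)   # pixel -> grid column
        row = min(int(py / frame_h * grid_h), grid_h - 1)   # pixel -> grid row
        cell_assignments[(col, row)] = symbol_type          # last symbol in a cell wins
    return cell_assignments

# Example: two hypothetical "rain garden" symbols and one "permeable pavement"
# symbol stuck on the wall map, seen by a 640x480 webcam.
detections = [
    ("rain_garden", 120, 200),
    ("rain_garden", 500, 110),
    ("permeable_pavement", 320, 400),
]
params = detections_to_parameters(detections, frame_w=640, frame_h=480)
print(params)
# {(3, 8): 'rain_garden', (15, 4): 'rain_garden', (10, 16): 'permeable_pavement'}
```

Repositioning a piece of paper simply changes its pixel coordinates, so re-running the mapping yields an updated parameter set without any slider or programming step.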
Ubiquitous Computing | 2010
Francesco Cafaro; Leilah Lyons; Joshua Radinsky; Jessica Roberts
RFID is usually used for identification, but with some post-processing it can also be used for localization. These properties expand the typical range of possible interactions with digital displays in museums. Our goal is to encourage the collaborative investigation of a rich information space presented on an Ambient Display in a museum exhibit. We consider two different models of interacting with an exhibit: Tangible Control, wherein passive RFID tags are embedded in some artifacts and multiple users can control the information on the screen by moving those artifacts, and Embodied Control, wherein people carry an RFID tag directly and interact with the information by walking within the simulation space. Each model has different implications for how visitors might relate (a) to the information being displayed, and (b) to one another. Here we present preliminary results on the suitability of a single-reader, passive-tag setup for providing localization input.
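Purely as an illustration of how a single reader and passive tags might yield coarse localization, the sketch below estimates a proximity zone from how often a tag is read within a sliding time window. The read-rate thresholds, zone labels, and class design are assumptions, not the setup evaluated in the paper.

```python
# Illustrative sketch only: one way to post-process read events from a single
# RFID reader with passive tags into coarse proximity zones.
from collections import deque
import time

class ProximityEstimator:
    """Estimate a coarse zone ('near', 'mid', 'far') per tag from read rate,
    assuming tags closer to the antenna tend to be read more often."""

    def __init__(self, window_s=2.0, near_rate=8.0, mid_rate=3.0):
        self.window_s = window_s
        self.near_rate = near_rate   # reads/second considered "near"
        self.mid_rate = mid_rate     # reads/second considered "mid"
        self.reads = {}              # tag_id -> deque of read timestamps

    def on_read(self, tag_id, timestamp=None):
        t = timestamp if timestamp is not None else time.time()
        self.reads.setdefault(tag_id, deque()).append(t)

    def zone(self, tag_id, now=None):
        t = now if now is not None else time.time()
        q = self.reads.get(tag_id, deque())
        while q and t - q[0] > self.window_s:  # drop reads outside the window
            q.popleft()
        rate = len(q) / self.window_s
        if rate >= self.near_rate:
            return "near"
        if rate >= self.mid_rate:
            return "mid"
        return "far"

# Example: simulated reads for a tag that is currently close to the reader.
est = ProximityEstimator()
for i in range(20):
    est.on_read("visitor_7", timestamp=100.0 + i * 0.1)
print(est.zone("visitor_7", now=102.0))  # -> "near"
```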
Human Factors in Computing Systems | 2006
Leilah Lyons; Joseph Lee; Chris Quintana; Elliot Soloway
We designed the MUSHI (Multi-User Simulation with Handheld Integration) framework to address two educational needs: (1) to help students learn about complex, multi-scalar systems, and (2) to help students collaborate with one another in small groups. The MUSHI system provides each student with a handheld computer that is wirelessly synchronized with a simulation running on a tablet PC. A group of students can interact with small-scale elements of the simulation via their personal handhelds, and can observe large-scale elements on the shared computer. Because this is a novel combination of devices, we conducted use trials with middle school students to explore issues surrounding multi-device representations, small-group collaboration, and equitable computing.
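The sketch below illustrates the sort of multi-device coordination MUSHI implies: a shared large-scale grid lives on the tablet PC, and each handheld requests or edits a small-scale window onto it. The message format, grid contents, and use of plain JSON strings in place of a real wireless transport are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of a tablet-side handler for handheld messages; a real system
# would carry these messages over a wireless socket protocol.
import json

GRID_W, GRID_H = 40, 30                       # shared, large-scale simulation grid
grid = [["habitat" for _ in range(GRID_W)] for _ in range(GRID_H)]

def handle_handheld_message(message_json):
    """A handheld asks for its local viewport, or edits a single cell."""
    msg = json.loads(message_json)
    if msg["type"] == "request_view":
        x, y, w, h = msg["x"], msg["y"], msg["w"], msg["h"]
        view = [row[x:x + w] for row in grid[y:y + h]]      # small-scale slice
        return json.dumps({"type": "view", "cells": view})
    if msg["type"] == "edit_cell":
        grid[msg["y"]][msg["x"]] = msg["value"]             # handheld changes one cell
        return json.dumps({"type": "ack"})
    return json.dumps({"type": "error", "reason": "unknown message type"})

# Example: a student's handheld pulls a 4x3 window, then edits one cell in it.
print(handle_handheld_message(json.dumps(
    {"type": "request_view", "x": 10, "y": 5, "w": 4, "h": 3})))
print(handle_handheld_message(json.dumps(
    {"type": "edit_cell", "x": 11, "y": 6, "value": "nest"})))
```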
Interaction Design and Children | 2015
Priscilla Jimenez Pazmino; Brian Slattery; Leilah Lyons; Benjamin Hunt
Informal science institutions (ISIs) are beginning to adopt mobile technology to support interpreters (docents), but not much is known about how to design these supports. One approach to designing technology for new scenarios is participatory design (PD), where end-users are involved as experts in the task domain who can help envision the application of technology. However, in our context end-users are often youth interpreters who are emerging professionals. This poses a challenge because traditional PD methods trust that the users can represent the task domain. Novice professionals may not yet fully understand the task domain, but eliciting their needs and visions is still important for producing a tool they will find useful. A design approach is needed that captures the requirements for supporting expert task execution as an underlying structure for the tool, while nonetheless eliciting and respecting the special needs of novices. We developed and applied two different framing strategies (one technological, one sociotechnological) to traditional PD methods to help youth non-expert interpreters generate task-relevant design ideas. We report results from using these strategies in an exploratory fashion and discuss opportunities for future research on PD methods that can address the needs of youth as emerging professionals.