Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where David Birchfield is active.

Publication


Featured research published by David Birchfield.


Computer Supported Collaborative Learning | 2009

Earth science learning in SMALLab: A design experiment for mixed reality

David Birchfield; Colleen Megowan-Romanowicz

Conversational technologies such as email, chat rooms, and blogs have made the transition from novel communication technologies to powerful tools for learning. Currently virtual worlds are undergoing the same transition. We argue that the next wave of innovation is at the level of the computer interface, and that mixed-reality environments offer important advantages over prior technologies. Thus, mixed reality is positioned to have a broad impact on the future of K-12 collaborative learning. We propose three design imperatives that arise from our ongoing work in this area grounded in research from the learning sciences and human-computer interaction. By way of example, we present one such platform, the Situated Multimedia Arts Learning Lab [SMALLab]. SMALLab is a mixed-reality environment that affords face-to-face interaction by colocated participants within a mediated space. We present a recent design experiment that involved the development of a new SMALLab learning scenario and a collaborative student participation framework for a 3-day intervention for 72 high school earth science students. We analyzed student and teacher exchanges from classroom sessions both during the intervention and during regular classroom instruction and found significant increases in the number of student-driven exchanges within SMALLab. We also found that students made significant achievement gains. We conclude that mixed reality can have a positive impact on collaborative learning and that it is poised for broad dissemination into mainstream K-12 contexts.


International Conference on Computer Graphics and Interactive Techniques | 2006

SMALLab: a mediated platform for education

David Birchfield; Thomas Ciufo; Gary Minyard

In this article we describe the design and realization of SMALLab, a new Student Centered Learning Environment [SCLE] that utilizes interactive digital media in a multimodal sensing framework. We draw on methodologies from interactive art, computer music composition, animation, and educational technology to facilitate learning with digital media that emphasizes collaboration and human-to-human interaction. We describe the realization of this learning platform, and discuss the design of a complementary curriculum that helps students develop a deeper understanding of movement and sound.


Proceedings of the 1st ACM Workshop on Story Representation, Mechanism and Context | 2004

Communicating everyday experiences

Preetha Appan; Hari Sundaram; David Birchfield

In this paper, we present our approach to the problem of communicating everyday experiences. This is a challenging problem, since media from everyday events are unstructured and often poorly annotated. We first attempt to communicate everyday experiences using a dramatic framework, by categorizing media and by introducing causal relations. Based on our experience with the dramatic framework for everyday media, we introduce an event-based framework as well as a viewpoint-centric visualization that gives the viewer agency in a highly interactive, non-linear manner. Our approach focuses on structured interaction for the consumption of everyday experiences, in contrast to non-interactive consumption of structured communication. Our results indicate that dramatic structures do not work well with everyday media, and that novel interactions and visualizations are needed. Experimental results indicate that the viewpoint-centric visualization works well. We are in the process of creating a large database of everyday events, along with the necessary recording and annotation tools.


Advances in Computer Entertainment Technology | 2005

A pressure sensing floor for interactive media applications

Prashant Srinivasan; David Birchfield; Gang Qian; Assegid Kidané

This paper explores the design of a reconfigurable large-area high-resolution pressure sensing floor to help study human dance movement. By measuring the pressure of a user interacting with the system, our device is able to provide real-time knowledge about both the location of the performer on the floor as well as the amount and distribution of force being exerted on the floor. This system has been designed to closely integrate and synchronize with external systems including marker-based motion capture systems, audio-sensing equipment and video-sensing technology, thus allowing for robust multimodal sensing of a subject in the integrated environment. Furthermore, the mats comprising the floor can be readily re-arranged in order to allow for a large number of configurations. Some other possible applications of the pressure sensing floor include virtual reality based entertainment systems or video game control interfaces as well as rehabilitation projects for disabled people with foot or motor-control disorders.
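The abstract does not spell out how the performer's location is derived from the raw readings; a common approach for a pressure grid is a force-weighted centroid (center of pressure). The following sketch is illustrative only, assuming per-sensel pressure values in a 2D array; the function name and 4×4 example frame are invented for this illustration, not taken from the paper.

```python
import numpy as np

def center_of_pressure(frame):
    """Estimate total force and its centroid from one grid of pressure readings.

    frame: 2D array of per-sensel pressure values (rows x cols).
    Returns (total_force, (row_centroid, col_centroid));
    the centroid is None when the floor is idle.
    """
    frame = np.asarray(frame, dtype=float)
    total = frame.sum()
    if total <= 0:
        return 0.0, None
    rows, cols = np.indices(frame.shape)
    row_c = (rows * frame).sum() / total   # force-weighted mean row
    col_c = (cols * frame).sum() / total   # force-weighted mean column
    return total, (row_c, col_c)

# A single footprint pressing near the center of a 4x4 mat:
frame = [[0, 0, 0, 0],
         [0, 2, 1, 0],
         [0, 1, 2, 0],
         [0, 0, 0, 0]]
total, (row_c, col_c) = center_of_pressure(frame)
```

Running this per frame gives both quantities the abstract mentions: the amount of force exerted and where on the floor it is applied.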


European Conference on Smart Sensing and Context | 2007

The design of a pressure sensing floor for movement-based human computer interaction

Sankar Rangarajan; Assegid Kidané; Gang Qian; Stjepan Rajko; David Birchfield

This paper addresses the design of a large-area, high-resolution, networked pressure sensing floor with primary application in movement-based human-computer interaction (M-HCI). To meet the sensing needs of an M-HCI system, several design challenges need to be overcome. First, high frame rate and low latency are required to ensure real-time human-computer interaction, even with a large sensing area (for unconstrained movement in the capture space) and high resolution (to support detailed analysis of pressure patterns); optimizing the floor system's frame rate and latency is a challenge. Second, in many M-HCI scenarios only a small number of subjects are on the floor and a large portion of the floor is inactive, so proper data compression for efficient transmission is also a challenge. Third, the locations of disjoint active floor regions are useful features in many M-HCI applications, and reliably clustering and tracking them poses a challenge. Finally, to allow M-HCI using multiple communication channels, such as gesture, pose, and pressure distributions, the pressure sensing floor needs to be integrable with other sensing modalities to create a smart multimodal environment; fast and accurate alignment of floor sensing data in space and time with other modalities is another challenge. In our research, we fully addressed the above challenges. The pressure sensing floor we developed has a sensing area of about 180 square feet, with a sensor resolution of 6.25 sensels/in². The system frame rate is up to 43 Hz with an average latency of 25 ms. A simple but efficient data compression scheme is in place. We have also developed a robust procedure for clustering and tracking disjoint active floor regions using the mean-shift algorithm. The pressure sensing floor can be seamlessly integrated with a marker-based motion capture system with accurate temporal and spatial alignment.
Furthermore, the modular and scalable structure of the sensor floor allows for easy installation in real rooms of irregular shape. The pressure sensing floor system described in this paper forms an important stepping stone toward the creation of a smart environment with context-aware data processing algorithms, with extensive applications beyond M-HCI, e.g., diagnosing gait pathologies and evaluating treatment.
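The paper names mean-shift as its clustering method for disjoint active floor regions but gives no implementation details here. As a minimal sketch only, the idea can be shown on 2D coordinates of active sensels: each point is shifted to the mean of its neighbors until it converges on a mode, and points sharing a mode form one region. The flat kernel, bandwidth, and merge tolerance below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def mean_shift_modes(points, bandwidth=2.0, iters=30, merge_tol=0.5):
    """Cluster 2D points (active sensel coordinates) by mean-shift.

    Each point repeatedly moves to the mean of the data points within
    `bandwidth` of it (flat kernel); points converging to the same mode
    belong to one active floor region.
    """
    pts = np.asarray(points, dtype=float)
    shifted = pts.copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            near = pts[np.linalg.norm(pts - p, axis=1) <= bandwidth]
            shifted[i] = near.mean(axis=0)
    # Merge converged points that landed on (nearly) the same mode.
    modes = []
    for p in shifted:
        if not any(np.linalg.norm(p - m) <= merge_tol for m in modes):
            modes.append(p)
    return modes

# Two feet pressing on well-separated parts of the floor:
left = [(1, 1), (1, 2), (2, 1), (2, 2)]
right = [(10, 10), (10, 11), (11, 10), (11, 11)]
modes = mean_shift_modes(left + right)
```

Each recovered mode stands in for one tracked region; frame-to-frame tracking would then associate modes across time.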


Annetta, L.; Bronack, S.C. (ed.), Serious educational game assessment: practical methods and models for educational games, simulations and virtual worlds | 2011

Semi-virtual Embodied Learning-Real World STEM Assessment

Mina C. Johnson-Glenberg; David Birchfield; Philippos Savvides; Colleen Megowan-Romanowicz

This chapter presents several results from a multiyear research endeavor on embodiment in gaming and learning. A trans-disciplinary group at Arizona State University has designed an innovative learning environment that allows the learner’s body to move freely in space while interacting with dynamic visual and sonic media. This semi-virtual interface/environment is called SMALLab (Situated Multimedia Arts Learning Laboratory). The environment relies on 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention will occur when learning is embodied and multiple modalities are incorporated during the act of learning. In addition, we have created game-like scenarios that are collaborative and, where appropriate, incorporate the constructs of competition and low-stakes formative, stealth assessment to increase engagement, knowledge construction, and metrics on student learning.


Advances in Human-computer Interaction | 2008

Embodiment, Multimodality, and Composition: Convergent Themes across HCI and Education for Mixed-Reality Learning Environments

David Birchfield; Harvey D. Thornburg; M. Colleen Megowan-Romanowicz; Sarah Hatton; Brandon Mechtley; Igor Dolgov; Winslow Burleson

We present concurrent theoretical work from HCI and Education that reveals a convergence of trends focused on the importance of three themes: embodiment, multimodality, and composition. We argue that there is great potential for truly transformative work that aligns HCI and Education research, and posit that there is an important opportunity to advance this effort through the full integration of the three themes into a theoretical and technological framework for learning. We present our own work in this regard, introducing the Situated Multimedia Arts Learning Lab (SMALLab). SMALLab is a mixed-reality environment where students collaborate and interact with sonic and visual media through full-body, 3D movements in an open physical space. SMALLab emphasizes human-to-human interaction within a multimodal, computational context. We present a recent case study that documents the development of a new SMALLab learning scenario, a collaborative student participation framework, a student-centered curriculum, and a three-day teaching experiment for seventy-two earth science students. Participating students demonstrated significant learning gains as a result of the treatment. We conclude that our theoretical and technological framework can be broadly applied in the realization of mixed reality, student-centered learning environments.


Ninth International Conference on Information Visualisation (IV'05) | 2005

Design of a pressure sensitive floor for multimodal sensing

Prashant Srinivasan; David Birchfield; Gang Qian; Assegid Kidané

Visualization and knowledge of detailed pressure information can play a vital role in multimodal sensing of human movement. We have designed a high-resolution pressure sensing floor prototype with a sensor density of one sensor per square centimeter that can provide real-time information about the location of the performer on the floor as well as the amount of pressure being exerted on the floor. Hardware and software have been developed for detecting, collecting, transmitting, and rendering a graphical representation of the pressure data gathered from the sensors. This prototype can be easily reconfigured to cover large floor areas, and integrates closely with video, audio, and motion-based sensing technologies to facilitate robust multimodal sensing. In our demonstrations, we show that the system accurately captures and transmits pressure information, and we illustrate how this forms a basis for a variety of applications including rehabilitation, virtual reality, entertainment, and children's activity centers.


ACM SIGMM Workshop on Experiential Telepresence | 2003

Generative model for the creation of musical emotion, meaning, and form

David Birchfield

The automated creation of perceptible and compelling large-scale forms and hierarchical structures that unfold over time is a nontrivial challenge for generative models of multimedia content. Nonetheless, this is an important goal for multimedia authors and artists who work in time-dependent mediums. This paper and associated demonstration materials present a generative model for the automated composition of music. The model draws on theories of emotion and meaning in music, and relies on research in cognition and perception to ensure that the generated music will be communicative and intelligible to listeners. The model employs a coevolutionary genetic algorithm composed of a population of musical components. The evolutionary process yields musical compositions which are realized as digital audio, a live performance work, and a musical score in conventional notation. These works exhibit musical features which are in accordance with the aesthetic and compositional goals described in the paper.
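The paper's coevolutionary model is far richer than can be reproduced here, but the basic genetic-algorithm loop it builds on can be sketched. The toy below evolves a pitch sequence under a single hand-written fitness function (penalizing large melodic leaps, a crude stand-in for the perceptual constraints the paper derives from cognition research); the function name, pitch range, and all parameters are invented for this illustration.

```python
import random

def evolve_melody(length=8, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm: evolve a MIDI pitch sequence favoring
    stepwise motion. Selection keeps the fitter half; children are made
    by one-point crossover plus occasional point mutation."""
    rng = random.Random(seed)

    def fitness(mel):
        # Less total leap distance between adjacent notes = fitter.
        return -sum(abs(a - b) for a, b in zip(mel, mel[1:]))

    pop = [[rng.randrange(60, 72) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # point mutation
                child[rng.randrange(length)] = rng.randrange(60, 72)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

A coevolutionary variant, as in the paper, would instead evolve multiple interacting populations whose fitness depends on one another rather than on a fixed function.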


Frontiers in Psychology | 2016

Effects of embodied learning and digital platform on the retention of physics content: Centripetal force

Mina C. Johnson-Glenberg; Colleen Megowan-Romanowicz; David Birchfield; Caroline Savio-Ramos

Embodiment theory proposes that knowledge is grounded in sensorimotor systems, and that learning can be facilitated to the extent that lessons can be mapped to these systems. This study with 109 college-age participants addresses two overarching questions: (a) how are immediate and delayed learning gains affected by the degree to which a lesson is embodied, and (b) how do the affordances of three different educational platforms affect immediate and delayed learning? Six 50 min-long lessons on centripetal force were created. The first factor was the degree of embodiment with two levels: (1) low and (2) high. The second factor was platform with three levels: (1) a large scale “mixed reality” immersive environment containing both digital and hands-on components called SMALLab, (2) an interactive whiteboard system, and (3) a mouse-driven desktop computer. Pre-tests, post-tests, and 1-week follow-up (retention or delayed learning gains) tests were administered resulting in a 2 × 3 × 3 design. Two knowledge subtests were analyzed, one that relied on more declarative knowledge and one that relied on more generative knowledge, e.g., hand-drawing vectors. Regardless of condition, participants made significant immediate learning gains from pre-test to post-test. There were no significant main effects or interactions due to platform or embodiment on immediate learning. However, from post-test to follow-up the level of embodiment interacted significantly with time, such that participants in the high embodiment conditions performed better on the subtest devoted to generative knowledge questions. We posit that better retention of certain types of knowledge can be seen over time when more embodiment is present during the encoding phase. This sort of retention may not appear on more traditional factual/declarative tests. 
Educational technology designers should consider using more sensorimotor feedback and gestural congruency in their designs, and opportunities for instructor professional development need to be provided as well.

Collaboration


Dive into David Birchfield's collaborations.

Top Co-Authors

Lisa Tolentino, Arizona State University
Sarah Hatton, Arizona State University
Ellen Campana, Arizona State University
Aisling Kelliher, Carnegie Mellon University
Gang Qian, Arizona State University