Publication


Featured research published by Mario Romero.


human computer interaction with mobile devices and services | 2012

An evaluation of BrailleTouch: mobile touchscreen text entry for the visually impaired

Caleb Southern; James Clawson; Brian Frey; Gregory D. Abowd; Mario Romero

We present the evaluation of BrailleTouch, an accessible keyboard for blind users on touchscreen smartphones. Based on the standard Perkins Brailler, BrailleTouch implements a six-key chorded braille soft keyboard. Eleven blind participants typed for 165 twenty-minute sessions on three mobile devices: 1) BrailleTouch on a smartphone; 2) a soft braille keyboard on a touchscreen tablet; and 3) a commercial braille keyboard with physical keys. Expert blind users averaged 23.2 words per minute (wpm) on the BrailleTouch smartphone. The fastest participant, a touchscreen novice, achieved 32.1 wpm during his first session. Overall, participants were able to transfer their existing braille typing skills to a touchscreen device within an hour of practice. We report the speed for braille text entry on three mobile devices, an in-depth error analysis, and the lessons learned for the design and evaluation of accessible and eyes-free soft keyboards.
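
The system described above is a six-key chorded soft keyboard. As a rough illustration of how such chords map to characters, here is a minimal Python sketch; the key numbering, the partial mapping table, and the decode_chord helper are assumptions for illustration, not the BrailleTouch implementation.

```python
# Illustrative sketch of six-key chorded braille input (assumed layout,
# not the published BrailleTouch code). Dots 1-3 under the left hand,
# dots 4-6 under the right, following standard braille cell numbering.

# Braille cell: frozenset of raised dot numbers -> character (subset shown).
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_chord(pressed_dots):
    """Decode one chord (the set of keys held down together) into a character.

    Returns None for chords outside the (partial) table above.
    """
    return BRAILLE_TO_CHAR.get(frozenset(pressed_dots))

if __name__ == "__main__":
    # Typing "bad": each chord is released as a unit, producing one character.
    chords = [{1, 2}, {1}, {1, 4, 5}]
    print("".join(decode_chord(c) or "?" for c in chords))  # -> "bad"
```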


computer vision and pattern recognition | 2013

Decoding Children's Social Behavior

James M. Rehg; Gregory D. Abowd; Agata Rozga; Mario Romero; Mark A. Clements; Stan Sclaroff; Irfan A. Essa; Opal Ousley; Yin Li; Chanho Kim; Hrishikesh Rao; Jonathan C. Kim; Liliana Lo Presti; Jianming Zhang; Denis Lantsman; Jonathan Bidwell; Zhefan Ye

We introduce a new problem domain for activity recognition: the analysis of children's social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.
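
To make "multi-modal activity recognition" concrete, the sketch below trains separate classifiers on stand-in video and audio features and averages their class probabilities (late fusion). The feature dimensions, classifier choice, labels, and random stand-in data are assumptions for illustration, not the features or methods reported on this dataset.

```python
# Minimal late-fusion sketch for multi-modal activity recognition
# (illustrative only; feature sizes, labels, and classifiers are assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test = 200, 20

# Stand-ins for per-segment features, e.g. pooled video descriptors and
# audio statistics. A real system would extract these from the recordings.
video_train = rng.normal(size=(n_train, 64))
audio_train = rng.normal(size=(n_train, 32))
labels = rng.integers(0, 3, size=n_train)   # e.g. smile / vocalize / gaze shift

video_clf = LogisticRegression(max_iter=1000).fit(video_train, labels)
audio_clf = LogisticRegression(max_iter=1000).fit(audio_train, labels)

video_test = rng.normal(size=(n_test, 64))
audio_test = rng.normal(size=(n_test, 32))

# Late fusion: average the per-modality class probabilities.
fused = 0.5 * video_clf.predict_proba(video_test) + \
        0.5 * audio_clf.predict_proba(audio_test)
predictions = fused.argmax(axis=1)
print(predictions)
```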


international conference on universal access in human computer interaction | 2011

Brailletouch: mobile texting for the visually impaired

Brian Frey; Caleb Southern; Mario Romero

BrailleTouch is an eyes-free text entry application for mobile devices. Currently, there exist a number of hardware and software solutions for eyes-free text entry. Unfortunately, the hardware solutions are expensive and the software solutions do not offer adequate performance. BrailleTouch bridges this gap. We present our design rationale and our explorative evaluation of BrailleTouch with HCI experts and visually impaired users.


human computer interaction with mobile devices and services | 2011

BrailleTouch: designing a mobile eyes-free soft keyboard

Mario Romero; Brian Frey; Caleb Southern; Gregory D. Abowd

Texting is the essence of mobile communication and connectivity, as evidenced by today's teenagers, tomorrow's workforce. Fifty-four percent of American teens contact each other daily by texting, as compared to face-to-face (33%) and talking on the phone (30%), according to the Pew Research Center's Internet & American Life Project, 2010. Arguably, today's technologies support mobile text input poorly, primarily due to the size constraints of mobile devices. This is the case for everyone, but it is particularly relevant to the visually impaired. According to the World Health Organization, 284 million people are visually impaired worldwide. In order to connect these users to the global mobile community, we need to design effective and efficient methods for eyes-free text input on mobile devices. Furthermore, everyone would benefit from effective mobile texting for safety and speed. This design brief presents BrailleTouch, our working prototype solution for eyes-free mobile text input.


human factors in computing systems | 2015

Situational Ethics: Re-thinking Approaches to Formal Ethics Requirements for Human-Computer Interaction

Cosmin Munteanu; Heather Molyneaux; Wendy Moncur; Mario Romero; Susan O'Donnell; John Vines

Most Human-Computer Interaction (HCI) researchers are accustomed to the process of formal ethics review for their evaluation or field trial protocol. Although this process varies by country, the underlying principles are universal. While this process is often a formality, formal ethics requirements can be challenging to navigate for field research or lab-based studies with vulnerable users -- a situation common in the social sciences, yet in many cases foreign to HCI researchers. Nevertheless, with the increase in new areas of research such as mobile technologies for marginalized populations or assistive technologies, this is a current reality. In this paper we present our experiences and challenges in conducting several studies that evaluate interactive systems in difficult settings, from the perspective of the ethics process. Based on these, we draft recommendations for mitigating the effect of such challenges on the ethical conduct of research. We then issue a call for interaction researchers, together with policy makers, to refine existing ethics guidelines and protocols in order to more accurately capture the particularities of such field-based evaluations, qualitative studies, challenging lab-based evaluations, and ethnographic observations.


IEEE Transactions on Visualization and Computer Graphics | 2008

Viz-A-Vis: Toward Visualizing Video through Computer Vision

Mario Romero; Jay W. Summet; John T. Stasko; Gregory D. Abowd

In the established procedural model of information visualization, the first operation is to transform raw data into data tables. The transforms typically include abstractions that aggregate and segment relevant data and are usually defined by a human, either a user or a programmer. The theme of this paper is that for video, data transforms should be supported by low-level computer vision. High-level reasoning still resides in the human analyst, while part of the low-level perception is handled by the computer. To illustrate this approach, we present Viz-A-Vis, an overhead video capture and access system for activity analysis in natural settings over variable periods of time. Overhead video provides rich opportunities for long-term behavioral and occupancy analysis, but it poses considerable challenges. We present initial steps addressing two challenges. First, overhead video generates overwhelmingly large volumes of video that are impractical to analyze manually. Second, automatic video analysis remains an open problem for computer vision.
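
The abstract argues that low-level computer vision should supply the data transforms for video visualization. A minimal sketch of that idea, assuming an overhead video file and OpenCV's stock background subtractor in place of the paper's vision pipeline: foreground pixels are aggregated into a coarse occupancy grid that an analyst could then visualize. The grid size, threshold, and file name are illustrative assumptions.

```python
# Sketch: turn overhead video into an occupancy "data table" using
# low-level vision (background subtraction). Not the Viz-A-Vis code;
# the grid size, threshold, and video path are illustrative assumptions.
import cv2
import numpy as np

GRID = (32, 32)  # coarse occupancy grid over the floor plan

def occupancy_table(video_path, grid=GRID):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    table = np.zeros(grid, dtype=np.float64)
    frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                  # foreground pixels
        mask = (mask > 127).astype(np.float32)
        # Aggregate foreground into the coarse grid: each cell holds the
        # fraction of its pixels that were foreground in this frame.
        cell = cv2.resize(mask, grid[::-1], interpolation=cv2.INTER_AREA)
        table += cell
        frames += 1
    cap.release()
    return table / max(frames, 1)   # average occupancy per cell

if __name__ == "__main__":
    print(occupancy_table("overhead.mp4").round(2))
```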


ubiquitous computing | 2012

Supporting parents for in-home capture of problem behaviors of children with developmental disabilities

N. Nazneen; Agata Rozga; Mario Romero; Addie J. Findley; Nathan A. Call; Gregory D. Abowd; Rosa I. Arriaga

Ubiquitous computing has shown promise in applications for health care in the home. In this paper, we focus on a study of how a particular ubicomp capability, selective archiving, can be used to support behavioral health research and practice. Selective archiving technology, which allows the capture of a window of data prior to and after an event, can enable parents of children with autism and related disabilities to record video clips of events leading up to and following an instance of problem behavior. Behavior analysts later view these video clips to perform a functional assessment. In contrast to the current practice of direct observation, a powerful method to gather data about child problem behaviors but one that is costly in terms of human resources and liable to alter behavior in the subjects, selective archiving is cost effective and has the potential to provide rich data with minimal intrusion into the natural environment. To assess the effectiveness of parent data collection through selective archiving in the home, we developed a research tool, CRAFT (Continuous Recording And Flagging Technology), and conducted a study by installing CRAFT in eight households of children with developmental disabilities and severe behavior concerns. The results of this study show the promise and remaining challenges for this technology. We have also shown that careful attention to the design of a ubicomp system for use by other domain specialists or non-technical users is key to moving ubicomp research forward.
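
Selective archiving as described, capturing a window of data before and after a flagged event, is naturally implemented as a ring buffer. The sketch below is one way to do that, not the CRAFT implementation; the frame rate, window lengths, and string stand-ins for frames are assumptions.

```python
# Sketch of selective archiving with a ring buffer (illustrative, not CRAFT).
# Continuously buffer recent frames; when an event is flagged, archive the
# window spanning `pre_seconds` before and `post_seconds` after the flag.
from collections import deque

class SelectiveArchiver:
    def __init__(self, fps=15, pre_seconds=60, post_seconds=60):
        self.fps = fps
        self.pre = deque(maxlen=fps * pre_seconds)  # rolling pre-event window
        self.post_remaining = 0
        self.post_frames = fps * post_seconds
        self.clips = []          # archived clips (lists of frames)
        self.current = None

    def add_frame(self, frame):
        if self.post_remaining > 0:
            self.current.append(frame)
            self.post_remaining -= 1
            if self.post_remaining == 0:
                self.clips.append(self.current)
                self.current = None
        else:
            self.pre.append(frame)

    def flag_event(self):
        """Called when the parent presses the flag button."""
        if self.post_remaining == 0:
            self.current = list(self.pre)       # copy the pre-event window
            self.post_remaining = self.post_frames

# Usage: feed frames from a camera loop, call flag_event() on a button press.
archiver = SelectiveArchiver(fps=2, pre_seconds=3, post_seconds=2)
for t in range(20):
    archiver.add_frame(f"frame-{t}")
    if t == 10:
        archiver.flag_event()
print(len(archiver.clips), [len(c) for c in archiver.clips])  # -> 1 [10]
```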


ubiquitous computing | 2008

Alien presence in the home: the design of Tableau Machine

Mario Romero; Zachary Pousman; Michael Mateas

We introduce a design strategy, alien presence, which combines work in human–computer interaction, artificial intelligence, and media art to create enchanting experiences involving reflection over and contemplation of daily activities. An alien presence actively interprets and characterizes daily activity and reflects it back via generative, ambient displays that avoid simple one-to-one mappings between sensed data and output. We describe the alien presence design strategy for achieving enchantment, and report on Tableau Machine, a concrete example of an alien presence design for domestic spaces. We report on an encouraging formative evaluation indicating that Tableau Machine does indeed support reflection and actively engages users in the co-construction of meaning around the display.
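
As a loose illustration of a generative mapping that avoids a one-to-one relation between sensed data and output, the sketch below keeps internal state and injects randomness so that the same sensed activity level can yield different abstract compositions. The AmbientComposer class, its parameters, and its output fields are hypothetical and are not the Tableau Machine system.

```python
# Sketch of a generative, non one-to-one mapping from sensed activity to an
# abstract display (illustrative; not the Tableau Machine system). The same
# activity level can produce different compositions because the mapping keeps
# internal state and injects controlled randomness.
import random

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

class AmbientComposer:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.mood = 0.5            # slowly drifting internal state

    def compose(self, activity_level):
        """activity_level in [0, 1] from home sensors; returns abstract
        parameters of a generated composition, not a literal readout."""
        # Internal state drifts toward recent activity rather than tracking it.
        self.mood += 0.1 * (activity_level - self.mood)
        return {
            "palette_warmth": clamp(self.mood + self.rng.uniform(-0.2, 0.2)),
            "shape_count": 1 + int(6 * self.mood * self.rng.random()),
            "symmetry": self.rng.choice(["radial", "bilateral", "none"]),
        }

composer = AmbientComposer(seed=42)
for level in [0.1, 0.9, 0.9]:
    print(composer.compose(level))
```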


human factors in computing systems | 2006

Tableau machine: an alien presence in the home

Mario Romero; Zachary Pousman; Michael Mateas

We present Tableau Machine, a non-human social actor for the home. The machine senses, interprets, and reports abstract qualities of human activity through the language of visual art. The goal of the machine is to serve as a strange mirror of everyday life, to open unusual viewpoints, and to generate engaging and long-lasting conversations and reflections. We introduce new models for sensing, interpreting, and reporting human activity, and we describe results of our formative evaluation, which suggest reflection and social engagement among participants.


computer vision and pattern recognition | 2004

Tracking Head Yaw by Interpolation of Template Responses

Mario Romero; Aaron F. Bobick

We propose an appearance-based machine learning architecture that estimates and tracks, in real time, large-range head yaw given a single non-calibrated, monocular, grayscale, low-resolution image sequence of the head. The architecture is composed of five parallel template detectors, a Radial Basis Function Network, and two Kalman filters. The template detectors are five view-specific images of the head ranging across full profiles in discrete steps of 45 degrees. The Radial Basis Function Network interpolates the response vector from the normalized correlation of the input image and the five template detectors. The first Kalman filter models the position and velocity of the response vector in five-dimensional space. The second is a running average that filters the scalar output of the network. We assume the head image has been closely detected and segmented, that it undergoes only limited roll and pitch, and that there are no sharp contrasts in illumination. The architecture is person-independent and is robust to changes in appearance, gesture, and global illumination. The goals of this paper are: first, to measure the performance of the architecture; second, to assess the impact that the temporal information gained from video has on accuracy and stability; and third, to determine the effects of relaxing our assumptions.
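
The estimation step described here, normalized correlation against five view-specific templates followed by interpolation of the five-dimensional response vector to a yaw angle, can be sketched as follows. The Gaussian-style weighting stands in for the trained Radial Basis Function Network, the two Kalman filters are omitted, and the template images and sigma value are assumptions.

```python
# Sketch of yaw estimation from template responses (illustrative, not the
# paper's trained architecture): normalized correlation against five
# view-specific templates, then a hand-built interpolation of the
# five-dimensional response vector to a yaw angle. Kalman filtering omitted.
import numpy as np

TEMPLATE_YAWS = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])  # degrees

def normalized_correlation(image, template):
    """Zero-mean normalized correlation between two equally sized patches."""
    a = image - image.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel() / denom) if denom > 0 else 0.0

def estimate_yaw(image, templates, sigma=0.1):
    """Map the 5-D response vector to a yaw by softmax-like weighted averaging.

    `sigma` controls how sharply the best-matching templates dominate;
    a trained RBF network would learn this mapping from data instead.
    """
    responses = np.array([normalized_correlation(image, t) for t in templates])
    weights = np.exp((responses - responses.max()) / sigma)
    weights /= weights.sum()
    return float(weights @ TEMPLATE_YAWS)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    templates = [rng.normal(size=(24, 24)) for _ in TEMPLATE_YAWS]
    # A probe similar to the +45 degree template should land near +45.
    probe = templates[3] + 0.1 * rng.normal(size=(24, 24))
    print(round(estimate_yaw(probe, templates), 1))
```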

Collaboration


Dive into Mario Romero's collaborations.

Top Co-Authors

Gregory D. Abowd, Georgia Institute of Technology
Caleb Southern, Georgia Institute of Technology
Michael Mateas, University of California
Brian Frey, University of Maryland
Zachary Pousman, Georgia Institute of Technology
Carla F. Griggio, Royal Institute of Technology
Christopher E. Peters, Royal Institute of Technology
Agata Rozga, Georgia Institute of Technology
Björn Thuresson, Royal Institute of Technology
Hanna Hasselqvist, Royal Institute of Technology