Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis A. Leiva is active.

Publication


Featured research published by Luis A. Leiva.


Human-Computer Interaction with Mobile Devices and Services (MobileHCI) | 2012

Back to the app: the costs of mobile application interruptions

Luis A. Leiva; Matthias Böhmer; Sven Gehring; Antonio Krüger

Smartphone users may be interrupted while interacting with an application, under either intended or unintended circumstances. In this paper, we report on a large-scale observational study that investigated mobile application interruptions in two scenarios: (1) intended back-and-forth switching between applications and (2) unintended interruptions caused by incoming phone calls. Our findings reveal that these interruptions rarely happen (at most 10% of daily application usage), but when they do, they may introduce significant overhead, delaying task completion by up to a factor of four. We conclude with a discussion of the results, their limitations, and a series of implications for the design of mobile phones.


Human Factors in Computing Systems (CHI) | 2015

Text Entry on Tiny QWERTY Soft Keyboards

Luis A. Leiva; Alireza Sahami; Alejandro Catala; Niels Henze; Albrecht Schmidt

The advent of wearables (e.g., smartwatches, smartglasses, and digital jewelry) anticipates the need for text entry methods on very small devices. We conduct fundamental research on this topic using three QWERTY-based soft keyboards for three different screen sizes, motivated by the extensive training that users have with QWERTY keyboards. In addition to ZoomBoard (a soft keyboard for diminutive screens), we propose a callout-based soft keyboard and ZShift, a novel extension of the Shift pointing technique. We conducted a comprehensive user study followed by extensive analyses of performance, usability, and short-term learning. Our results show that different small screen sizes demand different types of assistance. Manufacturers can benefit from these findings by selecting an appropriate QWERTY soft keyboard for their devices. Ultimately, this work provides designers, researchers, and practitioners with a new understanding of the QWERTY soft keyboard design space and its scalability to tiny touchscreens.


The Prague Bulletin of Mathematical Linguistics | 2013

CASMACAT: An Open Source Workbench for Advanced Computer Aided Translation

Vicent Alabau; Ragnar Bonk; Christian Buck; Michael Carl; Francisco Casacuberta; Mercedes García-Martínez; Jesús González; Philipp Koehn; Luis A. Leiva; Bartolomé Mesa-Lao; Daniel Ortiz; Herve Saint-Amand; Germán Sanchis; Chara Tsoukala

We describe an open source workbench that offers advanced computer-aided translation (CAT) functionality: post-editing machine translation (MT), interactive translation prediction (ITP), visualization of word alignments, extensive logging with a replay mode, and integration with eye trackers and e-pens.


Information Sciences | 2013

Warped K-Means: An algorithm to cluster sequentially-distributed data

Luis A. Leiva; Enrique Vidal

Many devices generate large amounts of data that follow some sort of sequentiality (e.g., motion sensors, e-pens, and eye trackers), and often these data need to be compressed for classification, storage, and/or retrieval tasks. Traditional clustering algorithms can be used for this purpose, but unfortunately they do not cope with the sequential information implicitly embedded in such data. Thus, we revisit the well-known K-means algorithm and provide a general method to properly cluster sequentially-distributed data. We present Warped K-Means (WKM), a multi-purpose partitional clustering procedure that minimizes the sum-of-squared-error criterion while imposing a hard sequentiality constraint in the classification step. We illustrate the properties of WKM in three applications, one being the segmentation and classification of human activity. WKM outperformed five state-of-the-art clustering techniques at simplifying data trajectories, achieving a recognition accuracy of nearly 97%, an improvement of around 66% over its peers. Moreover, this improvement came with a reduction in computational cost of more than one order of magnitude.
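The hard sequentiality constraint means every cluster must cover a contiguous run of samples, so the assignment step reduces to placing the k-1 segment boundaries. Below is a minimal Python sketch of that idea using simple boundary hill-climbing; it is an illustrative reimplementation under this reading of the abstract, not the authors' reference code, which uses a more refined initialization and update rule.

```python
# Illustrative sequence-constrained clustering in the spirit of Warped
# K-Means: clusters are contiguous segments, and the "classification"
# step only shifts segment boundaries when doing so lowers the SSE.
import numpy as np

def segment_sse(X, bounds):
    """Total within-segment sum of squared errors for given boundaries."""
    return sum(np.sum((X[bounds[j]:bounds[j + 1]]
                       - X[bounds[j]:bounds[j + 1]].mean(axis=0)) ** 2)
               for j in range(len(bounds) - 1))

def warped_kmeans(X, k, max_iter=100):
    """Partition the sequence X (n x d array) into k contiguous segments."""
    n = len(X)
    bounds = [n * j // k for j in range(k)] + [n]  # equal-length init
    best = segment_sse(X, bounds)
    for _ in range(max_iter):
        improved = False
        for j in range(1, k):              # interior boundaries only
            for step in (-1, +1):          # try shifting each boundary
                cand = bounds.copy()
                cand[j] += step
                if cand[j - 1] < cand[j] < cand[j + 1]:  # keep segments non-empty
                    sse = segment_sse(X, cand)
                    if sse < best:
                        bounds, best = cand, sse
                        improved = True
        if not improved:                   # local minimum reached
            break
    labels = np.repeat(np.arange(k), np.diff(bounds))
    return labels, bounds, best

# Example: segment a random 2-D trajectory into 5 contiguous clusters.
X = np.cumsum(np.random.default_rng(0).normal(size=(200, 2)), axis=0)
labels, bounds, sse = warped_kmeans(X, k=5)
```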


ACM Transactions on Intelligent Systems and Technology | 2016

Gestures à Go Go: Authoring Synthetic Human-Like Stroke Gestures Using the Kinematic Theory of Rapid Movements

Luis A. Leiva; Daniel Martín-Albo; Réjean Plamondon

Training a high-quality gesture recognizer requires a large number of examples to achieve good performance on unseen, future data. However, recruiting participants, collecting data, and labeling samples are usually time-consuming and expensive. Thus, it is important to investigate how to empower developers to quickly collect gesture samples that improve UI usage and user experience. In response to this need, we introduce Gestures à Go Go (g3), a web service plus an accompanying web application for bootstrapping stroke gesture samples based on the kinematic theory of rapid human movements. The user only has to provide a gesture example once, and g3 creates a model of that gesture. Then, by introducing local and global perturbations to the model parameters, g3 generates from tens to thousands of synthetic human-like samples. Through a comprehensive evaluation, we show that synthesized gestures perform comparably to gestures produced by human users. Ultimately, this work informs our understanding of how to design better gesture-driven user interfaces.
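In the Sigma-Lognormal formulation of the Kinematic Theory, the pen-tip velocity of a stroke is a sum of lognormal impulses, each described by an amplitude D, an onset time t0, log-time parameters (mu, sigma), and start/end directions. A minimal Python sketch of the reconstruct-and-perturb idea follows; the perturbation scales are illustrative assumptions, not the values g3 actually uses.

```python
# Sketch of Sigma-Lognormal stroke reconstruction plus parameter jitter.
# Each impulse is a tuple (D, t0, mu, sigma, theta_s, theta_e).
import numpy as np
from scipy.special import erf

def stroke(params, t):
    """Reconstruct a 2-D trajectory from a list of lognormal impulses."""
    vx, vy = np.zeros_like(t), np.zeros_like(t)
    for D, t0, mu, sigma, th_s, th_e in params:
        dt = np.where(t > t0, t - t0, np.nan)            # impulse not started yet
        v = D / (sigma * np.sqrt(2 * np.pi) * dt) * np.exp(
            -(np.log(dt) - mu) ** 2 / (2 * sigma ** 2))  # lognormal speed profile
        cdf = 0.5 * (1 + erf((np.log(dt) - mu) / (sigma * np.sqrt(2))))
        phi = th_s + (th_e - th_s) * cdf                 # direction along an arc
        vx += np.nan_to_num(v * np.cos(phi))
        vy += np.nan_to_num(v * np.sin(phi))
    step = t[1] - t[0]
    return np.cumsum(vx) * step, np.cumsum(vy) * step    # integrate velocity

def synthesize(params, n=10, seed=0):
    """Generate n human-like variants by jittering the model parameters
    (the noise magnitudes here are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, 200)
    variants = []
    for _ in range(n):
        jittered = [(D * rng.normal(1, 0.05), t0 + rng.normal(0, 0.005),
                     mu + rng.normal(0, 0.02), sigma * rng.normal(1, 0.05),
                     th_s + rng.normal(0, 0.03), th_e + rng.normal(0, 0.03))
                    for D, t0, mu, sigma, th_s, th_e in params]
        variants.append(stroke(jittered, t))
    return variants
```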


Intelligent User Interfaces (IUI) | 2009

Interactive multimodal transcription of text images using a web-based demo system

Verónica Romero; Luis A. Leiva; Alejandro Héctor Toselli; Enrique Vidal

This document introduces a web-based demo of an interactive framework for the transcription of handwritten text, where user feedback is provided by means of pen strokes on a touchscreen. Here, the automatic handwritten text recognition system and the user cooperate to generate the final transcription.


Human-Computer Interaction with Mobile Devices and Services (MobileHCI) | 2011

Restyling website design via touch-based interactions

Luis A. Leiva

This paper introduces a work-in-progress technique for dynamically updating the presentational attributes of UI elements. Starting from an existing web layout, the webmaster specifies which elements are candidates for modification. Then, touch-based events are used as implicit inputs to an adaptive engine that automatically modifies, rearranges, and restyles the interacted items according to browsing usage. In this way, the UI is capable of (incrementally) adapting itself to the abilities of individual users at run time.


Intelligent User Interfaces (IUI) | 2010

Interactive machine translation using a web-based architecture

Daniel Ortiz-Martínez; Luis A. Leiva; Vicent Alabau; Francisco Casacuberta

In this paper we present a new way of translating documents using a web-based system. An interactive approach is proposed as an alternative to post-editing the output of a machine translation system. In this approach, the user's feedback is used to validate or correct parts of the system output, which allows the generation of improved versions of the rest of the output.
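The key mechanic is prefix-constrained completion: after each user correction, the system must produce the best translation that extends the validated prefix. The toy Python sketch below illustrates the interaction loop with a made-up n-best list standing in for a real MT decoder; it shows the protocol only, not the paper's search algorithm.

```python
# Toy interactive translation prediction (ITP) loop: the user validates a
# prefix (including one corrected word) and the system returns the
# best-scoring hypothesis compatible with it.

def complete(prefix: str, nbest: list[tuple[float, str]]) -> str:
    """Best hypothesis extending the validated prefix (or the prefix itself)."""
    matches = [(score, hyp) for score, hyp in nbest if hyp.startswith(prefix)]
    return max(matches)[1] if matches else prefix

# Hypothetical decoder output for one source sentence (scores are made up).
nbest = [
    (-1.2, "the house is red"),
    (-1.5, "the home is red"),
    (-2.0, "the house is reddish"),
]

draft = complete("", nbest)            # system proposes "the house is red"
revised = complete("the home", nbest)  # user corrected one word; the suffix is regenerated
```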


Human Factors in Computing Systems (CHI) | 2017

Synthesizing Stroke Gestures Across User Populations: A Case for Users with Visual Impairments

Luis A. Leiva; Daniel Martín-Albo; Radu-Daniel Vatavu

We introduce a new principled method, grounded in the Kinematic Theory of Rapid Human Movements, to automatically generate synthetic stroke gestures across user populations in order to support ability-based design of gesture user interfaces. Our method is especially useful when the target user population is difficult to sample adequately and, consequently, there is not enough data to train gesture recognizers to deliver high levels of accuracy. To showcase the relevance and usefulness of our method, we collected gestures from people without visual impairments and successfully synthesized gestures with the articulation characteristics of people with visual impairments. We also show that gesture recognition accuracy improves significantly when our synthetic gesture samples are used for training. Our contributions will benefit researchers and practitioners who wish to design gesture user interfaces for people with various abilities, by helping them prototype, evaluate, and predict gesture recognition performance without having to recruit and involve people with disabilities in long, time-consuming gesture collection experiments.
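One way to read the synthesis step is that population-specific articulation characteristics enter through the distributions from which the kinematic model parameters are drawn. The Python sketch below shows that conditioning step only; the per-population statistics are entirely hypothetical placeholders, not measured values from the paper.

```python
# Toy population-conditioned parameter sampling. The Gaussian statistics
# below are hypothetical placeholders, NOT values reported in the paper.
import numpy as np

rng = np.random.default_rng(1)

POPULATIONS = {
    # (mean, std) of the lognormal time parameters per user group
    "sighted":           {"mu": (-1.8, 0.10), "sigma": (0.25, 0.03)},
    "visually_impaired": {"mu": (-1.5, 0.15), "sigma": (0.35, 0.05)},
}

def sample_impulse(population):
    """Draw (mu, sigma) for one lognormal impulse from the chosen group."""
    stats = POPULATIONS[population]
    mu = rng.normal(*stats["mu"])
    sigma = max(rng.normal(*stats["sigma"]), 1e-3)  # keep sigma positive
    return mu, sigma

# Gestures synthesized from these draws inherit the slower, more variable
# articulation attributed here (hypothetically) to the target population.
mu, sigma = sample_impulse("visually_impaired")
```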


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2016

Predicting User Engagement with Direct Displays Using Mouse Cursor Information

Ioannis Arapakis; Luis A. Leiva

Predicting user engagement with direct displays (DD) is of paramount importance to commercial search engines, as well as to search performance evaluation. However, understanding within-content engagement on a web page is not a trivial task, mainly for two reasons: (1) engagement is subjective, and different users may exhibit different behavioural patterns; (2) existing proxies of user engagement (e.g., clicks, dwell time) suffer from certain caveats, such as the well-known position bias, and are not as effective at discriminating between useful and non-useful components. In this paper, we conduct a crowdsourcing study and examine how users engage with a prominent web search engine component, the knowledge module (KM) display. To this end, we collect and analyse more than 115k mouse cursor positions from 300 users performing a series of search tasks. Furthermore, we engineer a large number of meta-features, which we use to predict different proxies of user engagement, including attention and usefulness. Our experiments demonstrate that our approach predicts different levels of user engagement more accurately than existing baselines.
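A minimal Python sketch of the feature-engineering step follows: it turns one visit's raw cursor track into a fixed-length vector that a standard classifier can consume. The feature set, hover threshold, and classifier choice are illustrative assumptions, not the paper's exact pipeline.

```python
# Turn raw mouse-cursor positions into simple engagement meta-features.
import numpy as np

def cursor_features(track):
    """track: (n x 3) array of (timestamp_ms, x, y) rows for one page visit."""
    t, xy = track[:, 0], track[:, 1:]
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # px per sample step
    dt = np.maximum(np.diff(t), 1e-9)                    # ms per sample step
    speed = steps / dt                                   # px/ms
    return np.array([
        steps.sum(),                 # total distance travelled
        speed.mean(), speed.std(),   # speed statistics
        (speed < 0.01).mean(),       # share of near-stationary (hover) steps
        t[-1] - t[0],                # dwell time on the page (ms)
        len(track),                  # number of cursor events
    ])

# These vectors would then feed any off-the-shelf classifier, e.g.:
#   from sklearn.ensemble import RandomForestClassifier
#   X = np.stack([cursor_features(tr) for tr in tracks])
#   clf = RandomForestClassifier().fit(X, engagement_labels)
```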

Collaboration


Dive into Luis A. Leiva's collaborations.

Top Co-Authors

Vicent Alabau (Polytechnic University of Valencia)
Enrique Vidal (Polytechnic University of Valencia)
Daniel Martín-Albo (Polytechnic University of Valencia)
Francisco Casacuberta (Polytechnic University of Valencia)
Alejandro Héctor Toselli (Polytechnic University of Valencia)
Daniel Ortiz-Martínez (Polytechnic University of Valencia)
Joan-Andreu Sánchez (Polytechnic University of Valencia)
Ricardo Sánchez-Sáez (Polytechnic University of Valencia)
Verónica Romero (Polytechnic University of Valencia)
Réjean Plamondon (École Polytechnique de Montréal)