
Publication


Featured research published by Radu-Daniel Vatavu.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2015

Touch interaction for children aged 3 to 6 years

Radu-Daniel Vatavu; Gabriel Cramariuc; Doina Maria Schipor

Our present understanding of young children's touch-screen performance is still limited, as only a few studies have analyzed children's touch interaction patterns so far. In this work, we address children aged 3 to 6 years, who are in the preoperational stage according to Piaget's theory of cognitive development, and we report their touch-screen performance with standard tap and drag-and-drop interactions on smartphones and tablets. We show significant improvements in children's touch performance as they grow from 3 to 6 years, and point to performance differences between children and adults. We correlate children's touch performance, expressed as task completion times and target acquisition accuracy, with sensorimotor evaluations that characterize children's finger dexterity and graphomotor and visuospatial processing abilities, and report significant correlations. Our observations are drawn from the largest children's touch dataset available in the literature, consisting of data collected from 89 children and an additional 30 young adults who serve as a comparison group. We use our findings to recommend guidelines for designing touch-screen interfaces for children from the new perspective of sensorimotor abilities. We release our large dataset to the community for further studies of children's touch input behavior. It is our hope that our findings on the little-studied age group of 3- to 6-year-olds, together with the companion dataset, will contribute toward a better understanding of children's touch interaction behavior and toward improved touch interface designs for young children.
Highlights:
- We investigate young children's touch performance on smartphones and tablets.
- Children's touch performance improves significantly from 3 to 6 years.
- We discuss findings in terms of Piaget's preoperational developmental stage.
- We recommend design guidelines by considering children's sensorimotor skills.
- Dataset released for the largest children's touch-screen study to date (89 children).
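The correlation analysis described in the abstract (touch performance versus sensorimotor scores) can be illustrated with a small Pearson correlation sketch. The numbers below are invented for illustration and are not from the study's dataset.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: finger-dexterity scores vs. mean tap times (seconds).
# Higher dexterity going with faster taps yields a negative correlation.
dexterity = [12, 15, 18, 22, 25, 30]
tap_time = [2.4, 2.1, 1.9, 1.6, 1.4, 1.1]
r = pearson_r(dexterity, tap_time)
```

A significance test (e.g., a t-test on r) would accompany this in a real analysis; the sketch covers only the coefficient itself.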


International Conference on Human-Computer Interaction | 2011

Estimating the perceived difficulty of pen gestures

Radu-Daniel Vatavu; Daniel Vogel; Géry Casiez; Laurent Grisoni

Our empirical results show that users perceive the execution difficulty of single stroke gestures consistently, and execution difficulty is highly correlated with gesture production time. We use these results to design two simple rules for estimating execution difficulty: establishing the relative ranking of difficulty among multiple gestures; and classifying a single gesture into five levels of difficulty. We confirm that the CLC model does not provide an accurate prediction of production time magnitude, and instead show that a reasonably accurate estimate can be calculated using only a few gesture execution samples from a few people. Using this estimated production time, our rules, on average, rank gesture difficulty with 90% accuracy and rate gesture difficulty with 75% accuracy. Designers can use our results to choose application gestures, and researchers can build on our analysis in other gesture domains and for modeling gesture performance.
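The first rule above, ranking gestures by estimated production time, can be sketched as follows. The gesture names and timing samples are hypothetical, and the mean-of-samples estimator is an illustrative reading of "a few gesture execution samples from a few people", not the paper's exact procedure.

```python
def rank_by_difficulty(samples):
    """Rank gestures from easiest to hardest by mean production time.

    `samples` maps a gesture name to a few measured execution times
    (seconds). Since perceived difficulty is highly correlated with
    production time, the mean time serves as the difficulty estimate.
    """
    estimates = {g: sum(ts) / len(ts) for g, ts in samples.items()}
    return sorted(estimates, key=estimates.get)

# Hypothetical timing samples for three single-stroke gestures.
times = {
    "circle": [0.61, 0.58, 0.65],
    "star": [1.42, 1.35, 1.51],
    "check": [0.40, 0.37, 0.43],
}
ranking = rank_by_difficulty(times)  # easiest first: check, circle, star
```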


European Conference on Interactive TV | 2013

There's a world outside your TV: exploring interactions beyond the physical TV screen

Radu-Daniel Vatavu

We explore interactions in the space surrounding the TV set, and use this space as a canvas to display additional TV content in the form of projected screens and customized controls and widgets. We present implementation details for a prototype that creates a hybrid, physical-projected, augmented TV space with off-the-shelf, low-cost technical equipment. The results of a participatory design study are reported to inform the development of interaction techniques for multimedia content that spans the physical-projected TV space. We report a set of commands for twelve frequently used TV tasks and compile guidelines for designing interactions for augmented TV spaces. With this work, we aim to make TV interface designers aware of the many opportunities offered by the space around the physical TV screen once it becomes interactive. It is our hope that this first investigation of the interactive potential of the area surrounding the TV set will encourage new explorations of augmented TV spaces, and will foster the design of new applications and new entertainment experiences for the interactive, smart TV spaces of the future.


International Conference on Multimodal Interfaces | 2011

The effect of sampling rate on the performance of template-based gesture recognizers

Radu-Daniel Vatavu

We investigate in this work the effect of motion sampling rate on recognition accuracy and execution time for current template-based gesture recognizers, in order to provide performance guidelines to practitioners and designers of gesture-based interfaces. We show that as few as 6 sampling points are sufficient for Euclidean and angular recognizers to attain high recognition rates, and that a linear relationship exists between sampling rate and number of gestures for the dynamic time warping technique. We report execution times obtained with our controlled downsampling that are 10-20 times faster than those reported in existing work at the same high recognition rates. The results of this work will benefit practitioners by highlighting important performance aspects to consider when using template-based gesture recognizers.
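As an illustration of the kind of recognizer evaluated here, this is a minimal sketch (not the paper's implementation) of uniform arc-length downsampling plus a Euclidean point-to-point score, in the style of $1-family template recognizers:

```python
import math

def resample(points, n=6):
    """Downsample a 2-D stroke to n points spaced uniformly along its
    path length (n = 6 sufficed for Euclidean and angular recognizers
    in the paper's experiments)."""
    interval = sum(math.dist(a, b) for a, b in zip(points, points[1:])) / (n - 1)
    pts = [tuple(p) for p in points]
    out = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            # Interpolate a new point exactly `interval` along the path.
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def euclidean_score(a, b):
    """Mean point-to-point distance between two equally sampled strokes:
    the Euclidean template matcher's dissimilarity score."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```

Resampling both a candidate stroke and every template to the same few points, then picking the template with the smallest score, gives a basic template-based recognizer; the paper's angular and DTW variants swap in different dissimilarity measures.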


Multimedia Tools and Applications | 2012

Point & click mediated interactions for large home entertainment displays

Radu-Daniel Vatavu

This work introduces and discusses the implementation details of a novel concept for a home entertainment system, together with an affordable controlling interface that uses point & click interactions to create, mix, and manipulate media screens within the same projection-based display. Scenarios for single and multiple viewers are discussed, with users able to create, reposition, resize, and control their own media screens. The standard and familiar WIMP interaction techniques are transferred from PCs to home entertainment using a motion-sensing remote controller. An optional system feature is described for the automatic configuration of such media screens by analyzing the home environment with computer vision techniques. Observations from initial user studies are reported with regard to the perceived usefulness and acceptability of the proposed system. The main benefit introduced by this work is that of a large entertainment display that becomes shared and personalized as it is adapted and fit into the home environment.


European Conference on Interactive TV | 2008

Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience

Radu-Daniel Vatavu; Stefan Gheorghe Pentiuc

Watching television is usually a shared experience, allowing family or friends with the same viewing interests to watch, comment on, and enjoy programs together. Interaction, however, sits at the opposite end: it is reduced to the traditional remote control, which proves very limited with respect to sharing. Although the viewing experience is shared among the group, the control part of the interface allows only one-viewer-at-a-time interaction. In this paper, we discuss a new interaction technique for controlling the TV set through a commonly available, shared, wide-area interface: the coffee table. By visually designating interaction-sensitive areas on the coffee table surface, television control can be achieved via simple hand movements across the surface, performed by any viewer at any time. The resulting interface is thus fun, simple, intuitive and, most importantly, widely shareable and immediately available to all participants.


Gesture-Based Human-Computer Interaction and Simulation | 2009

Gesture Recognition Based on Elastic Deformation Energies

Radu-Daniel Vatavu; Laurent Grisoni; Stefan Gheorghe Pentiuc

We present a method for recognizing gesture motions based on elastic deformable shapes and curvature templates. Gestures are modeled using a spline curve representation that is enhanced with elastic properties: the entire spline or any of its parts may stretch or bend. The energy required to transform a gesture into a given template gives an estimation of the similarity between the two. We demonstrate the results of our gesture classifier with a video-based acquisition approach.
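As a rough illustration of the idea (the paper works with elastic spline representations and curvature templates; this discrete polyline analogue is an assumption, not the authors' method), a deformation energy can be built from two penalties: stretching, for changes in segment lengths, and bending, for changes in turning angles.

```python
import math

def deformation_energy(gesture, template, k_stretch=1.0, k_bend=1.0):
    """Discrete analogue of the elastic energy needed to deform one
    equally sampled stroke into another. Lower energy = more similar,
    so a gesture is classified to the template of minimum energy."""
    def seg_lengths(pts):
        return [math.dist(a, b) for a, b in zip(pts, pts[1:])]

    def turn_angles(pts):
        angs = []
        for p0, p1, p2 in zip(pts, pts[1:], pts[2:]):
            a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
            a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
            # Wrap the turning angle into (-pi, pi].
            angs.append(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        return angs

    stretch = sum((u - v) ** 2
                  for u, v in zip(seg_lengths(gesture), seg_lengths(template)))
    bend = sum((u - v) ** 2
               for u, v in zip(turn_angles(gesture), turn_angles(template)))
    return k_stretch * stretch + k_bend * bend
```

The weights `k_stretch` and `k_bend` are illustrative knobs for trading off the two deformation modes.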


Human Factors in Computing Systems | 2016

Between-Subjects Elicitation Studies: Formalization and Tool Support

Radu-Daniel Vatavu; Jacob O. Wobbrock

Elicitation studies, where users supply proposals meant to effect system commands, have become a popular method for system designers. But the method to date has assumed a within-subjects procedure and statistics. Despite the benefits of examining the relative agreement of independent groups (e.g., men versus women, children versus adults, novices versus experts), the lack of appropriate tools for between-subjects agreement rate analysis has so far prevented such comparative investigations. In this work, we expand the elicitation method to between-subjects designs. We introduce a new measure for evaluating coagreement between groups, and a new statistical test for agreement rate analysis that reports the exact p-value for the significance of the difference between agreement rates calculated for independent groups. We show the usefulness of our tools by re-examining previously published gesture elicitation data, for which we discuss significant differences in agreement between technical and non-technical participants, men and women, and different acquisition technologies. Our new tools will enable practitioners to properly analyze user-elicited data resulting from complex experimental designs with multiple independent groups and, consequently, will help them understand agreement data and verify hypotheses about agreement at more sophisticated levels of analysis.
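As background, the within-subjects agreement rate that these tools extend can be read as the chance that two distinct participants, picked at random, propose the same sign for a referent. A minimal sketch of that computation (the between-subjects coagreement measure and exact significance test introduced in this paper are not reproduced here):

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent: group identical proposals,
    then count agreeing pairs out of all pairs of participants."""
    n = len(proposals)
    if n < 2:
        return 1.0  # a single proposal trivially agrees with itself
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Hypothetical elicitation data for one referent: 4 participants
# proposed "swipe", 2 proposed "tap".
ar = agreement_rate(["swipe"] * 4 + ["tap"] * 2)
```

Full agreement gives 1.0, no two participants agreeing gives 0.0; comparing such rates across independent participant groups is exactly what the paper's between-subjects test formalizes.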


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2013

Automatic recognition of object size and shape via user-dependent measurements of the grasping hand

Radu-Daniel Vatavu; Ionuț Alexandru Zaiți

An investigation is conducted into the feasibility of using the posture of the hand during prehension to identify geometric properties of grasped objects, such as size and shape. A recent study by Paulson et al. (2011) demonstrated the successful use of hand posture for discriminating between several actions in an office setting. Inspired by their approach and closely following results on motor planning and control from psychology (MacKenzie and Iberall, 1994), we adopt a more cautious and punctilious approach in order to understand the opportunities that hand posture offers for recognizing properties of target objects. We present results from an experiment designed to investigate recognition of object properties during grasping under two different conditions: object translation (involving firm grasps) and object exploration (which includes a large variety of hand and finger configurations). We show that object size and shape can be recognized with up to 98% accuracy during translation, and with up to 95% and 91% accuracy during exploration, by employing user-dependent training. In contrast, experiments show much lower accuracy (up to 60%) with user-independent training for all tested classification techniques. We also point out the variability of individual grasping postures during object exploration and the need for classifiers trained with a large set of examples. The results of this work can benefit psychologists and researchers interested in human studies and motor control by providing more insight into grasping measurements, pattern recognition practitioners by reporting recognition results for new algorithms, and designers of gesture-based interfaces by providing design guidelines derived from our experiment.
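To illustrate what user-dependent recognition over hand-posture measurements looks like (a nearest-neighbor sketch over hypothetical feature vectors; the paper evaluates its own set of classifiers, not necessarily this one):

```python
import math

def knn_classify(train, query, k=3):
    """Classify a posture feature vector (e.g., normalized finger joint
    measurements) by majority vote among its k nearest training
    examples. "User-dependent training" means `train` was recorded
    from the same user who produced `query`."""
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = {}
    for _, label in neighbors:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical 3-D posture features for two grasped-object shapes.
train = [
    ((0.90, 0.80, 0.70), "sphere"), ((0.85, 0.82, 0.75), "sphere"),
    ((0.20, 0.30, 0.90), "box"), ((0.25, 0.28, 0.85), "box"),
]
shape = knn_classify(train, (0.88, 0.79, 0.72))  # nearest examples: spheres
```

The paper's observation that exploration produces highly variable postures maps directly onto the need for many training examples per class in such a classifier.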


Proceedings of the 2007 Workshop on Multimodal Interfaces in Semantic Interaction | 2007

Hand posture recognition for human-robot interaction

Tudor Ioan Cerlinca; Stefan Gheorghe Pentiuc; Radu-Daniel Vatavu; Marius Cristian Cerlinca

In this paper, we describe a fast and accurate method for hand posture recognition in video sequences using multiple video cameras. The technique we propose relies on head detection, skin detection, and human body proportions to recognize commands from real-time video sequences. Our technique is also robust to changes in lighting. The experimental results show that it can be used in various vision-based applications that require real-time detection and recognition of hand postures.
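The skin-detection step can be illustrated with a widely used per-pixel RGB heuristic. The thresholds below are a common rule of thumb from the skin-segmentation literature, shown for illustration only; they are not the authors' detector, which additionally leverages head detection and body proportions.

```python
def is_skin(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin using a classic
    threshold rule: skin is reddish, not gray (high channel spread),
    and red dominates both green and blue."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)
```

Applying such a rule per pixel and keeping the connected regions near a detected head gives a cheap first pass at locating hands, which is the general pipeline shape the abstract describes.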

Collaboration


Dive into Radu-Daniel Vatavu's collaborations.

Top Co-Authors

Luis A. Leiva (Polytechnic University of Valencia)
Daniel Martín-Albo (Polytechnic University of Valencia)
Jean Vanderdonckt (Université catholique de Louvain)
Wei-Tek Tsai (Arizona State University)
Tudor Ioan Cerlinca (Ştefan cel Mare University of Suceava)