Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mathilde M. Bekker is active.

Publication


Featured research published by Mathilde M. Bekker.


Designing Interactive Systems | 1995

Analysis of gestures in face-to-face design teams provides guidance for how to use groupware in design

Mathilde M. Bekker; Judith S. Olson; Gary M. Olson

Many phases of design projects are done in groups. Communication in these groups is naturally supported through a variety of gestures. We catalog four types of gestures that people use when engaged in design (kinetic, spatial, pointing, and other), and overlay this categorization with the purpose of the design subtask: design, meeting management, and other. From this and other observations, we list recommendations for supporting this kind of communication in settings which have technology support, either face-to-face with group editors (where people do not necessarily see the same thing at the same time), or remote work (where people see neither the same view of the object nor a full room view of the other participants).


Interacting with Computers | 2003

On the assessment of usability testing methods for children

Panos Markopoulos; Mathilde M. Bekker

The paper motivates the need to acquire methodological knowledge for involving children as test users in usability testing. It introduces a methodological framework for delineating comparative assessments of usability testing methods for child participants. This framework consists of three dimensions: (1) assessment criteria for usability testing methods, (2) characteristics describing usability testing methods and, finally, (3) characteristics of children that may impact upon the process and the result of usability testing. Two comparative studies are discussed in the context of this framework, along with implications for future research.


Interacting with Computers | 2003

Interaction design and children

Panos Markopoulos; Mathilde M. Bekker

This editorial paper introduces an emerging area for human–computer interaction research, which concerns interaction design and children. To avoid treating children as a homogeneous user group, it discusses some perspectives on their development, their use of technology for entertainment and education and, finally, how to involve children in the various stages of the design process.


Human Factors in Computing Systems | 1997

Helping and hindering user involvement — a tale of everyday design

Stephanie Wilson; Mathilde M. Bekker; Peter Johnson; Hilary Johnson

The importance of an early and on-going focus on users in interactive system design is widely accepted. However, in practice, involving users poses many problems and requires designers to balance conflicting demands. Various factors can hinder or ease the involvement of users. This paper reports a case study involving the design of a bespoke application and gives a detailed account of the obstacles and facilitators to user involvement encountered during the design activity. The obstacles and facilitators are presented in terms of issues such as contacting and selecting users, motivating users, facilitating and mediating meetings and offering points of focus for user contributions. We report and contrast the views of various stakeholders in the design process, and supplement these with our own observations as non-participant observers. Finally, we discuss issues raised by the study and draw out a number of lessons for the CHI community.


Interacting with Computers | 2003

KidReporter: a user requirements gathering technique for designing with children

Mathilde M. Bekker; Julie Beusmans; David V. Keyson; Peter Lloyd

This paper describes the KidReporter method, a design method novel to the domain of interaction design for gathering user requirements from children. The method was chosen and further refined based on assumptions about User-Centred Design, and was considered suitable and appealing for children as a way of participating in design. Two school classes participated in making a newspaper about a zoo, to gather requirements for the design of an interactive educational game intended to teach children about animals while they walk through the zoo. The KidReporter method's main strengths are that it combines several techniques for eliciting information from children, such as interviews, drawing and making pictures. We describe how the method was applied, in what ways it was successful and what we would do differently next time.


Interaction Design and Children | 2003

Assessing usability evaluation methods on their effectiveness to elicit verbal comments from children subjects

Ilse Van Kesteren; Mathilde M. Bekker; Arnold P. O. S. Vermeeren; Peter Lloyd

An exploratory study is described looking at children's ability to provide verbal comments in usability evaluation sessions. Six evaluation methods were applied to test an interactive toy with children aged 6 and 7. The results show that most verbal comments were gathered during Active Intervention sessions, by asking children questions during tasks. Unexpectedly, the Co-Discovery sessions were less successful, because the children did not collaborate very well. Children also provided useful comments in the Thinking Aloud, Retrospection, and Peer Tutoring sessions: they could reflect on their actions at the end of Retrospection sessions, and were able to teach other children how to interact with the toy in Peer Tutoring sessions.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2006

Identifying usability and fun problems in a computer game during first use and after some practice

Wolmet Barendregt; Mathilde M. Bekker; D.G. Bouwhuis; Ester Baauw

This paper describes an experiment to discover how the types of detected problems and children's attitude towards a game change when a computer game for young children is user tested during first use and after the children have practiced with the game. Both the numbers of different types of identified problems and the severity of the problems are investigated. Based on this knowledge, practitioners could adapt the set-up of their user tests to effectively find as many aspects of the game as possible that merit change, according to the aims of the developers. The study shows that usability problems caused by a lack of knowledge were identified more often during first use. Furthermore, fun problems related to a too-high challenge level may disappear after some practice, whereas fun problems caused by the game taking over control for too long while the user wants to continue playing were identified more often after some practice. The study also shows that the impact severity of problems detected during first use was higher than when children had more practice with the game. As a result of these changes in experienced problems, the commonly used measures of efficiency, effectiveness and satisfaction increased when children had practiced with the game. Finally, the study shows that the set of most severe problems identified during first use may be radically different from the set of most severe problems identified after some practice.


IPO Annual Progress Report | 2000

User Involvement in the Design of Human-Computer Interactions: Some Similarities and Differences between Design Approaches

Mathilde M. Bekker; John Long

This paper reviews user involvement in the design of human-computer interactions, as advocated by a selection of different approaches to design. The selection comprises User-Centred Design, Participatory Design, Socio-Technical Design, Soft Systems Methodology, and Joint Application Design. The review yields a preliminary identification of non-configurable and configurable ‘attributes’ of user involvement in design, and their associated ‘values’, which characterise the similarities and differences between the design approaches. The attributes and values are intended, in the longer term, to support designers in comparing and contrasting design approaches and in making more informed choices about the configuration of user involvement in design practice. Requirements for future research into better understanding and configuring user involvement are proposed.


Cognition, Technology & Work | 2008

Development and evaluation of the problem identification picture cards method

Wolmet Barendregt; Mathilde M. Bekker; Ester Baauw

This paper describes the development and assessment of a new formative evaluation method called the problem identification picture cards (PIPC) method. The method enables young children to express both usability and fun problems while playing a computer game. It combines the traditional thinking-aloud method with picture cards that children can place in a box to indicate that a certain type of problem has occurred. An experiment to assess the method shows that children may express more problems (verbally, with a picture card, or with a combination of the two) with the PIPC method than without it (when they can only indicate problems verbally). Children in the experiment did not simply replace verbalisations with the provided picture cards, and some children preferred to use the PIPC method during the test instead of the standard thinking-aloud method. The PIPC method, or some aspects of it, could be a good instrument for increasing the amount of information expressed by young children during an evaluation.


International Conference on Human-Computer Interaction | 2005

A structured expert evaluation method for the evaluation of children's computer games

Ester Baauw; Mathilde M. Bekker; Wolmet Barendregt

Inspection-based evaluation methods, which predict usability problems, can be applied to evaluate products without involving users. A new method named SEEM, inspired by Norman’s theory-of-action model [18] and Malone’s concepts of fun [15], is described for predicting usability and fun problems in children’s computer games. This paper describes a study to assess SEEM’s quality. The results show that the experts in the study predicted about 76% of the problems found in a user test, so the validity of SEEM is quite promising. Furthermore, the participating experts were able to apply the inspection questions in an appropriate manner. Based on this first study, ideas for improving the method are presented.

Collaboration


Dive into Mathilde M. Bekker's collaborations.

Top Co-Authors

J.H. Eggen, Eindhoven University of Technology
Panos Markopoulos, Eindhoven University of Technology
J.A. Sturm, Eindhoven University of Technology
Ester Baauw, Eindhoven University of Technology
Rob Tieben, Eindhoven University of Technology
Janet C. Read, University of Central Lancashire