Publication


Featured research published by Mariana Lilley.


International Conference of Design, User Experience, and Usability | 2014

Towards the Development of Usability Heuristics for Native Smartphone Mobile Applications

Ger Joyce; Mariana Lilley

This paper reports on initial work in the identification of heuristics that may be most usefully applied in the heuristic evaluation of native smartphone applications. Given the prevalence of such applications, this work seems pertinent, particularly as it also seems under-represented in the literature. Once defined, the heuristics were developed further based on the quantitative and qualitative feedback received from sixty Human-Computer Interaction experts in eighteen countries. The resulting heuristics could be beneficial to HCI researchers and educators, and could also potentially expedite and cut the cost of smartphone application usability evaluations for HCI practitioners.


Intelligent Tutoring Systems | 2004

A computer-adaptive test that facilitates the modification of previously entered responses: An empirical study

Mariana Lilley; Trevor Barker

In a computer-adaptive test (CAT), learners are not usually allowed to revise previously entered responses. In this paper, we present findings from our most recent empirical study, which involved two groups of learners and a modified version of a CAT application that provided the facility to revise previously entered responses. Findings from this study showed that the ability to modify previously entered responses did not lead to significant differences in performance for one group of learners (p>0.05), and only relatively small yet significant differences for the other (p<0.01). The implications and the reasons for the difference between the groups are explored in this paper. Despite the small effect of the modification, it is argued that this option is likely to lead to a reduction in student anxiety and an increase in student confidence in this assessment method.


Journal of Internet Services and Applications | 2014

Evaluating security and usability of profile based challenge questions authentication in online examinations

Abrar Ullah; Hannan Xiao; Trevor Barker; Mariana Lilley

Student authentication in online learning environments is an increasingly challenging issue, owing to the inherent absence of physical interaction with online users and potential security threats to online examinations. This study is part of ongoing research on student authentication in online examinations that evaluates the potential benefits of using challenge questions. The authors developed a Profile Based Authentication Framework (PBAF), which uses challenge questions to authenticate students in online examinations. This paper reports the findings of an empirical study in which 23 participants used the PBAF, together with an abuse-case security analysis of the PBAF approach. The overall usability analysis suggests that the PBAF is efficient, effective and usable; however, specific questions need to be replaced with suitable alternatives because of usability challenges. The results suggest that memorability, clarity of questions, syntactic variation and question relevance can cause usability issues leading to authentication failure. A configurable traffic light system was designed and implemented to improve the usability of challenge questions. The security analysis indicates that the PBAF is resistant to informed guessing in general, but identifies specific challenge questions at risk of informed guessing by friends and colleagues. The study was performed with a small number of participants in a simulated online course, and the results need to be verified in a real educational context with a larger sample.
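
The "traffic light" idea described in the abstract could be sketched as a configurable answer check that accepts, flags, or rejects a response depending on how closely it matches the stored profile answer. The similarity measure (Python's `difflib`) and the thresholds here are illustrative assumptions, not the authors' actual design.

```python
# Hedged sketch of a configurable "traffic light" check for challenge-question
# answers. The similarity measure and thresholds are assumptions for
# illustration, not the published PBAF implementation.
from difflib import SequenceMatcher

def traffic_light(expected, given, green=0.9, amber=0.6):
    """Classify an answer as 'green' (accept), 'amber' (ask a further
    question), or 'red' (reject), tolerating minor syntactic variation
    such as case and surrounding whitespace."""
    ratio = SequenceMatcher(None, expected.strip().lower(),
                            given.strip().lower()).ratio()
    if ratio >= green:
        return "green"
    if ratio >= amber:
        return "amber"
    return "red"
```

Making the `green` and `amber` thresholds configurable lets an institution trade authentication strictness against the memorability and syntactic-variation issues the study reports.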


International Conference on Human-Computer Interaction | 2009

The Application of the Flexilevel Approach for the Assessment of Computer Science Undergraduates

Mariana Lilley; Andrew Pyper

This paper reports on the use of the flexilevel approach for the formative assessment of Computer Science undergraduates. A computerized version of the flexilevel was designed and developed, and its scores were compared with those of a traditional computer-based test. The results showed that the flexilevel and traditional scores were highly and significantly correlated (p<0.01). Findings from this study suggest that the flexilevel approach is a viable adaptive testing strategy, and may be a good candidate for smaller applications where IRT-based CATs may be too demanding in terms of resources.
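
The flexilevel branching rule the paper builds on (due to Lord) can be sketched briefly: the item pool is ordered by difficulty, the test starts at the median item, a correct answer selects the next harder unused item, and an incorrect answer the next easier one. The function below is a minimal sketch of that rule; the pool representation, scoring, and stopping condition are assumptions for illustration.

```python
# Hedged sketch of the flexilevel branching rule: a pool of n_items ordered
# by difficulty, starting at the median item. Pool layout and stopping rule
# are illustrative assumptions, not the paper's implementation.

def flexilevel_sequence(responses, n_items):
    """Given a list of booleans (True = correct answer) and a pool of
    n_items ordered from easiest (0) to hardest (n_items - 1), return
    the indices of the items administered, in order."""
    current = (n_items - 1) // 2      # start in the middle of the pool
    harder = current + 1              # next harder unused item
    easier = current - 1              # next easier unused item
    administered = [current]
    for correct in responses:
        if correct and harder < n_items:
            current, harder = harder, harder + 1
        elif not correct and easier >= 0:
            current, easier = easier, easier - 1
        else:
            break                     # pool exhausted in that direction
        administered.append(current)
    return administered
```

Because the branching needs no item response theory calibration, only a difficulty ordering, it matches the paper's point that flexilevel suits smaller applications where IRT-based CATs are too resource-demanding.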


International Conference of Design, User Experience, and Usability | 2015

Smartphone Application Usability Evaluation: The Applicability of Traditional Heuristics

Ger Joyce; Mariana Lilley; Trevor Barker; Amanda Jefferies

The Heuristic Evaluation method has been popular with HCI experts for over 20 years. Yet, we believe that the set of heuristics defined by Nielsen in 1994 needs to be modified prior to the usability evaluation of smartphone applications. In this paper, we investigate the applicability of each of Nielsen's traditional heuristics to the usability evaluation of smartphone applications following an analysis of 105 peer-reviewed papers. It is anticipated that this work might benefit HCI practitioners, educators and researchers as they attempt to define usability heuristics for smartphone applications. This set of heuristics, once defined, could enable the discovery of usability issues early in the smartphone application development life cycle, while continuing to be a discount usability engineering method as originally defined by Nielsen.


Archive | 2016

Mobile Application Usability: Heuristic Evaluation and Evaluation of Heuristics

Ger Joyce; Mariana Lilley; Trevor Barker; Amanda Jefferies

Many traditional usability evaluation methods do not consider mobile-specific issues. This can result in mobile applications that abound in usability issues. We empirically evaluate three sets of usability heuristics for use with mobile applications, including a set defined by the authors. While the set of heuristics defined by the authors surfaces more usability issues in a mobile application than the other sets, improvements to the set can still be made.


Innovation in Teaching and Learning in Information and Computer Sciences | 2012

Understanding the Student Experience through the Use of Personas

Mariana Lilley; Andrew Pyper; Sue Attwood

This paper reports on work conducted by the Computer Science Usability Group at the University of Hertfordshire in which a User-Centred Design methodology was applied to gain a deeper understanding of our undergraduate distance learning student population. Specifically, the work reported here is concerned with the approach employed in the development of personas, and how these were applied to the design of learning experiences. This paper also includes samples of the personas produced as part of this work. Discussions with staff elicited a mixed response to the approach; some colleagues felt they already had a good intuitive sense of who the learners were. However, it is argued here that one of the benefits of using personas lies in how they make such implicit knowledge explicit, and in the impact this has upon the collective understanding of who our learners are.


Innovation in Teaching and Learning in Information and Computer Sciences | 2011

Attitudes to and Usage of CAT in Assessment in Higher Education

Mariana Lilley; Andrew Pyper; Paul Wernick

Understanding the perception and usage of Computerised Adaptive Testing (CAT) by colleagues in the sector represents an important part of an ongoing research programme into Learner Centred Design (LCD). For this reason, a web survey was published that asked about colleagues' experiences and views of assessment in general and CAT in particular. Sixty-nine participants responded to the survey. A range of techniques for assessment were reported, with objective testing being employed more commonly in formative assessment. This represents an important finding in the context of potential uses of CAT given the prevalence of objective testing in CAT. Few of the respondents used CAT as part of their assessment regime although most were aware of it. Practical implementation issues represent the most commonly cited reasons for not using CAT.


International Conference on Human-Computer Interaction | 2016

Mobile Application Tutorials: Perception of Usefulness from an HCI Expert Perspective

Ger Joyce; Mariana Lilley; Trevor Barker; Amanda Jefferies

Mobile application tutorials are an opportunity to educate users about a mobile application. Should a mobile application tutorial not be used, the number of frustrated users and uninstalled applications could increase, resulting in a substantial loss in revenue for mobile application developers. Yet, the historical ineffectiveness of printed documentation and online help may have a negative influence on the perception of usefulness of mobile application tutorials for more experienced HCI experts. This in turn may influence their design decisions, whereby they may choose to not design a mobile application tutorial when it may have been better for the user. Our research suggests that while there is a split in the perception of usefulness of mobile application tutorials within the HCI community, the length of time in an HCI role did not have a statistically significant effect on this perception.


Future Technologies Conference | 2016

Evaluating the impact of changing contexts on mobile application usability within agile environments

Ger Joyce; Mariana Lilley; Trevor Barker; Amanda Jefferies

Mobile applications tend to be used in contexts that change over time. These varying contexts may impact the usability, and potentially the overall user experience, of mobile applications. However, the impact of context from a temporal perspective is not fully considered within usability evaluations. Consequently, this work focuses on a conceptual method that attempts to address this limitation. The proposed Contextual Usability Evaluation method promises to allow Human-Computer Interaction experts to evaluate the perceived impact of varying contexts over time on the usability of mobile applications. Despite the focus on context over time, the method is well suited to fast-paced Agile environments.

Collaboration


Mariana Lilley's most frequent collaborators.

Top Co-Authors

Trevor Barker, University of Hertfordshire

Amanda Jefferies, University of Hertfordshire

Ger Joyce, University of Hertfordshire

Abrar Ullah, University of Hertfordshire

Andrew Pyper, University of Hertfordshire

Hannan Xiao, University of Hertfordshire

Carol Britton, University of Hertfordshire

Paul Wernick, University of Hertfordshire

Jill Hewitt, University of Hertfordshire

Karen Clark, University of Hertfordshire