User Interface Factors of Mobile UX: A Study with an Incident Reporting Application
Lasse Einfeldt and Auriol Degbelo
xdot GmbH, Münster, Germany; Institute for Geoinformatics, University of Münster, Münster, Germany
[email protected], [email protected]
Keywords: mobile UX, mobile form design, map UX, environmental monitoring

Abstract: Smartphones are now ubiquitous, yet our understanding of the user interface factors that maximize mobile user experience (UX) is still limited. This work presents a controlled experiment which investigated factors that affect the usability and UX of a mobile incident reporting app. The results indicate that the sequence of user interface elements matters when striving to increase UX, and that there is no difference between tabs and scrolling as navigation modalities in short forms. These findings can serve as building blocks for empirically derived guidelines for mobile incident reporting.
ORCID: https://orcid.org/0000-0001-5087-8776

INTRODUCTION

User experience (UX) has gained attention from many sides since the turn of the millennium. Although many authors noted the lack of clear definitions of user experience, and no clear understanding of it, in early research (Forlizzi and Ford, 2000; McCarthy and Wright, 2004; Hassenzahl, 2005; Wright et al., 2005), a great deal of research has dealt with user experience since then. There have been reviews of UX in human-computer interaction (e.g. Bargas-Avila and Hornbæk (2011); Pettersson et al. (2018); Kieffer et al. (2019)) and work investigating factors of mobile UX in general (e.g. Wigelius and Väätäjä (2009); Arhippainen and Tähti (2003)). Nonetheless, our understanding of the user interface factors which maximize mobile user experience is still limited. Investigating these factors is important for at least two reasons. First, mobile devices are peculiar by virtue of their size, their (still) relatively reduced processing power, and their input modalities (e.g. post-WIMP interaction). That is, insights gathered while assessing user experience on desktop (or other devices) are not readily portable to the world of mobile UX. Second, an understanding of these factors is important to develop design heuristics, which can then be integrated as constraints during computer-generated user interface design (Oulasvirta, 2017).

This work is primarily concerned with the impact of positioning and type of navigation modality on the overall usability and user experience of mobile geospatial applications. Incident reporting apps (of which maps are a central component) are of interest here for two reasons. From the theoretical point of view (and as mentioned in Kray et al. (2017); Roth (2013)), there is currently no consolidated set of guidelines as to how to design interactions with maps.
A few works (Schöning et al., 2014; Arhippainen, 2013; Kraak et al., 2018) made some useful recommendations, but these do not address the UX of mobile geospatial apps (i.e. maps) directly. Thus, research in this area must remain active for study outcomes to crystallize into empirically derived guidelines in the future. From the practical point of view, maximizing UX in the context of incident reporting is a catalyst for uptake and prolonged use. Put differently, a positive user experience is crucial to guarantee prolonged contributions by citizens. As a starting point for the work, the mobile app “Meine Umwelt” was used. The rationale for its choice is introduced next.

Reports of ecological data are an important data source for German federal agencies dealing with the preservation of the environment. Reported data can cover several topics, including findings of neophytes, illegally disposed waste, or endangered animals. This information can be used to find items of interest in the wild easily, instead of searching for them. A system used for reporting these data is the mobile app “Meine Umwelt” (Kooperation-Umweltportale).

RELATED WORK

As indicated in (Ricker and Roth, 2018), mobile devices enable users to volunteer their local knowledge and experience while situated in place, providing timely and unstructured information about changing geographic conditions. Since “Meine Umwelt” is dedicated to the reporting of environmental data, related work on reporting systems is briefly presented in this section. In addition, research on factors of mobile UX is briefly introduced, followed by work on form design on mobile devices, and on interactive maps.
Reporting systems
Winckler et al. (2016) focused on the UX dimensions important for incident reporting systems. Interviews with participants were used to gain insights into the users' perception of the investigated app. They found users to prefer a selection of reportable items from a menu. This provides an overview of all items and avoids generic forms trying to fit all items at once. Moreover, an interactive map was clearly demanded by the users. Another requirement found in their interviews was the necessity to provide an identification, to avoid fake reports. Pictures and videos of the item should also be included in the report. Related to incident reporting are citizen observatories, defined as “the participation of citizens in monitoring the quality of the environment they live in” (Liu et al., 2014). Here, no recent incident or event is present as a motivation for the report. Instead, observatories are more focused on ecological data and observations than on incidents in an urban context. The acquired data should be used by the government and, if possible, made available to the public as data and/or a service (Liu et al., 2014). Citizens' observatories are described as a cheap and easy way for the administration to collect data about the environment.

The collection of data about the environment where citizens live is exactly what the prototypes presented later in this study were designed for. In addition, targeting a broad audience alongside the environmentalists, who are highly active in preservation anyway, is possible with a mobile application. Having data reported by local citizens as stakeholders can lead to enhanced quality in environmental decisions (Reed, 2008). This data is easy to collect and, as a review of ten years of community-based monitoring found, one of the most efficient methods of monitoring (Conrad and Hilchey, 2011).
Preece (2016) references multiple technologies and concepts used for citizen science, like passive and active data collection, mobile apps, web portals, webcams, drones, and gamification. All of these can be useful approaches to collect data which fulfill some given requirements. In addition, these concepts can be combined, like an active data collection platform available both as an app and a web portal. She also mentions some reporting systems like Floracaching, iSpotNature.org, and iNaturalist.org. The study in this work uses active data collection via mobile apps. The domains of the reports are comparable to the ones in iSpotNature.org, which provides a wide range of species to report. In contrast, other reporting services like Floracaching focus on one domain only, i.e. plants.
Factors of mobile UX
An early listing of important dimensions affecting mobile UX in general was provided in (Arhippainen and Tähti, 2003). These dimensions were: the user, the social factors, the cultural factors, the context of use, and the given product. Along similar lines, Subramanya and Yi (2007) divided factors contributing to a good user experience into three categories: device-related (i.e. hardware), communication-related (i.e. provide a feeling of face-to-face communication as much as possible), and application-related (i.e. UI-related); Wigelius and Väätäjä (2009) identified five dimensions affecting mobile UX: the social context, the spatial context, the temporal context, and the infrastructural as well as the task contexts; and Ickin et al. (2012) identified seven factors that influence mobile UX in their study: application interface design, application performance, battery efficiency, phone features, application and connectivity cost, user routines, and user lifestyle. Regarding the investigation of UX, Korhonen et al. (2010) proposed that mobile UX can be traced back to two features of the user's context: the triggering context (i.e. a single contextual factor that changes the user's experience stream in a positive or negative direction), and the core experience (i.e. the experience that was most memorable to the user during the interaction). The lessons learned about UI factors in this work intend to extend this body of knowledge with insights from UI element design and placement.
Form design on mobile devices
Harms et al. (2015) investigated the use of long forms on smartphones. They tested four different designs and found scrolling to perform the worst of all possible methods, while the other three designs (tabs, menus, and collapsible fieldsets) worked equally well. This finding matches the framework of Zhang and Adipat (2005), who stated that the cognitive load demanded from the user should be minimized by avoiding long lists. One way to avoid scrolling in long forms is to structure the content in categories, each fitting the screen size. Besides, the framework from (Zhang and Adipat, 2005) proposes to use as little interaction as possible (which changes in categories would imply). Hence, the structuring of content on mobile phones should be designed thoughtfully. While Couper et al. (2011) investigated the placement of buttons in long forms (and not their structuring), their statement that the design of forms can affect user behavior speaks in favor of further investigating form design in general. These studies all examined the design of, and interaction with, long forms. The question remains whether the findings are also applicable to short forms, and whether scrolling in forms should be avoided in general.
Interactive maps
Degbelo et al. (2019) compared the merits of form-based and map-based interaction for geodata creation tasks. They reported that the sweet spot of interactive maps (on desktop devices) seems not so much to be their impact on productivity, but rather their positive influence on the overall UX. Users reported that maps are more stimulating and attractive than forms for information creation tasks. Regarding the mobile context of interactive maps, Burigat and Chittaro (2011) investigated mobile maps and the use of three different approaches to visualizing references to off-screen content of maps. In another study, Burigat et al. (2008) found zoomable maps with an overview window (overview+detail approach) to be useful for map interaction. A third study by Burigat and Chittaro (2013) showed increased performance in task completion time with overview+detail layouts which users could manipulate. The focus was on interfaces which allow map manipulation through interactions with the overview and highlighting of objects. Finally, Horbiński et al. (2020) provided evidence that the positioning of UI elements on mobile maps matters. Users indeed expressed different preferences for the positioning of buttons for features such as geolocation, search, or routing. Though these studies provide valuable insights into the use of interactive maps on mobile devices, they do not cover an understanding of the factors affecting mobile map user experience, which is the topic of this article.
USER STUDY

The goal was to investigate differences in users' perception of the navigation modalities and sequences of the interactions. The interfaces were designed to investigate the influence of map positioning as well as of different form designs on a smartphone. This study can be viewed, following Schmidt (2009), as a 'follow-up study with a replication condition' to the work of Harms et al. (2015). The focus was on two questions. RQ1: What is the influence of different form designs on user experience and usability? The focus here was on short forms (i.e. up to 10 fields). RQ2: What is the influence of UI component sequences on user experience and usability?
Variables
Independent variables: the sequence of the UI components, and the design of the form UI; these were controlled by the different prototypes. Dependent variables: usability, user experience, and task performance. Usability and user experience were measured using the System Usability Scale (SUS, Brooke (1995)) and the User Experience Questionnaire (UEQ-S, Schrepp et al. (2017)), respectively. Since it was not possible to integrate interaction logs into “Meine Umwelt”, the time to complete the tasks was measured, for all prototypes, using screen recordings of the smartphone used during the experiment.

Study design
Besides the application “Meine Umwelt”, two prototypes were developed for the experiment (Figure 1). The prototypes were designed to separate the sequences of the UI elements and the form designs. Therefore, three applications were used: Map + Selection + Form (scroll) [hereafter, Prototype 1]; Map + Selection + Form (tab) [hereafter, Prototype 2]; Selection + Form (scroll) + Map [hereafter, Base condition] (“Meine Umwelt”). A within-subject design was used in the experiment. Each participant completed three ecological reporting tasks, with different levels of difficulty. The order of the tasks was counterbalanced using a Latin Square approach (see supplementary material).
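The Latin-square counterbalancing used here can be illustrated with a short sketch. This is hypothetical code, not the authors' material: a cyclic square such as the one below guarantees that each condition appears in every position equally often across rows.

```python
def latin_square(conditions):
    """Cyclic Latin square: each condition appears exactly once
    per row and once per column."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Example: counterbalancing the order of the three apps across participants.
orders = latin_square(["Base (MU)", "Prototype 1", "Prototype 2"])
for participant, order in enumerate(orders, start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```

Participants are then assigned to the rows in turn, so that each app is seen first, second, and last equally often.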
Tasks
Each prototype was tested with three different tasks of equivalent properties and difficulty. The properties were the complexity of the task, the complexity of the place, and the point of time. The complexity of the task refers to the different forms provided in the tasks. The tested condition of the Base condition provides six categories to report. These were all used in the tasks, and some of them had to be used multiple times to reach the total of nine tasks. The plant category “Ambrosia” was used in all task groups, because the complexity of this task was higher than for all other conditions. The form to report Ambrosia had three more input fields than the remaining categories, resulting in a more demanding task (as more information has to be remembered and filled in by the user). Each task group had one task with a point of time requiring the user to go back in the calendar by several months, as opposed to the other dates used (e.g. “right now”, “yesterday”). Lastly, the complexity of the place was higher for one task per group. This was used to make users interact with the map beyond just searching for street names. The tasks were in German and are available in the supplementary material. The study was approved by the local ethics board and pilot tested with two participants. One result of the pilot study was a slight adjustment of the tasks, to improve their understandability. Also, a search bar was implemented for the prototypes to enable place searches on the map. The results from the two pilot participants are not included in the analysis. See https://doi.org/10.6084/m9.figshare.13174550.
Participants
18 participants (10 females and 8 males) were recruited by e-mail and word of mouth. All participants spoke German, because the study and the app “Meine Umwelt” were in German. The participants included four landscape ecologists and biologists, who are involved in landscape and environmental preservation, along with other users who might use the app more casually. This mix ensures a group representative of the user group of “Meine Umwelt”. The average age of the participants was 22.7 (SD: 2.7).
RESULTS

Table 1 presents the results of the study. In line with recommendations from previous work (Dragicevic, 2016), the whole analysis was done using interval estimates. Confidence intervals provide much richer information than p-values alone. A confidence interval that does not include zero indicates statistical significance; the tighter the interval, and the farther from zero, the stronger the evidence. The analysis was done using the bootES package (Kirby and Gerlanc, 2013) in R. The number of bootstrap resamples was N = 2000. Task performance (i.e. TimeOnTask) was similar across all three prototypes.

Regarding usability, the Base condition “Meine Umwelt” had an average value of 59.7. The difference between the Base condition and Prototype 1 was significant (9.4, 95% CI [1.5, 18.1]), and so was the difference between the Base condition and Prototype 2 (7.9, 95% CI [0.7, 15.9]). In contrast, the difference between Prototype 1 and Prototype 2 was not significant with respect to usability ratings.

As regards user experience, the Base condition (hereafter MU) was rated with a pragmatic quality of 1.01 by the participants. The hedonic quality was lower at 0.38, and the overall rating was 0.70. Overall, users rated the user experience of Prototype 1 (P1) significantly higher than the user experience of the Base condition (+0.57, 95% CI [0.25, 0.97]). Prototype 2 (P2) also collected significantly higher ratings than the Base condition (+0.54, 95% CI [0.14, 0.96]) for the overall user experience. The difference between MU and P1 was significant for the pragmatic quality (0.85, 95% CI [0.2, 1.6]), and so was the difference between MU and P2 for the pragmatic aspects (0.85, 95% CI [0.01, 1.7]). The difference between P1 and P2 was not significant for the pragmatic quality.
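The interval estimates were obtained with bootES in R; the underlying percentile-bootstrap idea for a within-subject mean difference can be sketched in plain Python as follows. This is illustrative only: the function and the sample data below are not from the study.

```python
import random

def bootstrap_ci(diffs, n_resamples=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the mean of paired differences.
    A CI that excludes zero indicates a significant difference."""
    rng = random.Random(seed)
    n = len(diffs)
    # Resample with replacement, record the mean of each resample.
    means = sorted(
        sum(rng.choice(diffs) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Illustrative (made-up) paired SUS differences for 18 participants:
diffs = [12.5, 5.0, 10.0, 7.5, 15.0, 2.5, 10.0, 12.5, 5.0,
         7.5, 17.5, 10.0, 2.5, 12.5, 7.5, 10.0, 15.0, 5.0]
lo, hi = bootstrap_ci(diffs)
print(f"mean diff = {sum(diffs) / len(diffs):.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

bootES additionally supports bias-corrected-and-accelerated (BCa) intervals; the percentile variant shown here is the simplest member of the family.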
The differences between all prototypes were not significant for the hedonic quality.

Figure 1: Top: P1 & P2 (map as landing screen) vs MU (map integrated into a form); Middle: P1 with scrollable form vs P2 with tabs to structure the form vs MU (Baseline); Bottom: Example species type in P1 & P2 vs example species type in MU.

Regarding the size of the effects, Hedges' g was computed for all significant differences. This is in line with recommendations from previous work (e.g. Sauro, 2014; Lakens, 2013) for studies with fewer than 20 participants. The values obtained were 0.52 (MU vs P1, usability); 0.46 (MU vs P2, usability); 0.71 (MU vs P1, user experience total); 0.56 (MU vs P2, user experience total); 0.50 (MU vs P1, user experience pragmatic); and 0.44 (MU vs P2, user experience pragmatic). These suggest a medium effect.

Gender effects (male vs female) and background domain effects (landscape ecologist/biologist vs others) were tested. These were all non-significant on all dimensions (TimeOnTask, usability, and user experience). A learning effect was apparent in the data for task completion time: for 16 of 18 participants, the first app interaction took the most time. Nevertheless, the effect was apparent for all applications, because of the Latin square design of the experiment and the resulting order of apps for each participant. Therefore, the learning effect did not influence the results of any single app. The main difference between the tested applications was the positioning of the map and the form designs. Both prototypes scored higher values for both usability and user experience. These prototypes used the map as the landing screen of the application. The influence of form design was not apparent, as both prototypes scored similar values and performed in similar times. These results are now discussed in detail.
DISCUSSION

This section touches upon five topics: the interpretation of the task performance results, the effect of form design on UX (RQ1), the effect of the sequence of UI elements on UX (RQ2), the implications of the results for theory and design, and the limitations of the study.
Functional similarity of the apps
The user study has shown similar results for the task performance of the tested applications. All three of them needed approximately the same time to complete all tasks (Table 1). This indicates that the applications are all equally suited for the task at hand, namely the reporting of ecological data. There were no clear indications of an app hindering users in effectively performing the provided tasks. A goal of designing the two additional prototypes was to provide a similar design, to be able to compare outcomes. With all applications performing on a similar level regarding the time required to solve the tasks, this goal seems to have been achieved.

Corresponding Cohen's d values were 0.54 (MU vs P1, usability); 0.48 (MU vs P2, usability); 0.74 (MU vs P1, user experience total); 0.59 (MU vs P2, user experience total); 0.53 (MU vs P1, user experience pragmatic); and 0.46 (MU vs P2, user experience pragmatic).
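For reference, the Hedges' g values in the results section follow from these Cohen's d values through a small-sample bias correction. The sketch below assumes the common approximation J = 1 - 3/(4·df - 1) with df = n - 1 for the within-subject design; the paper does not state the exact formula used by bootES.

```python
def hedges_g(cohens_d, n):
    """Apply Hedges' small-sample correction to Cohen's d.

    Uses the approximation J = 1 - 3 / (4 * df - 1) with df = n - 1,
    for a within-subject design with n participants (assumed here)."""
    correction = 1 - 3 / (4 * (n - 1) - 1)
    return cohens_d * correction

# d values from the note above, n = 18 participants:
for d in (0.54, 0.48, 0.74, 0.59, 0.53, 0.46):
    print(f"d = {d:.2f}  ->  g = {hedges_g(d, 18):.2f}")
```

Rounded to two decimals, these closely match the reported Hedges' g values; any residual discrepancy stems from the d values themselves being rounded.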
Effect of form design
Considering the first research question, the results have shown no significant difference between scrollable forms and tab-structured forms. Put differently, the sequences “Map + Selection + Form (scroll)” [P1] and “Map + Selection + Form (tab)” [P2] can be considered equivalent with regard to task completion, usability, and UX. The main implication is that the findings of Harms et al. (2015), who found scrolling to be worse (as to usability) for navigation in long forms, cannot be generalized to all sizes of forms. The usability of forms with up to 10 input fields is not significantly improved by a tab-based design. Thus, form designs do not noticeably influence usability and user experience in all cases.
Effect of sequence of UI elements
Significant differences between the Base condition (MU) and Prototype 1 were found for the user experience ratings. The differing variable between the two applications was the positioning of the map. Looking closely at the dimensions of user experience, hedonic UX was not significantly impacted, while pragmatic UX was rated differently by the participants. Likewise, usability ratings were much higher for Prototype 1. This answers the second research question: positioning the interactive map before the form in the sequence of interactions results in better user experience and usability. Given that perceived usability is strongly associated with pragmatic user experience (see e.g. Diefenbach et al. (2014); Pettersson et al. (2018)), the convergence of usability and UEQ ratings during the experiment increases our confidence that pragmatic user experience has been positively affected by the change in the positioning of the map component. In addition, given that the sequences “Map + Selection + Form (scroll)” [P1] and “Map + Selection + Form (tab)” [P2] can be considered equivalent (see above), the fact that similar observations were made for Prototype 2 also provides confirmatory evidence for the result. There may be several reasons for this, and these are now discussed using two theories: 'cognitive fit theory' (Vessey, 1991) and the 'memory-based theory of sequence effects' (Cockburn et al., 2017).

Table 1: Results. Average values per prototype are reported first, followed by the average of the differences between the prototypes. Cells highlighted in blue indicate significant values (i.e. confidence intervals that do not include zero).

                       MU [95% CI]       P1 [95% CI]         P2 [95% CI]
Time (seconds)         318 [298, 339]    306 [279, 335]      323 [287, 379]
Time (Diff, MU)        -                 -13 [-42, 23]       4 [-38, 56]
Time (Diff, P2)        -                 -17 [-84, 31]       -
SUS Score              59.7              -                   -
SUS (Diff, MU)         -                 9.4 [1.5, 18.1]     7.9 [0.7, 15.9]
UEQ (prag.)            1.01              -                   -
UEQ prag. (Diff, MU)   -                 0.85 [0.2, 1.6]     0.85 [0.01, 1.7]
UEQ (hed.)             0.38              -                   -
UEQ (total)            0.70              -                   -
UEQ total (Diff, MU)   -                 0.57 [0.25, 0.97]   0.54 [0.14, 0.96]

One reason could be that the sequence 'location reporting => species type selection => species reporting' (sequence 1) provides a better cognitive fit than the sequence 'species type selection => species reporting => location reporting' (sequence 2). Cognitive fit theory (Vessey, 1991) posits that problem-solving efficiency and effectiveness increase when the problem-solving aids (in this context, UI elements) support the strategies required to perform a task. In the case of sequence 2, the map popping up to enter the location interrupts the interaction of filling in a form (see Figure 1, left). In contrast, sequence 1 provides a full-screen map interface to enter the location and additionally encapsulates the location input. Hence, users are done with entering location data when they start to provide additional data (e.g. species type, date, height/number of plants, see Figure 1, middle) using forms. Thus, sequence 1 offers a 'cleaner' task separation and hence a better cognitive fit. It is worth mentioning that cognitive fit in the strictest sense only predicts improved efficiency and effectiveness for some conditions. These were not observed (see Table 1). Yet, if the scope of cognitive fit theory is broadened to non-instrumental aspects of interaction, the explanation above is plausible. A single experiment may not justify broadening the scope of the theory, but based on the results, the following provisional postulate can be formulated: user experience and/or performance increase when problem-solving aids support the strategies required to perform a task.

Another reason for the differences in the scores might be that both SUS and UEQ measure perceived usability and UX, respectively.
Therefore, the first impression of the map as the landing screen of an application might simply be perceived more positively. This explanation would concur with Lindgaard et al. (2006), who found that users form their judgment about the visual appeal of web pages during the first 50 milliseconds of their interaction. That finding was confirmed by Tractinsky et al. (2006), who also provided evidence that first impressions about the attractiveness of web pages are not only formed quickly, but lasting. Since users find maps to be more stimulating and attractive than forms for information provision (see Degbelo et al. (2019)), it is plausible that their first impression upon seeing the map first was much more positive, and lasted till the end of the interaction. Besides, given that UX was measured after the experiment, a memory-based view of UX could be valuable at this point. According to memory-based theories of UX (Cockburn et al., 2017), three factors influence people's memory of experiences: primacy (i.e. the over-weighted influence of the initial moments of an experience), recency (i.e. the over-weighted influence of the terminating moments of the experience), and peak-end (i.e. the over-weighted influence of the most intense moments of the experience). Since the map was present at the beginning in sequence 1 and at the end in sequence 2, producing in one case a positive primacy effect and in the other a positive recency effect, the results suggest that there may be cases where primacy effects with a UI element weigh stronger than recency effects with that element. The explanations provided in this and the previous paragraph are arguably tentative, but they are also useful hypotheses to corroborate in follow-up experiments.

Implications for theory and design

“Meine Umwelt” is an incident reporting app, and as such, the lessons learned apply to incident reporting apps more broadly. As to theory, Winckler et al. (2013) provided a comprehensive model for tasks related to incident reporting.
In their model, they suggested that sequence does not matter for the sub-tasks 'describe the incident', 'locate the incident', and 'inform the time of the incident'. The results above suggest making the scope of their theoretical model more precise: sequence does not matter from the time-completion point of view, but it does matter from the pragmatic UX point of view. As to design, Norman (2009), calling on designers to design for memorable experiences, formulated this rule of thumb: “What is the most important part of an experience? Psychologists emphasize what they call the primacy and recency effects, with recency being the most important. In other words, what is most important? The ending. What is most important after that? The start. So make sure the beginning and the end are wonderful”. According to the results above, the end may not always be more important than the start.
Limitations
Though the prototypes were designed as closely as possible to the baseline application “Meine Umwelt”, some minor differences can still be detected when comparing the applications side by side (e.g. captions in the pictures, or the choice of the map tile provider, see Figure 1, left). These differences could not be removed, and stem from the different technologies used for the app development. “Meine Umwelt” uses an API key for Google Maps (and thus Google Maps as a base map), while the prototypes implemented used OpenStreetMap as a base map. Besides, while “Meine Umwelt” used the Cordova framework, React Native was used to develop the prototypes. The native components provided by React Native have slightly different properties than those of Cordova (e.g. the placeholders for <input> tags are displayed in a slightly different way, see Figure 1, middle). These differences (i.e. map [Google Maps] vs map [OpenStreetMap], or form [Cordova] vs form [React Native]) are minor nonetheless and are mentioned here for the sake of completeness only.

There is no recent study known to the authors reporting on a comparison between the user experience, on mobile devices, of Google Maps vs. OpenStreetMap, or between the user experience of Cordova apps and React Native apps. Nonetheless, a study by Schnur et al. (2018) reported that the perceived user complexity of Google Maps was consistently lower than that of OpenStreetMap for several levels of detail. Tuch et al. (2009), in the context of websites, reported an inverse-linear relationship between visual complexity and pleasure; that is, start pages with low visual complexity were rated by users as more pleasurable. A replication of several studies (Miniukovich and Marchese, 2020) confirmed this inverse-linear relationship for websites. Miniukovich and De Angeli (2014) also observed a negative correlation between visual complexity and aesthetics for mobile apps. Putting these findings together suggests, if the difference map [Google Maps] vs. map [OpenStreetMap] were important, that P1 & P2 (OpenStreetMap, Figure 1, top left) would have obtained lower user experience ratings than MU (Google Maps, Figure 1, top right). The opposite was observed during the study.
CONCLUSION

This article has investigated the effect of the sequence of UI elements and the type of forms within a mobile application for reporting ecological data. A user study has shown significant preferences for the map as a starting element, instead of the map as an ending element. Besides, a tab-based, structured design was tested against a scrollable view. Results have shown no significant difference between these designs in short forms. With respect to earlier research (Harms et al., 2015), it can be concluded that scrolling does not perform worse for all form lengths. In short, the sequence of user interface elements on mobile devices matters, and the type of form design matters depending on the length of the forms. Designers should keep both in mind (besides the button placement identified in previous work (Horbiński et al., 2020)) while building their next incident reporting app. Future work can replicate this study, for instance factoring in data about the complexity of the base maps and collecting qualitative data about what users dis/liked. Additionally, future studies can further investigate why perceived user experience and usability are better for some sequences than for others, based on a revised version of cognitive fit theory and memory-based theories of UX.
Acknowledgments
We thank xdot GmbH for their support and for sharingthe code of “Meine Umwelt”.
REFERENCES
Arhippainen, L. (2013). A tutorial of ten user experience heuristics. In Lugmayr, A., Franssila, H., Paavilainen, J., and Kärkkäinen, H., editors, International Conference on Making Sense of Converging Media, Academic MindTrek '13, pages 336–337, Tampere, Finland. ACM.

Arhippainen, L. and Tähti, M. (2003). Empirical evaluation of user experience in two adaptive mobile application prototypes. In MUM 2003. Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia, pages 27–34. Linköping University Electronic Press.

Bargas-Avila, J. A. and Hornbæk, K. (2011). Old wine in new bottles or novel challenges: a critical analysis of empirical studies of user experience. In Tan, D. S., Amershi, S., Begole, B., Kellogg, W. A., and Tungare, M., editors, Proceedings of CHI '11, pages 2689–2698, Vancouver, British Columbia, Canada. ACM Press.

Brooke, J. (1995). SUS - A quick and dirty usability scale. Usability Evaluation in Industry, 189:4–7.

Burigat, S. and Chittaro, L. (2011). Visualizing references to off-screen content on mobile devices: A comparison of Arrows, Wedge, and Overview+Detail. Interacting with Computers, 23(2):156–166.

Burigat, S. and Chittaro, L. (2013). On the effectiveness of Overview+Detail visualization on mobile devices. Personal and Ubiquitous Computing, 17(2):371–385.

Burigat, S., Chittaro, L., and Parlato, E. (2008). Map, diagram, and web page navigation on mobile devices: the effectiveness of zoomable user interfaces with overviews. In Proceedings of MobileHCI '08, pages 147–156.

Cockburn, A., Quinn, P., and Gutwin, C. (2017). The effects of interaction sequencing on user experience and preference. International Journal of Human-Computer Studies, 108:89–104.

Conrad, C. C. and Hilchey, K. G. (2011). A review of citizen science and community-based environmental monitoring: Issues and opportunities. Environmental Monitoring and Assessment, 176(1-4):273–291.

Couper, M. P., Baker, R., and Mechling, J. (2011). Placement of navigation buttons in Web surveys. Survey Practice, 4(1):11.

Degbelo, A., Kruse, J., and Pfeiffer, M. (2019). Interactive maps, productivity and user experience: A user study in the e-mobility domain. Transactions in GIS, 23(6):1352–1373.

Diefenbach, S., Kolb, N., and Hassenzahl, M. (2014). The 'hedonic' in human-computer interaction: history, contributions, and future research directions. In Wakkary, R., Harrison, S., Neustaedter, C., Bardzell, S., and Paulos, E., editors, Designing Interactive Systems Conference 2014 (DIS '14), pages 305–314, Vancouver, British Columbia, Canada. ACM.

Dragicevic, P. (2016). Fair statistical communication in HCI. In Robertson, J. and Kaptein, M., editors, Modern Statistical Methods for HCI, pages 291–330. Springer, Cham.

Forlizzi, J. and Ford, S. (2000). The building blocks of experience: an early framework for interaction designers. In Proceedings of the 3rd Conference on Designing Interactive Systems, pages 419–423. ACM.

Harms, J., Kratky, M., Wimmer, C., Kappel, K., and Grechenig, T. (2015). Navigation in long forms on smartphones: scrolling worse than tabs, menus, and collapsible fieldsets. In Abascal, J., Barbosa, S., Fetter, M., Gross, T., Palanque, P., and Winckler, M., editors,
Human-Computer Interaction – INTERACT 2015 , pages333–340. Springer, Cham.Hassenzahl, M. (2005). The Thing and I: Understanding theRelationship Between User and Product. In
Funology:From Usability to Enjoyment , chapter 3, pages 31–42.Kluwer Academic Publishers, 2 edition.Horbi´nski, T., Cybulski, P., and Medy´nska-Gulij, B. (2020).Graphic design and button placement for mobile map ap-plications.
The Cartographic Journal , 0(0):1–13.Ickin, S., Wac, K., Fiedler, M., Janowski, L., Hong, J.-H.,and Dey, A. K. (2012). Factors influencing quality ofexperience of commonly used mobile applications.
IEEECommunications Magazine , 50(4):48–56.Kieffer, S., Rukonic, L., Kervyn de Meerendr´e, V., and Van-derdonckt, J. (2019). Specification of a UX process ref-erence model towards the strategic planning of UX ac-tivities. In Chessa, M., Paljic, A., and Braz, J., editors,
Proceedings of the 14th International Joint Conferenceon Computer Vision, Imaging and Computer GraphicsTheory and Applications , pages 74–85, Prague, CzechRepublic. SCITEPRESS.Kirby, K. N. and Gerlanc, D. (2013). BootES: An R pack-age for bootstrap confidence intervals on effect sizes.
Be-havior Research Methods , 45(4):905–927.Kooperation-Umweltportale (2019). Meine Umwelt.https://play.google.com/store/apps/details?id=de.bw.umwelt.meineumwelt.Korhonen, H., Arrasvuori, J., and V¨a¨an¨anen-Vainio-Mattila, K. (2010). Analysing user experience of per-sonal mobile products through contextual factors. In An-gelides, M. C., Lambrinos, L., Rohs, M., and Rukzio,E., editors,
Proceedings of the 9th International Con-ference on Mobile and Ubiquitous Multimedia (MUM2010) , page 11, Limassol, Cyprus. ACM.Kraak, M., Ricker, B., and Engelhardt, Y. (2018). Chal-lenges of mapping sustainable development goals in-dicators data.
ISPRS International Journal of Geo-Information , 7(12):482.Kray, C., Schmid, F., and Fritze, H. (2017). Guest editorial:map interaction.
GeoInformatica , 21(3):573–576.Lakens, D. (2013). Calculating and reporting effect sizesto facilitate cumulative science: a practical primer for t-tests and ANOVAs.
Frontiers in Psychology , 4.Lindgaard, G., Fernandes, G., Dudek, C., and Brown, J.(2006). Attention web designers: You have 50 millisec-onds to make a good first impression!
Behaviour & In-formation Technology , 25(2):115–126.iu, H.-Y., Kobernus, M., Broday, D., and Bartonova, A.(2014). A conceptual approach to a citizens’ observa-tory – supporting community-based environmental gov-ernance.
Environmental Health , 13(1):107.McCarthy, J. and Wright, P. (2004).
Technology as experi-ence , volume 11. MIT Press.Miniukovich, A. and De Angeli, A. (2014). Visual impres-sions of mobile app interfaces. In Roto, V., H¨akkil¨a, J.,V¨a¨an¨anen-Vainio-Mattila, K., Juhlin, O., Olsson, T., andHvannberg, E. T., editors,
Proceedings of NordiCHI ’14 ,pages 31–40, Helsinki, Finland. ACM Press.Miniukovich, A. and Marchese, M. (2020). Relationshipbetween visual complexity and aesthetics of webpages.In Bernhaupt, R., Mueller, F. F., Verweij, D., Andres,J., McGrenere, J., Cockburn, A., Avellino, I., Goguey,A., Bjøn, P., Zhao, S., Samson, B. P., and Kocielnik, R.,editors,
Proceedings of CHI 2020 , pages 1–13, Honolulu,Hawaii, USA. ACM.Norman, D. A. (2009). Memory is more important thanactuality. interactions , 16(2):24.Oulasvirta, A. (2017). User interface design with combina-torial optimization.
Computer , 50(1):40–47.Pettersson, I., Lachner, F., Frison, A.-K., Riener, A., andButz, A. (2018). A Bermuda Triangle? A review ofmethod application and triangulation in user experienceevaluation. In Mandryk, R. L., Hancock, M., Perry, M.,and Cox, A. L., editors,
Proceedings of CHI ’18 , pages1–16, Montreal, Quebec, Canada. ACM Press.Preece, J. (2016). Citizen science: new research challengesfor human–computer interaction.
International Journalof Human-Computer Interaction , 32(8):585–612.Reed, M. S. (2008). Stakeholder participation for envi-ronmental management: A literature review.
BiologicalConservation , 141(10):2417–2431.Ricker, B. and Roth, R. (2018). Mobile maps and responsivedesign. In Wilson, J. P., editor,
Geographic InformationScience & Technology Body of Knowledge , number Q2.Roth, R. E. (2013). Interactive maps: What we know andwhat we need to know.
Journal of Spatial InformationScience , 6:59–115.Sauro, J. (2014). MeasuringU: Understanding effect sizes inuser research. https://measuringu.com/effect-sizes/, lastaccessed: November 03, 2020.Schmidt, S. (2009). Shall we really do it again? The pow-erful concept of replication is neglected in the social sci-ences.
Review of General Psychology , 13(2):90–100.Schnur, S., Bektas¸, K., and C¸ ¨oltekin, A. (2018). Measuredand perceived visual complexity: a comparative studyamong three online map providers.
Cartography and Ge-ographic Information Science , 45(3):238–254. Sch¨oning, J., Hecht, B. J., and Kuhn, W. (2014). Informingonline and mobile map design with the collective wis-dom of cartographers. In Wakkary, R., Harrison, S.,Neustaedter, C., Bardzell, S., and Paulos, E., editors,
Designing Interactive Systems Conference 2014 , pages765–774, Vancouver, British Columbia, Canada. ACM.Schrepp, M., Hinderks, A., and Thomaschewski, J. (2017).Design and evaluation of a short version of the userexperience questionnaire (UEQ-S).
International Jour-nal of Interactive Multimedia and Artificial Intelligence ,4(6):103.Subramanya, S. and Yi, B. K. (2007). Enhancing the userexperience in mobile phones.
Computer , 40(12):114–117.Tractinsky, N., Cokhavi, A., Kirschenbaum, M., and Sharfi,T. (2006). Evaluating the consistency of immediate aes-thetic perceptions of web pages.
International Journal ofHuman-Computer Studies , 64(11):1071–1083.Tuch, A. N., Bargas-Avila, J. A., Opwis, K., and Wilhelm,F. H. (2009). Visual complexity of websites: Effects onusers’ experience, physiology, performance, and mem-ory.
International Journal of Human-Computer Studies ,67(9):703–715.Vessey, I. (1991). Cognitive fit: A theory-based analysisof the graphs versus tables literature.
Decision Sciences ,22(2):219–240.Wigelius, H. and V¨a¨at¨aj¨a, H. (2009). Dimensions of contextaffecting user experience in mobile work. In Gross, T.,Gulliksen, J., Kotz´e, P., Oestreicher, L., Palanque, P. A.,Prates, R. O., and Winckler, M., editors,
Proceedingsof INTERACT 2009 , pages 604–617, Uppsala, Sweden.Springer.Winckler, M., Bach, C., and Bernhaupt, R. (2013). Iden-tifying user experience dimensions for mobile incidentreporting in urban contexts.
IEEE Transactions on Pro-fessional Communication , 56(2):97–119.Winckler, M., Bernhaupt, R., and Bach, C. (2016). Identifi-cation of UX dimensions for incident reporting systemswith mobile applications in urban contexts: a longitudi-nal study.
Cognition, Technology and Work , 18(4):673–694.Wright, P., Mccarthy, J., and Meekison, L. (2005). MakingSense of Experience. In
Funology: From Usability toEnjoyment , pages 43–53. Kluwer Academic Publishers,2 edition.Zhang, D. and Adipat, B. (2005). Challenges, Methodolo-gies, and Issues in the Usability Testing of Mobile Ap-plications.