A mobile web for enhancing statistics and mathematics education
Jamie Lentin¹, Anna H. Jonsdottir², David Stern³, Victoria Mokua³, Eva Dogg Steingrimsdottir², Magnea Run Vignisdottir² and Gunnar Stefansson²
¹ Shuttle Thread, Manchester, England
² University of Iceland, Science Institute, Taeknigardur, Dunhaga 5, 107 Reykjavik, Iceland
³ Maseno University, Private Bag, Maseno, Kenya
Contact email: [email protected]
Abstract:
A freely available educational application (a mobile website) is presented. It provides access to educational material and drills on selected topics within mathematics and statistics, with an emphasis on tablets and mobile phones. The application adapts to the student's performance, selecting anything from easy to difficult questions, or revisiting older material. These adaptations are based on statistical models and analyses of data from testing precursors of the system in several courses, ranging from calculus and introductory statistics to multiple linear regression. The application can be used in both on-line and off-line modes. Its behavior is determined by parameters whose effects can be estimated statistically. Results presented include analyses of how the internal algorithms relate to passing a course and to general incremental improvement in knowledge during a semester.
Key words: mobile website; mathematics education; interactive drill system.
INTRODUCTION

Many on-line drilling systems exist; some are specially designed for a specific topic whereas others are general in nature. The "tutor-web" is a general system for drilling students in addition to storing general educational content. The system has been tested and used by over 2000 students, mostly in introductory courses on statistics and mathematics. Design principles include content modularity, open-source software, Creative Commons texts and drilling exercises that are freely accessible to all students, regardless of their physical location or whether they are registered at any school or university. Earlier versions of the system have been written for various platforms and in different programming languages (Stefansson, 2004; Jonsdottir, Jakobsdottir & Stefansson, 2014). These have been used to test various concepts and have formed the basis for the algorithms implemented here.

The system is made freely available and is primarily intended for learning, not mere evaluation. Students are therefore encouraged to continue using the drilling system until they have achieved expertise on the topic in question, and for this reason there is no limit on how long students can request new questions within the drilling system. Research efforts have therefore concentrated on ensuring that the system actually entices students to continue until a high level of expertise is obtained. Since research shows students continuing until a high grade is achieved, the grading scheme needs to be formulated so that a high grade reflects a high level of expertise.

At the core of the tutor-web is the use of formative assessment in drills. Formative assessment has been found to be effective in building knowledge in students (Black & Wiliam, 1998). After a student answers a drill question, a step-by-step solution is normally provided so the student can understand where they went wrong. A drilling system is, by nature, different from a computer-aided-testing (CAT) system.
It has been demonstrated that learning occurs during a typical tutor-web practice session (Stefansson & Sigurdardottir, 2011), whereas the Item Response Theory (IRT) used in CAT relies on models that do not permit learning. For this reason a drilling system will allocate items (questions) that are typically easy initially but become more difficult as the student's grade increases. It has been seen that students using on-line drills work very hard towards a high grade when given the option to do so (Stefansson & Sigurdardottir, 2011), but potentially with extensive guessing. In combination with a grading scheme based on the last 8 answers, this results in drill grades that can be far too high when compared to performance on an exam (Stefansson, 2004). It has therefore been proposed to implement a timeout option, so that a high grade ensures not only that the student can eventually solve a problem but also that they have the expertise to solve it quickly (Jonsdottir and Stefansson, 2013). One can also implement a lecture grade that is a taper of recent grades, with a tail that becomes longer with extended guessing (as proposed in Desjardins et al., 2014). As shown below, implementing the timeout and longer tail turns out to have a considerable effect.

THE MOBILE TUTOR-WEB DRILLING SYSTEM

Drill questions are organized by course, tutorial and lecture. A student, upon visiting the tutor-web and logging in, can explore the courses available and find a lecture they wish to load onto their computer or mobile. Alternatively, they can proceed directly to the drill interface, where they get a choice of already-loaded lectures to study. Either way, once a lecture is chosen they can start working through questions. The interface they see is shown in the screenshot in Figure 1. A student is presented with a question, selected based upon their current grade, and a choice of answers, both of which can involve TeX equations as well as an image.
They then have to choose one of the answers within a specified time; a countdown timer is shown near the bottom of the screen. The answers are displayed in random order, to prevent students from learning where correct answers are placed. Once an answer has been selected, students are shown whether it is correct, together with an explanation of why the correct answer is correct. They can also see their current grade and how many questions for this lecture they have answered.

The drill interface runs entirely on the student's device using HTML and JavaScript, which means it is capable of working on any modern mobile, tablet or desktop computer. It uses AppCache and LocalStorage to store the code and the question data on the student's device, so the interface remains quick and can even work without any Internet connection. Answers are simply stored locally until there is an Internet connection over which to send them back to the tutor-web server.

The server is based on the Plone CMS and MySQL. Plone provides user/class management, as well as storing the banks of questions. Questions can be imported from TeX files as well as entered and edited manually. Once a student chooses a lecture, the drill interface asks the server for up to 100 randomly allocated questions from the lecture. The random allocation both ensures that we do not fill the device with too many questions and provides a degree of security, as students will not all be answering the same questions. We also give questions long, sparse references that are tied to individual students, so a student cannot download an entire question bank by guessing IDs, or download a question allocated to another student.
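The allocation scheme just described can be sketched as follows. This is not the tutor-web's actual server code (which lives in the Plone backend); it is a minimal Python illustration, with the bank contents and the token length chosen arbitrarily, of random per-student allocation under long, sparse, unguessable question references.

```python
import random
import secrets

def allocate_questions(question_bank, max_questions=100):
    """Randomly pick up to max_questions from a lecture's bank and
    key each one by a long, sparse, per-student token, so question
    IDs cannot be enumerated or shared between students."""
    chosen = random.sample(question_bank, min(max_questions, len(question_bank)))
    # secrets.token_urlsafe(24) yields a 32-character random reference;
    # guessing a valid one is infeasible in practice.
    return {secrets.token_urlsafe(24): q for q in chosen}

# Each student downloading the same lecture receives a different
# random subset of the bank, under different references.
bank = [f"question-{i}" for i in range(250)]
allocation = allocate_questions(bank)
```

Because the sample is drawn without replacement and capped at 100, the device never holds more than 100 questions of a lecture, and two students' allocations overlap only partially.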
Fig. 1.
Screen shot of a mobile tutor-web question after the student has submitted a correct answer. Note the detailed explanation, which has appeared below the question.

Periodically, the drill interface sends any answers to questions back to the tutor-web server. A class tutor can see the progress of their class from an administration interface, and the raw answers are available via MySQL for further analysis.

The algorithms used to set the timeout are the first implementation of those proposed in Jonsdottir and Stefansson (2013) and described in more detail in Desjardins et al. (2014). Basically, the timeout takes the shape of an inverse dome curve, with the minimum time set to correspond to some grade within the lecture. In this manner, beginning or struggling students are not affected by the timeout, and the timeout does not affect the most difficult or time-consuming questions. However, students cannot get a high grade or proceed to the most difficult questions without passing through the timeout set at the intermediate-level grades. These settings are determined by a parametric function flexible enough to also allow for no (or a high, constant) timeout.

Earlier versions of the tutor-web (non-mobile) used just the most recent 8 answers to compute a grade within each lecture. It became clear from earlier experiments that (a) students could fairly easily guess their way to what they considered an adequate grade and (b) students tended to quit if they answered a question incorrectly after 7 correct in a row. A taper was therefore implemented in the mobile web version and used for grades in the following analysis. This initial taper is simply an average of the most recent responses, starting with the most recent 8; the tail gets longer (n/2) as the number of responses (n) increases above 16, up to a maximum of 30 questions in the grade (reached at n=60 or more).
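The taper just described is fully determined by the rules above and can be written down directly. The sketch below is a hypothetical Python re-implementation, not the tutor-web's own JavaScript code; `answers` is assumed to be a list of per-question grades (e.g. 1 for correct, 0 for incorrect), most recent last.

```python
def taper_length(n):
    """Number of most recent answers entering the lecture grade:
    8 while n <= 16, then n // 2, capped at 30 (reached at n >= 60)."""
    return min(30, max(8, n // 2))

def lecture_grade(answers):
    """Average of the most recent taper_length(n) per-question grades."""
    tail = answers[-taper_length(len(answers)):]
    return sum(tail) / len(tail)
```

Note how the lengthening tail makes extended guessing self-defeating: once a student has answered 60 or more questions, 30 answers enter the average, so a short run of lucky guesses moves the grade far less than it would under a fixed window of 8.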
In principle the weights given to the answers could be described by a parametric function, with the effects of different parameter settings estimated through formal experimentation. As described in Desjardins et al. (2014), the resulting tutor-web grade should become a better predictor of whether students pass or fail when the grading scheme includes a longer tail. Desjardins et al. (2014) also include a simple ROC-type analysis concluding that the combined effects of a timeout plus taper appear to correspond to a more internally consistent grading mechanism; a more elaborate analysis is given below. It is therefore suggested that one should test the effect of other down-weighting schemes, and possibly also the effect of a higher weight on the very last answer, to entice students to continue.

USES

Currently the tutor-web system is mainly a support tool, used to supplement education in the classroom. The most elaborate tests of the mobile system described here (i.e. at http://mobile.tutor-web.net) have been in an undergraduate setting, namely in one large calculus course (450 entrants) and one large introductory statistics course (250 entrants). In addition, the mobile tutor-web has been used for secondary school (high school) mathematics. The system has been tested and used in several secondary schools in Iceland, for large and small classes at the University of Iceland and for a very large class at Maseno University in Kenya. Additional uses have been more sporadic, but the system is freely accessible and no formal record is maintained of users' whereabouts except when an instructor decides to use it with a class. Course content for a graduate remedial calculus/programming
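The ROC-type comparison of grading schemes referred to above amounts to asking how well a drill grade separates students who pass the exam from those who fail. A minimal, self-contained sketch (the data in the test are hypothetical; the actual analysis is in Desjardins et al., 2014) computes the area under the ROC curve via its rank interpretation:

```python
def roc_auc(grades, passed):
    """AUC as the probability that a randomly chosen passing student
    has a higher drill grade than a randomly chosen failing one
    (ties count one half)."""
    pos = [g for g, p in zip(grades, passed) if p]
    neg = [g for g, p in zip(grades, passed) if not p]
    wins = sum(1.0 if gp > gn else 0.5 if gp == gn else 0.0
               for gp in pos for gn in neg)
    return wins / (len(pos) * len(neg))
```

A grading scheme whose AUC is closer to 1 ranks students more consistently with the exam outcome, so comparing the AUC of the plain 8-answer grade with that of the timeout-plus-taper grade quantifies the improvement in internal consistency.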
Fig. 2.
Fig. 3.
Comparisons of results from experiments in calculus classes in 2012 and 2013 at the University of Iceland. In both panels, the y axis shows the grade on the final exam as a function of the overall grade from tutor-web work during the semester.

ACKNOWLEDGEMENTS

The research leading to the results in this paper has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement 613571 - MareFrame.

REFERENCES

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Desjardins, C. D., Jonsdottir, A. H. and Stefansson, G. (2014). Enhanced learning through open-access content and drill systems. (submitted)

Jonsdottir, A. H., Jakobsdottir, A. and Stefansson, G. (2014). Development and use of an adaptive learning environment to research on-line study behaviour. (submitted)

Jonsdottir, A. H. and Stefansson, G. (2013). Design and analysis of experiments linking on-line drilling methods to improvements in knowledge. Presented at JSM 2013. See http://arxiv.org/abs/1310.8236

Manyalla, B., Mbasu, Z., Stern, D. and Stern, R. (2014). Measuring the effectiveness of using computer assisted statistics textbooks in Kenya. Presented at ICOTS 9.

Stefansson, G. (2004). The tutor-web: An educational system for class-room presentation, evaluation and self-study. Computers & Education, 43(4), 315-343.

Stefansson, G. and Sigurdardottir, A. J. (2011). Web-assisted education: From evaluation to learning.