Content Sequencing and its Impact on Student Learning in Electromagnetism: Theory and Experiment
Benjamin J. Dringoli, Ksenia Kolosova, Thomas J. Rademaker, Juliann Wray, Jeremie Choquette, Michael Hilke
Department of Physics, McGill University, 3600 rue University, Montréal, Québec H3A 2T8, Canada
Dawson College, 3040 Sherbrooke St W, Montréal, Québec H3Z 1A4, Canada
∗ [email protected]
(Dated: October 2, 2019)

We investigate the impact of content sequencing on student learning outcomes in a first-year university electromagnetism course. Using a custom-built online system, the McGill Learning Platform (McLEAP), we test student problem-solving performance as a function of the sequence in which students are presented aspects of new material. New material was divided into three categories of content: conceptual, theoretical, and example-based. Here, we present findings from a two-year study with over 1000 students participating. We find that content sequencing has a significant impact on learning outcomes in our study: students presented with conceptual content first perform significantly better on our assessment than those presented with theoretical content. To explain these results, we propose the Content Cube as an extension to the mental model frameworks. Additionally, we find that instructors' preferences for content sequencing differ significantly from those of students. We discuss how this information can be used to improve course instruction and student learning, and motivate future work building upon our presented results to study the impact of additional factors on student performance.
I. INTRODUCTION
For physics instructors, a fundamental question underlies the evaluation of lectures, lesson plans, and other learning materials: “Is there a presentation of educational content that results in optimal learning for students, and what factors affect this?” While this question is simple, its investigation incorporates psychology, neurobiology, education research, and subject-specific considerations [1–5]. Additionally, to understand which teaching material is best for student learning, one must study how students, individually and as a collective, learn [6], as well as what other factors besides the educational content provided can modify their learning outcome. These issues motivate the writing process of new textbooks within physics, as authors may feel that other texts or guidelines do not suit the perceived needs of their students. The resulting works, which if popular serve as teaching guides or templates for additional professors and students, thus inherit the writers' personal teaching goals and pedagogical methods. The investigation of how different approaches impact student learning is an active area in STEM education research, due in part to the increased focus on proficiency in science- and math-related subjects in the past decades [7], especially at the university level. In addition, investigation of this kind is a unique opportunity to probe the subtle differences between the preferences of new and experienced learners, such as populations of students and professors.

While teaching an introductory Electricity and Magnetism course for first-year university STEM students, we began to investigate which characteristics of educational content impact its teaching effectiveness.
Even though there are many proposed theories for how students learn best [8–10], we find that within physics there are specific educational content types that introduce additional complexity not addressed in these prescribed frameworks, and thus should be explored in more detail. We aim to measure the effectiveness of certain presentations of physics content in improving student learning, and to find which factors (content type, content order, student preferences, etc.) are the most important to consider when designing or improving lesson plans.

In this work, we probe how a fundamental aspect of content presentation, the sequence in which the content types underlying physics material are presented, contributes to a student's learning outcome. This is achieved through an experimental assignment which tests students' problem-solving skills after they are presented with different content types and sequences. To accomplish this we introduce our McGill Learning Platform (McLEAP) tool, which offers a flexible online space for investigating pedagogical questions such as these, and explain our findings through our “Content Cube” model as a new way of visualizing the learning process in physics and evaluating content presentation schemes.

II. BACKGROUND/THEORY

A. Common textbooks
To discern some common qualitative approaches taken by the wider physics education community, we first explore content presentation within popular textbooks aimed at teaching electromagnetism to university students. It is instructive to examine texts of different levels to see how authors treat different populations of learners with different needs, and it can inform us about how physics educators break down topics for student learning. A general physics textbook used in first-year courses (for example, University Physics by Young and Freedman) commonly starts with a broad explanation of the topics being introduced, setting the stage before introducing mathematics or theoretical arguments, and finishes with practice problems. This is less common in more advanced treatments: Griffiths’
Introduction to Electrodynamics is a common 2nd-3rd year undergraduate text and offers little conceptual explanation before presenting mathematical background, formulae, and derivations. These upper-level texts also often lack the large number of worked-out example problems that frequently follow the presentation of new information in introductory texts or in books that favor applications over deep understanding.

We therefore see that physics texts uniquely partition the educational content used in instruction, with authors prioritizing different content presentations depending on the intended audience of the text. To underscore the different types of content prioritized for different approaches, we give an additional example of each (with content type distinctions in bold):

• Conceptual Physics by Hewitt [11] uses concept-heavy lessons to introduce new students at the high school or undergraduate level, as concepts consist of general information and relatable analogies, allowing for broad connection and recognition for students less accustomed to science.

• Classical Electrodynamics by Jackson [12] is heavily theoretical to prepare graduate students for advanced analysis, as theories represent the mechanics behind the way nature works in full detail, often accompanied or assisted by mathematical equations.

• Introductory Electromagnetics by Popovic and Popovic [13] is largely example-based to cater to its application-centric engineering audience, since examples serve as a road map which shows the correct setup and solution method for a simple and commonly encountered problem.

We find the classification of physics content into Example, Concept, or Theory types representative of our own experience and of these textbook archetypes, and we tested these distinctions.
This was accomplished by a survey of our local physics department and of students enrolled in physics classes, which supported our classification. The full description is given in the Results section. Before investigating how these different content types affect student learning, though, we must also consider different frameworks of how students learn. Different theories of learning could dramatically affect how certain content types and their sequences are integrated by students, and therefore they must be considered so the results of our sequencing studies can be placed within an appropriate model of the learning process.
B. Frameworks for Student Learning
To understand how the presentation of different types of educational content could affect student learning, one must explore both how to present educational content and how students learn new topics. Both facets have been open and interesting questions in the field of education research for many years [6, 15–17], as improving learning outcomes through understanding the student mind is far from a novel approach. To develop the context for our investigations into the impact of content type and sequencing on student performance, we review some previously proposed theories of student learning. Throughout this explanation we will connect existing frameworks to the three content types within physics, and describe how they relate to content sequencing and our current study. We will then introduce our Content Cube learning framework to visualize the impact of sequencing our content type divisions on student learning.
1. Early Frameworks
Two historical educational frameworks that explore educational content presentation are the Revised Bloom's Taxonomy and the Theory of Multiple Intelligences. Bloom's Taxonomy (BT) posits that there exists a hierarchy of objectives, that certain teaching material can be more effective in teaching, and that there can exist an optimal order for the presentation of content that maximizes student understanding. The revised version modernizes its vocabulary to emphasize the dynamic nature of learning and the cognitive processes required. Bloom's revised Taxonomy is commonly visualized, like many hierarchical systems, as a sectioned pyramid, shown on the right of Fig. 1. This interpretation ranks the desired outcomes of teaching, with the more surface-level or basic understanding types, representing conceptual understanding, making up the foundation. These support the higher-level learning outcomes, which require more expertise and familiarity [18, 19]. The higher-level outcomes, in turn, are similar to those expected from theoretical treatments in later lessons or advanced courses, including the advanced textbooks discussed in Section II A.

This system is contrasted by another framework that seeks to address long-standing questions about the impact of individual learner diversity on teaching effectiveness: the Multiple Intelligences model (MIM). Shown on the left of Fig. 1, the MIM does away with the hierarchy of prerequisites and rigid structure which are the foundation of BT, instead suggesting that there exist a variety of learning 'styles' that are effective for particular learners depending on the individual's predispositions or skills [3]. These styles are wide-ranging in their origin: they represent both external (interpersonal/social/verbal) and internal (logical/intrapersonal/bodily) interactions and can create complex systems of learning preferences in individuals [20]. The MIM resists definite classification within our content type framework, and suggests educational content should be tailored in a very different way than that prescribed by BT. It states that individuals will learn better if the content is tailored to their learning style, rather than when a rigid structure is imposed that presents information in the same logical progression for all learners. In addition to its applications for understanding student learning in physics, it has influenced later theories that can help us make predictions about content order effectiveness in physics instruction.

FIG. 1. Visual representations of the Theory of Multiple Intelligences (left, unstructured), the mental model framework combined with conceptual change (middle, semistructured), and Bloom's Taxonomy (right, structured). The theories of learning are aligned along an axis of structure in learning theory, with the highly structured hierarchical framework of Bloom's Taxonomy contrasting the overlapping and more freeform structure of the Multiple Intelligences. On this axis, the mental model/conceptual change framework is semistructured, as it allows for ad-hoc addition of newly learned content, but always within a given mental model, and this mental model can only be changed after the student encounters a significant misconception. Images inspired by [10] (left) and [14] (right).
2. Mental Models, Scaffolding, and Sequencing
While the BT and MIM frameworks of student learning break up teaching content into distinct categories based on depth of information or mode of learning, many recent frameworks instead focus on how certain modes of instruction build up a particular idea or view within the student, introducing the idea of a 'mental model'. These mental model-based frameworks forgo individual knowledge types for a more central theme based on holistic understanding. Many of these theories propose opening learning environments so that students are allowed to interact with facets of the topic that may not be touched upon with more prescribed, lecture-style instruction, combining the advantages of both flexible and hierarchical models. For example, Schwartz and colleagues' work in education research focused on activities that promoted exploration, so that students developed an understanding of the greater structure of the topic through self-discovery [21]. This 'greater structure' echoes more conceptual treatments of physics, where the big picture must first be established before being bolstered by theoretical justification and applications. Additionally, their work stresses that having a sound grasp on a topic requires not only content knowledge, but also the ability to adapt and frame new information within previously learned schemes [22]. This treatment of learning as a more organic process leads to different teaching goals. Instead of expecting certain presented content to be absorbed by learners, mental model-based instruction seeks to guide students through the topic so that they discover the important details and become familiar with how facets of the topic are connected.
This allows for new information to be incorporated such that a network of new and previous knowledge is created, not just added to a growing pile of isolated facts.

These goals both support the view that learning is not strictly about accumulating information until one has enough to understand the topic, but about creating a framework within the learner's mind which can actively accept, process, evaluate, and adapt to new information or challenges. This view is similar to other works prioritizing student mental models [6]. These mental frameworks, especially within physics, can consist of previous instruction, the individual's lived experience, or other factors, and they influence the student's interaction with new material meant to teach or advance their knowledge in that topic; this is very much in line with our definition of conceptual understanding. It is noted that if a model is already present (usually due to previous exposure, even if brief), it is difficult to change if incorrect, and the material must be explained in a way that avoids grafting correct content onto an incorrect interpretation instead of fostering correct model adoption. This connects to theories of conceptual change, a topic which has been heavily studied and for which multiple competing theories exist within the field [23–25]. These theories posit different ways that the conceptual model is created and advanced within the student as they learn [26]. These interpretations are interesting and present new ways of thinking about the process of learning, but are also loosely defined and still debated. When we relate the conceptual change interpretation to previous theories of learning, we realize teachers must carefully design teaching methods so that students can form the correct mental model and build upon it. Using conceptual change to form a correct mental model therefore requires a more prescribed teaching sequence, like what is posited in BT.
That being said, this more individualized view also feeds back into the notion of an individual learning style: because each learner has unique previous experiences incorporated into their mental model, their interaction with new teaching material will be different, as in the MIM. This interpretation also stresses the notion of change within the student, going from naivety or an incorrect view to a sound mental model which can be built upon. This occurrence is extremely important for effective teaching, and thus has been a topic of study in itself that we wish to incorporate into the design and analysis of our study of content sequencing.

The frameworks of scaffolding and sequencing are also closely related to mental models and conceptual change. Scaffolding refers to setting up learning such that students have sufficient prior knowledge and correct mental model formation to learn complex topics and complete evaluation tasks [27]. Sequencing is also formalized in a broad literature [28, 29], which suggests that the sequence in which students are presented information will have an impact on their learning quality. Podolefsky et al. have conducted studies in electromagnetism classes which found that scaffolding by teaching students new topics using analogies improves student learning outcomes [30, 31]. This type of analogical scaffolding relates closely to our definition of the Concept content type, as it links student knowledge to information that they are already comfortable with, such as scenarios from the natural world, and uses diagrammatic representations to cement new ideas. Results from this literature thus motivate the use of content sequences that begin by providing students with a strong conceptual base, which we intend to further investigate and describe with our sequencing tests and models.
FIG. 2. “Content Cube” of learning. Each axis represents one of the content types in our model (Example, Concept, or Theory). Traversing a set of three edges from the origin to the far vertex corresponds to learning via a given content order. Each outward face of the cube represents the available “content-space” after seeing the first content type (e.g. the top face would be populated by students who were presented with Concept first).
3. Content Cube
Building upon the previously discussed literature, we have developed a framework of student learning that incorporates our content type division and illustrates the effect of its order of presentation on student learning. Our framework takes on a three-dimensional form which we call the 'Content Cube', where each of the three spatial axes represents the presentation of a certain content type. The different content presentation sequences manifest as paths along the edges of the cube from the origin to the (1,1,1) corner, shown as colored arrows in Fig. 2. Each path represents the student's full journey from minimal prior knowledge (origin), through all three content type presentations (edge traversal), to the desired learning outcome (far corner). This places each content order as a unique path in space and thus allows each order to be represented uniquely, instead of relying on overlapping populations to describe the results of different content orders. It can also help visualize populations at different stages in their journey to the learning outcome: for example, the outer faces of the cube represent populations of students that saw the same content type first (here the top face would represent seeing Concept first, etc.). This division of the total student population is useful in evaluating the impact of particular content types and type orderings, as will be shown in the Results section. This population division and subdivision can assist in visualizing the time-dependent changes that can occur for students throughout the learning process, which would be much less clear in the previous models. The stratified view of all the possible paths to the learning outcome also lends itself well to evaluating the effectiveness of these paths and to exposing subtleties due to other environmental factors.
This will create a clear but nuanced and flexible framework for examining the effects of specific content sequences and other potential variables on student learning outcome.

With the ability to express and evaluate teaching methods not only by what they present, but also by their order, we can start to infer the best course of action for planning a lesson that attempts to maximize student learning. Using the three content types previously defined, is there an obvious choice for the optimal content order? Based on the results from the survey and on how others have used the types in previous works like textbooks, starting with concepts seems to be a generally accepted strategy, since this is the first material presented in introductory works. Examples, on the other hand, provide students with a means of self-assessment, mirrored in their placement at the end of chapters, after conceptual and theoretical arguments have been established, as well as in their use in examinations. Thus, it is a fair guess to assume that a content order like CTE (Concept, then Theory, then Example) would perform better than TEC, since this presentation aligns with how a student naturally progresses through learning. TEC, on the other hand, effectively tries to deepen understanding before introducing it, and checks for facility before completing the teaching process. While this order may have some unseen merits, from a simple thought-process framework it would seem to produce a less effective path for student learning than one which follows the student progression as well as what has been generally adopted by the field. These inferences are the basis for the more specific questions that will be investigated using the results of our student study.
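To make the geometry of the Content Cube concrete, the six content orders can be enumerated as lattice paths on the unit cube. The sketch below is our own illustration of this picture, not code from the study; the single-letter axis labels C, T, and E are our shorthand for Concept, Theory, and Example:

```python
from itertools import permutations

# Unit moves along each content axis of the Content Cube
AXES = {"C": (1, 0, 0), "T": (0, 1, 0), "E": (0, 0, 1)}

def path(order):
    """Return the sequence of cube corners visited for a content order like 'CTE'."""
    pos, corners = (0, 0, 0), [(0, 0, 0)]
    for content in order:
        step = AXES[content]
        pos = tuple(p + s for p, s in zip(pos, step))
        corners.append(pos)
    return corners

# All six orders are distinct edge paths from the origin to the (1,1,1) corner
orders = ["".join(p) for p in permutations("CTE")]
for o in orders:
    print(o, path(o))
```

Grouping these paths by their first move reproduces the face picture described above: the three orders starting with C, for instance, all leave the origin along the Concept axis and share the corresponding face of the cube.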
III. METHODS

A. Survey
To validate our educational content classification, and also to compare the content and order preferences of the introductory course students to those of people more familiar with physics topics, we issued an online survey within the local STEM academic community. Participants were asked to self-identify their field (e.g. physics, chemistry) and role (e.g. undergraduate student, faculty).

In total, 111 participants consented to sharing their survey results for research purposes; we first asked them to assess the accuracy of our content groupings, and then the effectiveness of the content in teaching. For the first 9 content samples, participants were asked to label the content type as Example, Concept, or Theory. For the following 9 content samples, participants were asked to rate the accuracy of a given label, and the content's effectiveness in teaching the topic contained, both on a scale of 1 (Not Accurate/Effective) to 5 (Very Accurate/Effective). Three of those samples were mislabeled, serving as a control for response bias. The participants were lastly asked if they have a preferred content type among Example, Concept, and Theory, whether there is an order for presenting educational material they think is best among all permutations of Example, Concept, and Theory, and for any comments and concerns about the content organization.

This survey allowed differences in classification and preference to be quantified across multiple disciplines as well as across levels of expertise with the topic and with teaching. The participants within physics were contacted via departmental email, and further information about other disciplines was gained by distributing the survey to the undergraduates who had taken the course, who have concentrations across the sciences. Additional responses were solicited through social media posts, attracting some responses from outside the McGill community, but a majority came from within it.

B. Experimental assignment
Once the goals of the study were identified, an assignment was designed to test them and collect student information. This was done through an online system where the students fill out five evaluations: a pre-test, three intermediate tests (quiz 1 to 3), and a post-test. In Fig. 11 of the Supplemental Material we show a screenshot of McLEAP illustrating one intermediate test (quiz 1). An intermediate test has between 3 and 4 questions.

The full methodological format is shown in Fig. 3. Two experimental assignments were given in two subsequent years (2017 and 2018), yielding 4 datasets with 426, 437, 346, and 399 participants respectively. Only 2 students participated in experiments from both years. The assignment was designed to introduce a topic within the scope of the course that had not been previously taught (in this case reflection and refraction, which are normally taught toward the end of an introductory Electricity and Magnetism course). The pre-test is given directly after the instructions and consent form, and is meant to test the students' prior knowledge as well as familiarize them with the assignment format. After the pre-test, the students fill out a short questionnaire asking them what type of content they would prefer to see as well as how prepared they feel going into the next assessment. After the first questionnaire, the students are shown a randomized piece of content from one of the three categories (theory, concept, or example), asked to solve two to four problems regarding the material they were just introduced to, and then fill out a similar questionnaire to see if their preference or preparedness has changed. This was repeated twice more to cover all three types of content, after which a longer (five questions in 2017 or ten in 2018) post-test was given. Student learning outcome was defined by performance in quantitative problem-solving tasks, as in previous studies [33]. Upon completion of the entire assignment, a final survey was given to gather information on student preferences, including which learning methods (for example, use of resources) they found helpful.

FIG. 3. Flow chart diagram of the McLEAP assignment implementation, showing the mechanical steps the students took to progress and what was presented. The list of questions asked during each of the questionnaires can be found in the Supplemental Material (Fig. 12).

FIG. 4. Reproduced McLEAP content pages illustrating the three different content types used for classifying educational material into groups and in the assignment study. Left represents a typical concept (the content was adapted from [32]), middle a typical example, and right a typical theory.

The web interface used for this study allowed for the random selection of test questions and random content type pages, and could also generate content or questions based on previous answers. It tracked all of the answers as well as their timing. Students were required to complete one part of McLEAP in five hours, within a 5-day period. McLEAP consisted of two parts, each part testing different course content. The online structure of parts 1 and 2 was the same. The test questions (which are automatically graded and give immediate feedback to the students) are standard assignment questions (see Fig. 11), and the content pages are typical lecture content pages based on the three different content types (theory, example, and concept), which are illustrated in Fig. 4. No timing differences were observed with the different content types. However, students who started close to the end of the five-day period did worse than average. The entire McLEAP platform was written in PHP (a website programming language) and is stored on a McGill server. McLEAP stores all the answers in an encrypted data file (DATA) that can only be accessed by the PI (Hilke). From DATA, a decoding program generates two different files: FILE A, which contains the student IDs and their assignment grades, and FILE B, which extracts the data from the students who answered yes to the consent question, assigns a random number to the remaining students, and includes all the data saved by McLEAP. This ensures that analysis on FILE B can be done while keeping the participants anonymous.
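The randomization and anonymization steps described above can be sketched as follows. This is an illustrative Python rewrite, not the actual PHP McLEAP code; the function names are ours, and balancing group sizes across the six orders is an assumption (the paper states only that content was randomized):

```python
import random
import secrets

CONTENT_ORDERS = ["CTE", "CET", "ECT", "ETC", "TEC", "TCE"]

def assign_orders(student_ids, seed=None):
    """Assign each student one of the six content orders, keeping group
    sizes as balanced as possible (round-robin over a shuffled roster)."""
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    return {sid: CONTENT_ORDERS[i % 6] for i, sid in enumerate(ids)}

def anonymize(records):
    """Mimic the FILE B step: keep full records only for consenting
    students, replacing every student ID with an unrelated random token."""
    return [{**r, "id": secrets.token_hex(8)}
            for r in records if r.get("consent")]
```

A round-robin over a shuffled roster guarantees near-equal subgroup sizes while remaining random with respect to any student attribute, which is one simple way to realize the constraint that every student sees each content type exactly once.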
IV. RESULTS
The Results section is divided into three parts. First, we analyze the student assignment results. Second, we consider the influence of content type order on student learning outcome. The main quantity of interest is the relative deviation from the mean grade of a subpopulation for a given test, defined as

(G_subgroup − ⟨G⟩) / ⟨G⟩    (1)

where ⟨G⟩ is the mean grade of the test overall and G_subgroup is the mean grade of a subgroup. Finally, we discuss results from the student questionnaires during the assignment and from the independent survey on content type preference.

A. Example optimizes students' learning outcome
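Eq. (1) amounts to a one-line computation per subgroup; the sketch below is our own illustration using made-up grades, not study data:

```python
from collections import defaultdict

def relative_deviation(records):
    """Compute 100 * (G_subgroup - <G>) / <G> for each subgroup.

    `records` is an iterable of (subgroup_label, grade) pairs; <G> is the
    mean grade over all records, G_subgroup the mean within each label.
    """
    grades = [g for _, g in records]
    overall = sum(grades) / len(grades)
    by_group = defaultdict(list)
    for label, g in records:
        by_group[label].append(g)
    return {label: 100 * (sum(gs) / len(gs) - overall) / overall
            for label, gs in by_group.items()}

# Hypothetical Test 1 grades grouped by the content type seen first
demo = [("E", 80), ("E", 84), ("C", 78), ("C", 82), ("T", 74), ("T", 78)]
print(relative_deviation(demo))
```

Note that for equally sized subgroups the deviations sum to zero by construction, so a positive deviation for one content type necessarily comes at the expense of another.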
Fig. 5 depicts the performance of the student population per content type per test relative to the mean grade per test, averaged over all data (2017-2018). The whole population is distributed across the three content types with the constraint that every individual is subject to a particular ECT ordering; in other words, they cannot receive the same content type twice. We note that Example first enhances Test 1 results by (+2% ± 2) relative to the overall mean. Similarly, Theory first decreases Test 1 results relative to the overall mean.

B. Theory reinforces Example/Concept content type
Students who were assigned Theory for Test 2 (sequences CTE and ETC) had seen Concept or Example first. If we compare them to students assigned sequences CET and ECT, who had also seen Concept or Example first but were given not-Theory second, we see that the grade relative to the mean grade for the group with Theory second (+3% ± 2) is higher than for the group with not-Theory second (0% ± 2).

FIG. 5. Relative change from the mean grade, grouped per test and content seen (Data 2017-2018). Example first (left orange bar) (+2% ± 2) significantly enhances test results over Theory first (left green bar). Students who see Concept first (… ± 2) are also significantly enhanced over students who see Theory at Test 1.
The corresponding Test 1 and Test 2 outcomes were (+4% ± 4) and (+4% ± …).

C. Post-test: content order significantly influences learning outcome
Fig. 6 depicts the learning outcome (defined as the post-test score) for each content type order by the relative change from the mean grade. Starting with Concept leads to the best post-test grade, while starting with Theory leads to the worst learning outcome (Data 2017-2018). The difference in learning outcomes between different content type orders hints at the existence of hierarchical learning, which supports a mental model framework. Here, students have to adopt the correct mental model, which will only happen when the new mental model is intelligible, plausible, useful, and causes a large conflict with a previously established, potentially incorrect mental model [6, 16]. Such a conflict is instigated conceptually, and not by example or theory, whose implications can more easily be added as an extension to an incorrect mental model.

FIG. 6. Relative change from the mean grade of the post-test, grouped per ECT order (Data 2017-2018). Students with Concept first (CTE: (+3% ± 2) and CET: (+3% ± 2)) performed significantly better than those with Theory first (TEC: (−…% ± 2) and TCE: (−…% ± …)).

We can explain hierarchical learning in more detail as follows. The Concept content type introduces the named concepts such that students can place them within an existing mental framework of physical laws, which every human being has naturally built up over the course of their lifetime (e.g. apples fall downwards, similarly charged objects repel one another, etc.). This is the vertical step in the Content Cube model. The Theory content type then adds structure to the concept that has been newly placed into an existing framework through a formal language (in this case mathematics). Eventually, the Example content type consolidates the learning by repeated and explicit usage of the new concept in various scenarios, providing exploration of the topic and stimulating memory. As it turns out, the order of Example and Theory can be reversed, but the first content type is vital for effective learning. In other words, upon having reached the top of the Concept plane in the Content Cube, one can move around freely along the Example and Theory axes until eventually reaching the desired learning outcome at the far vertex (point of convergence of the three arrows in Fig. 2). Our data suggest that presenting Theory first brings confusion from which students do not fully recover (Fig. 6), resulting in the lowest learning outcome. We also find that presenting Example first gives the best immediate learning outcome (“cheap learning”), but this effect is short-lived, and on the post-test, students who saw Example first have average performance. Thus, Example and Theory first are inferior to Concept first as measured by post-test grades.
We therefore conclude that first impressions carry significant weight, at least in presenting new physics material, and that the most efficient learning takes place in the Concept plane of the Content Cube model (Fig. 2).

To be certain that content type ordering is the true variable underlying the variation in learning outcome between students in the Concept first and Theory first populations, we considered various environmental factors that could confound our results. The four factors we chose to investigate are student preference for content types, whether or not students worked in groups, whether or not they used online resources, and pre-test results. With an ordinary least squares multivariate regression (computed via Python's StatsModels library), we computed the correlation of these factors with the content order as well as with post-test results. In Tables I and II of the Supplemental Material, we summarize the results for the combined data and for the 2018 data separately. While working in groups (p < e−7) and pre-test results (p < e−…) …

D. Questionnaire and Survey Results: Preparedness
In the 2017 data and part I of the 2018 data, we observed the impact of content order on learning outcome. We decided to add student preparedness and preference questions to the fourth experiment to understand in more detail how content order affects learning outcome. The results show that students felt more prepared when they were presented with Concept first than with Example first or Theory first. We use student self-reported preparedness as a marker for student confidence and self-efficacy [34].

As shown in Fig. 7, we find that the preparedness of the students in the Concept first groups undergoes an overall increase over time, from 2.33 to 2.69 (on a scale of 1 to 5). In the Example first group, the rating decreases from 2.54 to 2.33. For the Theory first group, there is an increase from 2.80 to 2.85.

FIG. 7. Student self-reported preparedness over time, rated from 1 (low) to 5 (high), for the Concept first, Example first, and Theory first groups across Tests 1-3 and the post-test.

The absolute self-rating of preparedness is lower for the Concept first group. This is, however, the only group that significantly increases in preparedness over time. The students' preparedness beginning with Concept first agrees with the previously reported results, where the order that yields the greatest learning outcome is also Concept first. Moreover, the results of this analysis on preparedness suggest that students are aware of which type of content, and which content order, allows them to acquire enough knowledge to properly understand the material at each step of the assignment. This is reinforced by the results of the survey, discussed at the end of the Results section.
E. Content type preference over time
We believe that the change in content type preference over time may stem from the fact that, during the assignment, students become aware of how they are learning physical concepts and how well that particular strategy is working for them, and thus discover that they may benefit from a different content type or order to continue furthering their knowledge acquisition. Fig. 8 shows this result through a plot of content type preference over time. A majority (73%) of the students changed their initial preference over time. This agrees with our Content Cube theory; students only attain a certain stage of understanding with each content type, and in order to reach the desired learning outcome they require a change of content type.

We find that the percentage of students who prefer Example rises from 42% before Test 1 to 66% after the post-test, the preference for Theory falls from 39% before Test 1 to 22% after the post-test, and the preference for Concept falls from 19% before Test 1 to 12% after the post-test. As shown in Fig. 8, the Concept and Theory preferences mostly become Example preferences. We believe that this change in content type preference may demonstrate a form of conceptual change, and we plan to further investigate students' ability to remodel previous knowledge, as well as their awareness of how they learn (see the conclusion for further details).

FIG. 8. Details of the preference change by student. The student preferences Example (E), Concept (C) and Theory (T) are represented by the colors orange, blue and green, respectively. White represents a lack of answer, while black is used for more than one preference. The top row is the initial pre-test preference, while the lower rows show subsequent tests. The lowest row represents the post-test preference distribution based on the initial pre-test student preference population. The right column shows the relative preference evolution over the course of the different tests. Overall, the preference evolves towards Example for all sub-groups, regardless of initial preference. The initial Concept preference population changes almost entirely to Example or Theory preference.

Students' change in content type preference over time (i.e., their preference of content order) is developmental; they are improving and building upon their previous conceptions throughout the assignment.

We also considered how students' preferences relate to the optimal content order for their problem-solving performance, when probed at individual time points. We find that the percentage of students preferring to see the Example content type increased toward the end, while the percentages preferring Theory and Concept decreased. There was, however, a large percentage of students who opted consistently to receive Example. Moreover, within the content order preference results of the survey in the following section, we come across an important distinction between what students and teachers believe to be an ideal learning order.
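The bookkeeping behind Fig. 8 amounts to tallying each student's (initial, final) preference pair. A minimal sketch, using invented student records rather than the study data:

```python
# Sketch of the preference-transition tally behind Fig. 8.
# Each record is one student's preference before Test 1 and after the
# post-test: "C" (Concept), "E" (Example), "T" (Theory). Invented sample data.
from collections import Counter

records = [("T", "E"), ("C", "E"), ("E", "E"), ("T", "T"), ("C", "T"), ("E", "E")]

transitions = Counter(records)
changed = sum(n for (before, after), n in transitions.items() if before != after)
print(f"{100 * changed / len(records):.0f}% of students changed preference")
print(transitions)
```

Grouping the same tallies per intermediate test, instead of only pre/post, yields the row-by-row evolution shown in the figure.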
F. Content order preference varies across academic population
We justify our partitioning of physics content through a survey asking others in the sciences whether the labels of concept, example, and theory accurately describe content excerpts used in teaching electromagnetism. We find that the majority of participants, from undergraduates to faculty members across multiple disciplines in STEM, agree with the labels we placed on our educational content (see Fig. 13). Examples are particularly clear, since they pose a question and a solution. Theory and concept are the most likely to be confused (see Fig. 14); we believe the inclusion of mathematical formulae in theory often acts as a marker that makes it distinctive as well.

Additionally, the survey data collected concerning the participants' preferences of both content type and content order show similar results. We observe that, though the content type preference appears to be evenly distributed when considering the participant population as a whole, there exist discrepancies among the different classes of participants, shown in Fig. 9. Notably, 53% of undergraduate participants prefer Example, while only 21% of graduate students (TAs) share this preference. Moreover, 0% of faculty members prefer Example; 60% of graduate students and 55% of faculty members prefer Concept. In other words, the type of content that teachers and teaching assistants believe yields their greatest learning outcome differs from that of the undergraduate students. This difference between student and teacher preferences is an important factor for the study of educational content type and sequencing, as
FIG. 9. Survey results for preferred content type used for self-learning, for undergraduate students, graduate students, faculty members, and overall survey respondents.
FIG. 10. Survey results for the question: "Is there an order for presenting educational material that you think is best for understanding new concepts?" Percentage of respondents who preferred each permutation of content order, grouped by participant class. Conceptual content first is preferred by a majority: 64% of undergraduates, 75% of graduate students, and 55% of faculty members.

if there is a content order that yields the greatest learning outcome, then these differences in content order preferences indicate that instructors may not always be translating their knowledge of introductory physics concepts through a content order that yields the greatest overall learning outcome for students [37]. This in turn has the potential to influence the effectiveness of teaching and the quality of learning within university courses.

To this end, the preference results of the survey further suggest that there exists a hierarchical model that best suits learning needs, and that it is preferred by a majority. Additional survey results pertaining to the justification of our categorization of learning materials are available in the Supplemental Material (Figs. 15-17).
V. CONCLUSIONS
The results of our two-year study, with close to 1000 students completing the McLEAP assignment, show that the sequencing of content types has a significant impact on student learning outcome. We find that, regardless of student sequencing preference, there is an optimal structuring of material presentation that leads to better problem-solving performance. To maximize learning outcome, the optimal sequence of content presentation is Concept first. Our Content Cube model of student learning progression depicts this outcome and proposes a structured theory of the impact of sequencing on learning outcome, formalizing and building upon the mental model learning framework. Course instructors and students may benefit from this learning model, as our results suggest there are unifying aspects in the way students interact with new material, existing alongside heterogeneity in individual learning preferences. Additionally, we find that receiving Theory first leads to the lowest learning outcome. Despite timescale differences, the learning process is comparable, so these results can be compared to the way material is presented in textbooks. They reflect that the content type sequencing seen in textbooks is important at the stage where mental models still need to be formed: introductory textbooks are concept-heavy at the beginning of chapters and heavy on examples at the end of chapters. More advanced treatments assume that readers have a strong conceptual foundation to build upon (are in the right mental model) and start directly with theory, without expanding much on examples.

While we find a significant difference between Concept first and Theory first, our results do not differentiate between the learning outcomes of the sequences CTE and CET with statistical significance. CTE results in slightly higher learning outcomes, below statistical significance.
Interestingly, students' preference in the survey is also biased towards CTE, which could be a reflection of their experience in many introductory courses and textbooks, making this an interesting subject for further exploration. More detailed studies might show one of these content sequences to be significantly better. However, the inherent student variability might also suggest adapting the sequence to individual students, or groups of students, based on their respective optimal learning frameworks. Further studies may be able to segment the population and find which learning method works for different students.

These findings are supported by conclusions from our content type survey; among survey participants from STEM academia, a majority recognize the content type distinctions we put forth and prefer to learn with the content sequence that matches the most effective order from the McLEAP assignment. The results from the STEM-wide survey also suggest that there exist important discrepancies in preferences of content types and content sequences among undergraduate students, graduate students, and professors. Sharing this result with course instruction teams, including professors and teaching assistants, can be an important step toward improving instructors' awareness of how their students learn, leading to the application of learning tools and strategies optimized for student learning.
ACKNOWLEDGMENTS
We would like to thank Chris Roderick, Janette Barrington, Marcy Slapcoff, Véronique Brulé, and Rebecca Brosseau for useful discussions, editing, and literature recommendations. We would like to thank Zezhou Liu for programming support on the online learning platform. We acknowledge support from the Tomlinson Chair in University Science Teaching at McGill.

[1] A. Thambyah, European Journal of Engineering Education, 35 (2011), https://doi.org/10.1080/03043797.2010.528559.
[2] E. Brewe, J. E. Bartley, M. C. Riedel, V. Sawtelle, T. Salo, E. R. Boeving, E. I. Bravo, R. Odean, A. Nazareth, K. L. Bottenhorn, R. W. Laird, M. T. Sutherland, S. M. Pruden, and A. R. Laird, Frontiers in ICT, 10 (2018).
[3] R. J. Sternberg, The Educational Forum, 47 (1995), https://doi.org/10.1080/00131729409336362.
[4] B. G. Davis, Tools for Teaching (John Wiley & Sons, 2009).
[5] B. F. Jones, A. S. Palincsar, D. S. Ogle, and E. G. Carr, Strategic Teaching and Learning: Cognitive Instruction in the Content Areas (ERIC, 1987).
[6] E. F. Redish, American Journal of Physics, 796 (1994).
[7] National Research Council, Successful K-12 STEM Education: Identifying Effective Approaches in Science, Technology, Engineering, and Mathematics (National Academies Press, 2011).
[8] L. W. Anderson, D. R. Krathwohl, and B. S. Bloom, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, complete edition (Longman, 2001).
[9] S. McLeod, "Kolb's learning styles and experiential learning cycle" (2017).
[10] H. Gardner, Frames of Mind: The Theory of Multiple Intelligences (Basic Books, 2011).
[11] P. G. Hewitt, Conceptual Physics (Pearson Educación, 2002).
[12] J. D. Jackson, Classical Electrodynamics (AAPT, 1999).
[13] Z. Popovic and B. Popovic, Introductory Electromagnetics (Prentice Hall, 2000).
[14] L. O. Wilson, Five Basic Types of Questions (2016).
[15] R. J. Shavelson, Journal of Educational Psychology, 225 (1972).
[16] G. J. Posner, K. A. Strike, P. W. Hewson, and W. A. Gertzog, Science Education (1982), 10.1002/sce.3730660207.
[17] T. Käser, N. R. Hallinen, and D. L. Schwartz, in Proceedings of the Seventh International Learning Analytics & Knowledge Conference (ACM, 2017), pp. 31-40.
[18] L. W. Anderson, Studies in Educational Evaluation, 102 (2005).
[19] E. J. Murphy, The Journal of Continuing Higher Education, 64 (2007).
[20] L. Waterhouse, Educational Psychologist, 207 (2006), https://doi.org/10.1207/s15326985ep4104_1.
[21] J. Day, W. Adams, C. E. Wieman, D. L. Schwartz, and D. A. Bonn, Physics in Canada, 81 (2014).
[22] M. Mylopoulos, R. Brydges, N. N. Woods, J. Manzone, and D. L. Schwartz, Medical Education, 115 (2016).
[23] S. Vosniadou, International Handbook of Research on Conceptual Change (2007), 10.4324/9780203154472.ch1.
[24] D. B. Clark, Cognition and Instruction, 467 (2006).
[25] M. T. Chi and R. D. Roscoe, in Reconsidering Conceptual Change: Issues in Theory and Practice (Springer, 2002), pp. 3-27.
[26] G. Özdemir and D. B. Clark, Eurasia Journal of Mathematics, Science & Technology Education (2007).
[27] C. Lindstrøm and M. D. Sharma, Physical Review Special Topics - Physics Education Research, 010109 (2011).
[28] G. J. Posner and K. A. Strike, Review of Educational Research, 665 (1976).
[29] J. Van Patten, C.-I. Chao, and C. M. Reigeluth, Review of Educational Research, 437 (1986).
[30] N. S. Podolefsky and N. D. Finkelstein, Physical Review Special Topics - Physics Education Research, 010109 (2007).
[31] N. S. Podolefsky and N. D. Finkelstein, Physical Review Special Topics - Physics Education Research, 020104 (2007).
[32] D. Halliday, R. Resnick, and J. Walker, Fundamentals of Physics (John Wiley & Sons, 2011), pp. 911-912.
[33] B. Andersson and F. Bach, Science Education, 196 (2005).
[34] J. E. Dowd, I. Araujo, and E. Mazur, Physical Review Special Topics - Physics Education Research, 010107 (2015).
[35] B. A. Brown and K. Ryoo, Journal of Research in Science Teaching, 529 (2008).
[36] J. Blackmore, Studies in Higher Education, 857 (2009).
[37] L. Halim and S. M. M. Meerah, Research in Science & Technological Education, 215 (2002).

Supplementary Material for Content Sequencing and its Impact on Student Learning in Electromagnetism: Theory and Experiment
VI. SUPPLEMENTARY FIGURES
This section contains supplementary figures. Fig. 11 (quiz question) and Fig. 12 (order of problems and questions in an assignment) show more details on McLEAP, Figs. 13 and 14 show test results per content order and dataset, and Figs. 15-17 show more details on the survey results.
FIG. 11. Student view of McLEAP showing Test 1.

FIG. 12. Pages diagram of the McLEAP assignment interface, showing student progression through the assignment with the order of presentation for both problems and questions. Each arrow represents a new webpage seen by the students.
FIG. 13. Relative change from the mean grade of the intermediate tests (Tests 1-3), grouped per content sequence (CTE, CET, ECT, ETC, TEC, TCE) and test (Data 2017-2018), showing a detailed breakdown of Fig. 5 in the main text. Standard deviations are high, reflecting both variability in learning and the small number of questions per test (two in 2017, three in 2018).

FIG. 14. Relative change from the mean grade of the post-test, grouped per content sequence and dataset (Parts 1 and 2, 2017 and 2018), showing a detailed breakdown of Fig. 6 in the main text. Standard deviations are higher in Data 2017, because only 5 instead of 10 questions were asked in the post-test.
FIG. 15. Response to the instruction "Choose the description you think best describes the educational content shown", which provides the accuracy of different levels of people in physics at labelling content. Overall, a majority of people accurately label each content type with the same categorization that we assume in this experiment, though identifying an example is clearer than the distinction between concept and theory.
FIG. 16. Response to the question "How accurate does the label 'Content type' fit for the content above?", where Content type was either C, T, or E, and accuracy was measured on a scale from 1 to 5, with 1 being very inaccurate and 5 being very accurate. This data yields the rating of the content labelling by different levels of people in physics. The results are positive overall for each label, with a distinctly worse rating for falsely labelled content other than theory, which is harder to distinguish from concept.
FIG. 17. Response to the question "How would you rate the quality of the content above in explaining a physical concept?" This data provides the rating of each content type at describing the actual content, by the different classes of participants. Overall, concept has the highest rating, and each content type has a relatively positive rating.

VII. SUPPLEMENTARY TABLES