Robert A. Reiser
Florida State University
Publications
Featured research published by Robert A. Reiser.
Educational Technology Research and Development | 2001
Robert A. Reiser
This is the second of a two-part article that discusses the history of the field of instructional design and technology in the United States. The first part, which focused on the history of instructional media, appeared in the previous issue of this journal (volume 49, number 1). This part of the article focuses on the history of instructional design. Starting with a description of the efforts to develop training programs during World War II, and continuing on through the publication of some of the first instructional design models in the 1960s and 1970s, major events in the development of the instructional design process are described. Factors that have affected the field of instructional design over the last two decades, including increasing interest in cognitive psychology, microcomputers, performance technology, and constructivism, are also described.
Review of Educational Research | 1982
Robert A. Reiser; Robert M. Gagné
This paper identifies and evaluates the learning effectiveness of the major features found in media selection models. The 10 different models employed as examples are not described individually, as is done in some previous reviews. Instead, the article focuses on the characteristics noted across models. Features discussed include the physical forms the models take, the ways in which they classify media, and the media selection factors they consider. Selection factors embodied in models affect media choices. Characteristics of learners, setting, and task are identified as factors to be given primary consideration in media selection.
Educational Technology Research and Development | 1994
Robert A. Reiser; Harald W. Kegelmann
The purpose of this paper is to discuss the key features of various methods that software evaluation organizations employ to evaluate instructional software. Who conducts the evaluations, the processes they use for doing so, and the software features they evaluate are among the topics that are discussed. Some of the problems associated with many of the evaluation methods are also described, as is an alternate method designed to overcome some of the identified problems. Special emphasis is placed on the role of students as participants in the evaluation process, and the benefits that may be derived from placing students in that role.
Archive | 1984
Robert A. Reiser; Martin A. Tessmer; Pamela C. Phelps
This study examined whether children’s learning from “Sesame Street” could be improved by having adults ask the children questions and provide them with feedback while they watched the show. Subjects were 23 three- and four-year-old, white, middle-class children who were randomly assigned to one of two conditions. Children in both conditions watched three specially edited versions of “Sesame Street” with an adult. While they did so, children in the experimental condition were asked to name the letters and numbers shown on the programs. Results indicated that 3 days after watching the last program, children in the experimental condition were better able to name and identify the letters and numbers they had seen (p < .01). Three features of the experimental treatment that may have contributed to these results are discussed, as are the implications of the findings.
Educational Technology Research and Development | 1990
Robert A. Reiser; Walter Dick
This article describes a new model for evaluating instructional software. Also described is a study in which the new model was field tested. Unlike most such models, which focus on the instructional and technical characteristics of software, the model focuses on the extent to which students learn the skills a software package is intended to teach. It is argued that by using this approach, educators will be better able to reliably identify software that is instructionally effective.
Educational Technology Research and Development | 1997
Robert A. Reiser; Donald P. Ely
The purpose of this article is to examine how, over time, the major definitions of the field of educational technology have reflected changes in the field itself. Major definitions from the early 1900s through 1994 are reviewed and compared. Each definition is discussed in terms of the events and ideas that were current at that time. Major changes in the field, as reflected by the definitions, are identified and thoughts regarding future definitions are presented.
Educational Technology Research and Development | 1988
Robert A. Reiser; Naja Williamson; Katsuaki Suzuki
How can adults who watch “Sesame Street” with children facilitate the children’s recognition of the letters and numbers presented on the show? In order to examine this question, each of 95 preschool children watched three specially edited versions of “Sesame Street” with an adult who either (a) asked the child questions and provided feedback, (b) only asked questions, (c) directed the child’s attention to the screen, or (d) simply watched the shows with the child. Those children in the Questions + Feedback condition and the Questions condition scored significantly higher on a delayed posttest than did children who just watched the shows with an adult. There were no other significant differences among the treatment conditions. Results indicate that adults can increase children’s recognition of letters and numbers presented on “Sesame Street” by asking the children to name the letters and numbers as they are presented. Other interpretations are also discussed.
Educational Technology Research and Development | 1984
Robert A. Reiser
This study examined the effects of three pacing procedures on student withdrawal rate, rate of progress, final examination performance, and attitude in a personalized system of instruction course. Undergraduate students (N = 100) who were enrolled in an introductory speech communication course were randomly assigned to a reward, penalty, or control condition. Students in the penalty group proceeded through the course at a more rapid pace than students in the control group. There were no significant differences among conditions in student withdrawal rate, final examination performance, or attitude. Final examination performance was not affected by the interaction between pacing procedures and student perception of locus of control. The benefits of reducing student procrastination, and appropriate means of doing so, are discussed in light of these results.
Journal of Educational Research | 1977
Robert A. Reiser; Howard J. Sullivan
Students enrolled in an introductory research course in political science were randomly assigned to either (1) a self-paced group in which each student was allowed to take each quiz in the course at whatever pace he chose or (2) an instructor-paced group in which students were required to pass each quiz by a target date set by the instructor. The number of students who withdrew from the course was significantly higher in the self-paced group than in the instructor-paced group. There were no significant differences between groups in attitudes or in achievement. Failure to maintain a steady quiz-taking pace and poor performance on the quizzes were the factors most closely associated with withdrawal from the course.
Educational Technology Research and Development | 1992
Jane E. Zahner; Robert A. Reiser; Walter Dick; Barbara J. Gill
This article describes the evaluation of a simplified version of the new software evaluation model proposed by Reiser and Dick (1990). This approach to evaluation of instructional software focuses upon the collection of performance data to determine the extent to which students learn the skills the software is designed to teach. In two field evaluations of instructional software packages, it was found that use of the simplified software evaluation model generally produced evaluative conclusions similar to those reached using the original, more complex model. It was also found that student learning data did not support the high ratings assigned by subjective software evaluation services. These findings support the use of the simplified model by teachers and media specialists who have limited time and resources, and stress the need for local on-site evaluation efforts.