Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nell B. Dale is active.

Publication


Featured research published by Nell B. Dale.


technical symposium on computer science education | 1998

Conceptual models and cognitive learning styles in teaching recursion

Cheng Chih Wu; Nell B. Dale; Lowell J. Bethel

An experimental research design was implemented in an attempt to understand how different types of conceptual models and cognitive learning styles influence novice programmers when learning recursion. The results indicate that in teaching recursion to novice programmers:

• concrete conceptual models are better than abstract conceptual models,
• novices with abstract learning styles perform better than those with concrete learning styles,
• abstract learners do not necessarily benefit more from abstract conceptual models, and
• concrete learners do not necessarily benefit more from concrete conceptual models.
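The study contrasts concrete and abstract conceptual models but presents no code, so purely as a hypothetical illustration (not taken from the paper), here is the kind of concrete, trace-based model of recursion such teaching might use, sketched in Python:

```python
def factorial(n, depth=0):
    """Factorial with an explicit call trace, making each activation
    and each return visible to a novice."""
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")
    if n <= 1:
        result = 1
    else:
        result = n * factorial(n - 1, depth + 1)
    print(f"{indent}factorial({n}) returns {result}")
    return result

factorial(3)
```

Printing the call and return at each depth turns the invisible run-time stack into a concrete, observable sequence of steps.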


technical symposium on computer science education | 2005

Content and emphasis in CS1

Nell B. Dale

In the spring of 2004, 351 faculty members responded to a survey concerning the content and topic emphasis in the first course in computing. The survey targeted two different groups of faculty: SIGCSE members, and faculty who had contacted a medium-sized publisher of Computer Science textbooks. The questions fell into five categories: design methodology, general programming issues, object-oriented issues, software engineering issues, and other topics. The results are analyzed and, where possible, some conclusions are drawn.


integrating technology into computer science education | 1996

Evaluation: turning technology from toy to tool: report of the working group on evaluation

Vicki L. Almstrum; Nell B. Dale; Anders Berglund; Mary J. Granger; Joyce Currie Little; Diane M. Miller; Marian Petre; Paul Schragger; Frederick N. Springsteel

Evaluation is an educational process, not an end in itself; we learn in order to help our students learn. This paper presents a pragmatic perspective on evaluation, viewing it as a matter of trade-offs. The space of possible evaluation approaches is analysed in terms of trade-offs among desired evidence, costs, and other constraints. This approach is illustrated with example scenarios, and a list of selected resources is provided. The working group set out to consider how pragmatic, empirical evaluation can be used to harness technology for teaching Computer Science and Information Systems. Educators reject the tendency to adopt 'technology for technology's sake' and want to analyze technology in terms of its suitability for a teaching purpose and its impact, both costs and benefits, on teaching practice and outcomes. The question is not 'Can we use technology in teaching?', but 'Can we use technology to enhance teaching and improve learning?'


technical symposium on computer science education | 1993

Computerized adaptive testing in computer science: assessing student programming abilities

Angel Syang; Nell B. Dale

Current research on Intelligent Tutoring Systems (ITSs) has not presented a substantial student model which can be generalized to all ITSs. The purpose of this study was to design, test, and implement a quantitative student model (Syang, 1992), called the Angel Model, to measure the programming abilities of undergraduate students who have taken the first Computer Science course. The Angel Model met the criteria for using an Item Response Theory (IRT) model; therefore, the data can be used for developing an IRT-based Computerized Adaptive Testing (CAT) system to measure students' programming abilities at the completion of a CS1 course. An important implication of this study is that when students' understanding of a domain (e.g., Computer Science, Chemistry, or Physics) can be modeled quantitatively, a computerized adaptive test can be developed effectively to measure students' abilities in that domain; the quantitative student model can likewise provide an Intelligent Tutoring System with knowledge about students' abilities. A CAT system automatically administers an examination with questions (called items in the psychometrics literature) of a difficulty appropriate to the student's ability. In contrast to simple computerized testing, which gives exam questions in a predetermined or random order, a CAT adapts the questions to the student: if the student answers a question correctly, the next question is more difficult; if the answer is incorrect, the next question is easier. As a result, different students receive different sets of test questions based on their responses.
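The paper's Angel Model and its IRT estimation procedure are not reproduced in this listing, but the adaptive loop the abstract describes can be sketched. The minimal Python sketch below assumes a 1-parameter (Rasch) IRT model, a fixed item bank, and a crude fixed-step ability update; the item difficulties, step size, and all names are illustrative assumptions, not the paper's method:

```python
import math
import random

def rasch_prob(ability, difficulty):
    """P(correct answer) under the 1-parameter (Rasch) IRT model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def next_item(difficulties, ability, asked):
    """Choose the unasked item whose difficulty is closest to the current
    ability estimate (the most informative item under the Rasch model)."""
    candidates = [i for i in range(len(difficulties)) if i not in asked]
    return min(candidates, key=lambda i: abs(difficulties[i] - ability))

def adaptive_test(difficulties, answer, n_items=5, step=0.5):
    """Administer an adaptive test: a correct answer leads to a harder
    item, an incorrect answer to an easier one."""
    ability, asked = 0.0, set()  # start from an average ability estimate
    for _ in range(n_items):
        i = next_item(difficulties, ability, asked)
        asked.add(i)
        # Crude fixed-step update; a real CAT would re-estimate ability
        # by maximum likelihood over all responses so far.
        ability += step if answer(i) else -step
    return ability

# Usage: simulate a student of true ability 1.0 on a ten-item bank.
bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
student = lambda i: random.random() < rasch_prob(1.0, bank[i])
print(f"estimated ability: {adaptive_test(bank, student):.2f}")
```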


technical symposium on computer science education | 1999

The peer review process of teaching materials: report of the ITiCSE'99 working group on validation of the quality of teaching materials

Deborah Knox; Don Goelman; Sally Fincher; James Hightower; Nell B. Dale; Ken Loose; Elizabeth S. Adams; Frederick N. Springsteel

When an instructor adopts teaching materials, he or she wants some measure of confidence that the resource is effective, correct, and robust. The measurement of the quality of a resource is an open problem. It is our thesis that the traditional evaluative approach to peer review is not appropriate to ensure the quality of teaching materials, which are created with different contextual constraints. This Working Group report focuses on the evaluation process by detailing a variety of review models. The evolution of the development and review of teaching materials is outlined, and the contexts for creation, assessment, and transfer are discussed. We present an empirical study of evaluation forms conducted at the ITiCSE '99 conference, and recommend at least one new review model for the validation of the quality of teaching resources.


technical symposium on computer science education | 2005

Building a sense of history: narratives and pathways of women computing educators

Vicki L. Almstrum; Lecia Barker; Barbara Boucher Owens; Elizabeth S. Adams; William Aspray; Nell B. Dale; Wanda Dann; Andrea W. Lawrence; Leslie Schwartzman

This working group laid the groundwork for the collection and analysis of oral histories of women computing educators. This endeavor will eventually create a body of narratives to serve as role models to attract students, in particular women, to computing; it will also serve to preserve the history of the female pioneers in computing education. Pre-conference work included administration of a survey to assess topical interest. The working group produced aids for conducting interviews, including an opening script, an outline of topics to be covered, guidelines for conducting interviews, and a set of probing questions to ensure consistency in the interviews. The group explored issues, such as copyright and archiving, that confront the large-scale implementation of the project, and suggested extensions to this research. This report includes an annotated bibliography of resources. The next steps will include training colleagues in how to conduct interviews and establishing guidelines for the archiving and use of the interviews.


international conference on management of data | 1977

Main schema-external schema interaction in hierarchically organized data bases

Alfred G. Dale; Nell B. Dale

A class of external schemas derivable from a tree-structured main schema is identified. It is shown that the properties of this class of schemas permit the construction of a processing interface such that predicates defined on an external schema can be evaluated in an occurrence structure disciplined by the main schema.
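The paper's formal construction is not given in this listing, but the idea of deriving a flat external schema from a tree-structured main schema and evaluating predicates against occurrences of that tree can be sketched. The record types, field names, and data below are invented for illustration:

```python
# Tree-structured main schema: Department -> Employee -> Project
# (hypothetical record types; the paper's schemas are not reproduced).
department = {
    "dept": "D01",
    "employees": [
        {"emp": "E1", "projects": [{"proj": "P1", "hours": 120}]},
        {"emp": "E2", "projects": [{"proj": "P1", "hours": 80},
                                   {"proj": "P2", "hours": 40}]},
    ],
}

def external_records(dept):
    """Derive flat external-schema records (dept, emp, proj, hours) by
    walking root-to-leaf occurrences of the main schema; the external
    view stays disciplined by the tree structure it is derived from."""
    for e in dept["employees"]:
        for p in e["projects"]:
            yield {"dept": dept["dept"], "emp": e["emp"],
                   "proj": p["proj"], "hours": p["hours"]}

# A predicate defined on the external schema, evaluated over occurrences
# of the main schema through the derived view.
print([r for r in external_records(department) if r["hours"] > 100])
```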


Computer Science Education | 1990

A Classification of Data Types

Nell B. Dale; Henry M. Walker

There is considerable variation in the terminology used in discussing the subject of (abstract) data types. Further, discussions of individual data types often combine several types into unnecessarily complex or interlinked structures, and sometimes refer to a single data type in inconsistent ways. This paper resolves many of these problems by proposing a unified classification of a wide range of data types.
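The paper's actual taxonomy is not reproduced in this listing; purely as a hypothetical sketch of what a unified, machine-checkable classification might look like, the Python below tags types as atomic or composite (the category names are assumptions, not the paper's):

```python
from dataclasses import dataclass

# Hypothetical two-level classification: atomic types have no visible
# components; composite types are built from component types.
@dataclass(frozen=True)
class Atomic:
    name: str            # e.g. "integer", "boolean", "char"

@dataclass(frozen=True)
class Composite:
    name: str            # e.g. "array", "record", "list"
    components: tuple    # the component types it is built from

integer = Atomic("integer")
char = Atomic("char")
string = Composite("string", (char,))           # string as array of char
point = Composite("record", (integer, integer)) # record of two integers

def describe(t, indent=0):
    """Print a type and, recursively, the types it is composed of."""
    pad = " " * indent
    if isinstance(t, Atomic):
        print(f"{pad}{t.name} (atomic)")
    else:
        print(f"{pad}{t.name} (composite)")
        for c in t.components:
            describe(c, indent + 2)

describe(point)
```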


technical symposium on computer science education | 2002

Increasing interest in CS ed research

Nell B. Dale



international conference on computational linguistics | 1965

Automatic linguistic classification

Eugene D. Pendergraft; Nell B. Dale


Collaboration


Dive into Nell B. Dale's collaborations.

Top Co-Authors


Chip Weems

University of Massachusetts Amherst


Mark R. Headington

University of Wisconsin–La Crosse


Alfred G. Dale

University of Texas at Austin


Harriet G. Taylor

Louisiana State University


John W. McCormick

State University of New York at Plattsburgh
