Publication


Featured research published by Keith Barker.


Assessment & Evaluation in Higher Education | 1986

Dilemmas at a Distance.

Keith Barker

ABSTRACT This paper addresses some of the situations and problems found in distance learning, for both the teacher and the learner. It looks at the general needs of distance learning and, implicitly, at the strategies used to provide feedback in both directions. It uses as examples courses delivered by live television to branch-campus students at distances of 30 to 120 miles from the instructor. Some solutions are suggested to the difficulties found in this specific situation, and these should translate to other learning situations.


Computer Science Education | 1988

Laboratory Experiences in Computer Science and Engineering

Keith Barker; David L. Soldan; Gordon E. Stokes

This paper discusses the teaching of computer science and engineering in the laboratory. A case is made for the use of laboratory experiences, together with a classification of the types of laboratories both in and out of the university environment. Emphasis is placed on the design component. The current problems in establishing, developing, and maintaining laboratory programs are addressed, and the responses to these from the IEEE Computer Society and the ACM are presented. Typical costs of setting up the components of a laboratory program for a full curriculum are given.


Frontiers in Education Conference | 1996

Teaching IPPD and teamwork in an engineering design course

Brian MacKay; Karl R. Wurst; Keith Barker

The emphasis of many companies is moving towards integrated product and process development (IPPD), in which engineers work in multi-disciplinary teams to create a product with higher reliability, shorter production time, lower cost, and a concern for the environment. In response to the needs of industry, college and university courses must help engineering students become effective members of diverse teams, be aware of industry practice, and understand the product life-cycle. This paper highlights our new teamwork-oriented curriculum, our methods for assessment, and our current results.


European Journal of Marketing | 1986

The Market Research Terminal and Developments in Survey Research

Gwyn Rowley; Keith Barker; Victor Callaghan

Reports on survey-behavioural research in a major and fundamental development, the Questronic project based at the University of Sheffield (UK), and its first product, the Ferranti Market Research Terminal (MRT). States that the MRT is a battery-operated, hand-held data-capture terminal that replaces the usual questionnaire necessities, the clipboard and pencil. Describes the MRT and its functions, including keyboard and electronic storage, which aid survey research both economically and operationally. Lists the operations and benefits in detail, giving the user a fast, modern aid for use with questionnaires. Goes on to describe further development procedures and includes a contact address for further information on developing MRT routines in survey research.


Journal of Microcomputer Applications | 1982

SAS: an experimental tool for dynamic program structure acquisition and analysis

Victor Callaghan; Keith Barker

Abstract This paper describes an experimental microprocessor-based tool, SAS (Software Analysis System), which has been developed to enable dynamic program structure acquisition and analysis on digital computing machines. The system uses a universal hardware extraction technique to obtain branch vectors, which are used to analyse and display the structure of the software being monitored. A display, especially designed for small instrument screens, is used to present this structure. Emphasis has been directed towards developing methods with a high degree of machine independence, and it is envisaged that such techniques could either be integrated into the new generation of logic analysers or form part of a universal tool for computer programmers. Initial research has been directed towards applying these techniques to compiled, assembled, or machine-coded systems, and in this context a number of techniques are described. The motivation for this research is provided by escalating software costs, in particular post-development costs, which account for approximately 75% of total software expenditure.
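To make the branch-vector idea concrete, the following is a minimal Python sketch, using assumed names and a hypothetical trace rather than the actual SAS implementation, of how captured (source address, target address) branch vectors might be aggregated into a weighted control-flow edge map of the monitored program.

from collections import Counter

def build_branch_graph(branch_vectors):
    """Count how often each observed branch (src -> dst) was taken."""
    edges = Counter()
    for src, dst in branch_vectors:
        edges[(src, dst)] += 1
    return edges

def summarize(edges):
    """Print each branch edge with its execution count, most frequent first."""
    for (src, dst), count in edges.most_common():
        print(f"{src:#06x} -> {dst:#06x}  taken {count} time(s)")

# Hypothetical trace: a loop whose closing branch at 0x010c jumps back to
# 0x0100 three times, followed by an exit branch from 0x010c to 0x0200.
trace = [(0x010c, 0x0100)] * 3 + [(0x010c, 0x0200)]
summarize(build_branch_graph(trace))

A real tool would go further, as the paper describes, decoding such edges into a structure display suited to small instrument screens; the aggregation step above is only the common core of that analysis.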


Technical Symposium on Computer Science Education | 1993

Evaluating effectiveness in computer science education

Barry L. Kurtz; Nell B. Dale; Jerry Engel; James E. Miller; Keith Barker; Harriet G. Taylor

A wide variety of evaluation techniques are presented in the literature of computer science education. Many papers only present observations and anecdotal evidence. Others may present results from an examination or an attitude survey. A few papers report on studies that use the classical educational evaluation model of a hypothesis being tested with a control group and one or more treatment groups. Sometimes the widespread use of a particular approach can indicate educational effectiveness, as evidenced by the switch from programming languages like Basic and Fortran to block-structured languages in the late 1970s. Formative evaluation of a particular approach as it is being developed can help provide mid-course corrections that result in a better final product. Summative evaluation can be used to convince others that this particular approach has proven to be effective. The necessary evidence may depend on the person considering adoption of the proposed approach. Although anecdotal evidence may be sufficient to convince a close colleague, more formal techniques are necessary for more general audiences. This panel will explore various approaches to evaluation in computer science education and the need for measurable objectives that go beyond simplistic statements such as “the students liked it.” Jerry Engel is currently working at the National Science Foundation, where he considers evaluation plans in grant proposals and reports of project results. Jim Miller is the editor of the SIGCSE Bulletin and Keith Barker is the editor of Computer Science Education. Nell Dale and Barry Kurtz have been authors of introductory textbooks and are active researchers in computer science education. Harriet Taylor is using formal evaluation techniques to determine the long-range effects of the choice of the introductory programming language. A lively exchange of viewpoints can be expected.


Technical Symposium on Computer Science Education | 1997

Distance education (panel): promise and reality

Keith Barker; Judith Gal-Ezer; Pamela B. Lawhead; Kurt Maly; James E. Miller; Pete Thomas; Elizabeth S. Adams

I am the Timothy E. Wirth Professor of Learning Technologies at Harvard’s Graduate School of Education. From 2001 to 2004, I also served as Chair of the Learning & Teaching department in the School. My research interests span emerging technologies for learning, educational policy, and leadership in educational innovation. My funded research includes a grant from the U.S. Department of Education Star Schools initiative to develop and study augmented reality simulations using wireless mobile devices, and three grants from the National Science Foundation to (1) aid middle school students learning science via multi-user virtual environments, (2) develop a research agenda for online teacher professional development, and (3) explore the feasibility of a “scalability index” for assessing the potential transferability of a locally successful educational innovation to a wide range of contexts.


Frontiers in Education Conference | 1997

Panel Discussion: Computer Science Accreditation - Past, Present and Future

Lawrence G. Jones; Keith Barker; D. Lidtke; Susan E. Conry

Accreditation of undergraduate programs in computing within the United States began in 1984. A Joint Task Force of the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE-CS) and the Association for Computing Machinery (ACM) established the Computing Sciences Accreditation Board (CSAB) to oversee these accreditation activities. The Computer Science Accreditation Commission (CSAC), currently the only commission of the CSAB, administers the accreditation process for programs in computer science. Currently, many professional accrediting bodies, including CSAB/CSAC, are taking a fresh look at their evaluative criteria and processes for accreditation. The purpose of this panel is to present a perspective on computer science accreditation, where it has been and where it is going. A major focus of this panel is to present proposed changes to the evaluative criteria as part of a public review and comment process. The panel will consist of a series of short presentations followed by discussion. The first presentation will give a history and current status of computer science accreditation in the United States. The second will describe the current CSAB/CSAC evaluative criteria. The third presentation will describe the current accreditation process and some changes that are being considered. The final presentation will describe proposed changes in structure and content to the CSAB/CSAC criteria for accreditation. Ample time will be provided for questions and discussion with the audience. Audience comments will be recorded for consideration by CSAB/CSAC.


Technical Symposium on Computer Science Education | 1995

Visions of breadth in introductory computing curricula (abstract)

Doug Baldwin; Jerry Mead; Keith Barker; Allen B. Tucker; Lynn Ziegler

Starting with the release of Denning et al.’s “Computing as a Discipline” report, there has been a growing recognition that computing curricula need to expose students early to the breadth of the field. “Breadth” is generally taken to mean exposure to a variety of the sub-areas in which computing professionals work, the issues with which they concern themselves, or the methods of inquiry that they practice. Beyond this basic outline, however, there is a startling diversity of ideas about what form breadth should take in the introductory curriculum. Approaches to breadth range from labs that introduce certain sub-areas in otherwise traditional courses, to single courses that survey multiple sub-areas and issues, to course sequences that explore sub-areas and issues in relative depth. A faculty workshop at Bowdoin College in the summer of 1992 examined several early implementations of breadth in introductory computing curricula. Subsequent workshops in 1993 and ’94 monitored the evolution of these implementations and the development of new ones created by workshop participants [1]. This panel is based on results from these workshops. The panelists, all workshop participants, will describe their visions of breadth and the ways in which they have translated those visions into courses and course materials. Each panelist will briefly (10 to 12 minutes) describe their work, after which the audience will ask questions and describe their own approaches.

[1] The workshops were organized by Allen Tucker of Bowdoin College, A. Joe Turner of Clemson University, and Keith Barker of the University of Connecticut, and were supported by the National Science Foundation under grant number CDA-9121315.


Technical Symposium on Computer Science Education | 1994

Class testing the breadth-first curriculum (abstract): summary results for courses I–IV

Keith Barker; Andrew Bernat; Robert D. Cupper; Charles Kelemen; Allen B. Tucker

Several different undergraduate programs have been designing and class-testing alternative curricula for their first four courses using the 7-course “breadth-first” approach described in the ACM/IEEE-CS report Computing Curricula 1991 [1]. These courses have several major goals:

1. Broad subject matter coverage, beginning with the first course;
2. Integration of mathematics, science, and engineering points of view with the subject matter;
3. Inclusion of social issues (such as the risks and liabilities that surround software failures); and
4. Weekly coordinated laboratory activities.

The goals of this approach, generally speaking, are to provide an introduction to the discipline of computing that more directly reflects its nature and breadth than does the traditional approach, especially in its first four courses. A complete set of teaching materials for the first four courses in the breadth-first curriculum has been developed and class-tested. These four courses are titled:

Course I: Logic, Problem Solving, Programs, and Computers
Course II: Abstraction, Data Structures, and Large Software Systems
Course III: Levels of Architecture, Languages, and Machines
Course IV: Algorithms, Concurrency, and the Limits of Computation

This panel session will focus on these four courses in the breadth-first curriculum, which have been class-tested in a variety of different institutional settings (including Bowdoin, UConn, UTEP, Allegheny, and Swarthmore) during the 1991-92, 1992-93, and 1993-94 academic years. The panelists will present the results of class-testing these courses and address the topics summarized in the paragraphs below. There will be time for questions and discussion between presentations.

1. Course I: Origination, class-testing, and revision (Bernat). Experience with Course I in the breadth-first curriculum has led to several modifications: stronger integration of the mathematics and programming methodology, in particular to provide motivation for the introduction and development of logic; stronger integration of specifications as a design tool, in particular to motivate the need for precision in specifications; and stronger emphasis on abstraction, in particular as a design tool for handling detail. At the same time we retain the strong computer science emphasis, the understanding of societal issues, and the suitability for use as an introduction to computer science for non-majors.

2. Course II: Object-orientation, data structures, and operating systems (Cupper). The goals of Course II are reflected in the title of the text, Fundamentals of Computing II: Abstraction, Data Structures, and Large Software Systems [2]. Object-orientation provides a natural and efficient vehicle for accomplishing these goals. The course begins with an overview of the principles of software design. Object-orientation is presented as an appropriate way to meet these principles of software design.

Collaboration


Dive into Keith Barker's collaborations.

Top Co-Authors

Andrew Bernat

University of Texas at El Paso

James E. Miller

University of West Florida

Lawrence G. Jones

Software Engineering Institute
