Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Canfield Smith is active.

Publication


Featured research published by David Canfield Smith.


Communications of the ACM | 1994

KidSim: programming agents without a programming language

David Canfield Smith; Allen Cypher; James C. Spohrer

Software agents are our best hope during the 1990s for obtaining more power and utility from personal computers. Agents have the potential to participate actively in accomplishing tasks, rather than serving as passive tools as do today's applications. However, people do not want generic agents; they want help with their jobs, their tasks, their goals. Agents must be flexible enough to be tailored to each individual. The most flexible way to tailor a software entity is to program it. The problem is that programming is too difficult for most people today.


IEEE Computer | 1989

The Xerox Star: a retrospective

Jeff Johnson; Teresa L. Roberts; William L. Verplank; David Canfield Smith; Charles H. Irby; Marian H. Beard; Kevin J. Mackey

A description is given of the Xerox 8010 Star information system, which was designed as an office automation system. The idea was that professionals in a business or organization would have workstations on their desks and would use them to produce, retrieve, distribute, and organize documentation, presentations, memos, and reports. All of the workstations in an organization would be connected via Ethernet and would share access to file servers, printers, etc. The distinctive features of Star are identified, and changes to the original design are examined. A history of Star development is included. Some lessons learned from designing Star are related.


Human Factors in Computing Systems | 1995

KidSim: end user programming of simulations

Allen Cypher; David Canfield Smith

KidSim is an environment that allows children to create their own simulations. They create their own characters, and they create rules that specify how the characters are to behave and interact. KidSim is programmed by demonstration, so that users do not need to learn a conventional programming language or scripting language.
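The before-after rule style that KidSim is built around can be sketched in a few lines of ordinary code. The sketch below is illustrative only: the one-dimensional grid, the Rule class, and the rock-moving rule are hypothetical stand-ins, and real KidSim rules are drawn by demonstration on pictures rather than typed as patterns.

```python
# Minimal sketch (not KidSim's implementation) of a before-after rewrite rule:
# a rule matches a small slice of the world and replaces it with a new picture.

GRID = list("R..R....")   # 'R' = a rock character, '.' = empty space (hypothetical world)

class Rule:
    """A graphical rewrite rule reduced to text: 'before' pattern -> 'after' pattern."""
    def __init__(self, before, after):
        assert len(before) == len(after)
        self.before, self.after = before, after

    def apply(self, grid):
        """Fire the rule at the first position where the 'before' picture matches."""
        n = len(self.before)
        for i in range(len(grid) - n + 1):
            if "".join(grid[i:i + n]) == self.before:
                grid[i:i + n] = list(self.after)
                return True    # rule fired this tick
        return False           # nothing matched this tick

# "If a rock has empty space to its right, it moves one square to the right."
move_right = Rule(before="R.", after=".R")

for tick in range(3):
    move_right.apply(GRID)
    print(tick, "".join(GRID))
```

In KidSim and Stagecast Creator the user never writes such code; the before and after pictures are created by demonstration, by dragging the characters themselves, which is what makes the approach accessible to children.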


Communications of the ACM | 2000

Programming by example: novice programming comes of age

David Canfield Smith; Allen Cypher; Larry Tesler

The first commercial product based on PBD technology, Stagecast Creator, introduced in March 1999, enables even children to create their own interactive stories, games, and simulations. Here, we describe this approach, offer independent evidence that it works for novices, and discuss why it works when other approaches haven't and, more important, can't. The computer is the most powerful tool ever devised for processing information, promising to make people's lives richer (in several senses). But much of this potential is unrealized. Today, the only way most people are able to interact with computers is through programs or applications written by other people. This limited interaction represents a myopic and procrustean view of computers, like Alice looking at the garden in Wonderland through a keyhole. Until nonprogrammers can program computers themselves, they'll be able to exploit only a fraction of a computer's power.


Your Wish Is My Command | 2001

Novice programming comes of age

David Canfield Smith; Allen Cypher; Larry Tesler

The limits of conventional interaction have long motivated researchers in end-user programming. The term "novice programmer" is used to describe end users who want to program computers. Since the late 1960s, programming language designers have been trying to develop approaches to programming computers that would succeed with novices. However, none of them has gained widespread acceptance. This chapter discusses a different approach in which traditional programming languages are eliminated in favor of a combination of two other technologies: programming by demonstration (PBD) and visual before-after rules. This approach is now available as a product named Stagecast Creator, which was introduced in March 1999. It is one of the first commercial uses of PBD. Stagecast Creator also enables children to create their own interactive stories, games, and simulations.


Interactions | 1996

Making programming easier for children

David Canfield Smith; Allen Cypher; Kurt J. Schmucker

A conceptual gap exists between the representations that people use in their minds when thinking about a problem and the representations that computers will accept when they are programmed. During the past 30 years there have been many attempts to enable ordinary people, people who are not professional programmers, to program computers. Researchers have invented languages such as Logo, Smalltalk, BASIC, Pascal, and HyperTalk. They have developed techniques such as structured programming. They have approached programming from a pedagogical perspective with technology such as the goal-plan editor [17], and from an engineering perspective with CASE tools. Each of these is a brilliant advance in its own right. Today, however, only a small percentage of people program computers, probably less than 1 percent. A single-digit percentile is not success. We believe that there are at least two reasons for this low rate. First, traditional programming forces someone to learn a new language. Learning another language is difficult for most people. Consider the years of effort that it takes to master a foreign language. Second, programming languages are artificial languages rather than natural languages. They have a different epistemology. They deal with the unfamiliar world of computer data structures and algorithms. This makes them even less tractable for novices. The solution is to make programming more like thinking. In this paper we will show how a research project at Apple Computer has attempted to do this for children's programming. The key ideas are to use representations in the computer that are analogous to the objects being represented and to allow those representations to be directly manipulated in the process of programming.


Journal of Visual Languages and Computing | 2001

Integrating Learning Supports into the Design of Visual Programming Systems

Chris DiGiano; Kenneth M. Kahn; Allen Cypher; David Canfield Smith

The success of a programming system depends as much on the learnability of its language concepts as on the usability of its interface. We argue that learnability can be significantly improved by integrating into the programming system learning supports that allow individuals to educate themselves about the syntax, semantics, and applications of a language. Reflecting on our experience with developing novice programming systems, we identify infrastructural characteristics of such systems that can make the integration of learning supports practical. We focus on five core facilities: annotatability, scriptability, monitorability, supplementability, and constrainability. Our hope is that our examination of these technical facilities and their tradeoffs can inform the design of future programming systems that better address the educational needs of their users.


Human Factors in Computing Systems | 1991

Demonstrational interfaces: Coming soon?

Brad A. Myers; Allen Cypher; David Maulsby; David Canfield Smith; Ben Shneiderman

The problem is how to get instructions from the user. Should she write them out? Should the system infer them from her behavior? Should direct manipulation interfaces include memory icons, expression boxes, and event labels? I view the demonstrational interface as "instructible," combining inference and direct manipulation to disambiguate action, that is, to identify the relevant constraints on selecting and inserting data. In this case, the problem (for users and researchers) is to find instructions that specify constraints. Pure demonstration is activity without explanation. Typically, a machine learning system must see several examples before converging on the right constraint (for instance, "v1.ps" and "v2.ps" imply "v#.ps"; add "wow.ps" and you get "*.ps"). A direct instruction, "select file names ending in '.ps'", eliminates inference. A focusing instruction, selecting or typing ".ps", makes "*.ps" one of the first guesses to be tried; focusing is crucial if the user expects automation after very few examples. Finally, augmenting a direct manipulation interface with memory icons provides additional means of instruction. The other important problem is to identify control structure: loops, branches, and subroutines. Loops and subroutines are inferred by matching and predicting actions; branches, by finding differences in preconditions before bad predictions. Direct instruction of a loop or subroutine requires some means of selecting a sequence of events from history. Focusing instructions indicate approximate ranges of history. Direct and focusing commands help identify constraints as branching conditions.

Turvy: Experiments in Demonstration Protocol. Turvy is a "Wizard of Oz", an instructible system portrayed by a human being. I chose this approach for two reasons: first, I wanted to use real applications like Word, MacDraw, and HyperCard, whose internal data structures cannot be accessed. Second, I wanted experimental subjects to set their own protocol for instructing; hence Turvy has to speak and recognize language. Initially, the subject is told only that Turvy "watches what you do and understands a bit of English." Owing to its familiarity with the tasks, Turvy appears to have programmatic rules of inference and limited background knowledge of basic spatial relations (contact, sequence, and containment), basic structures of text and hypermedia (words, paragraphs, cards, fields, etc.), and that some characters are delimiters. The subject is allowed to practice an editing task, then invited to "teach" Turvy. At some point during a demonstration, the subject asks Turvy (or Turvy offers) to take over. My goal is to find the instructions people give with little or no training, and the types of prompts and feedback needed. Subjects invited to "tell" about a task either assume a human level of knowledge or give too much detail; those asked to "show" Turvy fare better. Users often teach loops by demonstrating one iteration, then asking Turvy to "do it again", without delimiting "it" exactly. In pilot studies I found that prompts for focusing confuse the user. Turvy elicits instructions indirectly by verbalizing. Its choice of constraints sometimes appears incorrect, but people quickly learn Turvy's "syntactic-level" bias. I have found a small set of useful instructions: "watch what I do," "you take over," "ok," "no, stop," "do the next one," "do the rest," "look at this," "put it there," "this is an exception," "skip this one," "I made a mistake." Useful prompts are: "looking for <constraints>, ok?", "show me what to do," "is this an exception because <constraints>?", "is this <selects a structure> a useful concept? If so, name it." The use of instructions and responses to prompts has been highly consistent across subjects and tasks. The next step in my research is to replicate such interaction in a real system.
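The file-name example in this panel statement (two demonstrations imply "v#.ps"; a third, "wow.ps", forces the weaker "*.ps") can be made concrete with a small sketch. The generalizer below is a hypothetical illustration, not Turvy's or any panelist's actual algorithm; it simply walks a fixed ladder of hypotheses from most to least specific.

```python
import re

def generalize(examples):
    """Return the most specific of three hypothesis patterns that covers every
    demonstrated file name: the exact name, '<stem>#.<ext>' (same stem, varying
    number), or '*.<ext>' (same extension only). Purely illustrative heuristics."""
    if len(set(examples)) == 1:
        return examples[0]                                   # only one name seen so far

    parsed = [re.fullmatch(r"([A-Za-z]*)(\d*)\.(\w+)", e) for e in examples]
    if all(parsed):
        stems = {m.group(1) for m in parsed}
        exts = {m.group(3) for m in parsed}
        if len(stems) == 1 and len(exts) == 1 and all(m.group(2) for m in parsed):
            return f"{stems.pop()}#.{exts.pop()}"            # e.g. "v#.ps"
        if len(exts) == 1:
            return f"*.{exts.pop()}"                         # e.g. "*.ps"
    return "*"                                               # give up: match anything

print(generalize(["v1.ps", "v2.ps"]))              # -> "v#.ps"
print(generalize(["v1.ps", "v2.ps", "wow.ps"]))    # -> "*.ps"
```

A direct instruction ("select file names ending in .ps") would bypass this inference entirely, and a focusing instruction (typing ".ps") would simply promote the "*.<ext>" hypothesis, which matches the statement's point that focusing is crucial when the user expects automation after very few examples.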


Journal of Nervous and Mental Disease | 1971

Machine-mediated interviewing

Franklin Dennis Hilf; Kenneth Mark Colby; David Canfield Smith; William K. Wittner; William A. Hall



Communications of the ACM | 2000

Building personal tools by programming

David Canfield Smith


Collaboration


Dive into David Canfield Smith's collaborations.
