Kim Binsted
University of Hawaii
Publications
Featured research published by Kim Binsted.
Humor: International Journal of Humor Research | 1997
Kim Binsted; Graeme Ritchie
Riddles based on simple puns can be classified according to the patterns of word, syllable or phrase similarity they depend upon. We have devised a formal model of the semantic and syntactic regularities underlying some of the simpler types of punning riddle. We have also implemented this preliminary theory in a computer program which can generate riddles from a lexicon containing general data about words and phrases; that is, the lexicon content is not customized to produce jokes. An informal, formative evaluation of the program's results by a set of human judges suggests that the riddles produced by this program are of comparable quality to those in general circulation among school children.
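As a rough illustration of the idea of generating puns from general (non-joke) lexical data, the toy Python sketch below substitutes a homophone into a familiar phrase. The mini-lexicon and riddle template are invented for illustration only and are not the program or lexicon described in the paper.

# Toy sketch of lexicon-driven punning-riddle generation.
# The entries below are illustrative placeholders, not the paper's lexicon.
HOMOPHONES = {"serial": "cereal", "flour": "flower"}
PHRASES = {
    "serial": ("serial killer", "a murderer"),      # phrase and a gloss of its meaning
    "flour": ("flour mill", "a grain grinder"),
}
GLOSSES = {"cereal": "a breakfast food", "flower": "a rose"}

def make_riddle(word: str) -> str:
    """Describe the phrase's meaning plus the homophone's meaning,
    then answer with the pun-substituted phrase."""
    pun = HOMOPHONES[word]
    phrase, phrase_gloss = PHRASES[word]
    question = f"What do you call {phrase_gloss} who loves {GLOSSES[pun]}?"
    answer = phrase.replace(word, pun)
    return f"{question} A {answer}."

if __name__ == "__main__":
    # What do you call a murderer who loves a breakfast food? A cereal killer.
    print(make_riddle("serial"))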
hawaii international conference on system sciences | 2005
Chuck Jorgensen; Kim Binsted
Subvocal electromyogram (EMG) signal classification is used to control a modified web browser interface. Recorded surface signals from the larynx and sublingual areas below the jaw are filtered and transformed into features using a complex dual quad tree wavelet transform. Feature sets for six subvocally pronounced control words, 10 digits, 17 vowel phonemes and 23 consonant phonemes are trained using a scaled conjugate gradient neural network. The subvocal signals are classified and used to initiate web browser queries through a matrix-based alphabet coding scheme. Hyperlinks on web pages returned by the browser are numbered sequentially and queried using digits only. Classification methodology, accuracy, and feasibility of scaling up to real-world human-machine interface tasks are discussed in the context of vowel and consonant recognition accuracy.
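The sketch below shows the general shape of such a pipeline (filter, wavelet features, neural-network classifier). The library choices are stand-ins for illustration: PyWavelets' discrete wavelet transform replaces the dual-tree complex wavelet described above, scikit-learn's MLP replaces the scaled conjugate gradient network, and the filter cutoffs are typical EMG values rather than the paper's settings.

"""Illustrative subvocal-EMG word classifier, assuming one signal window per word."""
import numpy as np
import pywt
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPClassifier

def bandpass(emg: np.ndarray, fs: float = 2000.0, lo: float = 20.0, hi: float = 450.0) -> np.ndarray:
    """Band-pass filter a raw EMG window (cutoffs are generic EMG defaults)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def wavelet_features(emg: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Summarize each wavelet sub-band by its log energy."""
    coeffs = pywt.wavedec(emg, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def train_word_classifier(windows: list[np.ndarray], labels: list[str]) -> MLPClassifier:
    """Train a small feed-forward network on one feature vector per spoken word."""
    X = np.vstack([wavelet_features(bandpass(w)) for w in windows])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf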
ieee international workshop on wireless and mobile technologies in education | 2005
Samuel R. H. Joseph; Kim Binsted; Daniel D. Suthers
PhotoStudy is a system that supports vocabulary study on both wired and wireless devices. It is designed to make it simple to annotate content with multimedia such as images and audio recorded on these devices. This paper presents a prototype system that uses wireless markup languages and Java MIDlets. User evaluations have been conducted, and are being continued in our iterative design approach. We report the results from questionnaire evaluations, observational studies and interviews.
Ai Magazine | 2000
Elisabeth André; Kim Binsted; Kumiko Tanaka-Ishii; Sean Luke; Gerd Herzog; Thomas Rist
Three systems that generate real-time natural language commentary on the RoboCup simulation league are presented, and their similarities, differences, and directions for future work are discussed. Although they emphasize different aspects of the commentary problem, all three systems take simulator data as input and generate appropriate, expressive, spoken commentary in real time.
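A minimal sketch of the shared pattern (map incoming simulator events to short utterances in real time) is given below. The event names and templates are invented placeholders; the three actual systems perform far richer analysis and generation than this.

import random

# Invented event types and commentary templates, for illustration only.
TEMPLATES = {
    "pass": ["{a} passes to {b}.", "Nice ball from {a} out to {b}."],
    "shot": ["{a} shoots!", "A strike from {a}!"],
    "goal": ["GOAL for {team}!", "{team} score!"],
}

def commentate(event: dict) -> str:
    """Pick a template for the event type and fill in player or team names."""
    options = TEMPLATES.get(event["type"])
    if not options:
        return ""  # stay silent on events with no template
    return random.choice(options).format(**event)

print(commentate({"type": "goal", "team": "Red"}))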
artificial intelligence in medicine in europe | 1995
Kim Binsted; Alison Cawsey; Ray Jones
This paper presents an approach for providing patients with personalised explanations of their medical record. Simple text planning techniques are used to construct relevant explanations based on information in the record and information in a general medical knowledge base. We discuss the results of the evaluation of our system with diabetes patients at three diabetes clinics in Scotland.
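In the spirit of the simple text planning described above, the sketch below interleaves a patient's record values with general background sentences. The record fields and background notes are invented examples, not the system's medical knowledge base.

# Illustrative record-driven explanation; field names and texts are placeholders.
BACKGROUND = {
    "HbA1c": "HbA1c reflects your average blood glucose over the last two to three months.",
    "retinal screening": "Regular eye checks help catch diabetic eye disease early.",
}

def explain(record: dict) -> str:
    sentences = []
    for test, value in record.items():
        sentences.append(f"Your latest {test} result was {value}.")
        if test in BACKGROUND:
            sentences.append(BACKGROUND[test])
    return " ".join(sentences)

print(explain({"HbA1c": "7.2%"}))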
robot soccer world cup | 1999
Kim Binsted; Sean Luke
In this paper we present early work on an animated talking head commentary system called Byrne. The goal of this project is to develop a system which can take the output from the RoboCup soccer simulator and generate appropriate affective speech and facial expressions, based on the character's personality, emotional state, and the state of play. Here we describe a system which takes pre-analysed simulator output as input, and which generates text marked-up for use by a speech generator and a face animation system. We make heavy use of inter-system standards, so that future versions of Byrne will be able to take advantage of advances in the technologies that it incorporates.
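The sketch below illustrates only the general idea of emitting marked-up text conditioned on an emotional state. The tag names and the emotion model are invented placeholders, not Byrne's actual markup or standards.

def mark_up(text: str, emotion: str, intensity: float) -> str:
    """Wrap an utterance in illustrative speech/face markup."""
    rate = "fast" if intensity > 0.7 else "medium"
    return (f'<face expression="{emotion}" intensity="{intensity:.1f}">'
            f'<speech rate="{rate}">{text}</speech></face>')

def comment_on(event: str, supported_team: str, scoring_team: str) -> str:
    """Choose an utterance and emotion based on the play and the commentator's allegiance."""
    if event == "goal":
        if scoring_team == supported_team:
            return mark_up("What a fantastic goal!", "joy", 0.9)
        return mark_up("Oh no, they've scored.", "sadness", 0.6)
    return mark_up("Play continues.", "neutral", 0.2)

print(comment_on("goal", "Red", "Red"))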
conference on multimedia modeling | 2000
Shigeo Morishima; Tatsuo Yotsukura; Kim Binsted; Frank Nielsen; Claudio S. Pinhanez
HyperMask is a system which projects an animated face onto a physical mask worn by an actor. As the mask moves within a prescribed area, its position and orientation are detected by a camera, and the projected image changes with respect to the viewpoint of the audience. The lips of the projected face are automatically synthesized in real time with the voice of the actor, who also controls the facial expressions. As a theatrical tool, HyperMask enables a new style of storytelling. As a prototype system, we propose to put a self-contained HyperMask system in a trolley (disguised as a linen cart), so that it projects onto the mask worn by the actor pushing the trolley.
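A rough sketch of the projection step alone is shown below: warping a rendered face texture onto the tracked mask using a planar homography (OpenCV). This is an illustrative simplification of a camera-projector pipeline, not HyperMask's actual implementation; the corner coordinates are placeholders standing in for the tracker's output.

import cv2
import numpy as np

def warp_face_to_mask(face_img: np.ndarray, mask_corners: np.ndarray,
                      out_size: tuple[int, int]) -> np.ndarray:
    """Map the four corners of the face texture onto the mask's detected corners."""
    h, w = face_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(src, mask_corners.astype(np.float32))
    return cv2.warpPerspective(face_img, H, out_size)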
Interacting with Computers | 2006
Bradley J. Betts; Kim Binsted; Charles Jorgensen
We present results of electromyographic (EMG) speech recognition on a small vocabulary of 15 English words. EMG speech recognition holds promise for mitigating the effects of high acoustic noise on speech intelligibility in communication systems, including those used by first responders (a focus of this work). We collected 150 examples per word of single-channel EMG data from a male subject, speaking normally while wearing a firefighter's self-contained breathing apparatus. The signal processing consisted of an activity detector, a feature extractor, and a neural network classifier. Testing produced an overall average correct classification rate on the 15 words of 74% with a 95% confidence interval of (71%, 77%). Once trained, the subject used a classifier as part of a real-time system to communicate to a cellular phone and to control a robotic device. These tasks were performed under an ambient noise level of approximately 95 decibels. We also describe ongoing work on phoneme-level EMG speech recognition.
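For readers unfamiliar with intervals of this kind, the sketch below computes a normal-approximation (Wald) 95% confidence interval for a classification rate. The trial count used is an illustrative assumption chosen to match the reported interval width, not the paper's actual test-set size, and the paper's own interval method may differ.

import math

def wald_ci(p_hat: float, n_trials: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation confidence interval for a proportion."""
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n_trials)
    return p_hat - half_width, p_hat + half_width

lo, hi = wald_ci(0.74, 800)  # 800 test utterances is an assumed, illustrative count
print(f"74% correct, 95% CI roughly ({lo:.0%}, {hi:.0%})")  # about (71%, 77%)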
The Visual Computer | 2002
Tatsuo Yotsukura; Shigeo Morishima; Frank Nielsen; Kim Binsted; Claudio S. Pinhanez
intelligent technologies for interactive entertainment | 2005
Jeff Stark; Kim Binsted; Benjamin K. Bergen
Here we present a model of a subtype of one-line jokes (not puns) that describes the relationship between the connector (part of the set-up) and the disjunctor (often called the punchline). This relationship is at the heart of what makes this common type of joke humorous. We have implemented this model in a system, DisS (Disjunctor Selector), which, given a joke set-up, can select the best disjunctor from a list of alternatives. DisS agrees with human judges on the best disjunctor for one typical joke, and we are currently testing it on other jokes of the same subtype.
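As a toy stand-in for the selection step only, the sketch below scores candidate punchlines against a set-up with a crude lexical-overlap proxy and picks the highest. DisS itself uses the connector/disjunctor model described above, not word overlap; this example merely illustrates the "rank alternatives, choose the best" structure.

def score(setup: str, candidate: str) -> float:
    """Placeholder scoring function: fraction of candidate words shared with the set-up."""
    setup_words = set(setup.lower().split())
    cand_words = set(candidate.lower().split())
    return len(setup_words & cand_words) / max(len(cand_words), 1)

def select_disjunctor(setup: str, candidates: list[str]) -> str:
    """Return the candidate punchline with the highest score for this set-up."""
    return max(candidates, key=lambda c: score(setup, c))

# Usage: select_disjunctor("<joke set-up>", ["<candidate 1>", "<candidate 2>", ...])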