Publication


Featured research published by Kevin Gold.


Intelligent Robots and Systems | 2004

Motion-based robotic self-recognition

Philipp Michel; Kevin Gold; Brian Scassellati

We present a method for allowing a humanoid robot to recognize its own motion in its visual field, thus enabling it to distinguish itself from other agents in the vicinity. Our approach consists of learning a characteristic time window between the initiation of motor movement and the perception of arm motions. The method has been implemented and evaluated on an infant humanoid platform. Our results demonstrate the effectiveness of using the delayed temporal contingency in the action-perception loop as a basis for simple self-other discrimination. We conclude by suggesting potential applications in social robotics and in generating forward models of motion.
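
As a rough sketch of the mechanism described above (all names here are hypothetical; the paper's implementation ran on a humanoid vision platform), one might learn the contingency window from observed command-to-motion delays, then attribute motion to the robot itself only when it falls within that window:

```python
class ContingencyWindow:
    """Learns the range of delays between issuing a motor command and
    seeing the resulting arm motion in the visual field."""

    def __init__(self):
        self.min_delay = float("inf")
        self.max_delay = float("-inf")

    def train(self, command_time, motion_onset_time):
        # Widen the window to cover every observed self-generated delay.
        delay = motion_onset_time - command_time
        self.min_delay = min(self.min_delay, delay)
        self.max_delay = max(self.max_delay, delay)

    def is_self(self, command_time, motion_onset_time):
        # Motion whose onset falls inside the learned window is
        # attributed to the robot itself; anything else is another agent.
        return self.min_delay <= motion_onset_time - command_time <= self.max_delay
```

For example, after training on delays of 0.45 s and 0.62 s, motion beginning 0.5 s after a command would be labeled self-generated, while motion with no recent preceding command would not.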


Artificial Intelligence | 2009

Robotic vocabulary building using extension inference and implicit contrast

Kevin Gold; Marek W. Doniec; Christopher Crick; Brian Scassellati

TWIG (“Transportable Word Intension Generator”) is a system that allows a robot to learn compositional meanings for new words that are grounded in its sensory capabilities. The system is novel in its use of logical semantics to infer which entities in the environment are the referents (extensions) of unfamiliar words; its ability to learn the meanings of deictic (“I,” “this”) pronouns in a real sensory environment; its use of decision trees to implicitly contrast new word definitions with existing ones, thereby creating more complex definitions than if each word were treated as a separate learning problem; and its ability to use words learned in an unsupervised manner in complete grammatical sentences for production, comprehension, or referent inference. In an experiment with a physically embodied robot, TWIG learns grounded meanings for the words “I” and “you,” learns that “this” and “that” refer to objects of varying proximity, that “he” is someone talked about in the third person, and that “above” and “below” refer to height differences between objects. Follow-up experiments demonstrate the system’s ability to learn different conjugations of “to be”; show that removing either the extension inference or implicit contrast components of the system results in worse definitions; and demonstrate how decision trees can be used to model shifts in meaning based on context in the case of color words.
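
A toy illustration of the extension-inference step (a simplified sketch; the names and the dictionary-of-predicates lexicon are assumptions, not TWIG's actual data structures): treat each known word as a predicate over sensed entities, and take as candidate referents of an unfamiliar word the entities that satisfy all the known words in the sentence.

```python
def infer_extension(unknown_word, sentence_words, entities, lexicon):
    """Return the entities that satisfy every known word in the
    sentence; these are the candidate referents (extensions) of the
    unfamiliar word."""
    return [
        entity
        for entity in entities
        if all(
            lexicon[word](entity)
            for word in sentence_words
            if word != unknown_word and word in lexicon
        )
    ]

# Hypothetical usage: "blicket" is unknown; "red" is already grounded.
lexicon = {"red": lambda e: e["color"] == "red"}
entities = [{"id": 1, "color": "red"}, {"id": 2, "color": "blue"}]
print(infer_extension("blicket", ["red", "blicket"], entities, lexicon))
# -> [{'id': 1, 'color': 'red'}]
```

Each inferred (word, referent) pair can then serve as a training example for a shared decision tree, which is what lets new definitions implicitly contrast with existing ones rather than being learned in isolation.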


Connection Science | 2006

Learning acceptable windows of contingency

Kevin Gold; Brian Scassellati

By learning a range of possible times over which the effect of an action can take place, a robot can reason more effectively about causal and contingent relationships in the world. An algorithm is presented for learning the interval of possible times during which a response to an action can take place. The algorithm was implemented on a physical robot for the domains of visual self-recognition and auditory social-partner recognition. The environment model assumes that natural environments generate Poisson distributions of random events at all scales. A linear-time algorithm called Poisson threshold learning can generate a threshold T that provides an arbitrarily small rate of background events λ(T), if such a threshold exists for the specified error rate.
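
As a worked illustration of the Poisson assumption (a simplified calculation under a homogeneous-rate model, not the paper's full linear-time algorithm): if background events arrive at rate λ, the probability of at least one background event inside a window of length T is 1 − e^(−λT), so the largest window that keeps the false-contingency rate at or below a target ε is T = −ln(1 − ε)/λ.

```python
import math

def max_window(background_rate, error_rate):
    """Largest window length T for which a Poisson background process
    with the given rate fires inside the window with probability at
    most error_rate: solve 1 - exp(-rate * T) <= error_rate for T."""
    return -math.log(1.0 - error_rate) / background_rate

# E.g., with 0.5 background events/s and a 5% tolerated error rate,
# the window can be at most about 0.10 s.
print(max_window(0.5, 0.05))  # ~0.1026
```

If the robot's actual responses arrive later than this bound allows, no acceptable threshold exists for that error rate, matching the caveat in the abstract.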


International Conference on Development and Learning | 2008

What prosody tells infants to believe

Elizabeth S. Kim; Kevin Gold; Brian Scassellati

We examined whether evidence for prosodic signals about shared belief can be quantitatively found within the acoustic signal of infant-directed speech. Two transcripts of infant-directed speech for infants aged 1;4 and 1;6 were labeled with distinct speaker intents to modify shared beliefs, based on Pierrehumbert and Hirschberg's theory of the meaning of prosody [1]. Acoustic predictions were made from intent labels first within a simple single-tone model that reflected only whether the speaker intended to add a word's information to the discourse (high tone, H*) or not (low tone, L*). We also predicted pitch within a more complicated five-category model that added intents to suggest a word as one of several possible alternatives (L*+H), a contrasting alternative (L+H*), or something about which the listener should make an inference (H*+L). The acoustic signal was then manually segmented and automatically classified based solely on whether the pitches at the beginning, end, and peak intensity points of stressed syllables in salient words were closer to the utterance's pitch minimum or maximum on a log scale. Evidence supporting our intent-based pitch predictions was found for L*, H*, and L*+H accents, but not for L+H* or H*+L. No evidence was found to support the hypothesis that infant-directed speech simplifies two-tone into single-tone pitch accents.
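
A minimal sketch of the classification rule described above (function and variable names are illustrative): a pitch sample is labeled high or low according to whether it lies closer, on a log scale, to the utterance's pitch maximum or minimum.

```python
import math

def classify_pitch(pitch_hz, utterance_min_hz, utterance_max_hz):
    """Label a pitch sample 'H' if it is closer on a log scale to the
    utterance's pitch maximum, else 'L'."""
    log_p = math.log(pitch_hz)
    dist_to_min = log_p - math.log(utterance_min_hz)
    dist_to_max = math.log(utterance_max_hz) - log_p
    return "H" if dist_to_min > dist_to_max else "L"

# Applied to the pitch at the beginning, end, and peak-intensity point
# of a stressed syllable, this yields tone labels that can be matched
# against predictions such as H* or L*.
```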


Human-Robot Interaction | 2007

Young researchers' views on the current and future state of HRI

Kevin Gold; Ian R. Fasel; Nathan G. Freier; Cristen Torrey

This paper presents the results of a panel discussion titled “The Future of HRI,” held during an NSF workshop for graduate students on human-robot interaction in August 2006. The panel divided the workshop into groups tasked with inventing models of the field, and then asked these groups their opinions on the future of the field. In general, the workshop participants shared the belief that HRI can and should be seen as a single scientific discipline, despite the fact that it encompasses a variety of beliefs, methods, and philosophies drawn from several “core” disciplines in traditional areas of study. HRI researchers share many interrelated goals, participants felt, and enhancing the lines of communication between different areas would help speed up progress in the field. Common concerns included the unavailability of common robust platforms, the emphasis on human perception over robot perception, and the paucity of longitudinal real-world studies. The authors point to the current lack of consensus on research paradigms and platforms to argue that the field is not yet in the phase that philosopher Thomas Kuhn would call “normal science,” but believe the field shows signs of approaching that phase.


Robotics and Autonomous Systems | 2009

Using probabilistic reasoning over time to self-recognize

Kevin Gold; Brian Scassellati


National Conference on Artificial Intelligence | 2007

A robot that uses existing vocabulary to infer non-visual word meanings from observation

Kevin Gold; Brian Scassellati


Proceedings of the Annual Meeting of the Cognitive Science Society | 2007

A Bayesian Robot That Distinguishes "Self" from "Other"

Kevin Gold; Brian Scassellati


International Conference on Development and Learning | 2007

Learning grounded semantics with word trees: Prepositions and pronouns

Kevin Gold; Marek W. Doniec; Brian Scassellati


Proceedings of the Annual Meeting of the Cognitive Science Society | 2006

Audio Speech Segmentation Without Language-Specific Knowledge

Kevin Gold; Brian Scassellati

Collaboration


Dive into Kevin Gold's collaborations.

Top Co-Authors

Cristen Torrey
Carnegie Mellon University

Ian R. Fasel
University of California

Juyang Weng
Michigan State University

Nathan G. Freier
Rensselaer Polytechnic Institute

Philipp Michel
Carnegie Mellon University