Network


Latest external collaborations at the country level.

Hotspot


Research topics where Deb Roy is active.

Publication


Featured research published by Deb Roy.


Science | 2009

Computational Social Science

David Lazer; Alex Pentland; Lada A. Adamic; Sinan Aral; Albert-László Barabási; Devon Brewer; Nicholas A. Christakis; Noshir Contractor; James H. Fowler; Myron P. Gutmann; Tony Jebara; Gary King; Michael W. Macy; Deb Roy; Marshall W. Van Alstyne

A field is emerging that leverages the capacity to collect and analyze data at a scale that may reveal patterns of individual and group behaviors.



Cognitive Science | 1999

Learning words from sights and sounds: a computational model

Deb Roy; Alex Pentland

This paper presents an implemented computational model of word acquisition which learns directly from raw multimodal sensory input. Set in an information theoretic framework, the model acquires a lexicon by finding and statistically modeling consistent cross-modal structure. The model has been implemented in a system using novel speech processing, computer vision, and machine learning algorithms. In evaluations the model successfully performed speech segmentation, word discovery and visual categorization from spontaneous infant-directed speech paired with video images of single objects. These results demonstrate the possibility of using state-of-the-art techniques from sensory pattern recognition and machine learning to implement cognitive models which can process raw sensor data without the need for human transcription or labeling.



Artificial Intelligence | 2005

Semiotic schemas: a framework for grounding language in action and perception

Deb Roy

A theoretical framework for grounding language is introduced that provides a computational path from sensing and motor action to words and speech acts. The approach combines concepts from semiotics and schema theory to develop a holistic approach to linguistic meaning. Schemas serve as structured beliefs that are grounded in an agent's physical environment through a causal-predictive cycle of action and perception. Words and basic speech acts are interpreted in terms of grounded schemas. The framework reflects lessons learned from implementations of several language processing robots. It provides a basis for the analysis and design of situated, multimodal communication systems that straddle symbolic and non-symbolic realms.


Artificial Intelligence | 2005

Connecting language to the world

Deb Roy; Ehud Reiter

How does language relate to the non-linguistic world? If an agent is able to communicate linguistically and is also able to directly perceive and/or act on the world, how do perception, action, and language interact with and influence each other? Such questions are surely amongst the most important in Cognitive Science and Artificial Intelligence (AI). Language, after all, is a central aspect of the human mind; indeed, it may be what distinguishes us from other species. There is sometimes a tendency in the academic world to study language in isolation, as a formal system with rules for well-constructed sentences, or to focus on how language relates to formal notations such as symbolic logic. But language did not evolve as an isolated system or as a way of communicating symbolic logic; it presumably evolved as a mechanism for exchanging information about the world, ultimately providing the medium for cultural transmission across generations. Motivated by these observations, the goal of this special issue is to bring together research in AI that focuses on relating language to the physical world. Language is of course also used to communicate about non-physical referents, but the ubiquity of physical metaphor in language [21] suggests that grounding in the physical world provides the foundations of semantics.


Journal of Artificial Intelligence Research | 2004

Grounded semantic composition for visual scenes

Peter Gorniak; Deb Roy

We present a visually-grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word-level visually-grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. In an analysis of the system's successes and failures, we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account.


EELC'06 Proceedings of the Third international conference on Emergence and Evolution of Linguistic Communication: symbol Grounding and Beyond | 2006

The human speechome project

Deb Roy; Rupal Patel; Philip DeCamp; Rony Kubat; Michael Fleischman; Brandon Cain Roy; Nikolaos Mavridis; Stefanie Tellex; Alexia Salata; Jethran Guinness; Michael Levit; Peter Gorniak

The Human Speechome Project is an effort to observe and computationally model the longitudinal course of language development for a single child at an unprecedented scale. We are collecting audio and video recordings for the first three years of one child's life, in its near entirety, as it unfolds in the child's home. A network of ceiling-mounted video cameras and microphones is generating approximately 300 gigabytes of observational data each day from the home. One of the world's largest single-volume disk arrays is under construction to house approximately 400,000 hours of audio and video recordings that will accumulate over the three-year study. To analyze the massive data set, we are developing new data mining technologies to help human analysts rapidly annotate and transcribe recordings using semi-automatic methods, and to detect and visualize salient patterns of behavior and interaction. To make sense of large-scale patterns that span months or even years of observations, we are developing computational models of language acquisition that are able to learn from the child's experiential record. By creating and evaluating machine learning systems that step into the shoes of the child and sequentially process long stretches of perceptual experience, we will investigate possible language learning strategies used by children, with an emphasis on early word learning.


IEEE Transactions on Multimedia | 2003

Grounded spoken language acquisition: experiments in word learning

Deb Roy

Language is grounded in sensory-motor experience. Grounding connects concepts to the physical world enabling humans to acquire and use words and sentences in context. Currently most machines which process language are not grounded. Instead, semantic representations are abstract, pre-specified, and have meaning only when interpreted by humans. We are interested in developing computational systems which represent words, utterances, and underlying concepts in terms of sensory-motor experiences leading to richer levels of machine understanding. A key element of this work is the development of effective architectures for processing multisensory data. Inspired by theories of infant cognition, we present a computational model which learns words from untranscribed acoustic and video input. Channels of input derived from different sensors are integrated in an information-theoretic framework. Acquired words are represented in terms of associations between acoustic and visual sensory experience. The model has been implemented in a real-time robotic system which performs interactive language learning and understanding. Successful learning has also been demonstrated using infant-directed speech and images.


Science | 2018

The spread of true and false news online

Soroush Vosoughi; Deb Roy; Sinan Aral

Lies spread faster than the truth. There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ~3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146.

A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth.

We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

Collaboration


Dive into Deb Roy's collaborations.

Top Co-Authors

Soroush Vosoughi (Massachusetts Institute of Technology)
Alex Pentland (Massachusetts Institute of Technology)
Brandon Cain Roy (Massachusetts Institute of Technology)
Michael Fleischman (Massachusetts Institute of Technology)
Peter Gorniak (Massachusetts Institute of Technology)
Jeff Orkin (Massachusetts Institute of Technology)
Nikolaos Mavridis (Massachusetts Institute of Technology)
Prashanth Vijayaraghavan (Massachusetts Institute of Technology)
Kai-yuh Hsiao (Massachusetts Institute of Technology)