Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ian Perera is active.

Publication


Featured research published by Ian Perera.


IEEE Virtual Reality Conference | 2011

CRAM it! A comparison of virtual, live-action and written training systems for preparing personnel to work in hazardous environments

Catherine Stocker; Benjamin Sunshine-Hill; John Drake; Ian Perera; Joseph T. Kider; Norman I. Badler

In this paper we investigate the utility of an interactive, desktop-based virtual reality (VR) system for training personnel in hazardous working environments. Employing a novel software model, CRAM (Course Resource with Active Materials), we asked participants to learn a specific aircraft maintenance task. The evaluation sought to identify the type of familiarization training that would be most useful prior to hands-on training, as well as afterward for skill maintenance. We found that participants develop an increased awareness of hazards when training with stimulating technology, in particular (1) interactive, virtual simulations and (2) videos of an instructor demonstrating a task, versus simply studying (3) a set of written instructions. The results also indicate that participants prefer to train with these technologies over the standard written instructions. Finally, demographic data collected during the evaluation elucidates future directions for VR systems to develop a more robust and stimulating hazard training environment.


Conference on Computational Natural Language Learning | 2015

Quantity, Contrast, and Convention in Cross-Situated Language Comprehension

Ian Perera; James F. Allen

Typically, visually-grounded language learning systems only accept feature data about objects in the environment that are explicitly mentioned, whether through annotation labels or direct reference through natural language. We show that when objects are described ambiguously using natural language, a system can use a combination of the pragmatic principles of Contrast and Conventionality, and multiple-instance learning to learn from ambiguous examples in an online fashion. Applying child language learning strategies to visual learning enables more effective learning in real-time environments, which can lead to enhanced teaching interactions with robots or grounded systems in multi-object environments.


International Conference on Computer Vision | 2011

Recognizing manipulation actions in arts and crafts shows using domain-specific visual and textual cues

Benjamin Sapp; Rizwan Chaudhry; Xiaodong Yu; Gautam Singh; Ian Perera; Francis Ferraro; Evelyne Tzoukermann; Jana Kosecka; Jan Neumann

We present an approach for automatic annotation of commercial videos from an arts-and-crafts domain with the aid of textual descriptions. The main focus is on recognizing both manipulation actions (e.g. cut, draw, glue) and the tools that are used to perform these actions (e.g. markers, brushes, glue bottle). We demonstrate how multiple visual cues such as motion descriptors, object presence, and hand poses can be combined with the help of contextual priors that are automatically extracted from associated transcripts or online instructions. Using these diverse features and linguistic information, we propose several increasingly complex computational models for recognizing elementary manipulation actions and composite activities, as well as their temporal order. The approach is evaluated on a novel dataset comprising 27 episodes of PBS Sprout TV, each containing on average 8 manipulation actions.


Human-Robot Interaction | 2012

Situation understanding bot through language and environment

Daniel J. Brooks; Cameron Finucane; Adam Norton; Constantine Lignos; Vasumathi Raman; Hadas Kress-Gazit; Mikhail S. Medvedev; Ian Perera; Abraham Shultz; Sean McSheehy; Mitch Marcus; Holly A. Yanco

This video shows a demonstration of a fully autonomous robot, an iRobot ATRV-JR, which can be given commands using natural language. Users type commands to the robot on a tablet computer, which are then parsed and processed using semantic analysis. This information is used to build a plan representing the high-level autonomous behaviors the robot should perform [2] [1]. The robot can be given commands to be executed immediately (e.g., “Search the floor for hostages.”) as well as standing orders for use over the entire run (e.g., “Let me know if you see any bombs.”). In the scenario shown in the video, the robot is asked to identify and defuse bombs, as well as to report if it finds any hostages or bad guys. Users can also query the robot through this interface. The robot conveys information to the user through text and a graphical interface on a tablet computer. The system can add icons to the map displayed and highlight areas of the map to convey concepts such as “I am here.” The video contains segments taken from a continuous 20-minute run, shown at 4× speed. This work is a demonstration of a larger project called Situation Understanding Bot Through Language and Environment (SUBTLE). For more information, see www.subtlebot.org.


National Conference on Artificial Intelligence | 2012

Make it So: Continuous, Flexible Natural Language Interaction with an Autonomous Robot

Daniel J. Brooks; Constantine Lignos; Cameron Finucane; Mikhail S. Medvedev; Ian Perera; Vasumathi Raman; Hadas Kress-Gazit; Mitch Marcus; Holly A. Yanco


National Conference on Artificial Intelligence | 2013

SALL-E: situated agent for language learning

Ian Perera; James F. Allen


National Conference on Artificial Intelligence | 2011

Language models for semantic extraction and filtering in video action recognition

Evelyne Tzoukermann; Jan Neumann; Jana Kosecka; Cornelia Fermüller; Ian Perera; Frank Ferraro; Benjamin Sapp; Rizwan Chaudhry; Gautam Singh


National Conference on Artificial Intelligence | 2012

Learning names for RFID-tagged objects in activity videos

Ian Perera; James F. Allen


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2018

A Situated Dialogue System for Learning Structural Concepts in Blocks World

Ian Perera; James F. Allen; Choh Man Teng; Lucian Galescu


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2018

Cogent: A Generic Dialogue System Shell Based on a Collaborative Problem Solving Model

Lucian Galescu; Choh Man Teng; James F. Allen; Ian Perera

Collaboration


Dive into Ian Perera's collaborations.

Top Co-Authors

Lucian Galescu

Florida Institute for Human and Machine Cognition

Adam Dalton

Florida Institute for Human and Machine Cognition

Choh Man Teng

Florida Institute for Human and Machine Cognition

Archna Bhatia

Florida Institute for Human and Machine Cognition

Daniel J. Brooks

University of Massachusetts Lowell

Gautam Singh

George Mason University
