Publication


Featured research published by Cameron Finucane.


Intelligent Robots and Systems | 2010

LTLMoP: Experimenting with language, Temporal Logic and robot control

Cameron Finucane; Gangyuan Jing; Hadas Kress-Gazit

The Linear Temporal Logic MissiOn Planning (LTLMoP) toolkit is a software package designed to assist in the rapid development, implementation, and testing of high-level robot controllers. In this toolkit, structured English and Linear Temporal Logic are used to write high-level reactive task specifications, which are then automatically transformed into correct robot controllers that can be used to drive either a simulated or a real robot. LTLMoP's modular design makes it ideal for research in areas such as controller synthesis, semantic parsing, motion planning, and human-robot interaction.
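As an illustration (not an example taken from the paper), a structured-English instruction such as "If you are sensing a person, go to the kitchen" corresponds to a reactive LTL requirement of roughly this form, where person is an environment (sensor) proposition and kitchen a system (location) proposition:

```latex
\square \left( \mathit{person} \rightarrow \lozenge\, \mathit{kitchen} \right)
```

read as "always, if a person is sensed, then eventually the robot reaches the kitchen." Synthesis then produces a controller guaranteed to satisfy such formulas whenever one exists.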


Autonomous Robots | 2015

Provably correct reactive control from natural language

Constantine Lignos; Vasumathi Raman; Cameron Finucane; Mitchell P. Marcus; Hadas Kress-Gazit

This paper presents an integrated system for generating, troubleshooting, and executing correct-by-construction controllers for autonomous robots using natural language input, allowing non-expert users to command robots to perform high-level tasks. This system unites the power of formal methods with the accessibility of natural language, providing controllers for implementable high-level task specifications, easy-to-understand feedback on those that cannot be achieved, and natural language explanation of the reason for the robot’s actions during execution. The natural language system uses domain-general components that can easily be adapted to cover the vocabulary of new applications. Generation of a linear temporal logic specification from the user’s natural language input uses a novel data structure that allows for subsequent mapping of logical propositions back to natural language, enabling natural language feedback about problems with the specification that are only identifiable in the logical form. We demonstrate the robustness of the natural language understanding system through a user study where participants interacted with a simulated robot in a search and rescue scenario. Automated analysis and user feedback on unimplementable specifications is demonstrated using an example involving a robot assistant in a hospital.
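The idea of mapping logical propositions back to the user's own words can be sketched as follows. This is a hypothetical illustration, not the paper's actual data structure: `PropositionMap`, `add`, and `explain` are invented names for this sketch.

```python
# Hypothetical sketch: record, for each generated LTL proposition, the
# natural-language phrase it came from, so that feedback about an
# unsynthesizable specification can be phrased in the user's own words.
# (Illustrative only; not the paper's actual data structure.)

class PropositionMap:
    def __init__(self):
        self._prop_to_phrase = {}

    def add(self, prop, phrase):
        """Record that proposition `prop` was generated from `phrase`."""
        self._prop_to_phrase[prop] = phrase

    def explain(self, props):
        """Map a set of conflicting propositions back to phrases,
        falling back to the raw proposition name if unknown."""
        return [self._prop_to_phrase.get(p, p) for p in props]

pm = PropositionMap()
pm.add("go_kitchen", "go to the kitchen")
pm.add("avoid_kitchen", "never enter the kitchen")

# A synthesis failure involving these propositions can now be reported
# in terms of the original commands rather than logical symbols:
conflict = pm.explain(["go_kitchen", "avoid_kitchen"])
```

Keeping this mapping alongside the generated formula is what lets feedback identified purely in the logical form (e.g., two conflicting requirements) be surfaced as natural language.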


Robotics: Science and Systems | 2013

Sorry Dave, I'm Afraid I Can't Do That: Explaining Unachievable Robot Tasks using Natural Language

Vasumathi Raman; Constantine Lignos; Cameron Finucane; Kenton C.T. Lee; Mitchell P. Marcus; Hadas Kress-Gazit

This paper addresses the challenge of enabling non-expert users to command robots to perform complex high-level tasks using natural language. It describes an integrated system that combines the power of formal methods with the accessibility of natural language, providing correct-by-construction controllers for high-level specifications that can be implemented, and easy-to-understand feedback to the user on those that cannot be achieved. This is among the first works to close this feedback loop, enabling users to interact with the robot in order to identify a succinct cause of failure and obtain the desired controller. The supported language and logical capabilities are illustrated using examples involving a robot assistant in a hospital.


IEEE Transactions on Robotics | 2015

Timing Semantics for Abstraction and Execution of Synthesized High-Level Robot Control

Vasumathi Raman; Nir Piterman; Cameron Finucane; Hadas Kress-Gazit

The use of formal methods for synthesis has recently enabled the automated construction of verifiable high-level robot control. Most approaches use a discrete abstraction of the underlying continuous domain, and make assumptions about the physical execution of actions given a discrete implementation; examples include when actions will complete relative to each other, and possible changes in the robot's environment while it is performing various actions. Relaxing these assumptions gives rise to a number of challenges during the continuous implementation of automatically synthesized hybrid controllers. This paper presents several distinct timing semantics for controller synthesis, and compares them with respect to the assumptions they make on the execution of actions. It includes a discussion of when each set of assumptions is reasonable, and the computational tradeoffs inherent in relaxing them at synthesis time.


International Conference on Robotics and Automation | 2012

Correct high-level robot control from structured English

Gangyuan Jing; Cameron Finucane; Vasumathi Raman; Hadas Kress-Gazit

The Linear Temporal Logic MissiOn Planning (LTLMoP) toolkit is a software package designed to generate a controller that guarantees a robot satisfies a task specification written by the user in structured English. The controller can be implemented on either a simulated or physical robot. This video illustrates the use of LTLMoP to generate a correct-by-construction robot controller. Here, an Aldebaran Nao humanoid robot carries out tasks as a worker in a simplified grocery store scenario.


Intelligent Robots and Systems | 2012

Temporal logic robot mission planning for slow and fast actions

Vasumathi Raman; Cameron Finucane; Hadas Kress-Gazit

This paper addresses the challenge of creating correct-by-construction controllers for robots whose actions are of varying execution durations. Recently, Linear Temporal Logic synthesis has been used to construct robot controllers for performing high-level tasks. During continuous execution of these controllers by a physical robot, one or more low-level controllers are invoked simultaneously. If these low-level behaviors take different lengths of time to complete, the system will pass through several potentially unsafe intermediate states. This paper presents an algorithm that either generates a hybrid controller such that every continuous behavior of the robot is safe, or determines at synthesis time that the behavior may be unsafe. The proposed approach is implemented within the LTLMoP toolkit for reactive mission planning.
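The unsafe-intermediate-state problem can be made concrete with a small sketch. This is an illustrative toy, not the paper's algorithm; the function name, state encoding, and durations below are invented for the example:

```python
# Illustrative sketch: when one discrete controller step activates two
# low-level behaviors at once and they take different amounts of time,
# the continuous system passes through intermediate states that the
# purely discrete model never checked.

def intermediate_states(start, goal, durations):
    """Enumerate the states visited as actions with differing
    durations complete one at a time. States are dicts of boolean
    propositions; durations are in arbitrary time units."""
    state = dict(start)
    visited = []
    # Propositions whose value changes, ordered by completion time.
    changing = sorted(
        (a for a in goal if goal[a] != start[a]),
        key=lambda a: durations[a],
    )
    for action in changing:
        state[action] = goal[action]
        visited.append(dict(state))
    return visited

# One controller step: turn on the camera and move to the kitchen.
start = {"camera_on": False, "in_kitchen": False}
goal = {"camera_on": True, "in_kitchen": True}
durations = {"camera_on": 1, "in_kitchen": 10}  # camera finishes first

states = intermediate_states(start, goal, durations)
# The robot transiently occupies {"camera_on": True, "in_kitchen": False}
# before reaching the goal state; if the specification forbids filming
# outside the kitchen, that intermediate state is unsafe.
```

A synthesis-time check in the spirit of the paper would verify that every such intermediate state, under every possible completion order, still satisfies the safety requirements.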


Intelligent Robots and Systems | 2013

Provably-correct robot control with LTLMoP, OMPL and ROS

Kai Weng Wong; Cameron Finucane; Hadas Kress-Gazit

This paper illustrates the Linear Temporal Logic MissiOn Planning (LTLMoP) toolkit. LTLMoP is an open source software package that transforms high-level specifications for robot behavior, captured using a structured English grammar, into a robot controller that guarantees the robot will complete its task, if the task is feasible. If the task cannot be guaranteed, LTLMoP provides feedback to the user as to what the problem is. Due to its modular nature, users can control a variety of different robots using LTLMoP, both simulated and physical, with the same specification. The paper shows an example robot-waiter scenario, with LTLMoP controlling both a PR2 in simulation (using Gazebo), showcasing the interface between LTLMoP and the Robot Operating System (ROS), as well as an Aldebaran Nao humanoid in the lab.


International Conference on Robotics and Automation | 2014

Open-world mission specification for reactive robots

Spyros Maniatopoulos; Matthew W. Blair; Cameron Finucane; Hadas Kress-Gazit

Recent advances have enabled the automatic generation of correct-by-construction robot controllers from high-level mission specifications. However, most current approaches operate under the closed-world assumption, i.e., only elements of the world explicitly modeled a priori can be taken into account during execution. In this paper, we tackle the problem of specifying and automatically updating the missions of robots operating in worlds that are open with respect to new elements, such as new objects and regions of interest. We demonstrate our approach in a scenario featuring a robotic courier whose world is open with respect to letters addressed to new recipients.


Human-Robot Interaction | 2012

Situation understanding bot through language and environment

Daniel J. Brooks; Cameron Finucane; Adam Norton; Constantine Lignos; Vasumathi Raman; Hadas Kress-Gazit; Mikhail S. Medvedev; Ian Perera; Abraham Shultz; Sean McSheehy; Mitch Marcus; Holly A. Yanco

This video shows a demonstration of a fully autonomous robot, an iRobot ATRV-JR, which can be given commands using natural language. Users type commands to the robot on a tablet computer, which are then parsed and processed using semantic analysis. This information is used to build a plan representing the high-level autonomous behaviors the robot should perform [2] [1]. The robot can be given commands to be executed immediately (e.g., "Search the floor for hostages.") as well as standing orders for use over the entire run (e.g., "Let me know if you see any bombs."). In the scenario shown in the video, the robot is asked to identify and defuse bombs, as well as to report if it finds any hostages or bad guys. Users can also query the robot through this interface. The robot conveys information to the user through text and a graphical interface on a tablet computer. The system can add icons to the map displayed and highlight areas of the map to convey concepts such as "I am here." The video contains segments taken from a continuous 20-minute run, shown at 4× speed. This work is a demonstration of a larger project called Situation Understanding Bot Through Language and Environment (SUBTLE). For more information, see www.subtlebot.org.


National Conference on Artificial Intelligence | 2012

Make it So: Continuous, Flexible Natural Language Interaction with an Autonomous Robot

Daniel J. Brooks; Constantine Lignos; Cameron Finucane; Mikhail S. Medvedev; Ian Perera; Vasumathi Raman; Hadas Kress-Gazit; Mitch Marcus; Holly A. Yanco

Collaboration


Dive into Cameron Finucane's collaborations.

Top Co-Authors

Vasumathi Raman, California Institute of Technology
Daniel J. Brooks, University of Massachusetts Lowell
Holly A. Yanco, University of Massachusetts Lowell
Ian Perera, University of Rochester
Mikhail S. Medvedev, University of Massachusetts Lowell
Kenton C.T. Lee, University of Pennsylvania