
Publication


Featured research published by Monica N. Nicolescu.


adaptive agents and multi-agent systems | 2003

Natural methods for robot task learning: instructive demonstrations, generalization and practice

Monica N. Nicolescu; Maja J. Matarić

Among humans, teaching various tasks is a complex process which relies on multiple means for interaction and learning, both on the part of the teacher and of the learner. Used together, these modalities lead to effective teaching and learning approaches. In the robotics domain, task teaching has mostly been addressed by using only one, or very few, of these interactions. In this paper we present an approach for teaching robots that relies on the key features and the general approach people use when teaching each other: first give a demonstration, then allow the learner to refine the acquired capabilities by practicing under the teacher's supervision, involving a small number of trials. Depending on the quality of the learned task, the teacher may either demonstrate it again or provide specific feedback during the learner's practice trials for further refinement. Also, as people do during demonstrations, the teacher can provide simple instructions and informative cues, increasing the performance of learning. Thus, instructive demonstrations, generalization over multiple demonstrations, and practice trials are essential features of a successful human-robot teaching approach. We implemented a system that enables all these capabilities and validated these concepts with a Pioneer 2DX mobile robot learning tasks from multiple demonstrations and teacher feedback.
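The generalization-over-multiple-demonstrations idea can be sketched with a deliberately crude heuristic: treat steps that occur in every demonstration as essential and the rest as incidental. The step names and the set-intersection rule below are illustrative, not the paper's actual algorithm:

```python
def generalize(demos):
    """Keep only the steps that appear in every demonstration, in order.

    A crude stand-in for generalization over multiple demos: steps seen
    in all trials are treated as essential, the rest as incidental.
    """
    essential = set(demos[0])
    for d in demos[1:]:
        essential &= set(d)
    # preserve the ordering of the first demonstration
    return [step for step in demos[0] if step in essential]

# two hypothetical demonstrations of the same delivery task
demos = [
    ["approach", "grasp", "wave", "lift", "carry", "drop"],
    ["approach", "grasp", "lift", "carry", "turn", "drop"],
]
task = generalize(demos)  # spurious "wave"/"turn" steps are dropped
```

In the actual system, practice trials with teacher feedback would further refine such a representation; the sketch only captures the intersection step.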


systems man and cybernetics | 2001

Learning and interacting in human-robot domains

Monica N. Nicolescu; Maja J. Matarić

We focus on a robotic domain in which a human acts both as a teacher and a collaborator to a mobile robot. First, we present an approach that allows a robot to learn task representations from its own experiences of interacting with a human. While most approaches to learning from demonstration have focused on acquiring policies (i.e., collections of reactive rules), we demonstrate a mechanism that constructs high-level task representations based on the robot's underlying capabilities. Next, we describe a generalization of the framework to allow a robot to interact with humans in order to handle unexpected situations that can occur in its task execution. Without using explicit communication, the robot is able to engage a human to aid it during certain parts of task execution. We demonstrate our concepts with a mobile robot learning various tasks from a human and, when needed, interacting with a human to get help performing them.
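A high-level task representation of this kind can be pictured as a network of behaviors with precondition links: a behavior runs once the goals of its predecessors have been achieved. A minimal sketch with invented behavior names, not the authors' architecture:

```python
# A toy behavior network: each behavior lists the behaviors whose goals
# must already be achieved before it can run. Names are illustrative.
network = {
    "goto_box":  [],            # no preconditions
    "pickup":    ["goto_box"],
    "goto_goal": ["pickup"],
    "drop":      ["goto_goal"],
}

def execute(network):
    """Run behaviors whenever all their precondition links are satisfied."""
    done, order = set(), []
    while len(done) < len(network):
        for behavior, preds in network.items():
            if behavior not in done and all(p in done for p in preds):
                order.append(behavior)  # "run" the behavior
                done.add(behavior)
    return order
```

Unlike a flat reactive policy, the network makes the task's sequential structure explicit, which is what lets it be learned, inspected, and reused.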


human-robot interaction | 2008

Understanding human intentions via hidden Markov models in autonomous mobile robots

Richard Kelley; Alireza Tavakkoli; Christopher King; Monica N. Nicolescu; Mircea Nicolescu; George Bebis

Understanding intent is an important aspect of communication among people and is an essential component of the human cognitive system. This capability is particularly relevant for situations that involve collaboration among agents or detection of situations that can pose a threat. In this paper, we propose an approach that allows a robot to detect intentions of others based on experience acquired through its own sensory-motor capabilities, then using this experience while taking the perspective of the agent whose intent should be recognized. Our method uses a novel formulation of Hidden Markov Models designed to model a robot's experience and interaction with the world. The robot's capability to observe and analyze the current scene employs a novel vision-based technique for target detection and tracking, using a non-parametric recursive modeling approach. We validate this architecture with a physically embedded robot, detecting the intent of several people performing various activities.
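The core classification step, scoring an observation sequence against one HMM per candidate intent and picking the most likely, can be sketched with the standard scaled forward algorithm. The toy intents, symbols, and probabilities below are invented for illustration and are not the paper's models:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM with params pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale to avoid underflow
    return loglik

# One toy 2-state HMM per candidate intent.
# Observation symbols: 0 = target closing in, 1 = target backing off.
A = np.array([[0.9, 0.1], [0.1, 0.9]])   # shared transition model
pi = np.array([0.5, 0.5])
intents = {
    "approach": np.array([[0.9, 0.1], [0.8, 0.2]]),  # mostly emits 0
    "avoid":    np.array([[0.1, 0.9], [0.2, 0.8]]),  # mostly emits 1
}

def recognize(obs):
    """Return the intent whose HMM best explains the observations."""
    return max(intents, key=lambda k: forward_loglik(pi, A, intents[k], obs))
```

The paper's contribution lies in how the models are built from the robot's own sensory-motor experience and in perspective-taking; the decision rule itself is this likelihood comparison.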


machine vision and applications | 2009

Non-parametric statistical background modeling for efficient foreground region detection

Alireza Tavakkoli; Mircea Nicolescu; George Bebis; Monica N. Nicolescu

Most methods for foreground region detection in videos are challenged by the presence of quasi-stationary backgrounds—flickering monitors, waving tree branches, moving water surfaces or rain. Additional difficulties are caused by camera shake or by the presence of moving objects in every image. The contribution of this paper is to propose a scene-independent and non-parametric modeling technique which covers most of the above scenarios. First, an adaptive statistical method, called adaptive kernel density estimation (AKDE), is proposed as a base-line system that addresses the scene dependence issue. After investigating its performance we introduce a novel general statistical technique, called recursive modeling (RM). The RM overcomes the weaknesses of the AKDE in modeling slow changes in the background. The performance of the RM is evaluated asymptotically and compared with the base-line system (AKDE). A wide range of quantitative and qualitative experiments is performed to compare the proposed RM with the base-line system and existing algorithms. Finally, a comparison of various background modeling systems is presented as well as a discussion on the suitability of each technique for different scenarios.
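The kernel-density idea behind AKDE can be illustrated on a single pixel: the background model is a set of past intensity samples, and a new value is labeled foreground when its estimated density under a Gaussian kernel falls below a threshold. A minimal sketch (the bandwidth, history, and threshold are made up, and the paper's adaptive and recursive machinery is omitted):

```python
import numpy as np

def kde_prob(x, samples, bandwidth=0.05):
    """Gaussian kernel density estimate of intensity x from past samples."""
    d = (x - samples) / bandwidth
    return np.exp(-0.5 * d * d).mean() / (bandwidth * np.sqrt(2 * np.pi))

# toy per-pixel history: a quasi-stationary background flickering
# between two intensity levels (e.g. a blinking monitor)
history = np.array([0.40, 0.42, 0.38, 0.60, 0.61, 0.59, 0.41, 0.60])

def is_foreground(x, threshold=0.1):
    """A pixel is foreground when its density under the model is low."""
    return kde_prob(x, history) < threshold
```

Because the model is a sample set rather than a single Gaussian, both flicker modes are explained as background, while a genuinely novel intensity is not; recursive modeling (RM) replaces the sample buffer so the model can also track slow background change.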


intelligent robots and systems | 2001

Experience-based representation construction: learning from human and robot teachers

Monica N. Nicolescu; Maja J. Matarić

In this paper we address the problem of teaching robots to perform various tasks. We present a behavior-based approach that extends the capabilities of robots, allowing them to learn representations of complex tasks from their own experiences of interacting with a human, and to use the acquired knowledge to teach other robots in turn. A learner robot follows a human or robot teacher and maps its own observations of the environment to its internal behaviors, building at run-time a representation of the experienced task in the form of a behavior network. To enable this, we introduce an architecture that allows the representation and execution of complex and flexible sequences of behaviors and an online algorithm that builds the task representation from observations. We demonstrate our approach in a set of human(teacher)-robot(learner) and robot(teacher)-robot(learner) experiments, in which the robots learn representations for multiple tasks and are able to execute them even in environments with distractor objects that could hinder the learning and the execution process.
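The run-time mapping from observations to behaviors can be sketched as follows: whenever the current observation satisfies a behavior's goal predicate, that behavior is added to the task representation and linked to its predecessor. The predicates and observation fields are illustrative, not the actual system:

```python
# Hypothetical goal predicates for three behaviors: a behavior is deemed
# relevant when the learner's observation matches its goal state.
behavior_goals = {
    "approach_target": lambda obs: obs["dist_to_target"] < 0.5,
    "grasp_object":    lambda obs: obs["gripper_closed"],
    "deliver":         lambda obs: obs["at_goal"],
}

def build_network(observations):
    """Build precondition links online while following the teacher."""
    links, prev = {}, None
    for obs in observations:
        for name, goal_met in behavior_goals.items():
            if name not in links and goal_met(obs):
                links[name] = [prev] if prev else []
                prev = name
    return links

# a toy observation stream recorded while following a teacher
stream = [
    {"dist_to_target": 0.3, "gripper_closed": False, "at_goal": False},
    {"dist_to_target": 0.2, "gripper_closed": True,  "at_goal": False},
    {"dist_to_target": 2.0, "gripper_closed": True,  "at_goal": True},
]
task_network = build_network(stream)
```

The resulting links form exactly the kind of behavior network a learner robot could later execute, or demonstrate in turn to another robot.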


Models and Mechanisms of Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions | 2005

Imitation and Social Learning in Robots, Humans and Animals: Task learning through imitation and human–robot interaction

Monica N. Nicolescu; Maja J. Matarić

Behaviors embed representations of goals in the form of abstracted environmental states. This is a key feature, critical for learning from experience. To learn a task, the robot must create a mapping between its perception (observations) and its own behaviors that achieve the observed effects. This process is enabled by abstract behaviors, the perceptual component of a behavior, which activate each time the robot's observations match the goal(s) of a primitive behavior. This correlation enables the robot to identify its own behaviors that are relevant for the task being learned. Primitive behaviors execute the robot's actions and achieve its goals.

Behaviors are also used for communication and interaction. Acting in the environment is a form of implicit communication. By using evocative actions, people and other animals convey emotions, desires, interests, and intentions. Action-based communication has the advantage that it need not be restricted to robots or agents with a humanoid body or face: structural similarities between interacting agents are not required for successful interaction. Even if there is no direct mapping between the physical characteristics of the robot and its user, the robot can still use communication through action to convey certain types of messages, drawing on human common sense [16].

1.4 Communication by Acting: a Means for Robot-Human Interaction

Consider a prelinguistic child who wants an out-of-reach toy. The child will try to bring a grown-up to the toy and will then point and reach, indicating his desires. Similarly, a dog will run back and forth to induce its owner to come to a place where it has found something it desires. The ability of the child and the dog to demonstrate their desires and intentions by calling a helper and mock-executing actions is an expressive and natural way to communicate a problem and a need for help. The human capacity to understand these intentions is also natural and inherent.

We apply the same strategy in enabling robots to communicate their desires and intentions to people. The action-based communication approach we propose is general and can be applied to a variety of tasks and physical bodies/platforms. The robot performs its task independently, but if it fails in a cognizant fashion, it searches for a human and attempts to induce him to follow it to the place where the failure occurred, and then demonstrates its intentions in hopes of obtaining help. Attracting a human helper is achieved through movement, using back-and-forth, cyclic actions. After capturing the human's attention, the robot leads the human helper to the site of the task and attempts to resume its work from the point where it failed. To communicate the nature of the problem, the robot repeatedly tries to execute the failed behavior in front of its helper. This is a general strategy that can be employed for a wide variety of failures but, notably, not for all. Executing the previously failed behavior will likely fail again, effectively expressing the robot's problem to the human observer.

1.4.1 Experiments in Communication by Acting

We implemented and tested our concepts on a Pioneer 2-DX mobile robot, equipped with two sonar rings (8 front and 8 rear), a SICK laser range-finder, a pan-tilt-zoom color camera, a gripper, and on-board computation on a PC104 stack. The robot had a behavior set that allowed it to track cylindrical colored targets (Track(ColorOfTarget, GoalAngle, GoalDistance)), to pick up small colored objects (PickUp(ColorOfObject)), and to drop them (Drop). These behaviors were implemented in AYLLU [17].

In the validation experiments, we asked a person who had not worked with the robot before to be nearby during task execution and to expect to be engaged in an interaction. There is no initial assumption that people will be helpful or motivated to assist the robot: the robot is able to deal with unhelpful or misleading humans by monitoring their presence along with its progress in the task. The following main categories of interactions emerged from the experiments:

- Uninterested: the human was not interested in, did not react to, or did not understand the robot's need for help. As a result, the robot searched for another helper.
- Interested but unhelpful: the human was interested and followed the robot for a while, but then abandoned it. As above, the robot searched for another helper.
- Helpful: the human was interested, followed the robot to the location of the problem, and assisted the robot. In these cases, the robot was able to finish the task.

Fig. 1.3. The human-robot interaction experiments setup: (a) going through a blocked gate; (b) picking up an inaccessible box; (c) visiting a missing target.

We purposefully constrained the environment used in the experiments to encourage human-robot interaction, as follows:

- Traversing blocked gates: the robot's task was to pass through a gate formed by two closely placed colored targets (Figure 1.3(a)), but its path was blocked by a large box. The robot expressed its intentions by executing the Track behavior, making its way around one of the targets. Trying to reach the desired distance and angle to the target while being hindered by the box resulted in a clear manifestation of the direction the robot wanted to pursue, blocked by the obstacle.
- Moving inaccessibly located objects: the robot's task was to pick up a small object which was made inaccessible by being placed in a narrow space between two large boxes (Figure 1.3(b)). The robot expressed its intentions by attempting to execute the PickUp behavior, lowering and opening its gripper and tilting its camera downward while approaching the object, and then moving backwards to avoid the boxes.
- Visiting missing targets: the robot's task was to visit a number of targets in a specific order (Green, Orange, Blue, Yellow, Orange, Green) in an environment where one of the targets had been removed (Figure 1.3(c)). After some time, the robot gave up searching for the missing target and sought out a human helper. The robot expressed its intentions by searching for the target, which appeared as aimless wandering; this behavior did not help the human infer the robot's goal and problem. In this and similar situations, our framework would benefit from more explicit communication.

1.4.2 Discussion

From the experimental results [18] and the interviews and report of the human subject who interacted with the robot, we derived the following conclusions about the robot's social behavior:

- Capturing a human's attention by approaching and then going back and forth is a behavior that is typically easily recognized and interpreted as soliciting help.
- Getting a human to follow by turning around and starting toward the place where the problem occurred (after capturing the human's attention) requires multiple trials before the human follows the robot the entire way. Even if interested and realizing that the robot wants something from him, the human may have trouble understanding that following is the desired behavior. Also, after choosing to follow the robot, if wandering in search of the place with the problem takes too long, the human gives up, not knowing whether the robot still needs him.
- Conveying intentions by repeating a failing behavior in front of a helper is effective for tasks in which the components requiring help are observable to the human (such as the blocked gate). However, if some part of the task is not observable (such as the missing target), the human cannot infer it from the robot's behavior and thus is not able to help (at least not without trial and error).

1.5 Learning from Imitation and Additional Cues

Learning by observation and imitation are especially effective means of human skill acquisition. As skill or task complexity increases, however, teaching typically involves increased concurrent use of multiple instructional modalities, including demonstration, verbal instruction, attentional cues, and gestures. Learners are typically given one or a few demonstrations of the task, followed by a set of supervised practice trials during which the teacher provides feedback cues indicating needed corrections. The teacher may also provide additional demonstrations that can be used for generalization. While most of these teaching tools are overlooked in the majority of robot teaching approaches, considering them collectively improves the imitation learning process considerably.

Toward this end, we developed a method for learning representations of high-level tasks. Specifically, we augmented imitation learning by allowing the demonstrator to employ additional instructive activities (verbal commands and attentional cues) and by refining the learned representations through generalization over multiple learning experiences and through direct feedback from the teacher. In our work, the robot is equipped with a set of skills in the form of behaviors [13, 14]; we focus on a strategy that enables it to use those behaviors to construct a high-level representation of a novel, complex, sequentially structured task. We use learning by experienced demonstrations, in which the robot actively participates in the demonstration provided by the teacher, experiencing the task through its own sensors, an essential characteristic of our approach. We assume that the teacher knows what behaviors the robot has, and also by what means (sensors) the robot can perceive demonstrations. The advantage of putting the robot through the task during the demonstration is that the robot can adjust its behaviors (via their parameters) using the information gathered through its own sensors: the values of all behaviors' parameters are learned directly.
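The help-seeking strategy from the excerpt above (fail cognizantly, attract attention, lead the helper, re-demonstrate) can be summarized as a small state machine. The states and events below are an illustrative abstraction, not the robot's actual controller:

```python
# Toy state machine for "communication by acting": each (state, event)
# pair maps to the robot's next interaction state.
TRANSITIONS = {
    ("execute", "failure"):        "seek_helper",
    ("seek_helper", "attention"):  "lead_to_site",
    ("seek_helper", "ignored"):    "seek_helper",   # try another person
    ("lead_to_site", "abandoned"): "seek_helper",   # helper gave up
    ("lead_to_site", "arrived"):   "demonstrate",   # re-run failed behavior
    ("demonstrate", "helped"):     "execute",       # resume the task
    ("demonstrate", "no_help"):    "seek_helper",
}

def run(events, state="execute"):
    """Replay a sequence of interaction events and return the state trace."""
    trace = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
        trace.append(state)
    return trace
```

The three interaction categories reported in the experiments (uninterested, interested but unhelpful, helpful) correspond to the "ignored", "abandoned", and "helped" event paths through this machine.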


International Journal of Humanoid Robotics | 2008

An Architecture for Understanding Intent Using a Novel Hidden Markov Formulation

Richard Kelley; Christopher King; Alireza Tavakkoli; Mircea Nicolescu; Monica N. Nicolescu; George Bebis

Understanding intent is an important aspect of communication among people and is an essential component of the human cognitive system. This capability is particularly relevant to situations that involve collaboration among multiple agents or detection of situations that can pose a particular threat. In this paper, we propose an approach that allows a physical robot to detect the intent of others based on experience acquired through its own sensory–motor capabilities, then use this experience while taking the perspective of the agent whose intent should be recognized. Our method uses a novel formulation of hidden Markov models (HMMs) designed to model a robot’s experience and interaction with the world when performing various actions. The robot’s capability to observe and analyze the current scene employs a novel vision-based technique for target detection and tracking, using a nonparametric recursive modeling approach. We validate this architecture with a physically embedded robot, detecting the intent of several people performing various activities.


Springer Handbook of Robotics, 2nd Ed. | 2016

Behavior-Based Systems

François Michaud; Monica N. Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, and limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature's ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multi-robot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.
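A common arbitration scheme in behavior-based control, priority-based selection in the spirit of subsumption, can be sketched in a few lines; the behaviors and sensor fields are invented for illustration:

```python
# Priority-based arbitration: the highest-priority behavior whose
# activation condition holds controls the robot for this control cycle.
behaviors = [  # ordered high -> low priority
    ("avoid_obstacle", lambda s: s["obstacle_dist"] < 0.3, "turn_away"),
    ("seek_goal",      lambda s: s["goal_visible"],        "drive_to_goal"),
    ("wander",         lambda s: True,                     "random_walk"),
]

def arbitrate(sensors):
    """Return (behavior name, action) for the first active behavior."""
    for name, active, action in behaviors:
        if active(sensors):
            return name, action
```

Each behavior couples perception directly to action; the safety behavior preempts goal seeking, and the default "wander" guarantees the robot always does something.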


machine vision and applications | 2009

Improving target detection by coupling it with tracking

Junxian Wang; George Bebis; Mircea Nicolescu; Monica N. Nicolescu; Ronald Miller

Target detection and tracking represent two fundamental steps in automatic video-based surveillance systems where the goal is to provide intelligent recognition capabilities by analyzing target behavior. This paper presents a framework for video-based surveillance where target detection is integrated with tracking to improve detection results. In contrast to methods that apply target detection and tracking sequentially and independently from each other, we feed the results of tracking back to the detection stage in order to adaptively optimize the detection threshold and improve system robustness. First, the initial target locations are extracted using background subtraction. To model the background, we employ Support Vector Regression (SVR) which is updated over time using an on-line learning scheme. Target detection is performed by thresholding the outputs of the SVR model. Tracking uses shape projection histograms to iteratively localize the targets and improve the confidence level of detection. For verification, additional information based on size, color and motion information is utilized. Feeding back the results of tracking to the detection stage restricts the range of detection threshold values, suppresses false alarms due to noise, and allows to continuously detect small targets as well as targets undergoing perspective projection distortions. We have validated the proposed framework in two different application scenarios, one detecting vehicles at a traffic intersection using visible video and the other detecting pedestrians at a university campus walkway using thermal video. Our experimental results and comparisons with frame-based detection and kernel-based tracking methods illustrate the robustness of our approach.
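The feedback idea, using tracking results to adapt the detection threshold, can be reduced to its simplest form: relax the threshold where a target was recently tracked, and keep it strict elsewhere to suppress noise. The threshold values are illustrative, not from the paper:

```python
# Toy tracking-to-detection feedback: a weak detector response is still
# accepted at locations where the tracker expects the target, while the
# strict threshold elsewhere suppresses false alarms from noise.
def detect(response, tracked_here, strict=0.8, relaxed=0.4):
    """Classify one location's detector response as target / not target."""
    threshold = relaxed if tracked_here else strict
    return response > threshold
```

This is why small or perspective-distorted targets, whose detector response drops, can still be detected continuously once the tracker has locked on.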


computational intelligence in bioinformatics and computational biology | 2007

Multiple Sequence Alignment using Fuzzy Logic

Sara Nasser; Gregory Vert; Monica N. Nicolescu; Alison E. Murray

DNA matching is a crucial step in sequence alignment. Since sequence alignment is an approximate matching process, there is a need for good approximate algorithms. Matching in sequence alignment generally means finding longest common subsequences. However, finding a longest common subsequence may not be the best solution for either a database match or an assembly. An optimal alignment of subsequences is based on several factors, such as the quality of bases, the length of overlap, etc. Factors such as quality indicate whether the data is an actual read or an experimental error. Fuzzy logic allows tolerance of inexactness or errors in subsequence matching. We propose fuzzy logic for approximate matching of subsequences. Fuzzy characteristic functions are derived for the parameters that influence a match. We develop a prototype for a fuzzy assembler. The assembler is designed to work with low-quality data, which is generally rejected by most existing techniques. We test the assembler on sequences from two genome projects, namely Drosophila melanogaster and Arabidopsis thaliana, and compare the results with other assemblers. The fuzzy assembler successfully assembled sequences and performed similarly to, and in some cases better than, existing techniques.
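The fuzzy-matching idea can be sketched with three membership functions (base quality, overlap length, identity) combined by a fuzzy AND, here the minimum. The saturation points are invented for illustration, not the paper's actual characteristic functions:

```python
# Fuzzy scoring of a candidate overlap between two reads: each factor
# gets a membership value in [0, 1]; a fuzzy AND (minimum) combines them.

def mu_quality(q):
    """Membership for base quality (Phred-like), saturating at 40."""
    return min(q / 40.0, 1.0)

def mu_overlap(length, full=50):
    """Membership for overlap length: longer overlaps are better."""
    return min(length / full, 1.0)

def mu_identity(matches, length):
    """Membership for identity: fraction of agreeing bases."""
    return matches / length

def match_score(quality, length, matches):
    """Fuzzy AND of the three factors: the weakest factor dominates."""
    return min(mu_quality(quality), mu_overlap(length),
               mu_identity(matches, length))
```

Unlike a hard longest-common-subsequence criterion, a graded score like this lets a low-quality but long, high-identity overlap still be accepted, which is what makes the assembler tolerant of data other tools reject.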

Collaboration


Dive into Monica N. Nicolescu's collaboration.

Top Co-Authors

Maja J. Matarić

University of Southern California
