Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nikolaos Mavridis is active.

Publication


Featured research published by Nikolaos Mavridis.


Systems, Man, and Cybernetics | 2004

Mental imagery for a conversational robot

Deb Roy; Kai-yuh Hsiao; Nikolaos Mavridis

To engage in fluid face-to-face spoken conversations with people, robots must have ways to connect what they say to what they see. A critical aspect of how language connects to vision is that language encodes points of view. The meaning of "my left" and "your left" differs due to an implied shift of visual perspective. The connection of language to vision also relies on object permanence: we can talk about things that are not in view. For a robot to participate in situated spoken dialog, it must have the capacity to imagine shifts of perspective, and it must maintain object permanence. We present a set of representations and procedures that enable a robotic manipulator to maintain a mental model of its physical environment by coupling active vision to physical simulation. Within this model, imagined views can be generated from arbitrary perspectives, providing the basis for situated language comprehension and production. An initial application of mental imagery to spatial language understanding for an interactive robot is described.
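The perspective-shift idea at the heart of this abstract can be made concrete with a short numerical sketch. The poses and names below are our own illustrative assumptions, not the authors' code: objects are tracked in a world-fixed frame and re-expressed in any observer's egocentric frame, so "my left" and "your left" resolve to different answers.

    # Minimal sketch (assumed setup): world-frame objects viewed from two poses.
    import numpy as np

    def pose_matrix(position, yaw):
        """4x4 rigid transform for an observer at `position` facing `yaw` radians."""
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
        T[:3, 3] = position
        return T

    def in_observer_frame(world_point, observer_pose):
        """Express a world-frame point in the observer's egocentric frame."""
        p = np.append(world_point, 1.0)
        return (np.linalg.inv(observer_pose) @ p)[:3]

    # The ball has one world position, but "left" depends on who is looking.
    ball = np.array([1.0, 0.5, 0.0])
    robot = pose_matrix([0, 0, 0], yaw=0.0)    # facing +x; +y is its left
    user = pose_matrix([2, 0, 0], yaw=np.pi)   # facing the robot

    for name, pose in [("robot", robot), ("user", user)]:
        x, y, _ = in_observer_frame(ball, pose)
        side = "left" if y > 0 else "right"
        print(f"{name}: ball is to the {side}")  # robot: left, user: right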


EELC'06 Proceedings of the Third International Conference on Emergence and Evolution of Linguistic Communication: Symbol Grounding and Beyond | 2006

The human speechome project

Deb Roy; Rupal Patel; Philip DeCamp; Rony Kubat; Michael Fleischman; Brandon Cain Roy; Nikolaos Mavridis; Stefanie Tellex; Alexia Salata; Jethran Guinness; Michael Levit; Peter Gorniak

The Human Speechome Project is an effort to observe and computationally model the longitudinal course of language development for a single child at an unprecedented scale. We are collecting audio and video recordings of the first three years of one child's life, in its near entirety, as it unfolds in the child's home. A network of ceiling-mounted video cameras and microphones is generating approximately 300 gigabytes of observational data each day from the home. One of the world's largest single-volume disk arrays is under construction to house the approximately 400,000 hours of audio and video recordings that will accumulate over the three-year study. To analyze the massive data set, we are developing new data mining technologies to help human analysts rapidly annotate and transcribe recordings using semi-automatic methods, and to detect and visualize salient patterns of behavior and interaction. To make sense of large-scale patterns that span months or even years of observations, we are developing computational models of language acquisition that are able to learn from the child's experiential record. By creating and evaluating machine learning systems that step into the shoes of the child and sequentially process long stretches of perceptual experience, we will investigate possible language learning strategies used by children, with an emphasis on early word learning.
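As a quick back-of-envelope check (ours, not the paper's), the figures quoted in the abstract are mutually consistent:

    # Sanity-check the abstract's numbers: 300 GB/day, 400,000 hours, 3 years.
    GB_PER_DAY = 300
    DAYS = 3 * 365                        # three-year study

    total_tb = GB_PER_DAY * DAYS / 1000   # ~328 TB of raw recordings
    hours_per_channel = DAYS * 24         # ~26,280 hours per continuous channel
    channels = 400_000 / hours_per_channel

    print(f"storage: ~{total_tb:.0f} TB")
    print(f"~{channels:.0f} simultaneous audio/video channels")  # ~15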


Intelligent Robots and Systems | 2006

Grounded Situation Models for Robots: Where words and percepts meet

Nikolaos Mavridis; Deb Roy

Our long-term objective is to develop robots that engage in natural language-mediated cooperative tasks with humans. To support this goal, we are developing an amodal representation and associated processes which we call a grounded situation model (GSM). We are also developing a modular architecture in which the GSM resides in a centrally located module, around which there are language-, perception-, and action-related modules. The GSM acts as a sensor-updated structured blackboard that serves as a workspace with contents similar to a theatrical stage in the robot's mind, which might be filled in with present, past, or imagined situations. Two main desiderata drive the design of the GSM: first, parsing situations into ontological types and relations that reflect human language semantics, and second, allowing bidirectional translation between sensory-derived data/expectations and linguistic descriptions. We present an implemented system that supports a range of conversational and assistive behaviors by a manipulator robot. The robot updates beliefs (held in the GSM) about its physical environment, the human user, and itself, based on a mixture of linguistic, visual, and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction. Most importantly, a novel contribution of our approach is the robot's ability to seamlessly integrate both language- and sensor-derived information about the situation: for example, the system can acquire parts of situations either by seeing them or by imagining them through descriptions given by the user, such as "There is a red ball at the left." These situations can later be used to create mental imagery and sensory expectations, thus enabling the aforementioned bidirectionality.
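A minimal sketch of the blackboard idea follows, with a deliberately simplified object schema of our own (the real GSM is far richer): beliefs can be filled in either from vision or from language and then queried uniformly, giving the bidirectionality described above.

    # Toy GSM-style blackboard (assumed structure, not the authors' code).
    from dataclasses import dataclass, field

    @dataclass
    class ObjectBelief:
        color: str | None = None
        position: str | None = None                # e.g. "left", "right"
        source: set = field(default_factory=set)   # {"vision", "language"}

    class GroundedSituationModel:
        def __init__(self):
            self.objects: dict[str, ObjectBelief] = {}

        def update_from_vision(self, name, color, position):
            b = self.objects.setdefault(name, ObjectBelief())
            b.color, b.position = color, position
            b.source.add("vision")

        def update_from_language(self, name, color=None, position=None):
            # "There is a red ball at the left" creates an *imagined* object.
            b = self.objects.setdefault(name, ObjectBelief())
            b.color = color or b.color
            b.position = position or b.position
            b.source.add("language")

        def describe(self, name):
            b = self.objects[name]
            return f"a {b.color} object at the {b.position} (known via {sorted(b.source)})"

    gsm = GroundedSituationModel()
    gsm.update_from_language("ball", color="red", position="left")  # imagined
    gsm.update_from_vision("ball", color="red", position="left")    # confirmed
    print(gsm.describe("ball"))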


Intelligent Robots and Systems | 2003

Coupling perception and simulation: steps towards conversational robotics

Kai-yuh Hsiao; Nikolaos Mavridis; Deb Roy

Human cognition makes extensive use of visualization and imagination. As a first step towards giving a robot similar abilities, we have built a robotic system that uses a perceptually-coupled physical simulator to produce an internal world model of the robot's environment. Real-time perceptual coupling ensures that the model is constantly kept in synchronization with the physical environment as the robot moves and obtains new sense data. This model allows the robot to be aware of objects no longer in its field of view (a form of object permanence), as well as to visualize its environment through the eyes of the user by enabling virtual shifts in point of view, using synthetic vision operating within the simulator. This architecture provides a basis for our long-term goals of developing conversational robots that can ground the meaning of spoken language in terms of sensorimotor representations.
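The object-permanence property described above reduces to a simple update rule, sketched here under our own assumptions (the paper's simulator is a full physical simulation, not this toy): visible objects overwrite the model, and everything else is simply retained.

    # Toy perceptually-coupled world model (illustrative, not the paper's code).
    class WorldModel:
        def __init__(self):
            self.objects = {}   # name -> (x, y, z), last-known position

        def integrate_percepts(self, percepts):
            """percepts: dict of currently visible objects and their positions."""
            for name, pos in percepts.items():
                self.objects[name] = pos   # perception overrides the model
            # Anything absent from `percepts` is retained: object permanence.

    model = WorldModel()
    model.integrate_percepts({"ball": (1.0, 0.5, 0.0), "cup": (0.8, -0.2, 0.0)})
    model.integrate_percepts({"cup": (0.9, -0.1, 0.0)})  # ball now out of view
    print(model.objects["ball"])  # (1.0, 0.5, 0.0) -- still remembered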


North American Chapter of the Association for Computational Linguistics | 2003

Conversational robots: building blocks for grounding word meaning

Deb Roy; Kai-yuh Hsiao; Nikolaos Mavridis

How can we build robots that engage in fluid spoken conversations with people, moving beyond canned responses to words and towards actual understanding? As a step towards addressing this question, we introduce a robotic architecture that provides a basis for grounding word meanings. The architecture provides perceptual, procedural, and affordance representations for grounding words. A perceptually-coupled on-line simulator enables sensory-motor representations that can shift points of view. Taken together, we show that this architecture offers a rich set of data structures and procedures that form the foundations for grounding the meaning of certain classes of words.
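The core contrast with canned responses can be illustrated with a tiny grounding lexicon of our own invention (the paper's representations are considerably more elaborate): each word is tied to a test over sensory or affordance features rather than to a fixed textual reply.

    # Toy grounding lexicon (hypothetical feature names, illustrative only).
    LEXICON = {
        "red": lambda p: p["hue"] >= 0.95 or p["hue"] <= 0.05,  # perceptual test
        "liftable": lambda p: p["weight_kg"] < 1.0,             # affordance test
    }

    percept = {"hue": 0.98, "weight_kg": 0.3}
    # Words "mean" whatever tests they pass against the current percept.
    print([w for w, test in LEXICON.items() if test(percept)])  # ['red', 'liftable']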


Advanced Information Networking and Applications | 2017

Microservice-Based IoT for Smart Buildings

Kevin Khanda; Dilshat Salikhov; Kamill Marselevich Gusmanov; Manuel Mazzara; Nikolaos Mavridis

A large percentage of buildings, whether domestic or special-purpose, is expected to become increasingly smart in the future, due to the immense benefits in terms of energy saving, safety, flexibility, and comfort that relevant new technologies offer. However, no clearly dominant standards currently exist at the hardware, software, or platform level. Such standards would ideally fulfill a number of important desiderata, which are touched upon in this paper. Here, we present a prototype platform for supporting multiple concurrent applications for smart buildings, which utilizes an advanced sensor network as well as a distributed microservices architecture, centrally featuring the Jolie programming language. The architecture and benefits of our system are discussed, as well as a prototype containing a number of nodes and a user interface, deployed in a real-world academic building environment. Our results illustrate the promising nature of our approach, as well as open avenues for future work towards its wider and larger-scale applicability.
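The pattern the paper describes, one small service per sensor node reachable over the network, can be sketched as follows. The actual system is written in Jolie; this Python stand-in with a made-up /temperature endpoint only illustrates the microservice idea, not the authors' implementation.

    # Toy sensor microservice: one node exposing one reading over HTTP.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def read_temperature():
        return 21.5  # placeholder: a real node would query its sensor here

    class SensorService(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/temperature":
                body = json.dumps({"celsius": read_temperature()}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each node runs its own small service; applications compose them.
        HTTPServer(("", 8080), SensorService).serve_forever()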


Intelligent Robots and Systems | 2015

VISPEC: A graphical tool for elicitation of MTL requirements

Bardh Hoxha; Nikolaos Mavridis; Georgios E. Fainekos

One of the main barriers preventing widespread use of formal methods is the elicitation of formal specifications. Formal specifications facilitate the testing and verification process for safety-critical robotic systems. However, handling the intricacies of formal languages is difficult and requires a high level of expertise in formal logics that many system developers do not have. In this work, we present a graphical tool designed for the development and visualization of formal specifications by people who do not have training in formal logic. The tool enables users to develop specifications using a graphical formalism, which is then automatically translated to Metric Temporal Logic (MTL). In order to evaluate the effectiveness of our tool, we have also designed and conducted a usability study with cohorts from the academic student community and industry. Our results indicate that both groups were able to define formal requirements with high levels of accuracy. Finally, we present applications of our tool for defining specifications for robotic surgery and for safe autonomous quadcopter operation.
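To give a flavor of the MTL formulas the tool targets, here is a small evaluator of our own for one common requirement shape, the bounded response pattern G (request -> F_[0,b] grant), i.e. "every request is granted within b seconds." This is an illustrative sketch of MTL semantics over sampled traces, not part of VISPEC.

    # Toy MTL check: G (request -> F_[0,bound] grant) over (time, state) samples.
    def eventually_within(trace, prop, t, lo, hi):
        """F_[lo,hi] prop holds at time t if prop holds at a sample in [t+lo, t+hi]."""
        return any(prop(state) for (ts, state) in trace if t + lo <= ts <= t + hi)

    def always_response(trace, request, grant, bound):
        """Every time `request` holds, `grant` must hold within `bound` seconds."""
        return all(
            eventually_within(trace, grant, ts, 0, bound)
            for (ts, state) in trace if request(state)
        )

    trace = [(0, {"req": True, "grant": False}),
             (3, {"req": False, "grant": True}),
             (9, {"req": True, "grant": False}),
             (20, {"req": False, "grant": True})]

    ok = always_response(trace, lambda s: s["req"], lambda s: s["grant"], bound=5)
    print(ok)  # False: the request at t=9 is not granted until t=20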


Information Sciences | 2015

QTC3D: extending the qualitative trajectory calculus to three dimensions

Nikolaos Mavridis; Nicola Bellotto; Konstantinos Iliopoulos; Nico Van de Weghe

Spatial interactions between agents (humans, animals, or machines) carry information of high value to human or electronic observers. However, not all the information contained in a pair of continuous trajectories is important and thus the need for qualitative descriptions of interaction trajectories arises. The Qualitative Trajectory Calculus (QTC) (Van de Weghe, 2004) is a promising development towards this goal. Numerous variants of QTC have been proposed in the past and QTC has been applied towards analyzing various interaction domains. However, an inherent limitation of those QTC variations that deal with lateral movements is that they are limited to two-dimensional motion; therefore, complex three-dimensional interactions, such as those occurring between flying planes or birds, cannot be captured. Towards that purpose, in this paper QTC3D is presented: a novel qualitative trajectory calculus that can deal with full three-dimensional interactions. QTC3D is based on transformations of the Frenet–Serret frames accompanying the trajectories of the moving objects. Apart from the theoretical exposition, including definition and properties, as well as computational aspects, we also present an application of QTC3D towards modeling bird flight. Thus, the power of QTC is now extended to the full dimensionality of physical space, enabling succinct yet rich representations of spatial interactions between agents.
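Since QTC3D is built on Frenet-Serret frames, a short numerical sketch may help; the discretization below is our own assumption, not the paper's construction. At each sample of a 3D trajectory, the tangent T, normal N, and binormal B form a moving frame in which qualitative relations between two trajectories can be expressed.

    # Estimate Frenet-Serret frames from a sampled 3D trajectory (illustrative).
    import numpy as np

    def frenet_frames(points):
        """points: (n, 3) array of trajectory samples; returns unit T, N, B."""
        d1 = np.gradient(points, axis=0)   # velocity estimate
        T = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
        dT = np.gradient(T, axis=0)
        N = dT / np.linalg.norm(dT, axis=1, keepdims=True)
        B = np.cross(T, N)                 # completes the right-handed frame
        return T, N, B

    # A helix, whose Frenet frame is known in closed form: a handy sanity check.
    t = np.linspace(0, 4 * np.pi, 200)
    helix = np.column_stack([np.cos(t), np.sin(t), 0.5 * t])
    T, N, B = frenet_frames(helix)
    print(np.round(N[100], 2))  # the normal points toward the helix axis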


Intelligent Robots and Systems | 2006

Grounded Situation Models: Where Words and Percepts Meet

Nikolaos Mavridis; Deb Roy

This video illustrates some of the behavioral capabilities achieved using an implementation of the proposed GSM architecture on the conversational robot Ripley, as introduced in the video and further described in the accompanying paper. The situation model acts as a theatrical stage in the robot's mind, filled in with present, past, or imagined situations. A special triple-layered design enables bidirectional translation between sensory data and linguistic descriptions. In the video, you will see the robot: 1) answering questions, serving motor commands, and verbalizing uncertainty: "What color are the objects?"; 2) imagining objects when informed about their existence through language, and talking about them without having seen them: "Imagine a blue object on the left!" Later, you will see the robot matching its sensory expectations with existing objects, and integrating sensory-derived information about the objects with language-derived information; 3) remembering past events, resolving temporal referents, and answering questions about the past: "How big was the blue object when your head started moving?" More videos of the system are available at the authors' site.


Archive | 2005

Grounded Situation Models for Robots: Bridging Language, Perception, and Action

Nikolaos Mavridis; Deb Roy

Collaboration


Dive into Nikolaos Mavridis's collaborations.

Top Co-Authors

Deb Roy (Massachusetts Institute of Technology)
Kai-yuh Hsiao (Massachusetts Institute of Technology)
Alexia Salata (Massachusetts Institute of Technology)
Brandon Cain Roy (Massachusetts Institute of Technology)
Jethran Guinness (Massachusetts Institute of Technology)
Michael Fleischman (Massachusetts Institute of Technology)
Peter Gorniak (Massachusetts Institute of Technology)
Philip DeCamp (Massachusetts Institute of Technology)
Rony Kubat (Massachusetts Institute of Technology)