Publication


Featured research published by Muhammad Attamimi.


Intelligent Robots and Systems | 2010

Real-time 3D visual sensor for robust object recognition

Muhammad Attamimi; Akira Mizutani; Tomoaki Nakamura; Takayuki Nagai; Kotaro Funakoshi; Mikio Nakano

This paper presents a novel 3D measurement system that yields both depth and color information in real time by calibrating a time-of-flight camera with two CCD cameras. The problem of occlusions is solved by the proposed fast occluded-pixel detection algorithm. Because the system uses two CCD cameras, color information missing for pixels occluded in one camera is recovered from the other. We also propose a robust object recognition method that uses the 3D visual sensor. Multiple cues, such as color, texture, and 3D (depth) information, are integrated in order to recognize various types of objects under varying lighting conditions. We implemented the system on our autonomous robot and had the robot perform recognition tasks (object learning, detection, and recognition) in various environments. The results revealed that the proposed recognition system performs far better than our previous system, which was based only on color and texture information.
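
The recognition step integrates color, texture, and depth cues. Below is a minimal sketch of that idea as late fusion of per-cue similarity scores; the feature extractors, weights, and database layout are hypothetical placeholders, not the pipeline from the paper:

```python
# Minimal sketch of multi-cue object recognition by late score fusion.
# Cue features and weights are hypothetical, not the paper's pipeline.
import numpy as np

def recognize(query_cues, model_db, weights=(0.4, 0.3, 0.3)):
    """Score each object model by a weighted sum of per-cue similarities.

    query_cues / model_db values: dicts with 'color', 'texture', 'depth'
    feature vectors (assumed to be histogram-like 1-D arrays).
    """
    def cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    scores = {}
    for name, cues in model_db.items():
        scores[name] = sum(w * cosine(query_cues[c], cues[c])
                           for w, c in zip(weights,
                                           ("color", "texture", "depth")))
    # Return the best-matching object and its fused score.
    return max(scores.items(), key=lambda kv: kv[1])
```

Since depth is largely invariant to illumination, raising its weight is one plausible way to keep recognition stable under varying lighting, which is the property the abstract attributes to the multi-cue approach.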


International Conference on Robotics and Automation | 2010

Learning novel objects using out-of-vocabulary word segmentation and object extraction for home assistant robots

Muhammad Attamimi; Akira Mizutani; Tomoaki Nakamura; Komei Sugiura; Takayuki Nagai; Naoto Iwahashi; Hiroyuki Okada; Takashi Omori

This paper presents a method for learning novel objects from audio-visual input. Objects are learned using out-of-vocabulary word segmentation and object extraction. The latter half of this paper is devoted to evaluations. We propose the use of a task adopted from the RoboCup@Home league as a standard evaluation for real-world applications. We implemented the proposed method on a real humanoid robot and evaluated it through a task called “Supermarket”. The results reveal that our integrated system works well in a real application; in fact, our robot surpassed the highest score obtained in the RoboCup@Home 2009 competition.


Intelligent Robots and Systems | 2014

Physical embodied communication between robots and children: An approach for relationship building by holding hands

Chie Hieida; Kasumi Abe; Muhammad Attamimi; Takayuki Shimotomai; Takayuki Nagai; Takashi Omori

The influence of holding hands on the relationship-building process between children and robots is investigated in this study. In particular, if a child rebuffs the robot partner at the first meeting, it is difficult for the child to open up, which significantly reduces the likelihood of the child forming a friendship with the robot. Thus, the robot's initial approach to the child in the early stage of relationship building must be appropriate. We hypothesize that physical embodied communication, such as walking hand in hand, improves the relationship between children and robots. A holding-hands system was implemented on a real robot, and an experiment was conducted at a kindergarten to validate our hypothesis. The results strongly support our hypothesis.


Human-Agent Interaction | 2014

Toward playmate robots that can play with children considering personality

Kasumi Abe; Chie Hieida; Muhammad Attamimi; Takayuki Nagai; Takayuki Shimotomai; Takashi Omori; Natsuki Oka

Designing robotic playmates for introverted children is difficult; therefore, we examined how a robot should play with such shy children. In this study, we hypothesized an effective play strategy for building a good relationship with shy children and tested it in an experiment with 5- to 6-year-old children and a humanoid robot teleoperated by a preschool teacher. Based on the results, we developed a valid play strategy for shy children.


Advanced Robotics | 2016

Learning word meanings and grammar for verbalization of daily life activities using multilayered multimodal latent Dirichlet allocation and Bayesian hidden Markov models

Muhammad Attamimi; Yuji Ando; Tomoaki Nakamura; Takayuki Nagai; Daichi Mochihashi; Ichiro Kobayashi; Hideki Asoh

Intelligent systems need to understand and respond to human words to interact with humans in a natural way. Several studies have attempted to realize these abilities by investigating the symbol grounding problem. For example, we proposed multilayered multimodal latent Dirichlet allocation (mMLDA) to enable the formation of various concepts and inference using grounded concepts. We previously reported on the issue of connecting words to various hierarchical concepts and proposed a simple preliminary algorithm for generating sentences. This paper proposes a novel method that enables a sensing system to verbalize an everyday scene it observes. The method uses mMLDA and Bayesian hidden Markov models (BHMMs), and the proposed algorithm improves the word inference of our previous work. The advantage of our approach is that grammar learning based on the BHMM not only boosts concept selection results but also enables our method to process function words. The proposed verbalization algorithm produces results that are far superior to those of previous methods. Finally, we developed a system to obtain multimodal data from everyday human activities, and we evaluate language learning and sentence generation as a complete process in this realistic setting. The results demonstrate the effectiveness of our method.
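
As a rough illustration of the grammar-learning component, the sketch below fits an HMM over word classes. The paper uses a Bayesian HMM; recent versions of hmmlearn only offer EM-trained models such as CategoricalHMM, so this is a simplified stand-in, and the sentences are toy data:

```python
# Simplified stand-in for grammar learning: an HMM over word classes.
# The paper's BHMM is Bayesian; hmmlearn's CategoricalHMM trains with EM.
import numpy as np
from hmmlearn import hmm

sentences = [["robot", "grasps", "cup"], ["child", "holds", "ball"]]
vocab = sorted({w for s in sentences for w in s})
to_id = {w: i for i, w in enumerate(vocab)}

# Integer-encode all sentences into one array plus per-sentence lengths.
X = np.concatenate([[to_id[w] for w in s] for s in sentences]).reshape(-1, 1)
lengths = [len(s) for s in sentences]

model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(X, lengths)

# Latent class sequence: induced part-of-speech-like tags per word.
print(model.predict(X, lengths).tolist())
```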


Human-Agent Interaction | 2016

Attention Estimation for Child-Robot Interaction

Muhammad Attamimi; Masahiro Miyata; Tetsuji Yamada; Takashi Omori; Ryoma Hida

In this paper, we present a method of estimating a child's attention, one of the more important human mental states, in a free-play scenario of child-robot interaction. First, we developed a system that senses a child's verbal and non-verbal multimodal signals, such as gaze, facial expression, and proximity. The observed information was then used to train a Support Vector Machine (SVM) to estimate the child's attention level. We investigated the accuracy of the proposed method by comparing it with a human judge's estimations and obtained promising results, which we discuss here.
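
A minimal sketch of the estimation step with scikit-learn, assuming the multimodal signals have already been reduced to per-frame feature vectors; the feature set, label scheme, and data below are hypothetical:

```python
# Minimal sketch of SVM-based attention-level estimation on hypothetical
# per-frame multimodal features (gaze, expression, proximity, ...).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # e.g. gaze angle, smile score, distance
y = rng.integers(0, 3, size=200)   # attention level labeled by a human judge

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy vs. judge labels
```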


Intelligent Robots and Systems | 2014

Integration of various concepts and grounding of word meanings using multi-layered multimodal LDA for sentence generation

Muhammad Attamimi; Muhammad Fadlil; Kasumi Abe; Tomoaki Nakamura; Kotaro Funakoshi; Takayuki Nagai

In the field of intelligent robotics, object handling by robots can be achieved by capturing not only the object concept through object categorization, but also other concepts (e.g., the movement performed while using the object) and the relationships between concepts. Moreover, capturing the concepts of places and people is also necessary for the robot to gain real-world understanding. In this study, we propose multi-layered multimodal latent Dirichlet allocation (mMLDA) to realize the formation of various concepts, and the integration of those concepts, by robots. Because concept formation and integration are conducted jointly by mMLDA, the formation of each concept affects the others, resulting in a more appropriate formation. Another issue addressed in this paper is language acquisition by robots. We propose a method to infer which words are originally connected to a concept using the mutual information between words and concepts. Moreover, the order of concepts in teaching utterances can be learned using a simple Markov model, which corresponds to a grammar. This grammar can be used to generate sentences that represent the observed information. We report the results of experiments evaluating the effectiveness of the proposed method.
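
Two of the steps named above are easy to sketch: scoring word-concept association with (pointwise) mutual information, and learning concept order with a simple Markov model. The co-occurrence counts and concept labels below are hypothetical toy data, not the paper's:

```python
# Sketch: (1) word-concept association via pointwise mutual information,
# (2) bigram "grammar" over concept order. Toy data, not the paper's.
import math
from collections import Counter

# Hypothetical (word, concept) co-occurrences from teaching utterances.
pairs = [("cup", "OBJECT"), ("grasp", "MOTION"), ("kitchen", "PLACE"),
         ("cup", "OBJECT"), ("bring", "MOTION"), ("cup", "PLACE")]
n = len(pairs)
pw, pc, pwc = Counter(), Counter(), Counter(pairs)
for w, c in pairs:
    pw[w] += 1
    pc[c] += 1

def pmi(w, c):
    # Pointwise mutual information of an observed word-concept pair.
    return math.log((pwc[(w, c)] / n) / ((pw[w] / n) * (pc[c] / n)))

print(pmi("cup", "OBJECT"), pmi("cup", "PLACE"))  # OBJECT scores higher

# Bigram transitions over concept order (the simple Markov model).
orders = [["OBJECT", "MOTION"], ["PLACE", "OBJECT", "MOTION"]]
trans = Counter((a, b) for s in orders for a, b in zip(s, s[1:]))
print(trans.most_common())
```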


Intelligent Robots and Systems | 2012

A planning method for efficient mobile manipulation considering ambiguity

Muhammad Attamimi; Keisuke Ito; Tomoaki Nakamura; Takayuki Nagai

In this study, we propose a system that enables robots to navigate more efficiently to target objects in order to carry out mobile manipulation tasks. Because a robot arm has limited reach, a target object cannot be grasped if the distance to the object exceeds that reach. At the same time, navigation errors may accumulate as the robot moves around in an environment. To mitigate these problems, we propose the use of maps that express arm reachability and navigation reachability. By integrating these two maps, the robot can determine places that minimize the probability of navigation error while keeping any remaining error within the tolerance of the arm's reach. We also consider the fact that the target object's position is not always known. To cope with this, we introduce an object existence map that represents the ambiguity of the target object's position based on past observations. If the ambiguity is large, the robot can select a place from which it can search for the object with a wide angle of view. Efficient object searching is achieved by switching the task objective between searching and grasping as needed. We conducted mobile manipulation experiments to evaluate the proposed method, and the results showed that the proposed system performs such tasks effectively.
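
A minimal sketch of the map-integration idea, assuming all three maps are probability grids over candidate base positions; the random grids below stand in for the models the paper actually builds:

```python
# Minimal sketch: pick a base pose by combining arm-reachability,
# navigation-reachability, and object-existence grids element-wise.
# Random placeholder maps, not the paper's probabilistic models.
import numpy as np

rng = np.random.default_rng(1)
arm_reach = rng.random((50, 50))   # P(object graspable | base at cell)
nav_reach = rng.random((50, 50))   # P(reaching cell without nav error)
obj_exist = rng.random((50, 50))   # P(target object visible from cell)
obj_exist /= obj_exist.sum()       # normalize to a distribution

score = arm_reach * nav_reach * obj_exist
best = np.unravel_index(np.argmax(score), score.shape)
print("best base cell:", best)
```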


Human-Robot Interaction | 2016

Modeling of Honest Signals for Human Robot Interaction

Muhammad Attamimi; Yusuke Katakami; Kasumi Abe; Takayuki Nagai; Tomoaki Nakamura

Recent studies have shown that human beings unconsciously use signals that represent their thoughts and/or intentions when communicating with each other. These signals are known as “honest signals.” This study involves the use of a sociometer to capture multimodal data resulting from interaction between humans. These data are then used to model the interaction using a multimodal hierarchical Dirichlet process hidden Markov model, which is then implemented on a robot. The model enables robots to generate “honest signals” and to interact in a natural manner.
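
hmmlearn has no HDP-HMM, so the sketch below substitutes a fixed-size Gaussian HMM trained with EM on synthetic sociometer-like features; it only illustrates the modeling-and-generation step, not the paper's nonparametric multimodal model:

```python
# Simplified stand-in for the interaction model: a fixed-size Gaussian HMM
# (the paper's HDP-HMM infers the state count). Synthetic feature data.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Hypothetical per-step features: speech energy, body motion, pitch variance.
X = rng.normal(size=(300, 3))

model = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(X)
states = model.predict(X)     # segment the interaction into signal modes
sample, _ = model.sample(20)  # generate an "honest-signal"-like sequence
print(states[:10], sample.shape)
```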


Empirical Methods in Natural Language Processing | 2015

Learning Word Meanings and Grammar for Describing Everyday Activities in Smart Environments

Muhammad Attamimi; Yuji Ando; Tomoaki Nakamura; Takayuki Nagai; Daichi Mochihashi; Ichiro Kobayashi; Hideki Asoh

If intelligent systems are to interact with humans in a natural manner, the ability to describe daily life activities is important. To achieve this, sensing human activities by capturing multimodal information is necessary. In this study, we consider a smart environment for sensing activities in realistic scenarios. We then propose a system that generates sentences from observed multimodal information in a bottom-up manner using multilayered multimodal latent Dirichlet allocation and Bayesian hidden Markov models. We evaluate grammar learning and sentence generation as a complete process in a realistic setting. The experimental results reveal the effectiveness of the proposed method.

Collaboration


Dive into Muhammad Attamimi's collaborations.

Top Co-Authors

Takayuki Nagai (University of Electro-Communications)
Tomoaki Nakamura (University of Electro-Communications)
Kasumi Abe (University of Electro-Communications)
Komei Sugiura (National Institute of Information and Communications Technology)
Naoto Iwahashi (National Institute of Information and Communications Technology)
Chie Hieida (University of Electro-Communications)
Daichi Mochihashi (Nippon Telegraph and Telephone)
Hideki Asoh (National Institute of Advanced Industrial Science and Technology)