Motoyuki Ozeki
Kyoto Institute of Technology
Publications
Featured research published by Motoyuki Ozeki.
IEEE International Conference on Pervasive Computing and Communications | 2010
Zhiwen Yu; Zhiyong Yu; Hideki Aoyama; Motoyuki Ozeki; Yuichi Nakamura
Human interaction is one of the most important characteristics of group social dynamics in meetings. In this paper, we propose an approach for the capture, recognition, and visualization of human interactions. Unlike physical interactions (e.g., turn-taking and addressing), the human interactions considered here carry semantics, i.e., a user's intention or attitude toward a topic. We adopt a collaborative approach to capturing interactions, employing multiple sensors such as video cameras, microphones, and motion sensors. A multimodal method is proposed for interaction recognition based on a variety of contexts, including head gestures, attention from others, speech tone, speaking time, interaction occasion (spontaneous or reactive), and information about the previous interaction. A support vector machine (SVM) classifier is used to classify human interactions based on these features. A graphical user interface called MMBrowser is presented for interaction visualization. Experimental results show the effectiveness of our approach.
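As a rough illustration of the classification step, the sketch below trains an SVM on numerically encoded context features. The feature encoding, interaction labels, and library choice (scikit-learn) are assumptions made for this example, not details from the paper.

# Sketch of an SVM-based interaction classifier (assumed feature
# encoding; the paper's preprocessing and label set may differ).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [head_gesture, attention_from_others, speech_tone,
#            speaking_time_sec, is_spontaneous, prev_interaction]
X_train = np.array([
    [1, 3, 0.7, 4.2, 1, 0],   # e.g., nod with high attention
    [0, 1, 0.2, 1.0, 0, 2],   # e.g., no gesture, reactive occasion
])
y_train = ["propose", "acknowledge"]  # hypothetical interaction labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Classify a newly observed interaction from its context features.
x_new = np.array([[1, 2, 0.6, 3.5, 1, 0]])
print(clf.predict(x_new))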
International Conference on Multimodal Interfaces | 2007
Zhiwen Yu; Motoyuki Ozeki; Yohsuke Fujii; Yuichi Nakamura
In this paper, we describe the enabling technologies for developing a smart meeting system based on a three-layered generic model. From the physical level to the semantic level, it consists of meeting capture, meeting recognition, and semantic processing. Based on an overview of the underlying technologies and existing work, we propose a novel real-world smart meeting application called MeetingAssistant. It is distinct from previous systems in two respects. First, it provides real-time browsing, which allows a participant to instantly view the status of the current meeting. This feature helps activate discussion and facilitate human communication during a meeting. Second, its context-aware browsing adaptively selects and displays meeting information according to the user's situational context, e.g., the user's purpose, which makes meeting viewing more efficient.
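One way to picture the three-layered model is as a pipeline from raw capture up to semantics. The skeleton below is purely illustrative; all class and method names are invented, since the paper defines the layers conceptually rather than as an API.

# Illustrative skeleton of the three-layered smart meeting model
# (capture -> recognition -> semantic processing). All names here
# are hypothetical.

class MeetingCapture:
    """Physical level: cameras and microphones produce raw streams."""
    def frames(self):
        yield {"video": None, "audio": None}

class MeetingRecognition:
    """Intermediate level: detect who speaks, gestures, attention."""
    def recognize(self, frame):
        return {"speaker": "A", "gesture": "nod"}

class SemanticProcessing:
    """Semantic level: infer intentions and topics from recognized events."""
    def interpret(self, event):
        return {"interaction": "agree", "topic": "schedule"}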
Pacific Rim Conference on Multimedia | 2002
Motoyuki Ozeki; Yuichi Nakamura; Yuichi Ohta
We propose a novel framework for automated video capture and production for desktop manipulations. We focus on the system's ability to select relevant views by recognizing types of human behavior. Using this function, the obtained videos direct the audience's attention to the relevant portions of the video and enable more effective communication. We first discuss significant types of human behavior that are commonly expressed in presentations and propose a simple and highly precise method for recognizing them. We then demonstrate the efficacy of our system experimentally by recording presentations involving desktop manipulation.
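A minimal sketch of the view-selection idea follows; the behavior labels and view names are placeholders rather than the paper's actual taxonomy.

# Sketch of behavior-driven view selection: map each recognized
# behavior type to the camera view that best conveys it.
VIEW_FOR_BEHAVIOR = {
    "holding_object_up": "closeup_object",   # show what is presented
    "pointing": "closeup_hands",             # direct attention there
    "talking_to_audience": "face_shot",
    "manipulating": "overhead_desktop",
}

def select_view(behavior, default="wide_shot"):
    # Fall back to a wide shot when no relevant behavior is recognized.
    return VIEW_FOR_BEHAVIOR.get(behavior, default)

print(select_view("pointing"))  # -> closeup_hands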
International Conference on Pattern Recognition | 2002
Motoyuki Ozeki; Masatsugu Itoh; Yuichi Nakamura; Yuichi Ohta
We propose a novel method for detecting hands and hand-held objects in desktop manipulation situations. To achieve robust tracking under few constraints, we use multiple image sensors: an RGB camera, a stereo camera, and an IR camera. Using these sensors, our system achieves robust tracking without prior knowledge of an object, even when there are moving people or objects in the background. We experimentally verified the tracking performance of each of the three sensors and evaluated the effectiveness of their integration.
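As a hedged sketch of what sensor integration could look like, the snippet below fuses per-sensor detections by requiring agreement between at least two sensors and averaging their boxes; the paper's actual fusion rule may differ.

# Sketch of integrating hand/object detections from three sensors.
# The two-sensor agreement rule and box averaging are assumptions.
import numpy as np

def fuse_detections(rgb_box, stereo_box, ir_box):
    """Each box is (x, y, w, h), or None if that sensor found nothing."""
    boxes = [b for b in (rgb_box, stereo_box, ir_box) if b is not None]
    if len(boxes) < 2:          # require at least two sensors to agree
        return None
    return tuple(np.mean(boxes, axis=0))  # average the agreeing boxes

print(fuse_detections((10, 20, 50, 60), (12, 22, 48, 58), None))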
International Conference on Multimedia and Expo | 2001
Motoyuki Ozeki; Yuichi Nakamura; Yuichi Ohta
In this paper, we introduce an intelligent system for video recording. First, we categorize the targets and purposes of shooting and discuss the camera work appropriate for each. We then propose camera control algorithms to realize such camera work. Based on this idea, we built a prototype pan-tilt camera control system in which multiple cameras with different purposes automatically track and shoot the targets. We evaluated our system by recording several presentations on desktop manipulation, and the effectiveness of our algorithms was verified through these experiments.
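As an illustration of tracking-style camera work, the sketch below implements a simple proportional pan-tilt controller that keeps a target centered in the frame; the control law and gains are assumptions, not the paper's algorithms.

# Sketch of a proportional pan-tilt controller: the command is
# proportional to the target's pixel offset from the frame center.
def pan_tilt_command(target_xy, frame_size=(640, 480), gain=0.05):
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    err_x, err_y = target_xy[0] - cx, target_xy[1] - cy
    # Larger offsets produce proportionally larger pan/tilt commands.
    return gain * err_x, gain * err_y

pan, tilt = pan_tilt_command((500, 200))
print(f"pan {pan:+.1f} deg, tilt {tilt:+.1f} deg")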
International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2003
Masatsugu Itoh; Motoyuki Ozeki; Yuichi Nakamura; Yuichi Ohta
We propose a simple and robust method for detecting hands and hand-held objects involved in desktop manipulation, together with its use for indexing the resulting videos. To achieve robust tracking with few constraints, we use multiple image sensors: an RGB (red, green, blue) camera, a stereo camera, and an infrared (IR) camera. By integrating these sensors, our system achieves robust tracking without prior knowledge of an object, even when there are moving people or objects in the background. We experimentally verified the object tracking performance and evaluated the effectiveness of the integration.
International Conference on Human System Interactions | 2011
Motoyuki Ozeki; Yasuhiro Kashiwagi; Mariko Inoue; Natsuki Oka
We propose a novel visual attention model based on a particle filter. The proposed model is (1) a probabilistic model that also has a filter-type feature, (2) a compact model independent of high-level processes, and (3) a unitary model that naturally integrates top-down modulation and bottom-up processes. These features allow the model to be applied readily to robots and to be easily understood by developers. In this paper, we first briefly discuss human visual attention, computational models of bottom-up attention, and attentional metaphors. We then describe the proposed model and its top-down control interface. Finally, three experiments demonstrate the potential of the proposed model as an attentional metaphor and as a top-down attention control interface.
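A minimal sketch of the particle-filter idea, assuming particles that diffuse over a 2-D saliency map and are re-weighted by bottom-up saliency multiplied by a top-down bias map; the model's actual dynamics and weighting scheme are not reproduced here.

# Minimal particle-filter attention sketch. A uniform top-down map
# means pure bottom-up attention; biasing it steers the focus.
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 48, 64, 200                     # map size, particle count
saliency = rng.random((H, W))             # stand-in bottom-up saliency
topdown = np.ones((H, W))                 # uniform = no top-down bias

particles = rng.uniform((0, 0), (H, W), size=(N, 2))

for _ in range(10):                       # a few filter iterations
    particles += rng.normal(0, 1.5, particles.shape)       # diffusion
    particles = np.clip(particles, 0, (H - 1, W - 1))
    ys, xs = particles.astype(int).T
    weights = saliency[ys, xs] * topdown[ys, xs]            # re-weight
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)                  # resample
    particles = particles[idx]

focus = particles.mean(axis=0)            # attended location estimate
print("focus of attention (y, x):", focus)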
ACM Multimedia | 2009
Hideki Aoyama; Motoyuki Ozeki; Yuichi Nakamura
This paper presents a novel idea for a smart cooking support system. The system is controlled by an Interaction Reproducing Model (IRM) that adjusts the system output to reproduce ideal interactions between the system and its users, so that appropriate advice is provided at the appropriate time. This mechanism is based on the idea that simulating past interactions between human test subjects and the IRM gives the IRM-based support system the ability to provide good support to the current user. Within this framework, we developed a prototype cooking support system and conducted preliminary experiments. The results show that the system provides support appropriate to the user's skill level.
International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2003
Motoyuki Ozeki; Masatsugu Itoh; Hidekatsu Izuno; Yuichi Nakamura; Yuichi Ohta
This paper presents a semi-automatic indexing method for "QUEVICO", a QA model for video-based interactive media. We first provide an overview of QUEVICO and then discuss which indices can be acquired from the scenario and which must be acquired by manual or automated processing. To obtain these indices, we implemented a prototype system whose processes include human behavior recognition, object tracking, and speech recognition. Through experiments applying the prototype system to the actual indexing of QUEVICO video data, the strong potential of our framework is demonstrated.
International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2014
Tofig Hasanov; Motoyuki Ozeki; Natsuki Oka
The task of automated risk assessment is attracting significant attention in light of the recent growth in the popularity of microloans. The industry requires a real-time method for the timely processing of the extensive number of applicants for short-term small loans. Owing to the vast number of applications, manual verification is not a viable option. In cooperation with a microloan company in Azerbaijan, we have researched automated risk assessment using crowdsourcing. The principal concept behind this approach is that a significant amount of information about a particular applicant can be retrieved from social networks. The suggested approach can be divided into three parts. First, applicant information is collected from social networks such as LinkedIn and Facebook; this occurs only with the applicant's permission. Then, the data is processed by a program that extracts the relevant information segments. Finally, these information segments are evaluated using crowdsourcing. We evaluated the information segments via the social networks themselves: we automatically posted requests regarding certain information segments and gauged the community response by counting "likes" and "shares". For example, we posted the status, "Do you think that a person who has worked at ABC Company is more likely to repay a loan? Please 'like' this post if you agree." From the results, we were able to estimate public opinion. Once evaluated, each information segment was given a weight factor that was optimized using loan repayment test data provided by the company. We then tested the proposed system on a set of 400 applicants. Using a second crowdsourcing approach, we confirmed that the resulting solution provided 92.5% correct assessments, with 6.45% false positives and 11.11% false negatives, and an assessment duration of 24 hours.
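The final scoring step might look like the sketch below, where each information segment carries a crowd-derived weight and the weighted sum is thresholded into an approve/reject decision; the segments, weights, and threshold are illustrative only.

# Sketch of threshold-based risk scoring over weighted information
# segments. Weight values stand in for crowd-derived estimates
# (e.g., from like/share counts) and are purely illustrative.
SEGMENT_WEIGHTS = {
    "worked_at_known_company": 0.35,
    "has_completed_degree": 0.25,
    "long_employment_history": 0.30,
    "many_short_term_jobs": -0.20,   # negatively viewed segment
}

def risk_score(applicant_segments):
    # Sum the weights of all segments found for this applicant.
    return sum(SEGMENT_WEIGHTS.get(s, 0.0) for s in applicant_segments)

def assess(applicant_segments, threshold=0.4):
    return "approve" if risk_score(applicant_segments) >= threshold else "reject"

print(assess(["worked_at_known_company", "has_completed_degree"]))  # approve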