Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Keisuke Doman is active.

Publication


Featured research published by Keisuke Doman.


International Conference on Innovative Computing, Information and Control | 2009

Construction of Cascaded Traffic Sign Detector Using Generative Learning

Keisuke Doman; Daisuke Deguchi; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase

We propose a method for constructing a cascaded traffic sign detector. Viola et al. proposed a robust and extremely rapid object detection method based on a boosted cascade of simple feature classifiers. To obtain high detection accuracy in a real environment, the classifier must be trained with a set of learning images that contains various appearances of the detection targets. However, collecting traffic sign images manually for training is costly. Therefore, we use a generative learning method to construct the traffic sign detector. In this paper, shape, texture, and color changes are considered in the generative learning. This method improves the performance of traffic sign detection and, at the same time, reduces the cost of collecting training images. Experimental results using car-mounted camera images showed the effectiveness of the proposed method.
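
As an illustration of the generative learning idea, the sketch below synthesizes many training samples from a single template image by perturbing brightness, contrast, and noise. The function name and the perturbation ranges are illustrative assumptions, not details from the paper (which also generates shape changes).

```python
import numpy as np

def generate_samples(template, n, rng=None):
    """Generate perturbed training samples from one template image by
    varying contrast, brightness, and sensor-like noise -- a simplified
    stand-in for the paper's shape/texture/color changes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = []
    for _ in range(n):
        contrast = rng.uniform(0.7, 1.3)    # texture/contrast change
        brightness = rng.uniform(-30, 30)   # illumination (color) change
        noise = rng.normal(0, 5, template.shape)
        img = np.clip(template * contrast + brightness + noise, 0, 255)
        samples.append(img.astype(np.uint8))
    return samples

# A flat gray 24x24 "template" stands in for a normalized sign image.
template = np.full((24, 24), 128, dtype=np.float64)
augmented = generate_samples(template, 100)
```

Each synthesized sample can then be fed to the boosted-cascade trainer in place of a manually collected image.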


IEEE Intelligent Vehicles Symposium | 2010

Estimation of traffic sign visibility toward smart driver assistance

Keisuke Doman; Daisuke Deguchi; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Yukimasa Tamatsu

We propose a visibility estimation method for traffic signs as part of our work toward realizing nuisance-free driving safety support systems. Recently, the number of driving safety support systems in a car has been increasing. As a result, selecting appropriate information from them is becoming important for safe and comfortable driving, because too much information may cause driver distraction and increase the risk of a traffic accident. One approach to avoiding this problem is to alert the driver only with information that could easily be missed. Therefore, to realize such a system, we focus on estimating the visibility of traffic signs. The proposed method is a model-based method that estimates the visibility of a traffic sign by focusing on the difference in image features between the sign and its surrounding region. In this paper, we investigate the performance of the proposed method and show its effectiveness.
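
The core idea of comparing a sign's image features against its surroundings can be sketched as follows. This toy score uses the total-variation distance between intensity histograms; the actual features and the scoring model in the paper are richer, so treat this purely as an assumed simplification.

```python
import numpy as np

def visibility_score(image, box):
    """Toy local visibility score: total-variation distance between the
    intensity histograms of a sign region and its surroundings.
    A larger value means the sign stands out more from its background."""
    x0, y0, x1, y1 = box
    sign = image[y0:y1, x0:x1].ravel()
    mask = np.ones(image.shape, dtype=bool)
    mask[y0:y1, x0:x1] = False          # exclude the sign itself
    surround = image[mask]
    h_sign, _ = np.histogram(sign, bins=16, range=(0, 256))
    h_sur, _ = np.histogram(surround, bins=16, range=(0, 256))
    h_sign = h_sign / h_sign.sum()
    h_sur = h_sur / h_sur.sum()
    return 0.5 * np.abs(h_sign - h_sur).sum()  # in [0, 1]

img = np.zeros((60, 60), dtype=np.uint8)
img[20:40, 20:40] = 255  # a bright "sign" on a dark background
conspicuous = visibility_score(img, (20, 20, 40, 40))
hidden = visibility_score(np.zeros((60, 60), dtype=np.uint8), (20, 20, 40, 40))
```

A sign identical to its background scores 0, while a sign that contrasts sharply with it scores near 1.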


Intelligent Vehicles Symposium | 2014

Estimation of traffic sign visibility considering local and global features in a driving environment

Keisuke Doman; Daisuke Deguchi; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Utsushi Sakai

This paper proposes a camera-based visibility estimation method for traffic signs. Visibility here indicates how easily a visual target can be detected and recognized by a human driver (not a machine). This research aims at realizing a nuisance-free driver assistance system that sorts out information depending on the visibility of a visual target, in order to prevent driver distraction. Our previous study on estimating the visibility of a traffic sign considered only the effect of the local region around a target, assuming that the driver's gaze is near it. The proposed method integrates both local features and global features in a driving environment, without such an assumption. The global features evaluate the positional relationships between traffic signs and the appearance around the fixation point of the driver's gaze, which accounts for the effect of the driver's entire field of view. Experimental results showed the effectiveness of incorporating the global features for estimating the visibility of a traffic sign.
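
One hypothetical way to model a global, gaze-dependent term is a falloff of visibility with distance from the fixation point, as sketched below. The Gaussian form, the `sigma` value, and the pixel coordinates are all illustrative assumptions, not the paper's actual positional-relationship features.

```python
import math

def global_visibility(sign_center, fixation_point, sigma=200.0):
    """Toy global feature: Gaussian falloff of visibility with the
    pixel distance between a sign and the driver's fixation point,
    standing in for the paper's positional-relationship features."""
    dx = sign_center[0] - fixation_point[0]
    dy = sign_center[1] - fixation_point[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

near = global_visibility((320, 240), (320, 240))  # sign at the fixation point
far = global_visibility((900, 100), (320, 240))   # sign in peripheral vision
```

A sign at the fixation point scores 1.0, while one far in the periphery scores near 0; such a term could then be combined with a local contrast score.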


ACM Multimedia | 2013

Automatic authoring of a domestic cooking video based on the description of cooking instructions

Yasuhiro Hayashi; Keisuke Doman; Ichiro Ide; Daisuke Deguchi; Hiroshi Murase

Traditionally, a cooking recipe is mainly composed of text. Since it may be difficult to intuitively understand or imagine the overall cooking process from such a recipe, it is preferable to publish it with a video that briefly explains the process and appropriately shows the cooking operations. Therefore, this paper proposes an automatic video authoring method that reduces both temporal and spatial redundancies in a cooking video taken at home, according to the description of the cooking instructions in the recipe that will be published with it. We conducted several kinds of subject experiments and confirmed the effectiveness of the proposed method.


IEEE Intelligent Vehicles Symposium | 2011

Estimation of traffic sign visibility considering temporal environmental changes for smart driver assistance

Keisuke Doman; Daisuke Deguchi; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Yukimasa Tamatsu

We propose a visibility estimation method for traffic signs that considers temporal environmental changes, as part of our work toward realizing nuisance-free driver assistance systems. Recently, the number of driver assistance systems in a vehicle has been increasing. Accordingly, it is becoming important to sort out the appropriate information they provide, because providing too much information may cause driver distraction. To solve this problem, we focus on a visibility estimation method for controlling the information according to the visibility of a traffic sign. The proposed method sequentially captures a traffic sign with an in-vehicle camera and estimates its accumulative visibility by integrating a series of instantaneous visibility scores. In this way, even if the environmental conditions change over time in complicated ways, we can still accurately estimate the visibility that the driver perceives in an actual traffic scene. We also investigate the performance of the proposed method and show its effectiveness.
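
A minimal sketch of integrating instantaneous scores into an accumulative one is an exponential moving average, shown below. The smoothing weight `alpha` and the averaging scheme itself are illustrative assumptions; the paper's integration method may differ.

```python
def accumulated_visibility(instant_scores, alpha=0.3):
    """Integrate instantaneous per-frame visibility scores into one
    accumulative score with an exponential moving average, smoothing out
    momentary changes such as brief occlusions or lighting shifts."""
    acc = instant_scores[0]
    for score in instant_scores[1:]:
        acc = alpha * score + (1 - alpha) * acc
    return acc

# A one-frame drop (e.g. a passing truck briefly occluding the sign)
# barely moves the accumulated estimate:
smoothed = accumulated_visibility([0.9, 0.9, 0.1, 0.9])
```

This keeps the estimate close to what the driver perceives over the approach, rather than reacting to each frame in isolation.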


International Conference on Computer Vision | 2010

Improvement of a traffic sign detector by retrospective gathering of training samples from in-vehicle camera image sequences

Daisuke Deguchi; Keisuke Doman; Ichiro Ide; Hiroshi Murase

This paper proposes a method for constructing an accurate traffic sign detector by retrospectively obtaining training samples from in-vehicle camera image sequences. To detect distant traffic signs in in-vehicle camera images, training samples of distant traffic signs are needed. However, since their sizes are very small, it is difficult to obtain them either automatically or manually. When driving a vehicle in a real environment, the distance between a traffic sign and the vehicle shortens gradually, and the size of the traffic sign in the image grows proportionally. A large traffic sign is comparatively easy to detect automatically. Therefore, the proposed method automatically detects a large traffic sign, and then obtains small (distant) traffic signs by retrospectively tracking it back through the image sequence. By also using the retrospectively obtained traffic sign images as training samples, the proposed method constructs an accurate traffic sign detector automatically. From experiments using in-vehicle camera images, we confirmed that the proposed method could construct an accurate traffic sign detector.
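
The retrospective gathering loop described above can be sketched as follows. `detect_large` and `track_back` are assumed callbacks standing in for the paper's detector and backward tracker; the toy demonstration uses integers for frames and `(x, y, w, h)` tuples for boxes.

```python
def gather_retrospectively(frames, detect_large, track_back):
    """Sketch of retrospective sample gathering: run a reliable detector
    on each frame; once a large (nearby) sign is found, track it backward
    through earlier frames to harvest its small, distant appearances as
    extra training samples."""
    samples = []
    for t, frame in enumerate(frames):
        box = detect_large(frame)
        if box is None:
            continue
        samples.append((t, box))
        # Walk backward in time, shrinking toward the distant appearance.
        for t_prev in range(t - 1, -1, -1):
            box = track_back(frames[t_prev], box)
            if box is None:
                break
            samples.append((t_prev, box))
    return samples

# Toy demonstration: the sign is only detectable in the last frame,
# and the tracker recovers ever-smaller boxes in earlier frames.
frames = [1, 2, 3]
detect = lambda f: (0, 0, 10, 10) if f == 3 else None
track = lambda f, box: (0, 0, box[2] - 2, box[3] - 2) if box[2] > 6 else None
samples = gather_retrospectively(frames, detect, track)
```

The harvested small boxes would then be cropped and added to the training set for the cascade detector.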


International Conference on Multimedia and Expo | 2015

Typicality analysis of the combination of ingredients in a cooking recipe for assisting the arrangement of ingredients

Satoshi Yokoi; Keisuke Doman; Takatsugu Hirayama; Ichiro Ide; Daisuke Deguchi; Hiroshi Murase

As the number of cooking recipes posted on the Web increases, it becomes difficult to find the cooking recipe that a user needs. Moreover, even when one is found, it is still difficult for users to arrange the recipe, for example by replacing ingredients with different ones. To deal with these problems, we propose a framework for typicality analysis of the combination of ingredients. The framework calculates a typicality value for each combination of ingredients. The list of ingredients can then be arranged by adjusting the typicality value, adding or removing ingredients iteratively. The effectiveness of the proposed framework was confirmed through subjective experiments.
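
One simple way to realize a typicality value is the mean pairwise co-occurrence probability of the ingredients across a recipe corpus, sketched below. This measure and the toy corpus are assumptions for illustration, not the paper's actual formulation.

```python
from itertools import combinations
from collections import Counter

def typicality(recipes, ingredients):
    """Toy typicality score: mean probability, over all ingredient pairs
    in the query combination, that the pair co-occurs in a corpus recipe."""
    n = len(recipes)
    pair_counts = Counter()
    for r in recipes:
        for pair in combinations(sorted(set(r)), 2):
            pair_counts[pair] += 1
    pairs = list(combinations(sorted(set(ingredients)), 2))
    if not pairs:
        return 0.0
    return sum(pair_counts[p] / n for p in pairs) / len(pairs)

corpus = [["tomato", "basil", "mozzarella"],
          ["tomato", "basil"],
          ["chocolate", "milk"]]
typical = typicality(corpus, ["tomato", "basil"])
atypical = typicality(corpus, ["tomato", "chocolate"])
```

An arrangement assistant could then propose swapping an ingredient for whichever candidate moves the typicality value toward the user's target.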


International Conference on Image Analysis and Processing | 2015

Tastes and Textures Estimation of Foods Based on the Analysis of Its Ingredients List and Image

Hiroki Matsunaga; Keisuke Doman; Takatsugu Hirayama; Ichiro Ide; Daisuke Deguchi; Hiroshi Murase

Recently, the number of cooking recipes on the Web has been increasing. However, it is difficult to search them by taste or texture, even though these attributes are important given the nature of the content. Therefore, we propose a method for estimating the tastes and textures of a cooking recipe by analyzing it. Concretely, the proposed method extracts an ingredients feature from the “ingredients list” and image features from the “food image” in a cooking recipe. We confirmed the effectiveness of the proposed method through an experiment.


International Journal of Semantic Computing | 2012

Speech Shot Extraction from Broadcast News Videos

Shogo Kumagai; Keisuke Doman; Tomokazu Takahashi; Daisuke Deguchi; Ichiro Ide; Hiroshi Murase

We propose a method for discriminating between speech shots and narrated shots in order to extract genuine speech shots from broadcast news videos. Speech shots in news videos contain a wealth of multimedia information about the speaker, and could thus be considered valuable as archived material. One approach to extracting speech shots from news videos uses the position and size of a face region. However, it is difficult to extract them with such an approach alone, since news videos contain non-speech shots where the speaker is not the subject appearing on screen, namely narrated shots. To solve this problem, we propose a method that discriminates between speech shots and narrated shots in two stages. The first stage directly evaluates the inconsistency between the subject and the speaker based on the co-occurrence of lip motion and voice. The second stage evaluates intra- and inter-shot features that capture the tendencies of speech shots. By combining both stages, the proposed method accurately discriminates between speech shots and narrated shots. In the experiments, the overall accuracy of speech shot extraction by the proposed method was 0.871, confirming the effectiveness of the proposed method.


International Symposium on Multimedia | 2011

Detection of Inconsistency Between Subject and Speaker Based on the Co-occurrence of Lip Motion and Voice Towards Speech Scene Extraction from News Videos

Shogo Kumagai; Keisuke Doman; Tomokazu Takahashi; Daisuke Deguchi; Ichiro Ide; Hiroshi Murase

We propose a method to detect the inconsistency between the subject and the speaker, for extracting speech scenes from news videos. Speech scenes in news videos contain a wealth of multimedia information and are valuable as archived material. One approach to extracting speech scenes from news videos uses the position and size of a face region. However, it is difficult to extract them with such an approach alone, since news videos contain non-speech scenes where the speaker is not the subject, such as narrated scenes. To solve this problem, we propose a method to discriminate between speech scenes and narrated scenes based on the co-occurrence of the subject's lip motion and the speaker's voice. The proposed method uses lip shape and degree of lip opening as visual features representing the subject's lip motion, and uses voice volume and phonemes as audio features representing the speaker's voice. It then discriminates between speech scenes and narrated scenes based on the correlations of these features. We report the results of experiments on videos captured under laboratory conditions and on actual broadcast news videos. The results showed the effectiveness of our method and the feasibility of our research goal.
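
The co-occurrence idea can be sketched as a normalized cross-correlation between a degree-of-lip-opening signal and a voice-volume signal, as below. The synthetic sinusoidal signals and the single-feature correlation are illustrative assumptions; the paper uses richer visual and audio features.

```python
import numpy as np

def lip_voice_consistency(lip_opening, voice_volume):
    """Toy consistency check: normalized cross-correlation between the
    degree-of-lip-opening signal and the voice-volume signal. A high
    correlation suggests the on-screen subject is the speaker; a low one
    suggests a narrated scene."""
    lip = (lip_opening - lip_opening.mean()) / lip_opening.std()
    voice = (voice_volume - voice_volume.mean()) / voice_volume.std()
    return float((lip * voice).mean())

t = np.linspace(0, 4 * np.pi, 200)
# Speech scene: voice volume tracks the lip motion (plus a small ripple).
speech = lip_voice_consistency(np.sin(t), np.sin(t) + 0.1 * np.cos(3 * t))
# Narrated scene: the voice is unrelated to the on-screen lip motion.
narration = lip_voice_consistency(np.sin(t), np.cos(5 * t))
```

Thresholding this correlation gives a first-pass discriminator between speech scenes and narrated scenes.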

Collaboration


Dive into Keisuke Doman's collaborations.

Top Co-Authors

Tomokazu Takahashi

Gifu Shotoku Gakuen University
