
Publication


Featured research published by Noboru Babaguchi.


IEEE Transactions on Multimedia | 2002

Event based indexing of broadcasted sports video by intermodal collaboration

Noboru Babaguchi; Yoshihiko Kawai; Tadahiro Kitahashi

In this paper, we propose event-based video indexing, a form of indexing based on semantic content. Because video data comprises multimodal information streams such as visual, auditory, and textual [closed caption (CC)] streams, we introduce a strategy of intermodal collaboration, i.e., collaborative processing that takes account of the semantic dependency between these streams. Its aim is to improve the reliability and efficiency of content analysis of video. Focusing here on the temporal correspondence between the visual and CC streams, the proposed method seeks time spans in which events are likely to take place by extracting keywords from the CC stream, and then indexes shots in the visual stream. Experimental results for broadcast sports video of American football games indicate that intermodal collaboration is effective for indexing video by events such as touchdowns (TD) and field goals (FG).
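As an illustration only (not the paper's implementation), the keyword-to-shot idea can be sketched as follows: keyword occurrences in the closed-caption stream suggest time spans where an event likely occurred, and shots in the visual stream overlapping such a span are indexed with that event label. The keyword mapping and window length are assumptions.

```python
# Hypothetical keyword->event mapping and time window (assumptions).
EVENT_KEYWORDS = {"touchdown": "TD", "field goal": "FG"}
WINDOW = 20.0  # seconds before a keyword mention (assumed)

def index_shots(cc_entries, shots):
    """cc_entries: list of (time_sec, caption_text); shots: list of (start, end)."""
    spans = []
    for t, text in cc_entries:
        for kw, label in EVENT_KEYWORDS.items():
            if kw in text.lower():
                spans.append((t - WINDOW, t, label))  # event precedes its mention
    index = {}
    for i, (start, end) in enumerate(shots):
        for s0, s1, label in spans:
            if start < s1 and end > s0:  # shot interval overlaps event span
                index.setdefault(i, set()).add(label)
    return index

cc = [(130.0, "He scores a TOUCHDOWN for the home team!")]
shots = [(100.0, 115.0), (116.0, 128.0), (140.0, 150.0)]
print(index_shots(cc, shots))  # → {0: {'TD'}, 1: {'TD'}}
```

The overlap test is the standard half-open interval intersection; real CC streams lag the video, which is one reason the span extends backward from the mention.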


IEEE Transactions on Multimedia | 2004

Personalized abstraction of broadcasted American football video by highlight selection

Noboru Babaguchi; Yoshihiko Kawai; Takehiro Ogura; Tadahiro Kitahashi

Video abstraction is defined as creating shorter video clips or video posters from an original video stream. In this paper, we propose a method of generating a personalized abstract of broadcast American football video. We first detect significant events in the video stream by matching textual overlays appearing in image frames against the descriptions in gamestats, in which highlights of the game are recorded. We then select, from the detected events, highlight shots to include in the video abstract based on their degree of significance and on personal preferences, and generate a video clip by connecting the shots, augmented with related audio and text. An hour-long video can be compressed into a minute-long personalized abstract. We experimentally verified the effectiveness of this method by comparison with man-made video abstracts.
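A minimal sketch of the selection step, under assumptions not taken from the paper: each detected event carries a significance degree, a viewer profile weights event types, and the top-scoring shots are kept until the target abstract length is reached.

```python
def select_highlights(events, preferences, target_sec):
    """events: list of (shot_id, event_type, significance, duration_sec).
    preferences: event_type -> weight (hypothetical personalization model)."""
    scored = sorted(
        events,
        key=lambda e: e[2] * preferences.get(e[1], 1.0),  # significance x preference
        reverse=True,
    )
    abstract, total = [], 0.0
    for shot_id, etype, sig, dur in scored:
        if total + dur <= target_sec:  # greedy fill up to the target length
            abstract.append(shot_id)
            total += dur
    return sorted(abstract)  # restore temporal order for playback

events = [(0, "TD", 0.9, 30), (1, "FG", 0.6, 20), (2, "punt", 0.3, 25)]
prefs = {"TD": 1.0, "FG": 0.5}
print(select_highlights(events, prefs, 60))  # → [0, 1]
```

Greedy selection by score is one simple policy; the paper's notion of significance degree could equally drive a knapsack-style optimizer.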


International Conference on Multimedia and Expo | 2005

Video Summarization for Large Sports Video Archives

Yoshimasa Takahashi; Naoko Nitta; Noboru Babaguchi

Video summarization is defined as creating a shorter video clip or a video poster that includes only the important scenes in the original video stream. In this paper, we propose two methods of generating a summary of arbitrary length for large sports video archives. One creates a concise video clip by temporally compressing the video data. The other provides a video poster by spatially presenting the image keyframes that together represent the whole video content. Our methods rely on metadata containing semantic descriptions of the video content. Summaries are created according to the significance of each video segment, which is normalized in order to handle large sports video archives. We experimentally verified the effectiveness of our methods by comparing the results with man-made video summaries.
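The normalization-plus-selection idea can be sketched as below; the data layout and min-max normalization are assumptions for illustration, not the paper's formulation.

```python
def normalize(scores):
    """Min-max normalize so significance is comparable across archives."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def summarize(segments, target_sec):
    """segments: list of (keyframe_id, duration_sec, raw_significance).
    Returns (clip segment indices, poster keyframes) for one target length."""
    norm = normalize([s[2] for s in segments])
    ranked = sorted(range(len(segments)), key=lambda i: norm[i], reverse=True)
    clip, poster, total = [], [], 0.0
    for i in ranked:
        kf, dur, _ = segments[i]
        if total + dur <= target_sec:
            clip.append(i)     # segments for the temporally compressed clip
            poster.append(kf)  # their keyframes for the spatial poster
            total += dur
    return sorted(clip), poster

segs = [("kf0", 40, 2.0), ("kf1", 30, 8.0), ("kf2", 30, 5.0)]
print(summarize(segs, 60))  # → ([1, 2], ['kf1', 'kf2'])
```

Normalizing per archive is what lets one `target_sec` knob produce summaries of arbitrary length across videos with very different raw score ranges.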


ACM Multimedia | 2000

Linking live and replay scenes in broadcasted sports video

Noboru Babaguchi; Yoshihiko Kawai; Yukinobu Yasugi; Tadahiro Kitahashi

Content-based video organization requires understanding semantic relationships, such as identity and similarity, between video segments. In particular, it is of great interest to identify scenes of the same event that differ in appearance. For broadcast sports video, such scenes correspond to live and replay scenes that appear at different temporal positions. In this paper, we propose a method of linking live and replay scenes based on domain knowledge about the production of sports TV programs: most replay scenes are sandwiched between specific digital video effects (DVEs). The replay scenes are linked with live scenes in which the game is in play based on salient image features. We clarify the effectiveness of our method through fundamental experiments on American football games.
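The DVE "sandwich" heuristic and the feature-based linking step might be sketched as follows; the shot labels and the squared-distance matcher are illustrative assumptions.

```python
def find_replays(shot_labels):
    """A replay candidate is any shot sandwiched between two 'DVE' shots."""
    return [
        i
        for i in range(1, len(shot_labels) - 1)
        if shot_labels[i - 1] == "DVE" and shot_labels[i + 1] == "DVE"
    ]

def link_to_live(replay_feat, live_feats):
    """Link a replay to the live shot with the closest image feature vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(live_feats)), key=lambda i: sq_dist(replay_feat, live_feats[i]))

labels = ["live", "DVE", "replay", "DVE", "live"]
print(find_replays(labels))  # → [2]
live = [(0.1, 0.9), (0.8, 0.2)]
print(link_to_live((0.75, 0.25), live))  # → 1
```

In practice the features would be color histograms or similar salient image descriptors computed per shot, rather than toy 2-vectors.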


Conference on Multimedia Modeling | 2008

PriSurv: privacy protected video surveillance system using adaptive visual abstraction

Kenta Chinomi; Naoko Nitta; Yoshimichi Ito; Noboru Babaguchi

Recently, video surveillance has received much attention as a technology for realizing a secure and safe community. Video surveillance is useful for crime deterrence and investigation, but may also invade privacy. In this paper, we propose a video surveillance system named PriSurv, which is characterized by visual abstraction. The system protects the privacy of objects in a video by referring to their privacy policies, which are determined according to the closeness between objects in the video and the viewers monitoring it. A prototype of PriSurv is able to protect privacy adaptively through visual abstraction.
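A toy sketch of the policy lookup, with a made-up closeness-to-abstraction table (the paper's actual policies and abstraction operators are richer): the abstraction applied to a person depends on how close the viewer is to that person.

```python
# Hypothetical policy table mapping closeness to a visual abstraction level.
ABSTRACTION_BY_CLOSENESS = {
    "family": "as-is",          # trusted viewer sees the raw image
    "acquaintance": "blur",     # partial abstraction
    "stranger": "silhouette",   # strongest abstraction
}

def abstraction_for(subject, viewer, closeness):
    """closeness: dict mapping (subject, viewer) -> relation string."""
    relation = closeness.get((subject, viewer), "stranger")  # default: least trust
    return ABSTRACTION_BY_CLOSENESS[relation]

closeness = {("alice", "bob"): "family", ("alice", "carol"): "acquaintance"}
print(abstraction_for("alice", "bob", closeness))   # → as-is
print(abstraction_for("alice", "dave", closeness))  # → silhouette
```

Defaulting unknown viewers to the strongest abstraction is the privacy-safe choice: disclosure requires an explicit policy entry.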


International Conference on Image Processing | 2003

Intermodal collaboration: a strategy for semantic content analysis for broadcasted sports video

Noboru Babaguchi; Naoko Nitta

This paper presents intermodal collaboration, a strategy for semantic content analysis of broadcast sports video. The broadcast video can be viewed as a set of multimodal streams, such as visual, auditory, text (closed caption), and graphics streams. Collaborative analysis of the multimodal streams is performed based on the temporal dependency between these streams, in order to improve the reliability and efficiency of semantic content analysis tasks such as extracting highlight scenes from sports video and automatically generating annotations for specific scenes. A couple of case studies experimentally confirm the effectiveness of intermodal collaboration.


International Conference on Multimedia and Expo | 2000

Towards abstracting sports video by highlights

Noboru Babaguchi

Recently, video abstraction has become an in-demand application in multimedia computing. It is defined as creating shorter video clips or video posters from the original video stream. We present a basic approach to abstracting sports video by highlights, dealing with American football games. Using event-based indexing, we create an abstracted video clip automatically. To select the appropriate highlights of the game, an impact factor reflecting the importance of each event is newly introduced. It was possible to make an approximately five-minute clip from the three-hour original video.


IEEE Transactions on Multimedia | 2009

Watermarked Movie Soundtrack Finds the Position of the Camcorder in a Theater

Yuta Nakashima; Ryuki Tachibana; Noboru Babaguchi

In recent years, the problem of camcorder piracy in theaters has become more serious due to technical advances in camcorders. In this paper, as a new deterrent to camcorder piracy, we propose a system for estimating the position from which a camcorder recording is made. The system is based on spread-spectrum audio watermarking of the multichannel movie soundtrack. It utilizes a stochastic model of the detection strength, which is calculated in the watermark detection process. Our experimental results show that the system estimates recording positions in an actual theater with a mean estimation error of 0.44 m. The results of our MUSHRA subjective listening tests show that the method does not significantly degrade the subjective acoustic quality of the soundtrack. These results indicate that the proposed system is suitable for practical use.
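To illustrate the position-estimation idea with a deliberately simplified model (not the paper's stochastic formulation): assume each channel's watermark detection strength falls off with distance from the corresponding loudspeaker, then grid-search for the position that best matches the observed strengths. The speaker layout and fall-off law are assumptions.

```python
import math

SPEAKERS = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]  # assumed layout in meters

def predicted(pos):
    """Assumed fall-off: strength decreases with distance to each speaker."""
    return [1.0 / (1.0 + math.dist(pos, s)) for s in SPEAKERS]

def estimate_position(observed, step=0.5):
    """Grid search for the seat minimizing squared strength mismatch."""
    best, best_err = None, float("inf")
    y = 0.0
    while y <= 8.0:
        x = 0.0
        while x <= 10.0:
            err = sum((p - o) ** 2 for p, o in zip(predicted((x, y)), observed))
            if err < best_err:
                best, best_err = (x, y), err
            x += step
        y += step
    return best

print(estimate_position(predicted((4.0, 3.0))))  # → (4.0, 3.0)
```

With three or more channels the mismatch has a unique minimum at the true position under this model, which is essentially trilateration recast as model fitting; the paper instead models detection strength stochastically.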


International Conference on Multimedia and Expo | 2001

Generation of personalized abstract of sports video

Noboru Babaguchi; Yoshihiko Kawai; Tadahiro Kitahashi

Video abstraction is defined as creating a shorter video clip from an original video stream. In this paper, we propose a method of generating a personalized abstract of broadcast sports video. We first detect significant events in the video stream by matching against gamestats, in which highlights of the game are described. Textual information in overlays appearing on image frames is recognized for this matching. We then select highlight shots from the detected events based on personal preferences. Finally, we connect the shots, augmented with related audio and text, in temporal order. Experimental results verified that an hour-long video can be compressed into a minute-long personalized abstract.


International Conference on Pattern Recognition | 2000

Extracting actors, actions and events from sports video - a fundamental approach to story tracking

Naoko Nitta; Noboru Babaguchi; Tadahiro Kitahashi

To deal effectively with the vast amount of video, we need to construct a content-based representation for each video. As a step towards this goal, this paper proposes a method to automatically generate semantic annotations for a sports video by integrating the text (closed-caption) and image streams. We first segment the text data and extract segments that are meaningful for grasping the story of the video, then extract the actors, actions, and events of each scene, which are useful for information retrieval, using linguistic cues and domain knowledge. We also segment the image stream so that each image segment can be associated with a text segment extracted above, using image cues. Finally, we annotate the video by associating the text segments with the image segments. Some experimental results are presented and discussed.
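As a toy illustration of actor/action/event extraction from a caption segment (the paper uses richer linguistic cues and domain knowledge; the lexicons and roster here are invented):

```python
ACTIONS = {"passes", "kicks", "runs"}                 # assumed verb lexicon
EVENTS = {"touchdown": "TD", "field goal": "FG"}      # assumed event lexicon

def annotate(segment, roster):
    """Scan one caption segment for a known player (actor), an action
    verb, and a resulting event keyword."""
    words = segment.lower().split()
    actor = next((w.capitalize() for w in words if w.capitalize() in roster), None)
    action = next((w for w in words if w in ACTIONS), None)
    event = next((label for kw, label in EVENTS.items() if kw in segment.lower()), None)
    return {"actor": actor, "action": action, "event": event}

roster = {"Smith", "Jones"}
print(annotate("Smith kicks ... field goal is good", roster))
# → {'actor': 'Smith', 'action': 'kicks', 'event': 'FG'}
```

Each extracted triple then only needs a time alignment with an image segment to become a retrievable annotation on the video itself.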

Collaboration


Dive into Noboru Babaguchi's collaborations.

Top Co-Authors

Kouzou Ohara

Aoyama Gakuin University
