Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hidetaka Kuwano is active.

Publication


Featured research published by Hidetaka Kuwano.


International Conference on Multimedia and Expo | 2000

Telop-on-demand: video structuring and retrieval based on text recognition

Hidetaka Kuwano; Yukinobu Taniguchi; Hiroyuki Arai; Minoru Mori; Shoji Kurakake; Haruhiko Kojima

The paper presents a telop-on-demand system that automatically recognizes text in video frames to create the indices needed for content-based video browsing and retrieval. Superimposed texts are important because they provide semantic information about scene contents. Their attributes, such as font, size, and position within a frame, are also important, as they are carefully designed by the video editor and thus reflect the intent of the captioning. In news programs, for instance, the headline text is displayed in larger fonts than the subtitles. Our system takes into account not only the texts themselves but also their attributes when structuring videos. We describe: (i) novel methods for detecting and extracting texts that are robust against complex backgrounds and intensity degradation of the character patterns, and (ii) a method for structuring a video based on text attributes.
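As a rough illustration of the attribute-based structuring idea, here is a minimal Python sketch (not the authors' implementation; the region dictionaries and thresholds are hypothetical) that labels detected text regions as headlines or subtitles from their size and vertical position:

def classify_text_region(region, frame_height):
    # Heuristic in the spirit of the abstract: large characters near the
    # top of the frame suggest a headline, small characters near the
    # bottom suggest a subtitle. Thresholds are invented for illustration.
    rel_top = region["top"] / frame_height
    rel_size = region["char_height"] / frame_height
    if rel_size > 0.06 and rel_top < 0.35:
        return "headline"
    if rel_top > 0.7:
        return "subtitle"
    return "other"

# Hypothetical detector output for a 480-line news frame.
regions = [
    {"text": "ELECTION RESULTS", "top": 40, "char_height": 42},
    {"text": "polls close at 8 pm", "top": 430, "char_height": 18},
]
for r in regions:
    print(r["text"], "->", classify_text_region(r, frame_height=480))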


Storage and Retrieval for Image and Video Databases | 1997

Recognition and visual feature matching of text region in video for conceptual indexing

Shoji Kurakake; Hidetaka Kuwano; Kazumi Odaka

An indexing method for content-based image retrieval that uses textual information in video is proposed. Indices extracted from textual information make it possible to retrieve video data with a conceptual query, such as a topic or a person's name, and to organize flat video data into structured video data based on its conceptual content. To this end, we developed a text extraction and recognition algorithm and a visual feature matching algorithm for indexing and organizing video data at a conceptual level. The text extraction and recognition algorithm identifies frames in the video that contain text, extracts the text regions from the frame, finds text lines, and recognizes characters in each text line. The visual feature matching algorithm measures the similarity of frames containing text in order to find frames with similar-looking text, which can be considered topic-change frames. Experiments using real video data showed that our algorithm can index textual information reliably and that it has good potential as a tool for making content-based, conceptual-level queries to video databases.
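A minimal sketch of the visual-feature-matching step, assuming grey-scale text-region crops stored as NumPy arrays and a simple histogram-intersection similarity (the paper's actual features are not reproduced here):

import numpy as np

def appearance_similarity(crop_a, crop_b, bins=32):
    # Normalized grey-level histograms compared by histogram intersection;
    # returns a value in [0, 1], higher meaning more similar appearance.
    ha, _ = np.histogram(crop_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(crop_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

# Synthetic example: a bright text crop versus a slightly noisier copy.
rng = np.random.default_rng(0)
crop1 = rng.integers(180, 256, size=(40, 200), dtype=np.uint8)
noise = rng.integers(-10, 11, size=crop1.shape)
crop2 = np.clip(crop1.astype(int) + noise, 0, 255).astype(np.uint8)
print(appearance_similarity(crop1, crop2))  # close to 1.0 for similar crops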


Proceedings Workshop on Document Image Analysis (DIA'97) | 1997

Telop character extraction from video data

Hidetaka Kuwano; Shoji Kurakake; Kazumi Odaka

We introduce a telop character extraction method that consists of two steps: detecting a telop frame in a video sequence, and extracting telop character regions from the telop frame. Frames in which telops appear and disappear are detected, regardless of background movement in the video sequence, by selecting frames that have large intensity histogram differences from the previous frames and high agreement in edge pixel positions with the subsequent frames. By segmenting the color space sequentially, the method then quickly segments the detected telop frames without degrading the segmentation results. It then filters out non-telop character regions produced by the segmentation, using the color, location, size, and temporal movement features of the regions. We implement the proposed method on a PC and a workstation, and confirm that it can process live broadcast video data effectively.
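As a rough sketch of the two detection cues named above, under the assumption of grey-scale frames stored as NumPy arrays (the thresholds are invented for illustration, and this is not the authors' implementation):

import numpy as np

def histogram_difference(prev_frame, frame, bins=64):
    # L1 distance between normalized intensity histograms of two frames.
    hp, _ = np.histogram(prev_frame, bins=bins, range=(0, 256))
    hc, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return float(np.abs(hp / hp.sum() - hc / hc.sum()).sum())

def edge_agreement(frame, next_frame, grad_thresh=48.0):
    # Fraction of strong-gradient pixels in `frame` that are also strong in
    # `next_frame`; telop characters stay fixed across frames, so agreement
    # is high once a telop has appeared.
    def edge_mask(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy) > grad_thresh
    e1, e2 = edge_mask(frame), edge_mask(next_frame)
    return float((e1 & e2).sum() / max(e1.sum(), 1))

def is_telop_appearance(prev_frame, frame, next_frame,
                        hist_thresh=0.2, edge_thresh=0.6):
    # A candidate telop-appearance frame differs strongly from the previous
    # frame but keeps its edge positions in the following frame.
    return (histogram_difference(prev_frame, frame) > hist_thresh
            and edge_agreement(frame, next_frame) > edge_thresh)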


International Conference on Consumer Electronics | 2002

A smart TV viewing and Web access interface based on video indexing techniques

Hidetaka Kuwano; Yukinobu Taniguchi; Kenichi Minami; Masashi Morimoto; Haruhiko Kojima

We propose JoyTV, an advanced TV viewing PC software application. It automatically detects scene changes and music, and recognizes superimposed texts in TV broadcasts. It also links TV contents and Web sites using image features. JoyTV provides users with a smart TV viewing interface based on these video indexes.


Storage and Retrieval for Image and Video Databases | 1998

Content-based live video retrieval by telop character recognition: TV on demand

Minoru Takahata; Hidetaka Kuwano; Shoji Kurakake; Chikashi Matsuda; Kazutoshi Nishimura

We have developed a TV-on-demand system, which provides playback of a television program after a period ranging from a few seconds to one week after broadcast, and have conducted usage trials in cooperation with a television station in Nagano Prefecture, Japan. This system was realized through the development of several technologies, such as automatic updating of stored television programs and content retrieval by telop characters. Users in the trials can begin playback of a television program immediately after its broadcast has begun. The purpose of the trials was to evaluate the system's usability in applications such as content retrieval, selective viewing of commercials, and customer service at the television station. This paper presents the applied technologies and some experimental results, and also discusses a new direction for information retrieval systems based on the evaluation of the usage trials.


Archive | 2001

Scheme for extractions and recognitions of telop characters from video data

Hidetaka Kuwano; Hiroyuki Arai; Shoji Kurakake; Kenji Ogura; Toshiaki Sugimura; Minoru Mori; Minoru Takahata


Archive | 2006

Speech processing method and apparatus and program therefor

Kota Hidaka; Shinya Nakajima; Osamu Mizuno; Hidetaka Kuwano; Haruhiko Kojima


Archive | 2006

Speech processing method and apparatus for deciding emphasized portions of speech, and program therefor

Kota Hidaka; Shinya Nakajima; Osamu Mizuno; Hidetaka Kuwano; Haruhiko Kojima


Technical report of IEICE. PRMU | 1996

Telop Detection Method for Content-Based Video Data Retrieval

Hidetaka Kuwano; Shoji Kurakake; Kazumi Odaka


Technical report of IEICE. PRMU | 2006

Automatic metadata generation by applying heuristic rules to the results of media analysis and its use to digest video distribution service

Hidetaka Kuwano; Tomokazu Yamada; Katsuhiko Kawazoe

Collaboration


Dive into Hidetaka Kuwano's collaborations.

Top Co-Authors

Haruhiko Kojima (Nippon Telegraph and Telephone)

Yukinobu Taniguchi (Tokyo University of Science)

Kazutoshi Nishimura (Nippon Telegraph and Telephone)

Kenichi Minami (Nippon Telegraph and Telephone)