
Publication


Featured research published by Yin-Tzu Lin.


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Semantic Analysis for Automatic Event Recognition and Segmentation of Wedding Ceremony Videos

Wen-Huang Cheng; Yung-Yu Chuang; Yin-Tzu Lin; Chi-Chang Hsieh; Shao-Yen Fang; Bing-Yu Chen; Ja-Ling Wu

A wedding is one of the most important ceremonies in our lives; it symbolizes the birth of a new family. In this paper, we present a system for automatically segmenting a wedding ceremony video into a sequence of recognizable wedding events, e.g., the couple's wedding kiss. Our goal is to develop an automatic tool that helps users efficiently organize, search, and retrieve their treasured wedding memories. Furthermore, the obtained event descriptions could benefit and complement current research in semantic video understanding. Based on knowledge of wedding customs, a set of audiovisual features relating to the wedding contexts of speech/music types, applause activities, picture-taking activities, and leading roles is exploited to build statistical models for each wedding event. Thirteen wedding events are then recognized by a hidden Markov model, which takes into account both the fitness of the observed features and the temporal rationality of the event ordering to improve segmentation accuracy. We conducted experiments on a collection of wedding videos, and the promising results demonstrate the effectiveness of our approach. Comparisons with conditional random fields show that the proposed approach is more effective in this application domain.
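
To make the decoding step concrete, here is a minimal Viterbi sketch in Python of the kind of inference such an HMM performs: it combines per-shot observation likelihoods with an event-ordering prior. The event labels, probabilities, and feature scores below are invented for illustration and are not taken from the paper.

```python
import numpy as np

events = ["procession", "ring_exchange", "kiss"]  # hypothetical event labels

# Temporal-ordering prior: log P(next event | current event), illustrative only
log_trans = np.log(np.array([[0.70, 0.25, 0.05],
                             [0.10, 0.60, 0.30],
                             [0.05, 0.15, 0.80]]))

# Fitness of observed audiovisual features: log P(features | event) per shot
log_obs = np.log(np.array([[0.8, 0.1, 0.1],
                           [0.2, 0.7, 0.1],
                           [0.1, 0.3, 0.6]]))

n_shots, n_events = log_obs.shape
delta = np.empty((n_shots, n_events))           # best log-score ending in each event
back = np.zeros((n_shots, n_events), dtype=int)
delta[0] = np.log(1.0 / n_events) + log_obs[0]  # uniform start prior
for t in range(1, n_shots):
    scores = delta[t - 1][:, None] + log_trans  # scores[i, j]: prev event i -> current event j
    back[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_obs[t]

# Backtrack the most likely event sequence
path = [int(delta[-1].argmax())]
for t in range(n_shots - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
print([events[i] for i in reversed(path)])      # most likely event per shot
```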


Multimedia Information Retrieval | 2007

Semantic-event based analysis and segmentation of wedding ceremony videos

Wen-Huang Cheng; Yung-Yu Chuang; Bing-Yu Chen; Ja-Ling Wu; Shao-Yen Fang; Yin-Tzu Lin; Chi-Chang Hsieh; Chen-Ming Pan; Wei-Ta Chu; Min-Chun Tien

A wedding is one of the most important ceremonies in our lives; it symbolizes the birth of a new family. In this paper, we present a system for automatically segmenting a wedding ceremony video into a sequence of recognized wedding events, e.g., the couple's wedding kiss. Our goal is to develop an automatic tool for users to efficiently organize, search, and retrieve their treasured wedding memories. Furthermore, the event descriptions could benefit and complement current research in semantic video understanding. Technically, three kinds of event features, i.e., a speech/music discriminator, a flashlight detector, and a bride indicator, are exploited to build statistical models for each wedding event. Events are then recognized by a hidden Markov model, which takes into account both the fitness of the observed features and the temporal rationality of the event ordering to improve segmentation accuracy. We conducted experiments on a rich set of wedding videos, and the results demonstrate the effectiveness of our approach.


ACM Multimedia | 2009

Sports wizard: sports video browsing based on semantic concepts and game structure

Ming-Chun Tien; Yin-Tzu Lin; Ja-Ling Wu

A convenient video browsing system for popular sports such as baseball, tennis, and billiards is proposed in this work. A sports video is automatically segmented into video clips and annotated with semantic concepts in terms of event type. For sports with a specific game structure, e.g., baseball and tennis, we further analyze the entire video of the match with the aid of caption information, webcast information, and the domain knowledge of the corresponding sport. Each segmented video clip with semantic annotation is then mapped to a certain part of the game structure. Finally, the proposed system, named Sports Wizard, provides a favorable way for the user to browse sports videos based on semantic concepts or game structure.
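
As a rough illustration of the clip-to-structure mapping described above, the toy sketch below assigns annotated clips to baseball half-innings whose boundaries would, in the paper's setting, be inferred from captions and webcast text. All field names and numbers here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    start: float          # seconds into the broadcast
    end: float
    event: str            # semantic annotation, e.g. "home_run"

@dataclass
class HalfInning:
    inning: int
    half: str             # "top" or "bottom"
    start: float          # boundary inferred from captions/webcast text
    end: float
    clips: list

def map_clips(clips, halves):
    """Assign each annotated clip to the half-inning containing its midpoint."""
    for c in clips:
        mid = (c.start + c.end) / 2
        for h in halves:
            if h.start <= mid < h.end:
                h.clips.append(c)
                break
    return halves

halves = [HalfInning(1, "top", 0, 600, []), HalfInning(1, "bottom", 600, 1200, [])]
clips = [Clip(80, 95, "strikeout"), Clip(700, 730, "home_run")]
for h in map_clips(clips, halves):
    print(h.inning, h.half, [c.event for c in h.clips])
```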


Conference on Multimedia Modeling | 2014

Semantic Based Background Music Recommendation for Home Videos

Yin-Tzu Lin; Tsung-Hung Tsai; Min-Chun Hu; Wen-Huang Cheng; Ja-Ling Wu

In this paper, we propose a new background music recommendation scheme for home videos, along with two new features that describe the short-term motion/tempo distribution in visual/aural content. Unlike previous work that merely matched the visual and aural content perceptually, we incorporate textual semantics and content semantics when determining the matching degree between a video and a song. The key idea is that the recommended music should contain semantics related to those in the input video, and that the rhythm of the music and the visual motion of the video should be sufficiently harmonious. Accordingly, a few user-given tags and automatically annotated tags are compared against the lyrics of the songs to select candidate music. Then, we use the proposed motion-direction histogram (MDH) and pitch tempo pattern (PTP) to perform a second-stage selection. The user's music-genre preference is also taken into account as an initial filtering mechanism. A preliminary user evaluation shows that the proposed scheme is promising.
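
The exact definitions of MDH and PTP are in the paper; as one plausible reading of a motion-direction histogram, the sketch below bins dense optical-flow directions per frame pair using OpenCV. The bin count, magnitude threshold, and file name are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def motion_direction_histogram(prev_gray, curr_gray, n_bins=8):
    """Magnitude-weighted histogram of dense optical-flow directions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # ang in [0, 2*pi)
    moving = mag > 1.0                                      # ignore near-static pixels
    hist, _ = np.histogram(ang[moving], bins=n_bins,
                           range=(0, 2 * np.pi),
                           weights=mag[moving])
    total = hist.sum()
    return hist / total if total > 0 else hist

cap = cv2.VideoCapture("home_video.mp4")                    # hypothetical input file
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(motion_direction_histogram(prev, curr))
    prev = curr
```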


ACM Multimedia | 2014

Event Detection in Broadcasting Video for Halfpipe Sports

Hao-Kai Wen; Wei-Che Chang; Chia-Hu Chang; Yin-Tzu Lin; Ja-Ling Wu

In this work, a low-cost and efficient system is proposed to automatically analyze halfpipe (HP) sports videos. In addition to court color-ratio information, we locate the player region using salient object detection mechanisms to address the challenge of motion-blurred scenes in HP videos. Moreover, a novel and efficient method for detecting spin events is proposed, based on the motion vectors natively present in compressed video. Experimental results show that the proposed system is effective at recognizing the hard-to-detect spin events in HP videos.
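
A hypothetical sketch of how spin might be detected from compressed-domain motion vectors: if the vectors inside the player region circulate consistently around the region's center, a rotational (curl-like) component dominates. The score below is illustrative and is not the paper's method.

```python
import numpy as np

def spin_score(positions, vectors, center):
    """positions, vectors: (N, 2) arrays of block centers and motion vectors.
    Returns a value near 1.0 when the vectors circulate consistently
    around center, and near 0.0 when their rotation directions cancel."""
    r = positions - center
    cross = r[:, 0] * vectors[:, 1] - r[:, 1] * vectors[:, 0]  # z of r x v
    return np.abs(cross.mean()) / (np.abs(cross).mean() + 1e-9)

# Synthetic check: vectors tangent to circles around the origin (pure spin)
pos = np.random.randn(64, 2)
vec = np.stack([-pos[:, 1], pos[:, 0]], axis=1)  # radii rotated by 90 degrees
print(spin_score(pos, vec, np.zeros(2)))         # near 1.0 -> likely a spin
```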


International Conference on Computer Graphics and Interactive Techniques | 2009

A comparison of three methods of face recognition for home photos

Che-Hua Yeh; Pei-Ruu Shih; Yin-Tzu Lin; Kuan-Ting Liu; Huang-Ming Chang; Ming Ouhyoung

This poster presents experimental results for three face recognition methods: Support Vector Machine (SVM), Local Binary Pattern (LBP)-based, and Sparse Representation-based Classification (SRC). We show experimental results on the AR face database and on home photos. The experiments show that all three algorithms achieve over an 85% recognition rate on the AR database. However, the recognition rate drops sharply on home photos. The SVM- and SRC-based methods face the challenge of selecting a training model, while the LBP-based method faces the challenge of merging overly scattered clusters. Our goal is to improve the accuracy and efficiency of the three methods, especially on home photos.
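
As a sketch of one of the compared pipelines, the snippet below builds uniform-LBP histograms and matches a probe face to a gallery by nearest neighbor. The poster's actual parameters and classifier setups are not specified here, so these choices (P=8, R=1, L1 distance) are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP codes binned into a normalized histogram (P + 2 bins)."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def recognize(probe_face, gallery):
    """gallery: list of (identity, histogram) pairs; return closest identity."""
    h = lbp_histogram(probe_face)
    _, identity = min((np.abs(h - g).sum(), name) for name, g in gallery)
    return identity
```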


IEEE MultiMedia | 2015

Bridging Music Using Sound-Effect Insertion

Yin-Tzu Lin; Chuan-Lung Lee; Jyh-Shing Roger Jang; Ja-Ling Wu

In this article, the authors offer a general overview of audio music concatenation, then discuss how to connect not-so-coherent music clips in response to the rising awareness of user preference in music recomposition. In particular, they introduce sound-effect insertion into the proposed bridging process to make the transition natural and euphonious. To systematically verify the feasibility of the proposed music concatenation methods, they conducted specifically designed experiments to collect subjective opinions while reducing the cognitive load on the participants. The results indicate that using suitable sound effects greatly enhances the listening experience across various subjects.
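
To illustrate the general idea (not the authors' tuned procedure), here is a minimal bridging sketch: crossfade the two clips and overlay a short sound effect centered on the seam. The sample rate, fade length, and gain are assumed values, and the inputs are assumed to be mono float arrays.

```python
import numpy as np

def bridge(clip_a, clip_b, effect, sr=44100, fade_s=2.0):
    """Crossfade clip_a into clip_b and overlay an effect on the transition."""
    n = int(sr * fade_s)
    fade_out = np.linspace(1.0, 0.0, n)
    cross = clip_a[-n:] * fade_out + clip_b[:n] * (1.0 - fade_out)
    pad = (n - len(effect)) // 2
    if pad >= 0:                                  # center the effect on the seam
        cross[pad:pad + len(effect)] += 0.5 * effect
    return np.concatenate([clip_a[:-n], cross, clip_b[n:]])
```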


acm multimedia | 2014

MSVA: Musical Street View Animator: An Effective and Efficient Way to Enjoy the Street Views of Your Journey

Yin-Tzu Lin; Po-Nien Chen; Chia-Hu Chang; Ja-Ling Wu

Google Maps with Street View (GSV) provides ways to explore the world, but it lacks efficient ways to present a journey. Hyperlapse provides another way to quickly glimpse the street views along a route; however, its viewing experience is also uncomfortable and can become tedious when the route is long. In this paper, we provide an efficient and enjoyable way to present the street-view sequence of a long journey. The proposed approach produces a street-view journey video accompanied by locally listened-to music. For the movement between locations, we use speed-control techniques from animation production to improve the viewing experience. User evaluation results show that the proposed method increases users' satisfaction in viewing street-view sequences.
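
One common speed-control technique in animation is ease-in/ease-out; the tiny sketch below shows how frame positions along a route segment could be spaced that way. This is purely illustrative and not necessarily the curve used in the paper.

```python
def smoothstep(t):
    """Ease-in/ease-out: slow near the endpoints, fast in the middle."""
    return t * t * (3 - 2 * t)

n_frames = 30
positions = [smoothstep(i / (n_frames - 1)) for i in range(n_frames)]
# positions[i] is the normalized distance along the segment at frame i
```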


Conference on Multimedia Modeling | 2012

U-drumwave: an interactive performance system for drumming

Yin-Tzu Lin; Shuen-Huei Guan; Yuan-Chang Yao; Wen-Huang Cheng; Ja-Ling Wu

In this paper, we share our experience of applying modern multimedia technologies to a traditional performing art in the drumming performance project U-Drumwave. By deploying an interactive system on the drumming stage, the audience sees augmented visual objects moving on the stage in accord with the performers' drumming rhythms. The creation and display of the visual objects are integrated with the concept of a story intensity curve in order to vary the perceptual degree of tension conveyed to the audience during the performance.


International Conference on Music Information Retrieval (ISMIR) | 2009

Music Paste: Concatenating Music Clips based on Chroma and Rhythm Features

Heng-Yi Lin; Yin-Tzu Lin; Ming-Chun Tien; Ja-Ling Wu

Collaboration


Dive into Yin-Tzu Lin's collaboration.

Top Co-Authors

Ja-Ling Wu, National Taiwan University
Wen-Huang Cheng, Center for Information Technology
Chuan-Lung Lee, National Taiwan University
Bing-Yu Chen, National Taiwan University
Chi-Chang Hsieh, National Taiwan University
Chia-Hu Chang, National Taiwan University
I-Ting Liu, National Taiwan University
Ming-Chun Tien, National Taiwan University
Shao-Yen Fang, National Taiwan University