Jianyu Fan
Simon Fraser University
Publication
Featured research published by Jianyu Fan.
advances in computer entertainment technology | 2016
Jianyu Fan; William Li; Jim Bizzocchi; Justine Bizzocchi; Philippe Pasquier
A music video (MV) is a videotaped performance of a recorded popular song, usually accompanied by dancing and visual images. In this paper, we outline the design of a generative music video system that automatically generates an audio-video mashup for a given target audio track. The system segments the target song based on beat detection. Next, it selects video segments using audio-similarity analysis and color-heuristic selection methods. These video segments are then truncated to match the lengths of the audio segments and concatenated into the final music video. An evaluation of our system showed that users are receptive to this novel presentation of music videos and are interested in future developments.
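The segment-selection step above can be sketched as a nearest-neighbor match between feature vectors. This is a hypothetical simplification, not the authors' implementation: `match_segments` and the plain cosine-similarity criterion stand in for the paper's audio-similarity analysis and color heuristics.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_segments(audio_feats, video_feats):
    """For each audio segment's feature vector, return the index of the
    most similar candidate video segment. The matched video segments
    would then be truncated to the audio segment lengths and concatenated."""
    return [max(range(len(video_feats)),
                key=lambda j: cosine(a, video_feats[j]))
            for a in audio_feats]
```

With toy 2-D features, an audio segment resembling `[1, 0]` is paired with the video segment whose features point the same way, illustrating the matching idea independent of which features are actually extracted.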
audio mostly conference | 2015
Miles Thorogood; Jianyu Fan; Philippe Pasquier
Segmentation and classification are important but time-consuming parts of using soundscape recordings in sound design and research. Background and foreground are general classes referring to a signal's perceptual attributes, and are used as criteria by sound designers when segmenting sound files. We establish the background/foreground classification task within a musicological and production-related context, and present a method for automatic segmentation of soundscape recordings based on this task. We created a soundscape corpus with ground-truth data obtained from a human perception study. An analysis of the corpus showed an average agreement for each class of: background 92.5%, foreground 80.8%, and background with foreground 75.3%. We then used the corpus to train a Support Vector Machine classifier. An analysis of the classifier demonstrated results similar to average human performance (background 96.7%, foreground 80%, and background with foreground 86.7%). Finally, we report an experiment evaluating the classifier with different analysis window sizes, which demonstrates that smaller window sizes diminish the classifier's performance.
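The window-then-classify pipeline can be illustrated with a minimal stand-in: one scalar feature per analysis window (mean absolute amplitude, a hypothetical proxy for the spectral features such systems typically use) and a nearest-centroid rule in place of the paper's Support Vector Machine. None of these helpers come from the paper; they only show the shape of the pipeline.

```python
def window_features(signal, win):
    """Mean absolute amplitude per non-overlapping analysis window."""
    return [sum(abs(x) for x in signal[i:i + win]) / win
            for i in range(0, len(signal) - win + 1, win)]

def train_centroids(features, labels):
    """Per-class mean feature; a toy substitute for training an SVM."""
    groups = {}
    for f, lab in zip(features, labels):
        groups.setdefault(lab, []).append(f)
    return {lab: sum(v) / len(v) for lab, v in groups.items()}

def classify(feature, centroids):
    """Assign the class whose centroid is nearest to the feature."""
    return min(centroids, key=lambda lab: abs(centroids[lab] - feature))
```

The paper's window-size experiment corresponds to varying `win`: shorter windows yield noisier per-window features, which is consistent with the diminished performance the authors report.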
human factors in computing systems | 2018
Min Fan; Jianyu Fan; Sheng Jin; Alissa Nicole Antle; Philippe Pasquier
Children's emotional skills are important for their success. However, children with Autism Spectrum Disorders have difficulties understanding social contexts and recognizing and expressing facial expressions. In this paper, we present the design of EmoStory, a game-based interactive narrative system that supports children's emotional development. The system uses animation and emotional sounds to teach children six basic emotions and facial expressions in various social contexts, and provides multi-level games for children to systematically practice the learnt skills. By using a facial expression recognition technique and animated visual cues for important facial movement features, the system helps children practice facial expressions and provides them with explicit guidance during the tasks.
Proceedings of the 5th International Conference on Movement and Computing | 2018
William Li; Omid Alemi; Jianyu Fan; Philippe Pasquier
Affect estimation consists of building a predictive model of the perceived affect of a given stimulus. In this study, we look at perceived affect in full-body motion capture data of various movements. There are two parts to this study. In the first part, we conduct ground-truthing of affective labels for motion capture sequences by hosting a survey on a crowdsourcing platform, where participants from all over the world ranked the relative valence and arousal of one motion capture sequence against another. In the second part, we present our experiments with training a machine learning model for pairwise ranking of motion capture data using RankNet. Our analysis shows reasonable inter-rater agreement between the participants. The evaluation of the RankNet demonstrates that it can learn to rank the motion capture data, with higher confidence in the arousal dimension than in the valence dimension.
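The pairwise-ranking objective used here is the standard RankNet formulation (Burges et al.): a model scores each item, the difference of scores is passed through a sigmoid to give the probability that one item outranks the other, and training minimizes cross-entropy against the crowdsourced preferences. The sketch below shows that objective only; it is not the authors' model or feature pipeline.

```python
import math

def ranknet_prob(s_i, s_j):
    """RankNet's probability that item i outranks item j,
    given the model's scores s_i and s_j."""
    return 1.0 / (1.0 + math.exp(-(s_i - s_j)))

def ranknet_loss(s_i, s_j, p_target):
    """Cross-entropy between the target preference (e.g. 1.0 when
    annotators ranked i above j) and the predicted probability."""
    p = ranknet_prob(s_i, s_j)
    return -(p_target * math.log(p) + (1 - p_target) * math.log(1 - p))
```

When the scores agree with the annotators' preference by a wide margin, the loss is near zero; a reversed ordering is penalized heavily, which is what drives the scoring model toward the crowdsourced ranking.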
international conference of design, user experience, and usability | 2017
Jianyu Fan; Philippe Pasquier; Luciane Maria Fadel; Jim Bizzocchi
Video editors face the challenge of montage editing when dealing with a massive number of video shots. The major problem is selecting the features to use for building repetition patterns in montage editing; testing various features for repetition and watching videos one by one is time-consuming. A visualization tool for video features could assist montage editing, but no such tool is currently available. We present the design of ViVid, an interactive system for visualizing video features for particular target videos. ViVid is a generic tool for computer-assisted montage and for the design of generative video art, which can take advantage of video feature information for rendering the piece. The system computes and visualizes color, motion, and texture feature data. Instead of visualizing the original feature data frame by frame, we re-arranged the data and used both statistics of the video feature data and frame-level data to represent the video. The system uses dashboards to visualize multi-dimensional data in multiple views. We used the Seasons project as a case study for testing the tool. Our feasibility study shows that users are satisfied with the visualization tool.
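The rearrangement described above, collapsing a per-frame feature track into summary statistics shown alongside the frame-level data, can be sketched as follows. The `summarize` helper is hypothetical, illustrating the kind of statistics a dashboard view might plot, not ViVid's actual computation.

```python
import statistics

def summarize(frames):
    """Collapse a per-frame feature track (e.g. per-frame motion
    magnitude) into summary statistics for a dashboard view."""
    return {
        "mean": statistics.fmean(frames),
        "stdev": statistics.pstdev(frames),
        "min": min(frames),
        "max": max(frames),
    }
```

A dashboard could then show one such summary per feature (color, motion, texture) next to the full frame-level curve, which is the two-level representation the abstract describes.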
Journal of The Audio Engineering Society | 2016
Jianyu Fan; Miles Thorogood; Philippe Pasquier
audio mostly conference | 2015
Jianyu Fan; Miles Thorogood; Bernhard E. Riecke; Philippe Pasquier
international symposium/conference on music information retrieval | 2017
Jianyu Fan; Kıvanç Tatar; Miles Thorogood; Philippe Pasquier
affective computing and intelligent interaction | 2017
Jianyu Fan; Miles Thorogood; Philippe Pasquier
Journal of The Audio Engineering Society | 2016
Miles Thorogood; Jianyu Fan; Philippe Pasquier