Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dianting Liu is active.

Publication


Featured research published by Dianting Liu.


International Journal of Multimedia Data Engineering and Management | 2015

Spatio-Temporal Analysis for Human Action Detection and Recognition in Uncontrolled Environments

Dianting Liu; Yilin Yan; Mei Ling Shyu; Guiru Zhao; Min Chen

Understanding the semantic meaning of human actions captured in unconstrained environments has broad applications in fields ranging from patient monitoring and human-computer interaction to surveillance systems. However, while great progress has been achieved on automatic human action detection and recognition in videos captured in controlled/constrained environments, most existing approaches perform unsatisfactorily on videos with uncontrolled/unconstrained conditions, e.g., significant camera motion, background clutter, scaling, and lighting changes. To address this issue, the authors propose a robust human action detection and recognition framework that works effectively on videos taken in either controlled or uncontrolled environments. Specifically, the authors integrate the optical flow field and the Harris3D corner detector to generate a new spatio-temporal information representation for each video sequence, from which a general Gaussian mixture model (GMM) is learned. All the mean vectors of the Gaussian components in the learned GMM are concatenated to create the GMM supervector for video action recognition. They build a boosting classifier based on a set of sparse representation classifiers and Hamming distance classifiers to improve the accuracy of action recognition. The experimental results on two widely used public data sets, KTH and UCF YouTube Action, show that the proposed framework outperforms other state-of-the-art approaches on both action detection and recognition.
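The supervector construction described above — fit a mixture model to a video's local descriptors, then concatenate the component mean vectors into one fixed-length representation — can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: a plain k-means fit approximates the GMM component means, and the descriptors are toy 2-D points.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; the cluster centers stand in for GMM component means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each descriptor to its nearest center
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # recompute centers; keep the old center if a cluster went empty
        centers = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def supervector(points, k=3):
    # Sort the centers for a stable component order, then concatenate the
    # mean vectors into one fixed-length representation of the video.
    return [x for center in sorted(kmeans(points, k)) for x in center]
```

With k components and d-dimensional descriptors, the supervector has length k*d, regardless of how many descriptors the video produced — which is what makes it usable as a fixed-size input to a classifier.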


International Symposium on Multimedia | 2012

Effective Moving Object Detection and Retrieval via Integrating Spatial-Temporal Multimedia Information

Dianting Liu; Mei Ling Shyu

In the area of multimedia semantic analysis and video retrieval, automatic object detection techniques play an important role. Without the analysis of object-level features, it is hard to achieve high performance in semantic retrieval. As a branch of object detection research, moving object detection has also become a hot research field and has made considerable progress recently. This paper proposes a moving object detection and retrieval model that integrates the spatial and temporal information in video sequences and uses the proposed integral density method (adopted from the idea of integral images) to quickly identify motion regions in an unsupervised way. First, key information locations on video frames are identified as the maxima and minima of the Difference of Gaussian (DoG) function. In parallel, a motion map of adjacent frames is obtained from the differences between the outcomes of the Simultaneous Partition and Class Parameter Estimation (SPCPE) framework. The motion map filters key information locations into key motion locations (KMLs), where the existence of moving objects is implied. Besides showing the motion zones, the motion map also indicates the motion direction, which guides the proposed integral density approach to quickly and accurately locate the motion regions. The detection results are not only illustrated visually, but also verified by promising experimental results, which show that concept retrieval performance can be improved by integrating global and local visual information.
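The integral-density idea borrowed from integral images can be sketched as follows: precompute a summed-area table so that any rectangular sum costs O(1), then scan fixed-size windows for the densest motion region. This is a toy illustration on a small binary motion map, not the paper's code.

```python
def integral_image(grid):
    """Summed-area table: ii[y][x] = sum of grid[0:y][0:x]."""
    h, w = len(grid), len(grid[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = grid[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def region_sum(ii, top, left, h, w):
    # O(1) sum of any h-by-w rectangle via four table lookups
    return ii[top + h][left + w] - ii[top][left + w] - ii[top + h][left] + ii[top][left]

def densest_window(grid, wh, ww):
    """Return (sum, top, left) of the wh-by-ww window with the most motion."""
    ii = integral_image(grid)
    H, W = len(grid), len(grid[0])
    return max((region_sum(ii, y, x, wh, ww), y, x)
               for y in range(H - wh + 1) for x in range(W - ww + 1))
```

Scanning all windows this way costs O(HW) lookups after the one-time table build, instead of O(HW * wh * ww) if each window sum were recomputed from scratch.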


International Symposium on Multimedia | 2011

Moving Object Detection under Object Occlusion Situations in Video Sequences

Dianting Liu; Mei Ling Shyu; Qiusha Zhu; Shu-Ching Chen

It is a great challenge to detect an object that is overlapped or occluded by other objects in images. For moving objects in a video sequence, their movements provide extra spatio-temporal information across successive frames, which helps object detection, especially for occluded objects. This paper proposes a moving object detection approach for occluded objects in a video sequence with the assistance of the SPCPE (Simultaneous Partition and Class Parameter Estimation) unsupervised video segmentation method. Based on the preliminary foreground estimation result from SPCPE and the object detection information from the previous frame, an n-steps search (NSS) method is utilized to identify the locations of the moving objects, followed by a size-adjustment method that adjusts the bounding boxes of the objects. Several experimental results show that our proposed approach achieves good detection performance under object occlusion situations in serial frames of a video sequence.


Information Reuse and Integration | 2013

Spatial-temporal motion information integration for action detection and recognition in non-static background

Dianting Liu; Mei Ling Shyu; Guiru Zhao

Various motion detection methods have been proposed in the past decade, but there have been few attempts to investigate the advantages and disadvantages of different detection mechanisms so that they can complement each other to achieve better performance. Toward such a demand, this paper proposes a human action detection and recognition framework to bridge the semantic gap between low-level pixel intensity changes and the high-level understanding of the meaning of an action. To achieve a robust estimation of the region of action amid the complexities of an uncontrolled background, we propose combining the optical flow field and the Harris3D corner detector to obtain a new spatial-temporal estimation in the video sequences. The action detection method, which considers the integrated motion information, works well with dynamic backgrounds and camera motion, and demonstrates the advantage of integrating multiple spatial-temporal cues. Then the local features (SIFT and STIP) extracted from the estimated region of action are used to learn a Universal Background Model (UBM) for the action recognition task. The experimental results on the KTH and UCF YouTube Action (UCF11) data sets show that the proposed action detection and recognition framework not only better estimates the region of action but also achieves better recognition accuracy compared with peer work.
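The cue-integration step can be illustrated with a toy sketch: normalize the optical-flow magnitude map and the corner response map, require both cues to agree at a pixel (elementwise product above a threshold), and take the bounding box of the agreeing pixels as the estimated region of action. The maps and threshold here are hypothetical; the paper's actual estimation is more involved.

```python
def normalize(grid):
    """Rescale a 2-D map to [0, 1]."""
    lo = min(min(r) for r in grid)
    hi = max(max(r) for r in grid)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in r] for r in grid]

def action_region(flow_mag, corner_resp, thresh=0.25):
    """Bounding box (top, left, bottom, right) where both cues agree."""
    f, c = normalize(flow_mag), normalize(corner_resp)
    ys, xs = [], []
    for y, (rf, rc) in enumerate(zip(f, c)):
        for x, (a, b) in enumerate(zip(rf, rc)):
            if a * b >= thresh:   # product: both cues must be strong here
                ys.append(y)
                xs.append(x)
    if not ys:
        return None
    return (min(ys), min(xs), max(ys), max(xs))
```

Using the product rather than the sum means a strong response in only one cue (e.g., background clutter firing the corner detector alone) does not contribute to the region estimate.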


Information Reuse and Integration | 2010

Integration of global and local information in videos for key frame extraction

Dianting Liu; Mei Ling Shyu; Chao Chen; Shu-Ching Chen

Key frame extraction methods aim to obtain a set of frames that can efficiently represent and summarize video content and be reused in many video retrieval-related applications. An effective set of key frames, viewed as a high-quality summary of the video, should include the major objects and events of the video and contain little redundancy or overlapping content. In this paper, a new key frame extraction method is presented, which is not only based on the traditional idea of clustering in the feature extraction phase but also effectively reduces redundant frames by integrating local and global information in videos. Experimental results on the TRECVid 2007 test video dataset demonstrate the effectiveness of our proposed key frame extraction method in terms of compression rate and retrieval precision.
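The clustering-based selection can be sketched as follows: cluster the frames' feature vectors (e.g., color histograms), then keep one representative per cluster — the medoid, i.e., the member closest to all other members. A toy illustration with made-up histograms and precomputed clusters (the paper's redundancy-reduction step is omitted):

```python
def l1(a, b):
    """L1 distance between two frame feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def medoid_key_frames(frame_hists, clusters):
    # One key frame per cluster: the member minimizing its total L1
    # distance to the other members (the cluster medoid).
    keys = []
    for members in clusters:
        keys.append(min(members,
                        key=lambda i: sum(l1(frame_hists[i], frame_hists[j])
                                          for j in members)))
    return sorted(keys)
```

Choosing the medoid rather than a synthetic cluster mean guarantees the summary consists of frames that actually occur in the video.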


Journal of Information & Knowledge Management | 2011

Within and Between Shot Information Utilisation in Video Key Frame Extraction

Dianting Liu; Mei Ling Shyu; Chao Chen; Shu-Ching Chen

As a consequence of the popularity of home video recorders and the surge of Web 2.0, increasing amounts of video have made the management and integration of the information in videos an urgent and important issue in video retrieval. Key frames, as a high-quality summary of videos, play an important role in video browsing, searching, categorisation, and indexing. An effective set of key frames should include the major objects and events of the video sequence and contain minimal content redundancy. In this paper, an innovative key frame extraction method is proposed to select representative key frames for a video. By analysing the differences between frames and utilising a clustering technique, a set of key frame candidates (KFCs) is first selected at the shot level, and then the information within a video shot and between video shots is used to filter the candidate set and generate the final set of key frames. Experimental results on the TRECVID 2007 video dataset demonstrate the effectiveness of our proposed key frame extraction method in terms of the percentage of extracted key frames and the retrieval precision.
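The between-shot filtering step — dropping a shot-level candidate when it is too similar to a key frame already kept from an earlier shot — can be sketched as a greedy pass over the candidates. The histograms and threshold below are toy values, not the paper's.

```python
def l1(a, b):
    """L1 distance between two frame histograms."""
    return sum(abs(x - y) for x, y in zip(a, b))

def filter_candidates(candidates, hists, min_dist):
    # Candidates arrive in temporal order, shot by shot; keep one only if
    # it is at least min_dist away from every key frame kept so far.
    kept = []
    for i in candidates:
        if all(l1(hists[i], hists[j]) >= min_dist for j in kept):
            kept.append(i)
    return kept
```

This removes cross-shot redundancy (e.g., the same anchor desk appearing in several shots) that purely shot-local selection cannot see.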


Signal Processing | 2016

Near-Duplicate Segments based news web video event mining

Chengde Zhang; Dianting Liu; Xiao Wu; Guiru Zhao; Mei Ling Shyu; Qiang Peng

News web videos uploaded by general users usually include many post-processing effects (editing, inserted logos, etc.), which introduce noise and affect similarity comparison for news web video event mining. In this paper, a framework based on the concept of Near-Duplicate Segments (NDSs), which effectively integrates spatial and temporal information, is proposed. After each video is divided into segments, segments from different videos that share similar visual content are clustered into groups. Each group is called an NDS, which reveals the latent content relations among videos. Spatial-temporal local features are extracted and used to represent each video segment, which effectively captures the main content of news web videos and omits noise such as the disturbance from video editing. Finally, the visual information is integrated with the textual information. The experiments demonstrate that our proposed framework is more effective than several existing methods, with a significant improvement.

Highlights:
An NDS representation that effectively integrates spatial and temporal information is proposed.
A framework based on the concept of NDSs is proposed.
The AARM method is proposed to enhance the robustness of terms in MCA.
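The grouping of segments into Near-Duplicate Segments can be sketched as a single greedy pass: a segment joins the first existing group whose exemplar it is similar enough to, and otherwise starts a new group. The feature vectors, similarity function, and threshold below are toy assumptions; the paper's clustering is more sophisticated.

```python
def similarity(a, b):
    # Toy similarity: 1 / (1 + L1 distance) between segment feature vectors.
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def group_near_duplicates(segments, thresh=0.5):
    """Greedily cluster segments; each returned group is one NDS."""
    groups = []  # groups[i][0] serves as that group's exemplar
    for seg in segments:
        for g in groups:
            if similarity(seg, g[0]) >= thresh:
                g.append(seg)
                break
        else:
            groups.append([seg])
    return groups
```

Segments from different videos landing in the same group is what exposes the latent content relation between those videos.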


IEEE International Conference on Semantic Computing | 2013

Evaluating E-Commerce System Security Using Fuzzy Multi-criterion Decision-Making

Wen Jiang; Zhenjian Li; Jia Jia; Dianting Liu

In the development of E-Commerce, security has always been the core and key issue. To assess E-Commerce security and efficiently handle uncertain information in the decision-making process, a new fuzzy multi-criteria decision-making (MCDM) method is presented based on fuzzy set theory (FST) and Dempster-Shafer evidence theory (DST). First, a hierarchical structure of E-Commerce security is established. Second, the ratings of the third-level criteria and the importance weights of the first-level criteria, given by the experts, are expressed as trapezoidal (or triangular) fuzzy numbers. The ratings are transformed into basic probability assignments (BPAs) based on an improved similarity measure of generalized fuzzy numbers that we propose, and the BPAs are combined from bottom to top using Dempster's combination rule. Finally, the importance weights are transformed into discounting coefficients, and the BPAs of the first-level criteria are fused using the discounting rule to obtain the final assessment decision. An illustrative example shows that the proposed method can be applied to security evaluation for a complex E-Commerce system and provides scientific and consensus evaluation results.
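Dempster's combination rule used in the bottom-up fusion works directly on BPAs: multiply the masses of every pair of focal elements, assign each product to the intersection of the pair, and renormalize by the non-conflicting mass. A minimal sketch over a toy two-hypothesis frame (the paper's discounting step is not shown):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs given as dicts: frozenset focal element -> mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass falling on the empty set
    norm = 1.0 - conflict         # renormalize by the non-conflicting mass
    return {s: w / norm for s, w in combined.items()}
```

For example, combining m1 = {A: 0.6, {A,B}: 0.4} with m2 = {A: 0.7, {A,B}: 0.3} yields 0.88 on A and 0.12 on {A,B}: agreement between the two sources concentrates mass on the more specific hypothesis.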


International Journal of Semantic Computing | 2013

Semantic Motion Concept Retrieval in Non-static Background Utilizing Spatial-Temporal Visual Information

Dianting Liu; Mei Ling Shyu


IEEE International Conference on Semantic Computing | 2013

Semantic Retrieval for Videos in Non-static Background Using Motion Saliency and Global Features

Dianting Liu; Mei Ling Shyu

Collaboration


Dive into Dianting Liu's collaborations.

Top Co-Authors

Shu-Ching Chen, Florida International University
Guiru Zhao, China Earthquake Networks Center
Fausto C. Fleites, Florida International University
Yimin Yang, Florida International University
Chengde Zhang, Southwest Jiaotong University
Qiang Peng, Southwest Jiaotong University