
Publication


Featured research published by Haiman Tian.


Web Information Systems Engineering | 2015

Correlation-Based Deep Learning for Multimedia Semantic Concept Detection

Hsin-Yu Ha; Yimin Yang; Samira Pouyanfar; Haiman Tian; Shu-Ching Chen

Nowadays, concept detection from multimedia data is considered an emerging topic due to its wide applicability in both academia and industry. However, there are some inevitable challenges, including the high volume and variety of multimedia data as well as its skewed distribution. To cope with these challenges, this paper proposes a novel framework that integrates two correlation-based methods, Feature-Correlation Maximum Spanning Tree (FC-MST) and Negative-based Sampling (NS), with a well-known deep learning algorithm called the Convolutional Neural Network (CNN). First, FC-MST is introduced to select the most relevant low-level features, which are extracted from multiple modalities, and to decide the input layer dimension of the CNN. Second, NS is adopted to improve batch sampling in the CNN. Using the NUS-WIDE image dataset as a web-based application, the experimental results demonstrate the effectiveness of the proposed framework for semantic concept detection compared to other well-known classifiers.
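The FC-MST idea, building a maximum spanning tree over a feature-correlation graph so that strongly correlated features end up adjacent, can be sketched as follows. This is a minimal illustration under made-up data, not the paper's implementation; the function name and the toy features are hypothetical.

```python
import numpy as np

def max_spanning_tree(corr):
    """Prim's algorithm on |correlation| edge weights; returns tree edges."""
    n = corr.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    w = abs(corr[i, j])
                    if best is None or w > best[0]:
                        best = (w, i, j)
        w, i, j = best
        edges.append((i, j, w))
        in_tree.add(j)
    return edges

# Toy data: 4 features on 100 samples; feature 3 nearly duplicates feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)
corr = np.corrcoef(X, rowvar=False)
tree = max_spanning_tree(corr)
print(tree)  # 3 edges; the strongest edge links features 0 and 3
```

Highly redundant features end up joined by a near-unit-weight edge, which a selection step could then prune.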


IEEE International Conference on Multimedia Big Data | 2017

A Video-Aided Semantic Analytics System for Disaster Information Integration

Haiman Tian; Shu-Ching Chen

We present a novel web-based system and a video-aided mobile application that allow emergency management personnel to access prompt and relevant disaster situation information. The system is able to semantically integrate text-based disaster situation reports with related disaster videos taken in the field. The system builds on the video concept detection model and automates the procedures of file deployment and data manipulation. In addition, through an intuitive and seamless Apple iPad application, users are able to interact with the system in various places and conditions and thus provide a more effective response. The system is demonstrated via its iPad application, which aims to provide relevant and practical information for a disaster situation of interest.


IEEE International Conference on Multimedia Big Data | 2017

MCA-NN: Multiple Correspondence Analysis Based Neural Network for Disaster Information Detection

Haiman Tian; Shu-Ching Chen

This paper proposes a semantic content analysis framework for reliable video event detection. In this work, we aim to improve concept detection results by feeding the results learned by individual shallow learning models into a generic model that digs out similarities in deeper layers. Compared to deep learning models, shallow learning models memorize rather than understand the features. The proposed framework tackles this issue in shallow learning by integrating the strengths of Multiple Correspondence Analysis (MCA) and a Multilayer Perceptron (MLP) neural network. The low-level features are taken as the initial inputs of the MCA-based models, which abstract higher-level feature values. These output values then interact in the neural network for better understanding, giving the framework the ability to weigh the evidence from each model. The framework reaches final video classification decisions by analyzing the per-frame decisions produced by the network outputs.
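The two-stage idea, shallow per-model scores refined by a small neural network, can be sketched in plain NumPy. This is a hedged toy that assumes synthetic shallow-model scores rather than real MCA outputs; the network size, learning rate, and data are all illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the scores of three shallow (MCA-style)
# models on 400 frames; the label depends on all three scores jointly,
# so no single shallow score is sufficient on its own.
n = 400
scores = rng.uniform(size=(n, 3))
y = (scores.sum(axis=1) > 1.5).astype(float)[:, None]

# One hidden layer, trained with plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    h = np.tanh(scores @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)                # fused detection score
    g_out = (p - y) / n                     # cross-entropy gradient w.r.t. logit
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= h.T @ g_out;    b2 -= g_out.sum(axis=0)
    W1 -= scores.T @ g_h; b1 -= g_h.sum(axis=0)

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"fused accuracy: {acc:.2f}")
```

The fused model recovers the joint decision rule that any single score column cannot express.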


Information Reuse and Integration | 2016

Domain Knowledge Assisted Data Processing for Florida Public Hurricane Loss Model (Invited Paper)

Yilin Yan; Samira Pouyanfar; Haiman Tian; Sheng Guan; Hsin-Yu Ha; Shu-Ching Chen; Mei-Ling Shyu; Shahid Hamid

Catastrophes have caused tremendous damage throughout human history and have triggered record-high post-disaster relief from governments. Research on catastrophe modeling can help estimate the effects of natural disasters such as hurricanes, floods, surges, and earthquakes. In every Atlantic hurricane season, the state of Florida in the United States has the potential to suffer economic and human losses from hurricanes. The Florida Public Hurricane Loss Model (FPHLM), funded by the Florida Office of Insurance Regulation, has assisted Florida and the residential insurance industry for more than a decade. How to process big data from historical hurricanes and insurance companies remains a challenging research topic for cat models. In this paper, the FPHLM's novel integrated domain-knowledge-assisted big data processing system is introduced, and its effectiveness in preventing data processing errors is presented.


IEEE Transactions on Multimedia | 2018

IF-MCA: Importance Factor-Based Multiple Correspondence Analysis for Multimedia Data Analytics

Yimin Yang; Samira Pouyanfar; Haiman Tian; Min Chen; Shu-Ching Chen; Mei-Ling Shyu

Multimedia concept detection is a challenging topic due to the well-known class imbalance issue, where the data instances are distributed unevenly across different classes. This problem becomes even more prominent when the minority class that contains an extremely small proportion of the data represents the concept of interest as has occurred in many real-world applications such as frauds in banking transactions and goal events in soccer videos. Traditional data mining approaches often have difficulty handling largely skewed data distributions. To address this issue, in this paper, an importance-factor (IF)-based multiple correspondence analysis (MCA) framework is proposed to deal with the imbalanced datasets. Specifically, a hierarchical information gain analysis method, which is inspired by the decision tree algorithm, is presented for critical feature selection and IF assignment. Then, the derived IF is incorporated with the MCA algorithm for effective concept detection and retrieval. The comparison results in video concept detection using the disaster dataset and the soccer dataset demonstrate the effectiveness of the proposed framework.
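The information-gain analysis used for critical feature selection can be illustrated on discrete features. This is a minimal sketch with made-up data, not the paper's hierarchical procedure; the function names and toy features are hypothetical.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG of one discrete feature: H(Y) - H(Y | feature)."""
    total = entropy(labels)
    cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        cond += mask.mean() * entropy(labels[mask])
    return total - cond

# Toy example: f0 determines the label, f1 is pure noise.
y  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
f0 = np.array([0, 0, 1, 1, 0, 0, 1, 1])  # perfectly informative
f1 = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # uninformative
gains = [information_gain(f, y) for f in (f0, f1)]
print(gains)  # ~[1.0, 0.0]
```

A feature's gain, normalized across the selected features, could then serve as its importance factor in the downstream MCA step.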


IEEE International Conference on Semantic Computing | 2015

Integrated execution framework for catastrophe modeling

Yimin Yang; Daniel Lopez; Haiman Tian; Samira Pouyanfar; Fausto C. Fleites; Shu-Ching Chen; Shahid Hamid

Home insurance is a critical issue in the state of Florida, considering that residential properties are exposed to hurricane risk each year. To assess hurricane risk and project insured losses, the Florida Public Hurricane Loss Model (FPHLM), funded by the state's insurance regulatory agency, was developed. The FPHLM is an open and public model that offers an integrated complex computing framework that can be described in two phases: execution and validation. In the execution phase, all major components of the FPHLM (i.e., data pre-processing, Wind Speed Correction (WSC), and Insurance Loss Model (ILM)) are seamlessly integrated and sequentially carried out following a coordination workflow, where each component is modeled as an execution element governed by the centralized data-transfer element. In the validation phase, semantic rules provided by domain experts for individual components are applied to verify the validity of the model output. This paper presents how the model efficiently incorporates the various components from multiple disciplines into an integrated execution framework to address the challenges that make the FPHLM unique.
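The execution/validation pattern described above, sequential components whose outputs are checked against expert-supplied rules before the next stage runs, can be sketched as follows. All component bodies, rules, and numbers here are placeholders, not the FPHLM's actual logic.

```python
# Hypothetical coordination workflow: each component's output must pass
# a domain-expert rule before being handed to the next component.

def preprocess(data):
    return [x for x in data if x is not None]       # drop bad records

def wind_speed_correction(data):
    return [x * 1.1 for x in data]                  # placeholder adjustment

def insurance_loss_model(data):
    return sum(data)                                # placeholder aggregation

# Validation rules a domain expert might supply per component.
rules = {
    "preprocess":            lambda out: len(out) > 0,
    "wind_speed_correction": lambda out: all(x >= 0 for x in out),
    "insurance_loss_model":  lambda out: out >= 0,
}

def run_pipeline(data):
    for name, step in [("preprocess", preprocess),
                       ("wind_speed_correction", wind_speed_correction),
                       ("insurance_loss_model", insurance_loss_model)]:
        data = step(data)
        if not rules[name](data):
            raise ValueError(f"validation failed after {name}")
    return data

loss = run_pipeline([10.0, None, 20.0])
print(loss)  # ~33.0
```

Keeping the rules in a separate table mirrors the paper's separation between the execution elements and the expert-supplied semantic checks.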


Information Reuse and Integration | 2017

FA-MCADF: Feature Affinity Based Multiple Correspondence Analysis and Decision Fusion Framework for Disaster Information Management

Haiman Tian; Shu-Ching Chen; Stuart Harvey Rubin; William K. Grefe

Multimedia semantic concept detection has been one of the major research topics in multimedia data analysis in recent years. Disaster information management needs the assistance of multimedia data analysis to better utilize the disaster-related information that is widely shared by people through the Internet. In this paper, a Feature Affinity based Multiple Correspondence Analysis and Decision Fusion (FA-MCADF) framework is proposed to extract useful semantics from a disaster dataset. By utilizing the selected features and their affinities/ranks in each of the feature groups, the proposed framework is able to improve the concept detection results. Moreover, the decision fusion scheme further improves the accuracy. The experimental results demonstrate the effectiveness of the proposed framework and show that fusing the decisions of the base classifiers enables the framework to outperform several existing approaches in the comparison.


World Wide Web | 2018

Multimodal deep learning based on multiple correspondence analysis for disaster management

Samira Pouyanfar; Yudong Tao; Haiman Tian; Shu-Ching Chen; Mei-Ling Shyu

The fast and explosive growth of digital data in social media and on the World Wide Web has led to numerous opportunities and research activities in multimedia big data. Among them, disaster management applications have attracted a lot of attention in recent years due to their impact on society and government. This study targets content analysis and mining for disaster management. Specifically, a multimedia big data framework based on advanced deep learning techniques is proposed. First, a video dataset of natural disasters is collected from YouTube. Then, two separate deep networks, a temporal audio model and a spatio-temporal visual model, are presented to effectively analyze the audio-visual modalities in video clips. Thereafter, the results of both models are integrated using the proposed fusion model based on the Multiple Correspondence Analysis (MCA) algorithm, which considers the correlations between data modalities and final classes. The proposed multimodal framework is evaluated on the collected disaster dataset and compared with several state-of-the-art single-modality and fusion techniques. The results demonstrate the effectiveness of both the visual model and the fusion model compared to the baseline approaches. Specifically, the accuracy of the final multi-class classification using the proposed MCA-based fusion reaches 73% on this challenging dataset.
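The class-aware late fusion described above can be sketched as follows. The per-class weights here are made-up numbers standing in for the modality-class correlations the paper derives via MCA; the function and data are illustrative only.

```python
import numpy as np

def mca_style_fusion(audio_probs, visual_probs, audio_w, visual_w):
    """Late fusion: scale each modality's class scores by a per-class
    weight (a stand-in for MCA-derived modality-class correlations),
    sum, and renormalize rows into probabilities."""
    fused = audio_w * audio_probs + visual_w * visual_probs
    return fused / fused.sum(axis=1, keepdims=True)

# Toy example: 2 clips, 3 disaster classes. The visual model is
# assumed more reliable for class 0, the audio model for class 2.
audio  = np.array([[0.2, 0.3, 0.5],
                   [0.1, 0.2, 0.7]])
visual = np.array([[0.7, 0.2, 0.1],
                   [0.2, 0.5, 0.3]])
audio_w  = np.array([0.2, 0.5, 0.8])
visual_w = np.array([0.8, 0.5, 0.2])
fused = mca_style_fusion(audio, visual, audio_w, visual_w)
print(fused.argmax(axis=1))  # [0 2]
```

Weighting per class, rather than per modality, lets each modality dominate exactly on the classes where it is assumed to be more reliable.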


World Wide Web | 2018

Multimodal deep representation learning for video classification

Haiman Tian; Yudong Tao; Samira Pouyanfar; Shu-Ching Chen; Mei-Ling Shyu

Real-world applications usually encounter data with various modalities, each containing valuable information. To enhance these applications, it is essential to effectively analyze all the information extracted from the different data modalities, while most existing learning models ignore some data types and focus on only a single modality. This paper presents a new multimodal deep learning framework for event detection from videos by leveraging recent advances in deep neural networks. First, several deep learning models are utilized to extract useful information from multiple modalities. Among these are pre-trained Convolutional Neural Networks (CNNs) for visual and audio feature extraction and a word-embedding model for textual analysis. Then, a novel fusion technique is proposed that integrates the different data representations at two levels, namely the frame level and the video level. Different from existing multimodal learning algorithms, the proposed framework can reason about a missing data type using the other available data modalities. The proposed framework is applied to a new video dataset containing natural disaster classes. The experimental results illustrate the effectiveness of the proposed framework compared to several single-modality deep learning models as well as conventional fusion techniques. Specifically, the final accuracy is improved by more than 16% and 7% compared to the best results from the single-modality and fusion models, respectively.
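The missing-modality handling can be illustrated with the simplest possible fusion rule. Plain averaging here stands in for the paper's two-level fusion technique; only the skip-what's-missing idea is being shown, and the data are hypothetical.

```python
import numpy as np

def fuse(modality_scores):
    """Average class scores over whatever modalities are present;
    a missing modality (None) is simply skipped, so a clip with,
    say, no audio track can still be classified."""
    present = [s for s in modality_scores.values() if s is not None]
    if not present:
        raise ValueError("no modality available")
    return np.mean(present, axis=0)

clip_full = {"visual": np.array([0.6, 0.4]),
             "audio":  np.array([0.2, 0.8]),
             "text":   np.array([0.1, 0.9])}
clip_no_audio = {"visual": np.array([0.6, 0.4]),
                 "audio":  None,
                 "text":   np.array([0.1, 0.9])}
print(fuse(clip_full))      # ~[0.3 0.7]
print(fuse(clip_no_audio))  # ~[0.35 0.65]
```

The fusion degrades gracefully: dropping a modality reshapes the score rather than breaking the pipeline.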


ACM Computing Surveys | 2018

A Survey on Deep Learning: Algorithms, Techniques, and Applications

Samira Pouyanfar; Saad Sadiq; Yilin Yan; Haiman Tian; Yudong Tao; Maria Presa Reyes; Mei-Ling Shyu; Shu-Ching Chen; S. S. Iyengar

The field of machine learning is witnessing its golden era as deep learning slowly becomes the leader in this domain. Deep learning uses multiple layers to represent the abstractions of data to build computational models. Some key enabling deep learning algorithms, such as generative adversarial networks, convolutional neural networks, and model transfer, have completely changed our perception of information processing. However, there exists an aperture of understanding behind this tremendously fast-paced domain, because it has never before been represented from a multiscope perspective. This lack of core understanding renders these powerful methods as black-box machines that inhibit development at a fundamental level. Moreover, deep learning has repeatedly been perceived as a silver bullet to all stumbling blocks in machine learning, which is far from the truth. This article presents a comprehensive review of historical and recent state-of-the-art approaches in visual, audio, and text processing; social network analysis; and natural language processing, followed by an in-depth analysis of pivotal and groundbreaking advances in deep learning applications. It also reviews the issues faced in deep learning, such as unsupervised learning, black-box models, and online learning, and illustrates how these challenges can be transformed into prolific future research avenues.

Collaboration


Haiman Tian's top co-authors and their affiliations.

Top Co-Authors

Shu-Ching Chen, Florida International University
Samira Pouyanfar, Florida International University
Yilin Yan, Florida International University
Yimin Yang, Florida International University
Sheng Guan, Florida International University