Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhiguang Wang is active.

Publication


Featured research published by Zhiguang Wang.


International Conference on Machine Learning and Applications | 2014

Time Warping Symbolic Aggregation Approximation with Bag-of-Patterns Representation for Time Series Classification

Zhiguang Wang; Tim Oates

Standard Symbolic Aggregation Approximation (SAX) is at the core of many effective time series data mining algorithms. Its combination with Bag-of-Patterns (BoP) has become the standard approach, with state-of-the-art performance on standard datasets. However, standard SAX with the BoP representation may neglect the internal temporal correlation embedded in the raw data. In this paper, we propose time warping SAX, which extends standard SAX with time delay embedding vectors to account for temporal correlations. We test time warping SAX with the BoP representation on 12 benchmark datasets from the UCR Time Series Classification/Clustering Collection. On 9 of the 12 datasets, time warping SAX surpasses the state-of-the-art performance of standard SAX. To validate our method in a real-world application, we also test it on a new dataset of vital-sign data collected from patients who may require a blood transfusion within the next 6 hours. All results demonstrate that, by accounting for internal temporal correlation, time warping SAX combined with BoP improves classification performance.
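
Below is a minimal sketch of the standard SAX-plus-Bag-of-Patterns pipeline that the paper extends, assuming a univariate NumPy series; the window length, segment count, and 4-symbol alphabet are illustrative choices, not the authors' settings. The paper's time-warping variant additionally applies time delay embedding before symbolization, which is omitted here.

```python
import numpy as np
from collections import Counter

# Gaussian breakpoints for a 4-symbol alphabet (standard SAX quantiles).
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])
ALPHABET = "abcd"

def znormalize(x):
    """Zero-mean, unit-variance scaling of a 1-D series."""
    std = x.std()
    return (x - x.mean()) / std if std > 0 else x - x.mean()

def paa(x, n_segments):
    """Piecewise Aggregate Approximation: mean of equal-length segments."""
    return np.array([seg.mean() for seg in np.array_split(x, n_segments)])

def sax_word(x, n_segments=4):
    """Convert a subsequence into a SAX word over ALPHABET."""
    idx = np.digitize(paa(znormalize(x), n_segments), BREAKPOINTS)
    return "".join(ALPHABET[i] for i in idx)

def bag_of_patterns(series, window=32, n_segments=4):
    """Slide a window over the series and count SAX words (BoP histogram)."""
    words = [sax_word(series[i:i + window], n_segments)
             for i in range(len(series) - window + 1)]
    return Counter(words)

# Toy example: a noisy sine wave.
t = np.linspace(0, 8 * np.pi, 512)
series = np.sin(t) + 0.1 * np.random.randn(t.size)
print(bag_of_patterns(series).most_common(5))
```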


Journal of the Acoustical Society of America | 2017

Deep learning for unsupervised separation of environmental noise sources

Bryan Wilkinson; Charlotte L. Ellison; Edward T. Nykaza; Arnold P. Boedihardjo; Anton Netchaev; Zhiguang Wang; Steven L. Bunkley; Tim Oates; Matthew G. Blevins

With the advent of reliable and continuously operating noise monitoring systems, we are now faced with an unprecedented amount of noise monitor data. In the context of environmental noise monitoring, there is a need to automatically detect, separate, and classify all environmental noise sources. This is a complex task because sources can overlap, vary by location, and because a monitoring device may record an unbounded number of noise sources. In this study, we synthetically generate datasets that add Gaussian noise and overlapping sources to several pre-labeled environmental noise monitoring datasets, and examine how well deep learning methods (e.g., autoencoders) can separate environmental noise sources. In addition to examining performance, we also focus on understanding which signal features and separation metrics are useful for this problem.
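
As a rough illustration of the kind of experiment the abstract describes, the sketch below mixes two synthetic "sources" with Gaussian noise and trains a plain autoencoder on fixed-length frames of the mixture. PyTorch is assumed, and the source waveforms, frame length, and network sizes are hypothetical; this is not the authors' system.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
sr, frame = 8000, 256  # hypothetical sample rate and frame length

def make_mixture(n_frames=512, noise_scale=0.05):
    """Overlap two synthetic 'sources' (tone + band noise) plus Gaussian noise."""
    t = np.arange(n_frames * frame) / sr
    tone = np.sin(2 * np.pi * 440 * t)                    # source A
    band = np.convolve(rng.standard_normal(t.size),       # source B
                       np.ones(32) / 32, mode="same")
    mix = tone + band + noise_scale * rng.standard_normal(t.size)
    return mix.reshape(n_frames, frame).astype(np.float32)

class FrameAutoencoder(nn.Module):
    """Plain autoencoder over fixed-length audio frames."""
    def __init__(self, dim=frame, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.from_numpy(make_mixture())
model = FrameAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                      # short training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
print(f"reconstruction MSE: {loss.item():.4f}")
```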


Journal of the Acoustical Society of America | 2016

Deep learning for unsupervised feature extraction in audio signals: A pedagogical approach to understanding how hidden layers recreate, separate, and classify audio signals

Edward T. Nykaza; Arnold P. Boedihardjo; Zhiguang Wang; Tim Oates; Anton Netchaev; Steven L. Bunkley; Matthew G. Blevins

Deep learning is becoming ubiquitous; it is the underlying and driving force behind many technologies we use every day (e.g., search engines, fraud detection warning systems, and social-media facial recognition algorithms). Over the past few years, there has been a steady increase in the number of audio- and acoustics-related applications of deep learning. But what exactly is going on under the hood? In this paper, we focus on deep learning algorithms for unsupervised feature learning. We take a pedagogical approach to understanding how the hidden layers recreate, separate, and classify audio signals. We begin with a simple pure-tone dataset and systematically increase its complexity in both frequency and time. We end the presentation with some feature extraction examples from real-world environmental recordings, and find that these features are easier to interpret given the understanding developed from the simpler tone datasets. The unsupervised feature learning techniques explored in this...
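
A small sketch of the pedagogical starting point described above: a pure-tone dataset whose hidden-layer activations can be inspected after training a single-hidden-layer autoencoder. PyTorch is assumed, and the frequencies, frame length, and layer sizes are placeholders rather than the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

sr, n_samples = 8000, 256        # hypothetical sampling rate / frame length
freqs = [250, 500, 1000, 2000]   # pure-tone classes of increasing frequency

def tone_batch(n_per_freq=64):
    """Pure tones with random phase: the simplest dataset in the progression."""
    t = np.arange(n_samples) / sr
    frames, labels = [], []
    for k, f in enumerate(freqs):
        phase = np.random.uniform(0, 2 * np.pi, size=(n_per_freq, 1))
        frames.append(np.sin(2 * np.pi * f * t + phase))
        labels += [k] * n_per_freq
    return (torch.tensor(np.vstack(frames), dtype=torch.float32),
            torch.tensor(labels))

x, y = tone_batch()
model = nn.Sequential(nn.Linear(n_samples, 16), nn.Tanh(),
                      nn.Linear(16, n_samples))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

# Inspect how the hidden layer responds to each tone class.
with torch.no_grad():
    hidden = model[1](model[0](x))    # activations of the hidden layer
for k, f in enumerate(freqs):
    print(f, hidden[y == k].mean(0).numpy().round(2).tolist())
```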


Journal of the Acoustical Society of America | 2016

Deep learning for unsupervised feature extraction in audio signals: Monaural source separation

Edward T. Nykaza; Arnold P. Boedihardjo; Zhiguang Wang; Tim Oates; Anton Netchaev; Steven L. Bunkley; Matthew G. Blevins

Deep learning is becoming ubiquitous; it is the underlying and driving force behind many technologies deeply embedded in society (e.g., search engines, fraud detection warning systems, and social-media facial recognition algorithms). Over the past few years, there has been a steady increase in the number of audio-related applications of deep learning. Recently, Nykaza et al. presented a pedagogical approach to understanding how the hidden layers recreate, separate, and classify environmental noise signals. That work presented some feature extraction examples using simple pure-tone, chord, and environmental noise datasets. In this paper, we build upon this recent analysis and expand the datasets to include more realistic representations of those datasets by adding noise and overlapping signals. Additionally, we consider other related architectures (e.g., variational autoencoders, recurrent neural networks, and fixing hidden nodes/layers), explore their advantages and drawbacks, and provide insights on ...
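
Among the related architectures the abstract mentions, here is a minimal variational autoencoder over audio frames, assuming PyTorch; all dimensions are placeholders and the loss is the standard reconstruction-plus-KL objective, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class FrameVAE(nn.Module):
    """Minimal variational autoencoder over fixed-length audio frames."""
    def __init__(self, dim=256, hidden=64, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction term plus KL divergence to a standard normal prior."""
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Usage on a random batch of frames (stand-in for overlapped noisy mixtures):
x = torch.randn(32, 256)
model = FrameVAE()
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
```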


International Conference on Artificial Intelligence | 2015

Imaging time-series to improve classification and imputation

Zhiguang Wang; Tim Oates


Archive | 2014

Encoding Time Series as Images for Visual Inspection and Classification Using Tiled Convolutional Neural Networks

Zhiguang Wang; Tim Oates


The Florida AI Research Society | 2015

Pooling SAX-BoP Approaches with Boosting to Classify Multivariate Synchronous Physiological Time Series Data

Zhiguang Wang; Tim Oates


National Conference on Artificial Intelligence | 2016

Adaptive Normalized Risk-Averting Training for Deep Neural Networks

Zhiguang Wang; Tim Oates; James Ting-Ho Lo


arXiv: Learning | 2015

Adopting Robustness and Optimality in Fitting and Learning

Zhiguang Wang; Tim Oates; James Ting-Ho Lo


arXiv: Distributed, Parallel, and Cluster Computing | 2017

Automated Cloud Provisioning on AWS using Deep Reinforcement Learning

Zhiguang Wang; Chul Gwon; Tim Oates; Adam Iezzi

Collaboration


Dive into Zhiguang Wang's collaborations.

Top Co-Authors

Tim Oates, University of Maryland

Arnold P. Boedihardjo, United States Army Corps of Engineers

Edward T. Nykaza, Engineer Research and Development Center

Steven L. Bunkley, United States Army Corps of Engineers

Charlotte L. Ellison, Engineer Research and Development Center