
Publication


Featured research published by Hao Dong.


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2017

DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG

Akara Supratak; Hao Dong; Chao Wu; Yike Guo

This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis, and only a few of them encode temporal information, such as transition rules, which is important for identifying the next sleep stage, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved overall accuracy and macro F1-score (MASS: 86.2%−81.7, Sleep-EDF: 82.0%−76.9) similar to the state-of-the-art methods (MASS: 85.9%−80.5, Sleep-EDF: 78.9%−73.7) on both data sets. This demonstrates that, without changing the model architecture or the training algorithm, our model can automatically learn features for sleep stage scoring from different raw single-channel EEGs across data sets, without utilizing any hand-engineered features.
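As a rough illustration of the two components the abstract describes, the sketch below uses toy NumPy code with random weights and invented dimensions (not the published model): strided 1-D convolutions with small and large filters extract per-epoch features, and a bidirectional recurrence over a short epoch sequence stands in for the bidirectional LSTM that learns stage-transition context.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride):
    """Strided valid 1-D convolution: one output row per kernel."""
    k = kernels.shape[1]
    starts = range(0, len(x) - k + 1, stride)
    return np.array([[x[s:s + k] @ w for s in starts] for w in kernels])

def relu(z):
    return np.maximum(z, 0.0)

# Toy raw single-channel EEG epoch: 30 s at 100 Hz.
epoch = rng.standard_normal(3000)

# Two CNN branches (random toy weights): small filters capture fine
# temporal patterns, large filters capture slower frequency content.
small = relu(conv1d(epoch, rng.standard_normal((8, 50)) * 0.1, stride=6))
large = relu(conv1d(epoch, rng.standard_normal((8, 400)) * 0.05, stride=50))

# Time-invariant feature vector per epoch: pool each branch over time.
features = np.concatenate([small.max(axis=1), large.max(axis=1)])  # (16,)

# Bidirectional recurrence over a toy 3-epoch sequence (simple RNN cell
# standing in for the LSTM) yields transition-aware context per epoch.
seq = np.stack([features, features * 0.5, features * 0.1])
Wx = rng.standard_normal((4, 16)) * 0.1
Wh = rng.standard_normal((4, 4)) * 0.1

def rnn(inputs):
    h, outs = np.zeros(4), []
    for x in inputs:
        h = np.tanh(Wx @ x + Wh @ h)
        outs.append(h)
    return np.stack(outs)

fwd = rnn(seq)                 # forward pass over the epoch sequence
bwd = rnn(seq[::-1])[::-1]     # backward pass, re-aligned in time
context = np.concatenate([fwd, bwd], axis=1)  # one 8-dim vector per epoch
print(features.shape, context.shape)
```

A classifier head over `context` would then predict one sleep stage per epoch; the two-step training the abstract mentions pre-trains the convolutional part before fine-tuning the whole sequence model.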


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2018

Mixed Neural Network Approach for Temporal Sleep Stage Classification

Hao Dong; Akara Supratak; Wei Pan; Chao Wu; Paul M. Matthews; Yike Guo

This paper proposes a practical approach to addressing limitations posed by the use of single-channel electroencephalography (EEG) for sleep stage classification. EEG-based characterizations of sleep stage progression contribute to the diagnosis and monitoring of the many pathologies of sleep. Several prior reports have explored ways of automating the analysis of sleep EEG and of reducing the complexity of the data needed for reliable discrimination of sleep stages at lower cost in the home. However, these reports have involved recordings from electrodes placed on the cranial vertex or occiput, which are both uncomfortable and difficult to position. Previous studies of sleep stage scoring that used only frontal electrodes with a hierarchical decision tree motivated this paper, in which we take advantage of a rectifier neural network for detecting hierarchical features and a long short-term memory network for sequential data learning to optimize classification performance with single-channel recordings. After exploring alternative electrode placements, we found a comfortable configuration of a single-channel EEG on the forehead and have shown that it can be integrated with additional electrodes for simultaneous recording of the electro-oculogram. Evaluation of data from 62 people (494 hours of sleep) demonstrated better performance of our analytical algorithm than is available from existing approaches with vertex or occipital electrode placements. Use of this recording configuration with neural network deconvolution promises to make clinically indicated home sleep studies practical.


Annual Conference on Medical Image Understanding and Analysis | 2017

Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks

Hao Dong; Guang Yang; Fangde Liu; Yuanhan Mo; Yike Guo

A major challenge in brain tumor treatment planning and quantitative evaluation is determination of the tumor extent. The noninvasive magnetic resonance imaging (MRI) technique has emerged as a front-line diagnostic tool for brain tumors without ionizing radiation. Manual segmentation of brain tumor extent from 3D MRI volumes is a very time-consuming task, and its quality relies heavily on the operator's experience. In this context, a reliable, fully automatic method for brain tumor segmentation is necessary for efficient measurement of the tumor extent. In this study, we propose a fully automatic method for brain tumor segmentation, developed using U-Net based deep convolutional networks. Our method was evaluated on the Multimodal Brain Tumor Image Segmentation (BRATS 2015) datasets, which contain 220 high-grade and 54 low-grade brain tumor cases. Cross-validation showed that our method obtains promising segmentations efficiently.
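For context, segmentation quality on BRATS-style benchmarks is commonly reported with the Dice coefficient, which measures overlap between predicted and ground-truth tumor masks. A minimal NumPy sketch of that metric on toy masks (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Toy 2-D "tumor" masks: ground truth and a slightly shifted prediction.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True           # 20x20 ground-truth tumor region
pred = np.zeros_like(truth)
pred[22:42, 20:40] = True            # prediction shifted down by 2 rows

score = dice(pred, truth)            # overlap is 18x20 of 20x20 -> 0.9
print(round(score, 3))
```

A perfect segmentation scores 1.0; the two-row shift above costs a tenth of the score, which is why small boundary errors matter so much in tumor-extent measurement.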


ACM Multimedia | 2017

TensorLayer: A Versatile Library for Efficient Deep Learning Development

Hao Dong; Akara Supratak; Luo Mai; Fangde Liu; Axel Oehmichen; Simiao Yu; Yike Guo

Recently we have observed emerging uses of deep learning techniques in multimedia systems. Developing a practical deep learning system is arduous and complex: it involves labor-intensive tasks for constructing sophisticated neural networks, coordinating multiple network models, and managing a large amount of training-related data. To facilitate this development process, we propose TensorLayer, a versatile Python-based deep learning library. TensorLayer provides high-level modules that abstract sophisticated operations on neuron layers, network models, training data, and dependent training jobs. Despite its simplicity, it has transparent module interfaces that allow developers to flexibly embed low-level controls within a backend engine, with the aim of supporting fine-grained tuning of training. Real-world cluster experiments show that TensorLayer is able to achieve competitive performance and scalability in critical deep learning tasks. TensorLayer was released in September 2016 on GitHub and has since become one of the most popular open-source deep learning libraries among researchers and practitioners.
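The design idea of high-level layer modules with transparent access to low-level parameters can be sketched roughly as follows. This is a hypothetical minimal NumPy analogue of that pattern, not TensorLayer's actual API; the class and attribute names are invented for illustration.

```python
import numpy as np

class Dense:
    """High-level layer module wrapping a low-level affine op + activation."""
    def __init__(self, n_in, n_out, act=np.tanh, rng=None):
        rng = rng or np.random.default_rng(0)
        # Parameters are plain arrays, left exposed so a developer can
        # reach below the module abstraction for fine-grained tuning.
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.b = np.zeros(n_out)
        self.act = act

    def __call__(self, x):
        return self.act(x @ self.W + self.b)

class Network:
    """Compose layers while keeping every low-level parameter reachable."""
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

    @property
    def all_params(self):
        return [p for layer in self.layers for p in (layer.W, layer.b)]

net = Network(Dense(3000, 128), Dense(128, 5, act=lambda z: z))
out = net(np.zeros((2, 3000)))        # batch of 2 dummy inputs
print(out.shape, len(net.all_params))
```

The point of the pattern is that `net.all_params` hands the raw weight arrays to whatever low-level training loop the developer wants, instead of hiding them behind the module interface.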


Machine Learning for Health Informatics | 2016

Survey on Feature Extraction and Applications of Biosignals

Akara Supratak; Chao Wu; Hao Dong; Kai Sun; Yike Guo

Biosignals have become an important indicator not only for medical diagnosis and subsequent therapy, but also for passive health monitoring. Extracting meaningful features from biosignals can help people understand the human functional state, so that upcoming harmful symptoms or diseases can be alleviated or avoided. There are two main approaches commonly used to derive useful features from biosignals: hand-engineering and deep learning. The majority of research in this field focuses on hand-engineered features, which require domain experts to design algorithms to extract meaningful features. In recent years, several studies have employed deep learning to automatically learn features from raw biosignals, making feature extraction algorithms less dependent on humans; these studies have also demonstrated promising results in a variety of biosignal applications. In this survey, we review different types of biosignals and the main approaches to extracting features from them in the context of biomedical applications. We also discuss challenges and limitations of the existing approaches, and possible future research.


Pacific Rim Conference on Multimedia | 2018

Text-to-Image Synthesis via Visual-Memory Creative Adversarial Network

Shengyu Zhang; Hao Dong; Wei Hu; Yike Guo; Chao Wu; Di Xie; Fei Wu

Despite recent advances, text-to-image generation on complex datasets like MSCOCO, where each image contains varied objects, is still a challenging task. In this paper, we propose a method named visual-memory Creative Adversarial Network (vmCAN) to generate images from their corresponding narrative sentences. vmCAN leverages an external visual knowledge memory in both multi-modal fusion and image synthesis. By conditioning synthesis on both the internal textual description and externally triggered “visual proposals”, our method boosts the inception score of the baseline method by 17.6% on the challenging COCO dataset.


IEEE Transactions on Information Forensics and Security | 2018

Dropping Activation Outputs With Localized First-Layer Deep Network for Enhancing User Privacy and Data Security

Hao Dong; Chao Wu; Zhen Wei; Yike Guo

Deep learning methods can play a crucial role in anomaly detection, prediction, and decision support for applications such as personal health care and pervasive body sensing. However, the current architecture of deep networks suffers from a privacy issue: users must give their data to the model (typically hosted on a server or a cluster in the cloud) for training or prediction. This problem is more severe for sensitive health-care or medical data (e.g., fMRI or body-sensor measures such as EEG signals). In addition, there is a security risk of leaking these data during transmission from the user to the model (especially over the Internet). To address these issues, in this paper we propose a new deep network architecture in which users do not reveal their original data to the model. In our method, feed-forward propagation and data encryption are combined into one process: we migrate the first layer of the deep network to users' local devices and apply the activation functions locally, then use the “dropping activation output” method to make the output non-invertible. The resulting approach can make model predictions without accessing users' sensitive raw data. The experiments conducted in this paper show that our approach achieves the desired privacy protection and demonstrates several advantages over the traditional approach with encryption/decryption.
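The split the abstract describes can be sketched as follows. This is a toy NumPy illustration with invented dimensions, random weights, and an assumed drop rate (none of these are the paper's values): the first layer runs on the user's device, a random subset of its activation outputs is zeroed before anything is transmitted, and the server-side layers only ever see the dropped activations.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Client side: the first layer lives on the user's device. ---
n_in, n_hidden = 64, 256
W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # first-layer weights
b1 = np.zeros(n_hidden)

def client_encode(x, drop_rate=0.5):
    """Apply the first layer locally, then drop a random subset of
    activation outputs so the transmitted vector is non-invertible."""
    h = np.maximum(W1 @ x + b1, 0.0)               # local ReLU activation
    keep = rng.random(n_hidden) >= drop_rate       # random drop mask
    return h * keep

# --- Server side: remaining layers never see the raw data. ---
W2 = rng.standard_normal((5, n_hidden)) * 0.1

def server_predict(h):
    return int(np.argmax(W2 @ h))

raw = rng.standard_normal(n_in)      # sensitive signal, e.g. an EEG segment
encoded = client_encode(raw)         # only this vector leaves the device
label = server_predict(encoded)
print(encoded.shape, label)
```

Because a large fraction of the hidden units are zeroed (by the drop mask on top of ReLU's own zeros), the under-determined transmitted vector cannot be inverted back to `raw`, which is the privacy property the paper targets.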


International Conference of the IEEE Engineering in Medicine and Biology Society | 2016

A new soft material based in-the-ear EEG recording technique

Hao Dong; Paul M. Matthews; Yike Guo

Long-term electroencephalogram (EEG) recording is important for seizure detection, sleep monitoring, etc. An in-the-ear EEG device makes such recording robust to noise and privacy-protected (invisible to other people). However, state-of-the-art techniques suffer from various drawbacks, such as customization for specific users, manufacturing difficulties, and short life cycles. To address these issues, we propose a silvered-glass-silicone-based in-the-ear electrode that can be manufactured using conventional compression moulding. The material and the in-the-ear EEG are evaluated separately, showing that the proposed method is durable, low-cost, and easy to make.


IEEE Transactions on Medical Imaging | 2018

DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction

Guang Yang; Simiao Yu; Hao Dong; Gregory G. Slabaugh; Pier Luigi Dragotti; Xujiong Ye; Fangde Liu; Simon R. Arridge; Jennifer Keegan; Yike Guo; David N. Firmin


International Conference on Computer Vision | 2017

Semantic Image Synthesis via Adversarial Learning

Hao Dong; Simiao Yu; Chao Wu; Yike Guo

Collaboration


Dive into Hao Dong's collaborations.

Top Co-Authors

Yike Guo, Imperial College London
Chao Wu, Imperial College London
Simiao Yu, Imperial College London
Fangde Liu, Imperial College London
Guang Yang, Imperial College London
Wei Pan, Imperial College London