Featured Research

Signal Processing

Data-Driven Incident Detection in Power Distribution Systems

In a power distribution network with energy storage systems (ESS) and advanced controls, traditional monitoring and protection schemes are not well suited to detecting anomalies such as malfunctions of controllable devices. In this work, we propose a data-driven technique for detecting incidents relevant to the operation of ESS in distribution grids. The approach leverages the causal relationships observed among sensor data streams and requires no prior knowledge of the system model or parameters. Our methodology includes a data augmentation step that allows incidents to be detected even when sensing is scarce. The effectiveness of the technique is illustrated through case studies covering active power dispatch and reactive power control of ESS.
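The causal-consistency idea can be sketched with a toy example: learn the lagged relation between two sensor streams from incident-free data, then flag samples whose residuals break that relation. Everything below (the two streams, the injected fault at t = 400, the threshold rule) is hypothetical and stands in for the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor streams: under normal operation, stream y follows
# stream x with a one-sample lag.
n = 500
x = np.sin(np.linspace(0, 20, n)) + 0.05 * rng.standard_normal(n)
y = np.roll(x, 1) * 0.9 + 0.05 * rng.standard_normal(n)
y[400:] += 1.5  # injected incident: the causal relation breaks at t = 400

# Learn the causal map y[t] ~ a * x[t-1] + b from an incident-free window.
X = np.column_stack([x[:300][:-1], np.ones(299)])
coef, *_ = np.linalg.lstsq(X, y[1:300], rcond=None)

# Flag incidents where the residual exceeds a threshold set on clean data.
resid = np.abs(y[1:] - (coef[0] * x[:-1] + coef[1]))
threshold = resid[:299].mean() + 6 * resid[:299].std()
alarms = np.where(resid > threshold)[0] + 1  # indices are time steps

print(len(alarms))  # alarms concentrate in the incident region t >= 400
```

No system model is needed: only the statistical relation between the streams is learned, which mirrors the model-free flavor of the proposed technique.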

Read more
Signal Processing

Data-Driven Transferred Energy Management Strategy for Hybrid Electric Vehicles via Deep Reinforcement Learning

Real-time application of energy management strategies (EMSs) in hybrid electric vehicles (HEVs) places stringent demands on researchers and engineers. Motivated by the problem-solving capabilities of deep reinforcement learning (DRL), this paper proposes a real-time EMS that combines DRL with transfer learning (TL). The EMSs are derived from, and evaluated on, a real-world driving-cycle dataset collected by the Transportation Secure Data Center (TSDC). The DRL algorithm used is proximal policy optimization (PPO), which belongs to the family of policy gradient (PG) techniques. Specifically, multiple source driving cycles are used to train the parameters of a deep network with PPO. The learned parameters are then transferred to the target driving cycles under the TL framework. The EMSs for the target driving cycles are estimated and compared under different training conditions. Simulation results indicate that the proposed transferred DRL-based EMS can effectively reduce time consumption while guaranteeing control performance.
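As a minimal illustration of the transfer step (not of PPO itself, which is beyond a short sketch), the toy below trains a linear "policy" on a source task by gradient descent, copies its weights as the initialization for a related target task, and shows that the warm start beats training from scratch under the same small step budget. All data, weights, and the least-squares stand-in for policy training are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a linear "policy" maps cycle features to a control
# signal. Source and target driving cycles share structure (similar weights).
w_src_true = np.array([2.0, -1.0, 0.5])
w_tgt_true = w_src_true + np.array([0.1, -0.1, 0.05])

def make_data(w_true, n=200):
    X = rng.standard_normal((n, 3))
    return X, X @ w_true + 0.01 * rng.standard_normal(n)

def sgd(w, X, y, steps, lr=0.05):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

Xs, ys = make_data(w_src_true)
Xt, yt = make_data(w_tgt_true)

w_source = sgd(np.zeros(3), Xs, ys, steps=200)       # long source training
w_transfer = sgd(w_source.copy(), Xt, yt, steps=10)  # short fine-tune (TL)
w_scratch = sgd(np.zeros(3), Xt, yt, steps=10)       # same budget, no transfer

def loss(w):
    return np.mean((Xt @ w - yt) ** 2)

print(loss(w_transfer), loss(w_scratch))  # transfer wins under a small budget
```

The time saving claimed in the abstract comes from exactly this effect: the transferred parameters start close to a good solution for the target cycle, so far fewer training steps are needed.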

Read more
Signal Processing

Database Assisted Nonlinear Least Squares Algorithm for Visible Light Positioning in NLOS Environments

We propose an indoor localization algorithm for visible light systems that accounts for the effects of non-line-of-sight (NLOS) propagation. The proposed algorithm, named database-assisted nonlinear least squares (DA-NLS), combines ideas from the classical NLS algorithm and the fingerprinting algorithm to achieve accurate and robust localization in NLOS environments. In particular, a database is used to learn NLOS effects, and an NLS algorithm is then employed to estimate the position. The performance of the proposed algorithm is compared against that of the fingerprinting and NLS algorithms.
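The idea can be sketched in a few lines: subtract a database-learned NLOS bias from the range measurements, then run a standard Gauss-Newton nonlinear least squares solver. The anchors, biases, and positions below are made up for illustration; the paper's actual database and measurement model are richer.

```python
import numpy as np

# Hypothetical anchors (e.g., LED luminaires) and true receiver position.
anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
p_true = np.array([1.5, 2.0])

# Ranges corrupted by per-anchor NLOS biases; the "database" stores the
# learned bias for this region, so it can be subtracted before NLS.
nlos_bias = np.array([0.8, 0.0, 0.5, 0.0])
ranges = np.linalg.norm(anchors - p_true, axis=1) + nlos_bias
ranges_corrected = ranges - nlos_bias  # database-assisted correction

def nls(p, meas, iters=20):
    """Gauss-Newton nonlinear least squares on range residuals."""
    for _ in range(iters):
        d = np.linalg.norm(anchors - p, axis=1)
        J = (p - anchors) / d[:, None]     # Jacobian of d with respect to p
        r = meas - d
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

p_naive = nls(np.array([2.5, 2.5]), ranges)         # ignores NLOS
p_da = nls(np.array([2.5, 2.5]), ranges_corrected)  # DA-NLS style

print(np.linalg.norm(p_naive - p_true), np.linalg.norm(p_da - p_true))
```

With the biases removed, the solver recovers the true position; left in place, they pull the estimate away from it, which is exactly the failure mode the database correction targets.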

Read more
Signal Processing

Deep Bayesian U-Nets for Efficient, Robust and Reliable Post-Disaster Damage Localization

Post-disaster inspections are critical to emergency management after earthquakes. Data on the condition of civil infrastructure immediately after an earthquake is of great importance: stakeholders require this information to take effective actions and to recover from the disaster. Data-driven structural health monitoring (SHM) has shown great promise for achieving this goal in near real time, and several proposals exist for automating the inspection process from different input sources using deep learning. However, existing models in the literature provide only a final prediction output, and the risks of using such models for safety-critical assessments should not be ignored. This paper develops deep Bayesian U-Nets in which the uncertainty of the predictions is a second model output, made possible through Monte Carlo dropout sampling at test time. Based on a grid-like data structure, the concept of semantic damage segmentation (SDS) is revisited; compared to image segmentation, a much higher level of precision is shown to be necessary for damage diagnosis. To validate and test the proposed framework, a benchmark dataset of 10,800 nonlinear response history analyses on a 10-story, 10-bay 2D reinforced concrete moment frame is utilized. Compared to the benchmark SDS model, the Bayesian models exhibit superior robustness with enhanced global and mean class accuracies. Finally, the model's uncertainty output is studied by monitoring the softmax class variance of different predictions. The class variance is shown to correlate well with the locations where the model makes mistakes, so this output can be combined with the prediction results to increase the reliability of the framework in structural inspections.
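The mechanism behind the uncertainty output is Monte Carlo dropout: dropout stays active at test time, the input is passed through the network many times with fresh masks, and the per-class mean and variance of the softmax outputs give the prediction and its uncertainty. The tiny one-layer "network" below is a stand-in for the paper's U-Net; its weights and input are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W = rng.standard_normal((8, 3))  # toy weights; stands in for a trained U-Net

def mc_dropout_predict(x, T=200, p_drop=0.5):
    """Dropout stays ACTIVE at test time; T stochastic passes are averaged."""
    probs = []
    for _ in range(T):
        mask = (rng.random(x.shape) > p_drop) / (1 - p_drop)  # fresh mask
        probs.append(softmax((x * mask) @ W))
    probs = np.array(probs)
    return probs.mean(axis=0), probs.var(axis=0)

x = rng.standard_normal(8)       # one input "pixel" feature vector
mean_p, var_p = mc_dropout_predict(x)
print(mean_p, var_p)             # class prediction plus per-class uncertainty
```

In the paper's setting this is done per element of the damage grid, and the softmax class variance flags the locations whose predictions should not be trusted.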

Read more
Signal Processing

Deep Joint Source Channel Coding for Wireless Image Transmission with OFDM

We present a deep-learning-based joint source-channel coding (JSCC) scheme for wireless image transmission over multipath fading channels with non-linear signal clipping. The proposed encoder and decoder use convolutional neural networks (CNNs) and directly map the source images to complex-valued baseband samples for orthogonal frequency division multiplexing (OFDM) transmission. This model-driven machine learning approach eliminates the need for separate source and channel coding while integrating an OFDM datapath to cope with multipath fading. The end-to-end JSCC communication system combines trainable CNN layers with non-trainable but differentiable layers representing the multipath channel model and the OFDM signal processing blocks. Our results show that injecting domain expert knowledge by incorporating OFDM baseband processing blocks into the machine learning framework significantly enhances overall performance compared with an unstructured CNN. Our method outperforms conventional schemes that employ state-of-the-art but separate source and channel coding, such as BPG and LDPC with OFDM. Moreover, our method is shown to be robust against non-linear signal clipping in OFDM under various channel conditions that do not match the model parameters used during training.
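The non-trainable but differentiable OFDM blocks can be sketched with NumPy FFTs: modulate, add a cyclic prefix, clip, pass through a multipath channel, then demodulate and equalize per subcarrier. The QPSK symbols below stand in for the CNN encoder's output, and the tap values and clipping level are made up; without clipping (and noise, omitted for clarity), recovery would be exact.

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16  # subcarriers, cyclic-prefix length

# Random QPSK "latent" symbols standing in for the JSCC encoder output.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

x = np.fft.ifft(X) * np.sqrt(N)       # OFDM modulation
x_cp = np.concatenate([x[-CP:], x])   # cyclic prefix

# Non-linear clipping of the time-domain signal (amplitude limitation).
A = 1.2 * np.abs(x_cp).mean()
x_clip = np.where(np.abs(x_cp) > A, A * x_cp / np.abs(x_cp), x_cp)

h = np.array([0.8, 0.0, 0.5, 0.2])    # hypothetical multipath taps
y = np.convolve(x_clip, h)[: N + CP]  # channel (noise omitted for clarity)

Y = np.fft.fft(y[CP:]) / np.sqrt(N)   # strip CP, demodulate
H = np.fft.fft(h, N)
X_hat = Y / H                         # one-tap frequency-domain equalizer

err = np.mean(np.abs(X_hat - X))
print(err)                            # residual distortion due to clipping
```

Every step here is differentiable in the signal values, which is why the paper can backpropagate through the whole OFDM datapath and let the CNN learn to live with the clipping distortion.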

Read more
Signal Processing

Deep Learning Approach for Target Locating in Through-the-Wall Radar under Electromagnetic Complex Wall

In this paper, we use a deep learning approach to perform two-dimensional, multi-target locating in through-the-wall radar under conditions where the wall is modeled as a complex electromagnetic medium. We consider five wall models and three target modes: single, double, and triple targets. The wall scenarios are a homogeneous wall, a wall with an air gap, an inhomogeneous wall, an anisotropic wall, and an inhomogeneous-anisotropic wall. For this purpose, we use a deep neural network. Using the Python FDTD library, we generate a dataset and then model it with deep learning. Treating the wall as a complex electromagnetic medium, we achieve 97.7% accuracy for single-target 2D locating, and accuracies of 94.1% and 62.2% for two and three targets, respectively.

Read more
Signal Processing

Deep Learning Assisted Calibrated Beam Training for Millimeter-Wave Communication Systems

The huge overhead of beam training poses a significant challenge for millimeter-wave (mmWave) wireless communications. To address this issue, we propose a wide-beam-based training approach that calibrates the narrow beam direction according to the channel power leakage. Because the channel power leakage has complex nonlinear properties, deep learning is used to predict the optimal narrow beam directly. Specifically, three deep learning assisted calibrated beam training schemes are proposed. The first adopts a convolutional neural network to perform the prediction from the instantaneous received signals of wide beam training; additional narrow beam training based on the predicted probabilities further calibrates the beam direction. The second adopts a long short-term memory (LSTM) network to track the movement of users and calibrate the beam direction from the received signals of prior beam training, enhancing robustness to noise. To further reduce the overhead of wide beam training, the third scheme, an adaptive beam training strategy, selects a subset of wide beams to be trained based on prior received signals; two criteria, namely the optimal neighboring criterion and the maximum probability criterion, are designed for this selection. Furthermore, to handle mobile scenarios, an auxiliary LSTM is introduced to calibrate the directions of the selected wide beams more precisely. Simulation results demonstrate that the proposed schemes achieve significantly higher beamforming gain with smaller beam training overhead than conventional and existing deep-learning-based counterparts.

Read more
Signal Processing

Deep Learning Based Antenna Selection for Channel Extrapolation in FDD Massive MIMO

In massive multiple-input multiple-output (MIMO) systems, the large number of antennas makes acquiring accurate channel state information a great challenge, especially in frequency division duplex mode. To overcome the bottleneck of the limited number of radio links in hybrid beamforming, we utilize neural networks (NNs) to capture the inherent connection between the uplink and downlink channels and extrapolate the downlink channels from a subset of the uplink channel state information. We study the antenna subset selection problem in order to achieve the best channel extrapolation while decreasing the input size of the NNs. Probabilistic sampling theory is used to approximate the discrete antenna selection as a continuous and differentiable function, which makes backpropagation feasible. We then design a suitable offline training strategy to jointly optimize the antenna selection pattern and the extrapolation NNs. Finally, numerical results verify the effectiveness of the proposed massive MIMO channel extrapolation algorithm.
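One standard way to make a discrete antenna-selection pattern differentiable is the Gumbel-softmax (concrete) relaxation; the paper's probabilistic sampling may differ in detail, but the mechanics look like this. The logits below are random placeholders for trainable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ANT, N_SEL = 16, 4  # choose 4 of 16 uplink antennas

# Trainable logits: one row per selected antenna slot (hypothetical values).
logits = rng.standard_normal((N_SEL, N_ANT))

def gumbel_softmax(logits, tau):
    """Differentiable relaxation of one-hot sampling (Gumbel-softmax)."""
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# High temperature: soft mixtures, so gradients reach every antenna's logit.
soft = gumbel_softmax(logits, tau=5.0)
# Low temperature: near one-hot rows, i.e., a concrete antenna subset.
hard = gumbel_softmax(logits, tau=0.01)

print(soft.max(), hard.max())  # soft rows are spread out; hard rows peak
```

During training the temperature is annealed from high to low so that the selection pattern hardens into an actual subset; a complete implementation would also discourage two slots from picking the same antenna, which this sketch does not.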

Read more
Signal Processing

Deep Learning Based Channel Covariance Matrix Estimation with User Location and Scene Images

The channel covariance matrix (CCM) is a critical parameter in the design of communication systems. In this paper, a novel framework for deep learning (DL) based CCM estimation is proposed that exploits perception of the transmission environment without any channel samples or pilot signals. Specifically, since the CCM is affected by the user's movement, we design a deep neural network (DNN) to predict the CCM from the user's location and speed; the corresponding estimation method is named ULCCME. A location denoising method is further developed to reduce the positioning error and improve the robustness of ULCCME. For cases where user location information is unavailable, we propose to predict the CCM from 3D images of the environment; the corresponding estimation method is named SICCME. Simulation results show that both proposed methods are effective and will benefit subsequent channel estimation.

Read more
Signal Processing

Deep Learning Optimized Sparse Antenna Activation for Reconfigurable Intelligent Surface Assisted Communication

To capture the communication gain of massive radiating elements at low power cost, a conventional reconfigurable intelligent surface (RIS) usually works in passive mode. However, due to the cascaded channel structure and the lack of signal processing ability, it is difficult for the RIS to obtain individual channel state information or to optimize the beamforming vector. In this paper, we add signal processing units to a few antennas at the RIS to partially acquire the channels. To solve the crucial active antenna selection problem, we construct an active antenna selection network that utilizes probabilistic sampling theory to select the optimal locations of these active antennas. Building on this selection network, we design two deep learning (DL) based schemes, a channel extrapolation scheme and a beam searching scheme, to enable the RIS communication system. The former uses the selection network and a convolutional neural network to extrapolate the full channels from the partial channels received at the active RIS antennas, while the latter adopts a fully connected neural network to map the partial channels directly to the beamforming vector that maximizes the transmission rate. Simulation results demonstrate the effectiveness of the designed DL-based schemes.

Read more
