Featured Researches

Signal Processing

Deep Learning based Antenna Selection and CSI Extrapolation in Massive MIMO Systems

A critical bottleneck of massive multiple-input multiple-output (MIMO) systems is the huge training overhead incurred by downlink tasks such as channel estimation, downlink beamforming, and covariance observation. In this paper, we propose to use the channel state information (CSI) of a small number of antennas to extrapolate the CSI of the remaining antennas and thereby reduce the training overhead. Specifically, we design a deep neural network, which we call an antenna domain extrapolation network (ADEN), that can exploit the correlation among antennas. We then propose a deep learning (DL) based antenna selection network (ASN) that selects a limited number of antennas to optimize the extrapolation, a combinatorial optimization problem that is conventionally difficult to solve. We carefully design a constrained degradation algorithm to generate a differentiable approximation of the discrete antenna selection vector, so that back-propagation through the neural network is guaranteed. Numerical results show that the proposed ADEN outperforms a traditional fully connected network, and that the antenna selection scheme learned by the ASN is much better than the commonly used uniform selection.
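
As a rough illustration of the idea, the sketch below pairs a differentiable stand-in for the antenna-selection vector with a small extrapolation network. A temperature-controlled softmax is used here as a generic relaxation in place of the paper's constrained degradation algorithm, and all dimensions and layer sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAntennaSelector(nn.Module):
    """Differentiable stand-in for a binary antenna-selection vector.

    Learnable logits are pushed toward a near-one-hot pattern by lowering the
    softmax temperature (a generic relaxation, not the paper's algorithm).
    """
    def __init__(self, num_antennas: int, num_selected: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_selected, num_antennas))

    def forward(self, csi: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # csi: (batch, num_antennas); selection: (num_selected, num_antennas)
        selection = torch.softmax(self.logits / temperature, dim=-1)
        return csi @ selection.t()              # (batch, num_selected)

class Extrapolator(nn.Module):
    """Maps CSI of the selected antennas to CSI of all antennas."""
    def __init__(self, num_selected: int, num_antennas: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_selected, hidden), nn.ReLU(),
            nn.Linear(hidden, num_antennas),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy end-to-end training step on synthetic, real-valued CSI.
num_antennas, num_selected, batch = 64, 8, 32
selector = SoftAntennaSelector(num_antennas, num_selected)
extrapolator = Extrapolator(num_selected, num_antennas)
optimizer = torch.optim.Adam(
    list(selector.parameters()) + list(extrapolator.parameters()), lr=1e-3)

full_csi = torch.randn(batch, num_antennas)
pred = extrapolator(selector(full_csi, temperature=0.5))
loss = F.mse_loss(pred, full_csi)               # reconstruct all antennas from the selected ones
optimizer.zero_grad()
loss.backward()                                 # gradients flow through the soft selection
optimizer.step()
```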

Read more
Signal Processing

Deep Learning for Latent Events Forecasting in Twitter Aided Caching Networks

A novel Twitter context aided content caching (TAC) framework is proposed to enhance caching efficiency by taking advantage of the legibility and massive volume of Twitter data. To promote caching efficiency, three machine learning models are proposed to predict latent events and event popularity, utilizing collected Twitter data with geo-tags and the geographic information of the adjacent base stations (BSs). Firstly, we propose a latent Dirichlet allocation (LDA) model for latent event forecasting, taking advantage of the superiority of the LDA model in natural language processing (NLP). Then, we conceive a long short-term memory (LSTM) model with a skip-gram embedding approach and an LSTM model with a continuous skip-gram Geo-aware embedding approach for event popularity forecasting. Lastly, we associate the predicted latent events and their popularity with the caching strategy. Extensive practical experiments demonstrate that: (1) the proposed TAC framework outperforms the conventional caching framework and can be employed in practical applications thanks to its ability to associate caching decisions with public interests; (2) the proposed LDA approach retains its superiority for NLP on Twitter data; (3) the perplexity of the proposed skip-gram-based LSTM is lower than that of the conventional LDA approach; (4) the hit rates of tweets vary from 50% to 65%, and the hit rate of the caching contents reaches approximately 75% with a smaller caching space compared to conventional algorithms.
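
As a small illustration of the latent-event step only, the sketch below fits an LDA topic model to a few placeholder geo-tagged tweet texts with scikit-learn. The popularity-forecasting LSTMs and the caching policy are not shown, and the tweet samples and parameters are invented for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder tweet texts standing in for collected geo-tagged Twitter data.
tweets = [
    "concert tonight downtown arena doors open at seven",
    "traffic jam on the bridge after the football match",
    "new food festival in the old town this weekend",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)           # per-tweet topic mixture

# Top words per latent "event" topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```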

Read more
Signal Processing

Deep Learning for Short-Term Voltage Stability Assessment of Power Systems

To fully learn the latent temporal dependencies from post-disturbance system dynamic trajectories, deep learning is utilized in this paper for short-term voltage stability (STVS) assessment of power systems. First, a semi-supervised clustering algorithm is performed to obtain class labels of STVS instances, due to the unavailability of reliable quantitative criteria. Second, a long short-term memory (LSTM) based assessment model is built by learning the time dependencies from the post-disturbance system dynamics. Finally, the trained assessment model is employed to determine the system's stability status in real time. Test results on the IEEE 39-bus system show that the proposed approach assesses the stability status of the system accurately and in a timely manner. Furthermore, the superiority of the proposed method over traditional shallow learning-based assessment methods is also demonstrated.
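
For illustration, a minimal LSTM classifier over post-disturbance voltage trajectories could look like the sketch below; the input dimensions, layer sizes, and label coding are assumptions for the example rather than the paper's settings.

```python
import torch
import torch.nn as nn

class STVSClassifier(nn.Module):
    """Illustrative LSTM classifier for post-disturbance voltage trajectories.

    Input: (batch, time_steps, num_buses) bus-voltage samples.
    Output: logits over stability classes (e.g. stable / unstable).
    """
    def __init__(self, num_buses: int = 39, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(num_buses, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, trajectories: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(trajectories)   # final hidden state summarizes the sequence
        return self.head(h_n[-1])

model = STVSClassifier()
logits = model(torch.randn(8, 50, 39))          # 8 cases, 50 time samples, 39 buses
labels = logits.argmax(dim=-1)                  # 0 = stable, 1 = unstable (assumed coding)
```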

Read more
Signal Processing

Deep Learning-based Beam Tracking for Millimeter-wave Communications under Mobility

In this paper, we propose a deep learning-based beam tracking method for millimeter-wave (mmWave) communications. Beam tracking is employed to transmit known symbols using sounding beams and to track time-varying channels so as to maintain a reliable communication link. When the pose of a user equipment (UE) device varies rapidly, the mmWave channels also tend to vary fast, which hinders seamless communication. Thus, to cope with this problem, models that can capture the temporal behavior of mmWave channels caused by the motion of the device are required. Accordingly, we employ a deep neural network to analyze the temporal structure and patterns underlying the time-varying channels and the signals acquired by inertial sensors. We propose a model based on long short-term memory (LSTM) that predicts the distribution of the future channel behavior from a sequence of input signals available at the UE. This channel distribution is used to 1) control the sounding beams adaptively for the future channel state and 2) update the channel estimate through the measurement update step of a sequential Bayesian estimation framework. Our experimental results demonstrate that the proposed method achieves a significant performance gain over conventional beam tracking methods under various mobility scenarios.
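
A minimal sketch of such a predictive model is shown below: an LSTM consumes past beam measurements together with inertial-sensor readings and outputs the mean and log-variance of a Gaussian over the next-step channel parameter. All dimensions are illustrative assumptions, and the sounding-beam control and Bayesian measurement update are omitted.

```python
import torch
import torch.nn as nn

class BeamTrackingLSTM(nn.Module):
    """Sketch: maps past sounding measurements plus inertial-sensor readings to
    the mean and log-variance of the next channel parameter (e.g. an angle).
    """
    def __init__(self, meas_dim: int = 8, imu_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(meas_dim + imu_dim, hidden, batch_first=True)
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, meas: torch.Tensor, imu: torch.Tensor):
        x = torch.cat([meas, imu], dim=-1)       # (batch, time, meas_dim + imu_dim)
        out, _ = self.lstm(x)
        last = out[:, -1]                        # latest time step
        return self.mean_head(last), self.logvar_head(last)

# The predicted Gaussian could steer the next sounding beams and serve as the
# prior in a sequential Bayesian update.
model = BeamTrackingLSTM()
mean, logvar = model(torch.randn(4, 20, 8), torch.randn(4, 20, 6))
```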

Read more
Signal Processing

Deep Learning-based Phase Reconfiguration for Intelligent Reflecting Surfaces

Intelligent reflecting surfaces (IRSs), consisting of reconfigurable metamaterials, have recently attracted attention as a promising cost-effective technology that can bring new features to wireless communications. These surfaces can be used to partially control the propagation environment and can potentially provide a power gain proportional to the square of the number of IRS elements when configured properly. However, configuring the local phase matrix at the IRS can be quite challenging, since IRSs are purposely designed without any active components and therefore cannot process any pilot signal. In addition, the large number of elements at the IRS may create a huge training overhead. In this paper, we present a deep learning (DL) approach for phase reconfiguration at an IRS that learns and makes use of the local propagation environment. The proposed method uses the received pilot signals reflected through the IRS to train a deep feedforward network. The performance of the proposed approach is evaluated and the numerical results are presented.
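
As a rough sketch, a feedforward network of this kind might map received-pilot features to per-element phase shifts as below; the pilot featurization, layer sizes, and output parameterization are assumptions made for the example, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PhaseConfigNet(nn.Module):
    """Illustrative feedforward network mapping received-pilot features to IRS
    element phase shifts in [0, 2*pi)."""
    def __init__(self, pilot_dim: int = 128, num_elements: int = 64, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pilot_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_elements),
        )

    def forward(self, pilots: torch.Tensor) -> torch.Tensor:
        # Squash to (0, 1) and scale to a phase per IRS element.
        return 2 * torch.pi * torch.sigmoid(self.net(pilots))

phases = PhaseConfigNet()(torch.randn(1, 128))
reflection = torch.exp(1j * phases)              # diagonal of the IRS reflection matrix
```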

Read more
Signal Processing

Deep Learning-based Power Control for Cell-Free Massive MIMO Networks

A deep learning (DL)-based power control algorithm that solves the max-min user fairness problem in a cell-free massive multiple-input multiple-output (MIMO) system is proposed. The max-min rate optimization problem is formulated for a cell-free massive MIMO uplink setup, where user power allocations are optimized to maximize the minimum user rate. Instead of modeling the problem with mathematical optimization theory and solving it with iterative algorithms, our proposed solution uses DL. Specifically, we model a deep neural network (DNN) and train it in an unsupervised manner to learn the optimum user power allocations that maximize the minimum user rate. This novel unsupervised learning-based approach does not require optimal power allocations to be known during model training, as previously used supervised learning techniques do; hence, it has a simpler and more flexible model training stage. Numerical results show that the proposed DNN achieves a favorable performance-complexity trade-off, with around 400 times faster implementation and performance comparable to the optimization-based algorithm. An online learning stage is also introduced, which yields near-optimal performance with 4-6 times faster processing.
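
The unsupervised training idea can be sketched as follows: a DNN maps channel statistics to power coefficients and is trained by directly maximizing the minimum user rate, with no labelled optimal allocations. The rate expression below is a deliberately simplified stand-in, not the cell-free uplink rate used in the paper, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

K = 8                                            # number of users (placeholder)
net = nn.Sequential(                             # maps large-scale fading to power coefficients
    nn.Linear(K, 128), nn.ReLU(),
    nn.Linear(128, K), nn.Sigmoid(),             # coefficients constrained to (0, 1)
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(100):
    beta = torch.rand(64, K)                     # synthetic large-scale fading gains
    p = net(beta)
    signal = beta * p
    interference = (beta * p).sum(dim=1, keepdim=True) - signal
    rate = torch.log2(1 + signal / (interference + 1e-3))  # simplified per-user rate
    loss = -rate.min(dim=1).values.mean()        # unsupervised max-min objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```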

Read more
Signal Processing

Deep Learning-based Symbolic Indoor Positioning using the Serving eNodeB

This paper presents a novel indoor positioning method designed for residential apartments. The proposed method makes use of cellular signals emitted from the serving eNodeB, which eliminates the need for specialized positioning infrastructure. Additionally, it utilizes denoising autoencoders to mitigate the effects of cellular signal loss. We evaluated the proposed method using real-world data collected with two different smartphones inside a representative apartment with eight symbolic spaces. Experimental results verify that the proposed method outperforms conventional symbolic indoor positioning techniques across various performance metrics. To promote reproducibility and foster new research efforts, we have made all the data and code associated with this work publicly available.
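
A minimal sketch of the denoising-autoencoder idea is given below: random dropouts imitate lost cellular measurements, the decoder reconstructs them, and a small head classifies the symbolic space. The feature dimension and the number of spaces are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Sketch of a denoising autoencoder over cellular-signal fingerprints with
    a classifier head over symbolic spaces (e.g. rooms)."""
    def __init__(self, dim: int = 16, latent: int = 8, spaces: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, dim)
        self.classifier = nn.Linear(latent, spaces)   # one class per symbolic space

    def forward(self, x: torch.Tensor, drop_prob: float = 0.3):
        # Randomly zero out measurements to simulate cellular signal loss.
        corrupted = x * (torch.rand_like(x) > drop_prob).float()
        z = self.encoder(corrupted)
        return self.decoder(z), self.classifier(z)

recon, logits = DenoisingAE()(torch.randn(32, 16))
predicted_space = logits.argmax(dim=-1)
```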

Read more
Signal Processing

Deep Reinforcement Learning Aided Monte Carlo Tree Search for MIMO Detection

This paper proposes a novel multiple-input multiple-output (MIMO) symbol detector that incorporates a deep reinforcement learning (DRL) agent into the Monte Carlo tree search (MCTS) detection algorithm. We first describe how the MCTS algorithm, used in many decision-making problems, is applied to the MIMO detection problem. Then, we introduce a self-designed DRL agent, consisting of a policy value network and a state value network, which is trained to detect MIMO symbols. The outputs of the trained networks are incorporated into a modified MCTS detection algorithm to provide useful node statistics and facilitate an enhanced tree search process. The resulting scheme, termed the DRL-MCTS detector, demonstrates significant improvements over the original MCTS detection algorithm and exhibits favorable performance compared with existing linear and DNN-based detection methods under varying channel conditions.
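
To make the tree formulation concrete, the sketch below casts MIMO detection as a search over per-antenna symbol choices, where each tree level fixes one transmit symbol and leaves are scored by the residual norm. It uses plain exhaustive enumeration; the paper's DRL-guided MCTS instead expands nodes selectively using policy- and value-network statistics.

```python
import numpy as np

CONSTELLATION = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK

def tree_detect(y, H):
    """Exhaustive tree enumeration over per-antenna symbols, scored by ||y - Hx||^2."""
    n_tx = H.shape[1]
    best = {"cost": np.inf, "x": None}

    def expand(level, x_partial):
        if level == n_tx:                        # leaf: all symbols fixed
            cost = np.linalg.norm(y - H @ np.array(x_partial)) ** 2
            if cost < best["cost"]:
                best["cost"], best["x"] = cost, np.array(x_partial)
            return
        for s in CONSTELLATION:                  # one child per candidate symbol
            expand(level + 1, x_partial + [s])

    expand(0, [])
    return best["x"]

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x_true = rng.choice(CONSTELLATION, 4)
y = H @ x_true + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.allclose(tree_detect(y, H), x_true))    # exact search recovers the symbols
```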

Read more
Signal Processing

Deep Reinforcement Learning-based Anti-jamming Power Allocation in a Two-cell NOMA Network

The performance of a Non-orthogonal Multiple Access (NOMA) system decreases dramatically in the presence of inter-cell interference. This condition becomes more challenging when a smart jammer is interacting with the network. In this paper, the NOMA power allocation of two independent Base Stations (BSs) against a smart jammer is modeled as a sequential game. In this game, each BS, as a leader, first chooses its power allocation strategy independently. Then, the smart jammer, as the follower, selects its optimal strategy based on the strategies of the BSs. The solutions of this game are derived under different conditions. Based on the game-theoretic analysis, three new schemes are proposed for anti-jamming NOMA power allocation in a two-cell scenario: a) the Q-Learning based Unselfish (QLU) NOMA power allocation scheme, b) the Deep Q-Learning based Unselfish (DQLU) NOMA power allocation scheme, and c) the Hot Booting Deep Q-Learning based Unselfish (HBDQLU) NOMA power allocation scheme. In these methods the BSs do not coordinate with each other, but our analysis theoretically predicts that, with high probability, the proposed methods converge to the optimal strategy from the perspective of the total network. Simulation results show the convergence of the proposed schemes and their superiority over the Q-Learning-based Selfish (QLS) NOMA power allocation method.
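
As a toy illustration of the reinforcement-learning component, the sketch below performs a tabular Q-learning update for a BS choosing among discrete power-allocation actions; the state encoding, reward, and environment transition are placeholders, and the paper's deep and hot-booting variants replace the table with a (pre-initialized) neural network.

```python
import numpy as np

n_states, n_actions = 16, 8                      # placeholder state/action space sizes
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1            # learning rate, discount, exploration

def q_learning_step(state):
    # Epsilon-greedy choice of a discrete power-allocation action.
    if np.random.rand() < epsilon:
        action = np.random.randint(n_actions)
    else:
        action = int(Q[state].argmax())
    # Placeholder environment: in the paper, the next state and reward follow
    # from the jammer's response and the achieved NOMA rates.
    next_state = np.random.randint(n_states)
    reward = np.random.rand()
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    return next_state

state = 0
for _ in range(1000):
    state = q_learning_step(state)
```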

Read more
Signal Processing

Deep Residual Learning for Channel Estimation in Intelligent Reflecting Surface-Assisted Multi-User Communications

Channel estimation is one of the main tasks in realizing practical intelligent reflecting surface-assisted multi-user communication (IRS-MC) systems. However, unlike traditional communication systems, an IRS-MC system generally involves a cascaded channel with a sophisticated statistical distribution. In this case, the optimal minimum mean square error (MMSE) estimator requires the calculation of a multidimensional integral that is intractable in practice. To further improve the channel estimation performance, in this paper we model channel estimation as a denoising problem and adopt a deep residual learning (DReL) approach to implicitly learn the residual noise and recover the channel coefficients from the noisy pilot-based observations. To this end, we first develop a versatile DReL-based channel estimation framework in which a deep residual network (DRN)-based MMSE estimator is derived from a Bayesian perspective. As a realization of the developed DReL framework, a convolutional neural network (CNN)-based DRN (CDRN) is then proposed for channel estimation in IRS-MC systems, in which a CNN denoising block equipped with an element-wise subtraction structure is specifically designed to simultaneously exploit both the spatial features of the noisy channel matrices and the additive nature of the noise. In particular, an explicit expression of the proposed CDRN is derived and analyzed in terms of Bayesian estimation to characterize its properties theoretically. Finally, simulation results demonstrate that the performance of the proposed method approaches that of the optimal MMSE estimator, which requires the availability of the prior probability density function of the channel.
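
The element-wise subtraction structure can be sketched as a residual CNN denoiser that predicts the noise in a noisy channel matrix and subtracts it from the input, as below. Channel counts, depth, and input size are illustrative assumptions rather than the paper's CDRN configuration.

```python
import torch
import torch.nn as nn

class DenoisingBlock(nn.Module):
    """Sketch of a residual CNN denoiser: the network estimates the noise and
    subtracts it element-wise from the noisy channel matrix."""
    def __init__(self, channels: int = 2, features: int = 32, depth: int = 4):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU()]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.noise_estimator = nn.Sequential(*layers)

    def forward(self, noisy_channel: torch.Tensor) -> torch.Tensor:
        estimated_noise = self.noise_estimator(noisy_channel)
        return noisy_channel - estimated_noise   # element-wise subtraction structure

# Real and imaginary parts of the cascaded channel stacked as two input channels.
denoised = DenoisingBlock()(torch.randn(4, 2, 16, 16))
```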

Read more
