Featured Research

Information Theory

Deep Learning for THz Drones with Flying Intelligent Surfaces: Beam and Handoff Prediction

We consider the problem of proactive handoff and beam selection in Terahertz (THz) drone communication networks assisted by reconfigurable intelligent surfaces (RIS). Drones have emerged as critical assets for next-generation wireless networks, providing seamless connectivity and extended coverage, and can benefit greatly from operating in the THz band to achieve high data rates (as considered for 6G). However, THz communications are highly susceptible to channel impairments and blockage effects, which become even more challenging when accounting for drone mobility. RISs offer the flexibility to extend coverage by adapting to channel dynamics. To integrate RISs into THz drone communications, we propose a novel deep learning solution based on a recurrent neural network, namely the Gated Recurrent Unit (GRU), that proactively predicts the serving base station/RIS and the serving beam for each drone based on prior observations of drone location/beam trajectories. This solution has the potential to extend drone coverage and enhance the reliability of next-generation wireless communications. Predicting future beams from the drone beam/position trajectory significantly reduces the beam training overhead and its associated latency, and thus emerges as a viable solution for serving time-critical applications. Numerical results based on realistic 3D ray-tracing simulations show that the proposed deep learning solution is promising for future RIS-assisted THz networks, achieving near-optimal proactive handoff performance and more than 90% beam prediction accuracy.
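The GRU-based predictor can be sketched as a plain NumPy forward pass. This is not the paper's trained model: the weights are random, and the input dimension (3D position plus beam index), hidden size, and 64-beam codebook are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (forward pass only): the hidden state summarizes
    a drone's past location/beam observations."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.Wz = rng.normal(0, s, (hidden_dim, input_dim + hidden_dim))
        self.Wr = rng.normal(0, s, (hidden_dim, input_dim + hidden_dim))
        self.Wh = rng.normal(0, s, (hidden_dim, input_dim + hidden_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                   # update gate
        r = sigmoid(self.Wr @ xh)                   # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde            # new hidden state

def predict_next_beam(cell, W_out, trajectory):
    """Run the GRU over a sequence of observations and return a softmax
    distribution over the candidate beams of the codebook."""
    h = np.zeros(cell.Wz.shape[0])
    for obs in trajectory:
        h = cell.step(obs, h)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

In a deployed system the weights would be trained on recorded beam/position trajectories, and the argmax of the returned distribution would select the next serving beam without exhaustive beam training.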

Information Theory

Deep Learning-Aided 5G Channel Estimation

Deep learning has demonstrated its important role in improving system performance and reducing computational complexity for 5G-and-beyond networks. In this paper, we propose a new deep learning-assisted channel estimation method that supports least squares estimation, a low-cost method that suffers from relatively high channel estimation errors. This goal is achieved by utilizing a MIMO (multiple-input multiple-output) system with a multi-path channel profile used for 5G network simulations under severe Doppler effects. Numerical results demonstrate the superiority of the proposed deep learning-assisted channel estimation method over previously reported channel estimation methods in terms of mean square error.
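The idea — compute a cheap least-squares (LS) estimate, then denoise it — can be sketched in NumPy. The moving-average "refiner" below is only a stand-in for the paper's trained network; it exploits the same frequency-domain smoothness a learned denoiser would. The 64 subcarriers, 4-tap channel, and 10 dB SNR are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-subcarrier model: y[f] = h[f] * x[f] + n[f].
n_sc, n_taps = 64, 4
taps = (rng.normal(size=n_taps) + 1j * rng.normal(size=n_taps)) / np.sqrt(2 * n_taps)
h_true = np.fft.fft(np.pad(taps, (0, n_sc - n_taps)))   # smooth frequency response
x_pilot = np.ones(n_sc)                                  # known pilots everywhere
sigma = np.sqrt(10 ** (-10.0 / 10) / 2)                  # 10 dB SNR
y = h_true * x_pilot + sigma * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))

# LS estimate: one division per subcarrier, but it keeps all the noise.
h_ls = y / x_pilot
mse_ls = np.mean(np.abs(h_ls - h_true) ** 2)

# Stand-in for the learned refiner: average neighboring subcarriers.
kernel = np.ones(5) / 5
h_refined = (np.convolve(h_ls.real, kernel, mode="same")
             + 1j * np.convolve(h_ls.imag, kernel, mode="same"))
mse_refined = np.mean(np.abs(h_refined - h_true) ** 2)
```

A trained network replaces the fixed averaging kernel with a nonlinear mapping learned from simulated channels, which is what closes the gap to more expensive estimators.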

Information Theory

Deep Reinforcement Learning for Energy-Efficient Beamforming Design in Cell-Free Networks

The cell-free network is considered a promising architecture for satisfying the growing demands of future wireless networks, in which distributed access points coordinate with an edge cloud processor to jointly serve a smaller number of user equipments in a compact area. In this paper, the problem of uplink beamforming design is investigated for maximizing the long-term energy efficiency (EE) with the aid of deep reinforcement learning (DRL) in the cell-free network. First, based on minimum mean square error channel estimation and successive interference cancellation for signal detection, the expression of the signal-to-interference-plus-noise ratio (SINR) is derived. Second, based on the SINR formulation, we define the long-term EE as a function of the beamforming matrix. Third, to address the dynamic beamforming design with continuous state and action spaces, a DRL-enabled beamforming design is proposed based on the deep deterministic policy gradient (DDPG) algorithm, taking advantage of its double-network architecture. Finally, simulation results indicate that the DDPG-based beamforming design is capable of converging to the optimal EE performance. Furthermore, the influence of hyper-parameters on the EE performance of the DDPG-based beamforming design is investigated, and it is demonstrated that an appropriate discount factor and hidden layer size can improve the EE performance.
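The "double-network architecture" the abstract leans on can be illustrated in a few lines: DDPG keeps a slowly tracking target copy of both the actor and the critic, and bootstraps critic targets with the discount factor γ. This is a generic sketch of those two mechanics with flat parameter vectors, not the paper's beamforming agent.

```python
import numpy as np

class DDPGPair:
    """Online/target network pair, used for both the actor and the critic
    in DDPG (parameters are plain vectors here, not real networks)."""
    def __init__(self, n_params, seed=0):
        rng = np.random.default_rng(seed)
        self.online = rng.normal(size=n_params)   # updated by gradient steps
        self.target = self.online.copy()          # slowly tracking copy

    def soft_update(self, tau=0.005):
        # theta_target <- tau * theta_online + (1 - tau) * theta_target
        self.target = tau * self.online + (1 - tau) * self.target

def td_target(reward, q_next, gamma=0.99):
    """Bootstrapped critic target y = r + gamma * Q_target(s', mu_target(s')).
    gamma is the discount factor whose tuning the abstract highlights."""
    return reward + gamma * q_next
```

The target copies stabilize learning because the critic regresses toward targets that change slowly, even while the online networks are updated every step.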

Information Theory

Deep-Learned Approximate Message Passing for Asynchronous Massive Connectivity

This paper considers the massive connectivity problem in an asynchronous grant-free random access system, where a huge number of devices sporadically transmit data to a base station (BS) with imperfect synchronization. The goal is to design algorithms for joint user activity detection, delay detection, and channel estimation. By exploiting the sparsity of both user activity and delays, we formulate a hierarchical sparse signal recovery problem in both the single-antenna and the multiple-antenna scenarios. While traditional compressed sensing algorithms can be applied to these problems, they suffer from high computational complexity and often require perfect statistical information of the channel and devices. This paper addresses these issues by designing a Learned Approximate Message Passing (LAMP) network, a model-driven deep learning approach that achieves efficient performance without requiring tremendous training data. In particular, in the multiple-antenna scenario, we design three different LAMP structures, namely distributed, centralized, and hybrid, to balance performance and complexity. Simulation results demonstrate that the proposed LAMP networks significantly outperform the conventional AMP method thanks to their ability to learn parameters. LAMP is also shown to be robust to the maximal delay spread of the asynchronous users.
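A LAMP layer is an unfolded AMP iteration with learned parameters. The sketch below implements the classical iteration in NumPy on a toy sparse-recovery instance; the decreasing threshold schedule stands in for the per-layer thresholds a LAMP network would learn (and B is fixed to Aᵀ, whereas LAMP would learn B as well).

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise shrinkage: the sparsity-enforcing nonlinearity in (L)AMP.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def amp_iteration(y, A, B, x, r_prev, lam):
    """One unfolded AMP layer. Classical AMP uses B = A.T and thresholds
    derived from noise statistics; LAMP learns B and lam per layer."""
    m = len(y)
    onsager = (np.count_nonzero(x) / m) * r_prev   # Onsager correction term
    r = y - A @ x + onsager
    return soft_threshold(x + B @ r, lam), r

# Toy instance: recover a sparse activity vector from m < n measurements.
rng = np.random.default_rng(0)
m, n, k = 100, 200, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true

x, r = np.zeros(n), np.zeros(m)
for lam in np.linspace(0.5, 0.1, 20):   # stand-in for learned thresholds
    x, r = amp_iteration(y, A, A.T, x, r, lam)
```

Unrolling a fixed, small number of such layers and training their parameters end-to-end is what gives LAMP its low complexity relative to iterating classical AMP to convergence.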

Information Theory

DeepMuD: Multi-user Detection for Uplink Grant-Free NOMA IoT Networks via Deep Learning

In this letter, we propose deep learning-aided multi-user detection (DeepMuD) for uplink non-orthogonal multiple access (NOMA) to empower massive machine-type communication, where an offline-trained Long Short-Term Memory (LSTM)-based network is used for multi-user detection. The proposed DeepMuD does not require perfect channel state information (CSI), since it can perform joint channel estimation and multi-user detection from the pilot responses, with a very low pilot-to-frame ratio. The proposed DeepMuD significantly improves the error performance of uplink NOMA and outperforms conventional detectors (even those with perfect CSI). Moreover, this gain grows with the number of Internet of Things (IoT) devices. Furthermore, the proposed DeepMuD offers flexible detection: multi-user detection can be performed regardless of the number of IoT devices. Thus, an arbitrary number of IoT devices can be served without signaling overhead, enabling grant-free communication.
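For context, the conventional baseline DeepMuD is compared against can be sketched as successive interference cancellation (SIC) with known channels. The single-symbol BPSK example below is purely illustrative; it is the classical detector, not the paper's LSTM network.

```python
import numpy as np

def sic_detect(y, h, constellation):
    """Successive interference cancellation: detect the strongest user,
    subtract its reconstructed contribution, and repeat for the rest."""
    est = {}
    residual = complex(y)
    for k in np.argsort(-np.abs(h)):      # strongest channel first
        z = residual / h[k]               # equalize user k
        s = constellation[np.argmin(np.abs(constellation - z))]
        est[k] = s
        residual -= h[k] * s              # cancel user k's signal
    return est
```

SIC needs the channel gains h explicitly and degrades when they are misestimated; DeepMuD's selling point is that the trained network absorbs channel estimation into detection using only the pilot responses.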

Information Theory

Delay Minimization in Sliced Multi-Cell Mobile Edge Computing (MEC) Systems

We consider the problem of jointly optimizing users' offloading decisions and the allocation of communication and computing resources in a sliced multi-cell mobile edge computing (MEC) network. We minimize the weighted sum of the gaps between the observed delay of each slice and its corresponding delay requirement, where the weights set the priority of each slice. The fractional form of the objective function, the discrete subchannel allocation, the partial offloading under consideration, and the interference incorporated in the rate function make this a complex mixed-integer non-linear programming problem. We therefore decompose the original problem into two sub-problems: (i) offloading decision-making and (ii) joint computation resource, subchannel, and power allocation. We solve the first sub-problem optimally, and for the second sub-problem, leveraging novel tools from fractional programming and the Augmented Lagrangian method, we propose an efficient algorithm whose computational complexity is proved to be polynomial. Using alternating optimization, we solve these two sub-problems iteratively until convergence. Simulation results demonstrate the convergence of the proposed algorithm and its effectiveness compared to existing schemes.
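The fractional-programming machinery can be illustrated with Dinkelbach's method on a toy scalar ratio. The rate-over-power, energy-efficiency-style objective and the grid below are hypothetical stand-ins, not the paper's sliced-MEC problem.

```python
import numpy as np

def dinkelbach(f, g, argmax_aux, tol=1e-10, max_iter=50):
    """Dinkelbach's method: solve max f(x)/g(x) through the parametric
    problem max f(x) - lam * g(x), updating lam with the achieved ratio.
    At the optimum, the parametric maximum F(lam) hits zero."""
    lam = 0.0
    for _ in range(max_iter):
        x = argmax_aux(lam)
        if abs(f(x) - lam * g(x)) < tol:   # F(lam) = 0: converged
            break
        lam = f(x) / g(x)
    return x, lam

# Hypothetical scalar example: maximize "rate"/"power" over a power grid.
p_grid = np.linspace(0.01, 10.0, 1000)
f = lambda p: np.log2(1.0 + p)             # rate-like numerator
g = lambda p: p + 1.0                      # power-like denominator
argmax_aux = lambda lam: p_grid[np.argmax(f(p_grid) - lam * g(p_grid))]
p_star, lam_star = dinkelbach(f, g, argmax_aux)
```

The transform is useful precisely because the inner parametric problem is no longer fractional, so standard (here, grid-search) solvers apply at every iteration.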

Information Theory

Delay-Phase Precoding for Wideband THz Massive MIMO

Benefiting from tens of GHz of bandwidth, terahertz (THz) communication is considered a promising technology for providing ultra-high-speed data rates in future 6G wireless systems. To compensate for the severe propagation attenuation of THz signals, massive multiple-input multiple-output (MIMO) with hybrid precoding can be utilized to generate directional beams with high array gains. However, the standard hybrid precoding architecture based on frequency-independent phase shifters cannot cope with the beam split effect in THz massive MIMO systems, where the directional beams split into different physical directions at different subcarrier frequencies. The beam split effect results in a serious array gain loss across the entire bandwidth, which has not been well investigated in THz massive MIMO systems. In this paper, we first reveal and quantify the seriousness of the beam split effect in THz massive MIMO systems by analyzing the array gain loss it causes. Then, we propose a new precoding architecture called delay-phase precoding (DPP) to mitigate this effect. Specifically, the proposed DPP introduces a time delay network as a new precoding layer between the radio-frequency chains and the phase shifters of the standard hybrid precoding architecture. In this way, conventional phase-controlled analog beamforming is converted into delay-phase controlled analog beamforming. Unlike frequency-independent phase shifts, the time delay network in the DPP can realize frequency-dependent phase shifts, which can be designed to generate beams aligned with the target physical direction across the entire THz bandwidth. Thanks to the joint control of delay and phase, the proposed DPP significantly mitigates the array gain loss caused by the beam split effect. Furthermore, we propose a hardware structure using true-time-delayers to realize the concept of DPP.
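The beam split effect and its true-time-delay remedy are easy to reproduce numerically. The sketch below compares the normalized array gain of a 128-element half-wavelength ULA at a band-edge subcarrier when the analog weights come from frequency-flat phase shifters versus true-time-delayers; the carrier, bandwidth, and steering angle are illustrative choices, not values from the paper.

```python
import numpy as np

# ULA steered to theta0, half-wavelength spacing at the carrier fc.
N, fc, theta0 = 128, 300e9, np.deg2rad(60)
n = np.arange(N)

def array_gain(f, phase_per_antenna):
    """Normalized gain |a(f, theta0)^H w| / sqrt(N) for analog weights w
    built from the given per-antenna phases."""
    a = np.exp(1j * np.pi * n * (f / fc) * np.sin(theta0))
    w = np.exp(1j * phase_per_antenna) / np.sqrt(N)
    return np.abs(a.conj() @ w) / np.sqrt(N)

f_edge = fc * 1.05                                       # band-edge subcarrier
ps_phase = np.pi * n * np.sin(theta0)                    # matched only at fc
ttd_phase = np.pi * n * (f_edge / fc) * np.sin(theta0)   # delay tau_n = n*d*sin(theta0)/c

gain_ps = array_gain(f_edge, ps_phase)    # collapses away from fc: beam split
gain_ttd = array_gain(f_edge, ttd_phase)  # stays at full gain
```

The phase-shifter weights are correct only at fc, so the gain at the band edge collapses, while the delay-induced phases scale with frequency and keep the beam aligned across the whole band — exactly the mechanism DPP exploits.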

Information Theory

Design and Analysis of Wideband Full-Duplex FR2-IAB Networks

This paper develops a 3GPP-inspired design for in-band full-duplex (IBFD) integrated access and backhaul (IAB) networks in the frequency range 2 (FR2) band, which can enhance spectral efficiency (SE) and coverage while reducing latency. However, self-interference (SI), which is usually more than 100 dB stronger than the signal of interest, is the major bottleneck in developing such IBFD networks. We design and analyze a subarray-based hybrid beamforming FD-IAB system whose RF beamformers are drawn from RF codebooks generated by a modified Linde-Buzo-Gray (LBG) algorithm. The SI is canceled in three stages: antenna isolation forms the first stage; the second stage consists of optical-domain RF cancellation, with cancelers connected to the RF chain pairs; and the third stage comprises digital cancellation via successive interference cancellation followed by a minimum mean-squared error baseband receiver. Multiuser interference in the access link is canceled by zero-forcing at the IAB-node transmitter. Simulations show that our staged SI cancellation can enhance the SE, while residual SI due to hardware impairments and channel uncertainty can degrade the SE of the FD scheme in the backhaul link.
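For reference, the classical Linde-Buzo-Gray algorithm that the codebook design builds on looks as follows. This is the textbook split-and-Lloyd procedure on real training vectors, not the paper's modified variant for RF beamformers.

```python
import numpy as np

def lbg_codebook(train, size, eps=1e-3, n_lloyd=20):
    """Classical LBG vector quantization: start from the global centroid,
    repeatedly split every codeword, and refine with Lloyd iterations."""
    codebook = train.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # Perturb-and-split: double the codebook size.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_lloyd):
            # Nearest-codeword assignment, then centroid update.
            d = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
            assign = d.argmin(axis=1)
            for j in range(len(codebook)):
                if np.any(assign == j):
                    codebook[j] = train[assign == j].mean(axis=0)
    return codebook
```

For RF codebooks, the training set would be candidate beamforming vectors and the distortion metric is modified to respect constant-modulus phase-shifter constraints, which is where the paper's modification comes in.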

Information Theory

Design of Full-Duplex Millimeter-Wave Integrated Access and Backhaul Networks

One of the key technologies for future cellular networks is full-duplex (FD)-enabled integrated access and backhaul (IAB) operating at millimeter-wave (mmWave) frequencies. The main challenge in realizing FD-IAB networks is mitigating the impact of self-interference (SI) in the wideband mmWave frequencies. In this article, we first introduce the 3GPP IAB network architectures and wideband mmWave channel models. Using a subarray-based hybrid precoding scheme at the FD-IAB node, multiuser interference is mitigated by zero-forcing at the transmitter, whereas the residual SI remaining after antenna and analog cancellation is canceled by a minimum mean square error baseband combiner at the receiver. The spectral efficiency (SE) is evaluated under RF insertion loss (RFIL) with different kinds of phase shifters and channel uncertainty. Simulation results show that, in the presence of RFIL, almost double the SE of half-duplex systems can be achieved, close to that obtained with fully connected hybrid precoding, when the uncertainties are of low strength.
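The transmitter-side zero-forcing step can be sketched directly. The fully digital K-user channel below is an illustrative simplification of the paper's subarray-based hybrid architecture: the precoder is the channel's right pseudo-inverse, which diagonalizes the effective channel so each user sees no inter-user interference.

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing transmit precoder W = H^H (H H^H)^{-1}, normalized to
    unit total transmit power; H is the K x Nt multiuser channel."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, "fro")

rng = np.random.default_rng(0)
K, Nt = 4, 8                                  # users, transmit antennas
H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)
W = zf_precoder(H)
E = H @ W                                     # effective channel: diagonal
```

Zero-forcing handles the multiuser interference; the residual self-interference, which the precoder cannot see, is then left to the MMSE baseband combiner at the receiver.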

Information Theory

Design of Polar Code Lattices of Small Dimension

Polar code lattices are formed from binary polar codes using Construction D. In this paper, we propose a design technique for finite-dimension polar code lattices, with the dimension n and the target probability of decoding error as design parameters. To select the rates of the Construction D component codes, rather than using the capacity as in past work, we use the explicit finite-length properties of the polar code. Under successive cancellation decoding, density evolution allows choosing code rates that satisfy the equal-error-probability rule. At an error rate of 10 ??, a dimension n=128 polar code lattice achieves a volume-to-noise ratio (VNR) of 2.5 dB, within 0.2 dB of the best-known BCH code lattice, but with significantly lower decoding complexity.
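Construction D itself is compact enough to show with toy component codes. The length-4 repetition and even-weight codes below stand in for the density-evolution-designed polar codes of the paper; the nesting C0 ⊆ C1 is what the construction requires.

```python
import numpy as np

# Two-level nested chain C0 ⊆ C1 ⊆ F_2^4 (toy stand-ins for polar codes):
# C0 = repetition code, C1 = even-weight (single-parity-check) code.
G0 = np.array([[1, 1, 1, 1]])
G1 = np.array([[1, 1, 1, 1],
               [0, 1, 1, 0],
               [0, 0, 1, 1]])

def encode_construction_d(u0, u1, z):
    """Construction D lattice point x = c0 + 2*c1 + 4*z: the level-0 and
    level-1 codewords are lifted to the integers and a coarse 4Z^n
    translate is added."""
    c0 = (np.asarray(u0) @ G0) % 2
    c1 = (np.asarray(u1) @ G1) % 2
    return c0 + 2 * c1 + 4 * np.asarray(z)
```

Reading off the bits level by level is also how decoding proceeds: the least significant bits of a lattice point form a C0 codeword, which is why multistage (here, successive cancellation) decoding of the component codes applies.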

