Featured Research

Signal Processing

A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing

Modern radar systems face demanding requirements in terms of accuracy, robustness, and real-time capability when operating in increasingly complex electromagnetic environments. Traditional radar signal processing (RSP) methods have shown limitations in meeting such requirements, particularly for target classification. With the rapid development of machine learning (ML), especially deep learning, radar researchers have started applying these new methods to RSP-related problems. This paper aims to help researchers and practitioners better understand the application of ML techniques to RSP by providing a comprehensive, structured, and reasoned overview of the literature on ML-based RSP techniques. The survey opens with the general elements of ML-based RSP and the motivations behind it. The main applications of ML-based RSP are then analysed and organized by application field. The paper concludes with a series of open questions and proposed research directions that indicate current gaps as well as potential future solutions and trends.

A Comprehensive Survey of the Tactile Internet: State of the Art and Research Directions

The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

A Distributed Power Control Algorithm for Energy Efficiency Maximization in Wireless Cellular Networks

In this paper, we propose a distributed power control algorithm for the global energy efficiency (GEE) maximization problem, subject to a minimum target signal-to-interference-plus-noise ratio (SINR) for all user equipments (UEs) in wireless cellular networks. We formulate the problem as a multi-objective optimization that simultaneously minimizes total power consumption and maximizes total throughput while guaranteeing the minimum target SINR for every UE. We propose an iterative scheme, executed at the UEs, that controls their transmit power using only individual channel state information (CSI) so that the GEE is maximized in a distributed manner. We prove that the proposed iterative algorithm converges to its unique fixed point, which our numerical results confirm. Additionally, simulation results demonstrate that the proposed scheme outperforms other algorithms in the literature and performs comparably to the centralized algorithm executed at the base station, which maximizes the GEE using global CSI.
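
The abstract does not spell out the update rule, so as a rough illustration of the distributed, fixed-point flavour of such schemes, here is a minimal sketch of a classical target-SINR power update (in the Foschini-Miljanic style, not necessarily the paper's GEE-maximizing rule), where each UE rescales its transmit power using only its own measured SINR. The channel matrix, noise level, and power cap are hypothetical.

```python
import numpy as np

def distributed_power_control(G, noise, sinr_target, p_max, n_iter=100):
    """Iterative per-UE power update toward a minimum target SINR.

    G[i, j] is the channel gain from UE j's transmitter to UE i's receiver;
    each UE i only needs its own measured SINR, so the update is distributed.
    """
    n = G.shape[0]
    p = np.full(n, 0.01)                      # small initial transmit powers
    for _ in range(n_iter):
        # interference-plus-noise seen by each UE's receiver
        interference = G @ p - np.diag(G) * p + noise
        sinr = np.diag(G) * p / interference
        # fixed-point update: scale each power by its SINR deficit, capped
        p = np.minimum(sinr_target / sinr * p, p_max)
    return p, sinr

# Hypothetical two-UE example: the iteration converges to the unique
# fixed point where both UEs just meet the 2x SINR target.
G = np.array([[1.0, 0.1],
              [0.2, 0.8]])
p, sinr = distributed_power_control(G, noise=1e-3, sinr_target=2.0, p_max=1.0)
print(p, sinr)
```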

A General 3D Non-Stationary Massive MIMO GBSM for 6G Communication Systems

A general three-dimensional (3D) non-stationary massive multiple-input multiple-output (MIMO) geometry-based stochastic model (GBSM) for sixth generation (6G) communication systems is proposed in this paper. The novelty of the model lies in its ability to cover a variety of channel characteristics, including space-time-frequency (STF) non-stationarity, spherical wavefronts, spatial consistency, and channel hardening. First, the twin-cluster channel model is introduced in detail. Second, key statistical properties, such as the space-time-frequency correlation function (STFCF), space cross-correlation function (CCF), temporal autocorrelation function (ACF), and frequency correlation function (FCF), as well as performance indicators such as the singular value spread (SVS) and channel capacity, are derived. Finally, simulation results are presented and shown to be consistent with measurements reported in the literature, validating the proposed model as a reference for modeling massive MIMO channel characteristics.
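
The model equations are not reproduced in the abstract. Purely to illustrate what a temporal ACF looks like for a simulated fading process, the sketch below generates a sum-of-sinusoids Rayleigh channel (a classical Clarke/Jakes construction, not the twin-cluster GBSM itself) and compares its empirical ACF with the theoretical Bessel-function ACF; all parameters are illustrative.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)
fd, fs, n = 100.0, 10_000.0, 100_000   # max Doppler (Hz), sample rate, samples
t = np.arange(n) / fs

# Sum-of-sinusoids Rayleigh fading (Clarke's model, uniform angles of arrival)
M = 64                                  # number of scatterers
theta = rng.uniform(0, 2 * np.pi, M)    # angles of arrival
phi = rng.uniform(0, 2 * np.pi, M)      # random initial phases
h = np.sqrt(1 / M) * np.sum(
    np.exp(1j * (2 * np.pi * fd * np.cos(theta)[:, None] * t + phi[:, None])),
    axis=0)

# Empirical temporal ACF at a few lags vs the theoretical J0(2*pi*fd*tau)
for lag in (0, 10, 50, 100):
    tau = lag / fs
    acf = np.mean(h[lag:] * np.conj(h[:n - lag]))
    print(f"tau={tau * 1e3:4.1f} ms  empirical={acf.real:+.3f}  "
          f"theory={j0(2 * np.pi * fd * tau):+.3f}")
```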

A General 3D Non-Stationary Wireless Channel Model for 5G and Beyond

In this paper, a novel three-dimensional (3D) non-stationary geometry-based stochastic model (GBSM) for fifth generation (5G) and beyond 5G (B5G) systems is proposed. The proposed B5G channel model (B5GCM) is designed to capture various channel characteristics of (B)5G systems, such as space-time-frequency (STF) non-stationarity, spherical wavefronts (SWF), high delay resolution, time-variant velocities and directions of motion of the transmitter, receiver, and scatterers, and spatial consistency. By combining different channel properties into a general channel model framework, the proposed B5GCM can be applied to multiple frequency bands and multiple scenarios, including massive multiple-input multiple-output (MIMO), vehicle-to-vehicle (V2V), high-speed train (HST), and millimeter wave-terahertz (mmWave-THz) communication scenarios. Key statistics of the proposed B5GCM are obtained and compared with those of standard 5G channel models and corresponding measurement data, showing the generality and usefulness of the proposed model.
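
As one concrete example of why spherical wavefronts matter for large arrays, the sketch below compares the exact per-element propagation distance from a nearby user with the usual plane-wave approximation for a 128-element uniform linear array; the phase deviation grows along the array, which is the effect an SWF-aware GBSM has to capture. The geometry and numbers are illustrative and not taken from the paper.

```python
import numpy as np

c = 3e8
fc = 28e9                        # illustrative mmWave carrier frequency
lam = c / fc
n_ant, d = 128, lam / 2          # 128-element ULA, half-wavelength spacing

# Element positions along the x-axis; user 5 m away at 30 degrees
x = (np.arange(n_ant) - (n_ant - 1) / 2) * d
r, theta = 5.0, np.deg2rad(30)
user = np.array([r * np.cos(theta), r * np.sin(theta)])

# Exact (spherical-wavefront) distances vs the plane-wave approximation
exact = np.hypot(user[0] - x, user[1])
plane = r - x * np.cos(theta)    # first-order far-field approximation

phase_err = 2 * np.pi * (exact - plane) / lam
print(f"max phase error across the array: {np.max(np.abs(phase_err)):.2f} rad")
```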

A General Framework for RIS-Aided mmWave Communication Networks: Channel Estimation and Mobile User Tracking

Reconfigurable intelligent surfaces (RISs) have been widely discussed as a new technology for improving wireless communication performance. Owing to their unique design, RIS elements can reflect, refract, absorb, or focus incoming waves toward any desired direction. These functionalities make the RIS a promising remedy for the harsh propagation conditions of millimeter-wave (mmWave) bands, including severe path attenuation and blockage. However, channel estimation in RIS-aided communication remains a major concern, due to the passive nature of RIS elements and the estimation overhead that arises in multiple-input multiple-output (MIMO) systems; as a consequence, user tracking has not yet been analyzed. This paper is the first work to address channel estimation, beamforming, and user tracking in practical mmWave RIS-MIMO systems. Building on the mathematical relation between the RIS design and the MIMO system, a three-stage framework is presented: the channel between the base station (BS) and the RIS is first estimated using hierarchical beam searching; the channel between the RIS and the user is then estimated using an iterative resolution algorithm; finally, a well-established tracking algorithm is employed to track the channel parameters between the RIS and the user. System analysis demonstrates the robustness and effectiveness of the proposed framework in real-time scenarios.
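
The paper's codebook is not described in the abstract, so to illustrate the hierarchical (coarse-to-fine) beam-searching idea used in the first stage, here is a minimal sketch that bisects the angular range, probing two candidate beams per level with progressively narrower beams and keeping the stronger one. The measurement model, array geometry, and all parameters are hypothetical.

```python
import numpy as np

def beam(theta, n_ant, n_active):
    """ULA beam toward theta; fewer active elements give a wider beam."""
    w = np.zeros(n_ant, dtype=complex)
    k = np.arange(n_active)
    w[:n_active] = np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(n_active)
    return w

def hierarchical_search(measure, n_ant, levels=6):
    """Coarse-to-fine search: halve the angular sector at each level,
    doubling the active aperture (narrowing the beam) as we descend,
    so only 2*levels probes are needed instead of an exhaustive sweep."""
    lo, hi = -np.pi / 2, np.pi / 2
    for lvl in range(levels):
        n_active = min(n_ant, 2 ** (lvl + 1))
        mid = (lo + hi) / 2
        p_left = measure(beam((lo + mid) / 2, n_ant, n_active))
        p_right = measure(beam((mid + hi) / 2, n_ant, n_active))
        lo, hi = (lo, mid) if p_left >= p_right else (mid, hi)
    return (lo + hi) / 2

# Hypothetical single-path channel toward 20 degrees, noiseless probes
n_ant = 64
h = beam(np.deg2rad(20), n_ant, n_ant) * np.sqrt(n_ant)   # array response
est = hierarchical_search(lambda w: abs(w.conj() @ h) ** 2, n_ant)
print(f"estimated angle: {np.rad2deg(est):.1f} deg")      # close to 20
```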

A Grant-based Random Access Protocol in Extra-Large Massive MIMO Systems

Extra-large massive multiple-input multiple-output (XL-MIMO) systems are a new concept in which spatial non-stationarities allow a high number of user equipments (UEs) to be activated. This paper focuses on a grant-based random access (RA) approach for these novel XL-MIMO channel scenarios. The classical Strongest User Collision Resolution (SUCRe) protocol is modified to exploit the overlapping of visibility regions (VRs) in XL-MIMO. The proposed grant-based RA protocol takes advantage of this new degree of freedom to increase the number of access attempts and accepted UEs. As a result, the proposed grant-based protocol for XL-MIMO systems is capable of reducing latency in the pilot allocation step.
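
The abstract builds on the SUCRe protocol; as a rough sketch of its core decision rule (not of the XL-MIMO extension proposed here), each UE that picked a contended pilot compares an estimate of its own average channel gain against the estimated sum gain of all contenders, obtained from the precoded downlink response, and retransmits only if it believes it is the strongest. All values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def sucre_decisions(gains):
    """Core SUCRe rule: UE k retransmits iff its own average channel gain
    exceeds half the (estimated) sum gain of all UEs that chose the same
    pilot. Since at most one gain can exceed half the total, at most one
    contender survives the collision."""
    alpha = gains.sum()        # sum gain, estimated from the downlink probe
    return gains > alpha / 2   # per-UE retransmission decisions

# Hypothetical collision: three UEs picked the same pilot
gains = rng.exponential(scale=1.0, size=3)
print(gains.round(3), "->", sucre_decisions(gains))
```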

A Graph-Constrained Changepoint Learning Approach for Automatic QRS-Complex Detection

This study presents a new viewpoint on ECG signal analysis by applying a graph-based changepoint detection model to locate R-peak positions. The model relies on a new graph learning algorithm that learns the constraint graph from labeled ECG data. The proposed learning algorithm starts with a simple initial graph and iteratively edits it so that the final graph maximizes R-peak detection accuracy. We evaluate the performance of the algorithm on the MIT-BIH Arrhythmia Database. The evaluation results demonstrate that the proposed method obtains results comparable to other state-of-the-art approaches, achieving an overall sensitivity of Sen = 99.64%, a positive predictivity of PPR = 99.71%, and a detection error rate of DER = 0.19.
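
For reference, the three reported scores are the standard beat-detection metrics. The sketch below shows how they are typically computed by matching detected R-peaks to annotated ones within a small tolerance window; the tolerance and the sample positions are illustrative, while 360 Hz is the actual MIT-BIH sampling rate.

```python
import numpy as np

def beat_detection_metrics(detected, reference, tol=0.05, fs=360):
    """Match detections to annotations within +/- tol seconds and return
    sensitivity Sen = TP/(TP+FN), positive predictivity PPR = TP/(TP+FP),
    and detection error rate DER = (FP+FN) / number of annotated beats."""
    tol_samples = int(tol * fs)
    used = np.zeros(len(reference), dtype=bool)  # each annotation matches once
    tp = 0
    for d in detected:
        idx = np.flatnonzero(~used & (np.abs(reference - d) <= tol_samples))
        if idx.size:
            used[idx[0]] = True
            tp += 1
    fp = len(detected) - tp
    fn = len(reference) - tp
    return tp / (tp + fn), tp / (tp + fp), (fp + fn) / len(reference)

# Hypothetical R-peak sample positions for one short record
ref = np.array([100, 460, 830, 1190])
det = np.array([102, 461, 1185, 1500])
sen, ppr, der = beat_detection_metrics(det, ref)
print(f"Sen={sen:.2%}  PPR={ppr:.2%}  DER={der:.2f}")
```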

A Greedy Graph Search Algorithm Based on Changepoint Analysis for Automatic QRS Complex Detection

The electrocardiogram (ECG) is the most widely used non-invasive tool for the investigation of cardiovascular diseases. Automatic delineation of ECG fiducial points, in particular the R-peak, serves as the basis for ECG processing and analysis. This study proposes a new method of ECG signal analysis by introducing a new class of graphical models based on optimal changepoint detection, named the graph-constrained changepoint detection (GCCD) model. The GCCD model treats fiducial point delineation in the non-stationary ECG signal as a changepoint detection problem. It exploits the sparsity of changepoints to detect abrupt changes within the ECG signal, thereby freeing the R-peak detection task from any preprocessing step. In this approach, prior biological knowledge about the expected sequence of changes is incorporated into the model through the constraint graph, which can be defined manually or learned automatically. First, we define the constraint graph manually; then, we present a graph learning algorithm that searches for an optimal graph in a greedy fashion. Finally, we compare the manually defined and learned graphs in terms of graph structure and detection accuracy. We evaluate the performance of the algorithm on the MIT-BIH Arrhythmia Database. The proposed model achieves an overall sensitivity of 99.64%, positive predictivity of 99.71%, and detection error rate of 0.19 for the manually defined constraint graph, and an overall sensitivity of 99.76%, positive predictivity of 99.68%, and detection error rate of 0.55 for the automatically learned constraint graph.
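
The abstract describes the learning loop only at a high level. A minimal, generic sketch of such a greedy scheme is given below: start from a simple graph, enumerate candidate local edits, and keep an edit only when it improves detection accuracy on the labeled data. Here `candidate_edits` and `accuracy` are placeholders standing in for the paper's graph-edit operators and its GCCD-based evaluation.

```python
def greedy_graph_search(initial_graph, candidate_edits, accuracy):
    """Greedy hill climbing over constraint graphs.

    candidate_edits(g) yields locally edited copies of graph g;
    accuracy(g) scores g by R-peak detection accuracy on labeled ECGs.
    The loop stops when no single edit improves the score, i.e. at a
    local optimum of the graph-edit neighborhood.
    """
    best, best_score = initial_graph, accuracy(initial_graph)
    improved = True
    while improved:
        improved = False
        for g in candidate_edits(best):
            score = accuracy(g)
            if score > best_score:
                best, best_score = g, score
                improved = True
                break          # restart the scan from the improved graph
    return best, best_score
```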

A High-Throughput Multi-Mode LDPC Decoder for 5G NR

This paper presents a partially parallel low-density parity-check (LDPC) decoder designed for the 5G New Radio (NR) standard. The design uses a multi-block parallel architecture with a flooding schedule. The decoder supports all code rates and code lengths up to a lifting size of Zmax = 96. To compensate for the reduced throughput associated with smaller Z values, the design can double or quadruple its parallelism when lifting sizes Z <= 48 or Z <= 24 are selected, respectively. The decoder can therefore process up to eight frames at once and restore the throughput to its maximum. To simplify the architecture, a new variable node for decoding the extended parity bits present at lower code rates is proposed. The FPGA implementation of the decoder achieves a throughput of 2.1 Gbps when decoding the 11/12 code rate. Additionally, synthesized in 28 nm TSMC technology, the decoder achieves a maximum clock frequency of 526 MHz and a throughput of 13.46 Gbps; the core occupies 1.03 mm2 and consumes 229 mW.
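
As a small illustration of the throughput-compensation rule above (thresholds taken from the abstract, with a base of two frames assumed so that the quadrupled mode reaches the stated eight frames), the sketch below maps a lifting size Z to the number of frames decoded in parallel and the resulting datapath utilization. This is a schematic model of the scheduling rule, not the decoder's hardware.

```python
Z_MAX = 96                    # maximum supported lifting size (from the paper)
BASE_FRAMES = 2               # assumed frame parallelism at Z = Zmax

def parallel_frames(z):
    """Schematic parallelism rule: double the frame-level parallelism for
    Z <= 48 and quadruple it for Z <= 24, so the datapath sized for
    Zmax = 96 stays fully utilized for small lifting sizes."""
    if z <= 24:
        return 4 * BASE_FRAMES
    if z <= 48:
        return 2 * BASE_FRAMES
    return BASE_FRAMES

for z in (96, 48, 24):
    f = parallel_frames(z)
    utilization = z * f / (Z_MAX * BASE_FRAMES)
    print(f"Z={z:2d}: {f} frames in flight, datapath utilization {utilization:.0%}")
```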
