Featured Research

Signal Processing

A Sequential Modelling Approach for Indoor Temperature Prediction and Heating Control in Smart Buildings

The rising availability of large-volume data, along with increasing computing power, has enabled the wide application of statistical Machine Learning (ML) algorithms in the domains of Cyber-Physical Systems (CPS), the Internet of Things (IoT) and Smart Building Networks (SBN). This paper proposes a learning-based framework for sequentially applying data-driven statistical methods to predict indoor temperature, and derives an algorithm for controlling the building heating system accordingly. The framework consists of a two-stage modelling effort: in the first stage, a univariate time series model (AR) was employed to predict ambient conditions; together with other control variables, these predictions served as the input features for the second-stage modelling, where a multivariate ML model (XGBoost) was deployed. The models were trained with real-world data from building sensor network measurements and used to predict future temperature trajectories. Experimental results demonstrate the effectiveness of the modelling approach and control algorithm, and reveal the promising potential of this mixed data-driven approach in smart building applications. By making wise use of IoT sensory data and ML algorithms, this work contributes to efficient energy management and sustainability in smart buildings.
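
As an illustration of this two-stage idea, the sketch below forecasts a synthetic ambient temperature series with an AR model and feeds the forecast, together with a heating control variable, into an XGBoost regressor. The data, feature names and model settings are invented for the example and are not the paper's configuration.

```python
# Minimal sketch of a two-stage AR + XGBoost pipeline (hypothetical data and features).
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 500
ambient = pd.Series(10 + 5 * np.sin(np.arange(n) / 24) + rng.normal(0, 0.5, n))
heating = rng.uniform(0, 1, n)                      # control variable (e.g. valve opening)
indoor = 20 + 0.3 * ambient + 2.0 * heating + rng.normal(0, 0.2, n)

# Stage 1: univariate AR forecast of the ambient condition.
ar = AutoReg(ambient, lags=24).fit()
ambient_forecast = ar.predict(start=n, end=n + 23)   # next 24 steps

# Stage 2: multivariate XGBoost model using the ambient forecast as an input feature.
X_train = np.column_stack([ambient, heating])
model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_train, indoor)

planned_heating = np.full(24, 0.5)                   # candidate control trajectory
X_future = np.column_stack([ambient_forecast, planned_heating])
indoor_forecast = model.predict(X_future)
print(indoor_forecast[:5])
```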

Read more
Signal Processing

A Sketching Framework for Reduced Data Transfer in Photon Counting Lidar

Single-photon lidar has become a prominent tool for depth imaging in recent years. At the core of the technique, the depth of a target is measured by constructing a histogram of time delays between emitted light pulses and detected photon arrivals. A major data processing bottleneck arises on the device when either the number of photons per pixel is large or the resolution of the time stamp is fine, as both the space requirement and the complexity of the image reconstruction algorithms scale with these parameters. We solve this limiting bottleneck of existing lidar techniques by sampling the characteristic function of the time-of-flight (ToF) model to build a compressive statistic, a so-called sketch of the time delay distribution, which is sufficient to infer the spatial distance and intensity of the object. The size of the sketch scales with the degrees of freedom of the ToF model (number of objects) and not, fundamentally, with the number of photons or the time stamp resolution. Moreover, the sketch is highly amenable to on-chip online processing. We show theoretically that the loss of information due to compression is controlled, and that the mean squared error of the inference quickly converges towards the optimal Cramér-Rao bound (i.e. no loss of information) for modest sketch sizes. The proposed compressed single-photon lidar framework is tested and evaluated on real-life datasets of complex scenes, where a compression rate of up to 1/150 is shown to be achievable in practice without sacrificing the overall resolution of the reconstructed image.
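
The core of the sketching idea can be illustrated in a few lines: rather than binning photon time stamps into a histogram, one stores a handful of samples of their empirical characteristic function. The toy example below, with invented timing parameters and a single reflecting surface, recovers the time of flight from the phase of the first sketch component; it is only a schematic of the principle, not the paper's estimator.

```python
# Illustrative characteristic-function sketch of photon time stamps (assumed parameters).
import numpy as np

rng = np.random.default_rng(1)
T = 100e-9                       # repetition period (s), assumed value
true_tof = 35e-9                 # true time of flight (s), assumed value
jitter = 0.5e-9                  # detector timing jitter (s)
n_photons = 2000

# Signal photons around the true ToF plus uniform background photons.
t_sig = rng.normal(true_tof, jitter, int(0.8 * n_photons)) % T
t_bkg = rng.uniform(0, T, n_photons - t_sig.size)
timestamps = np.concatenate([t_sig, t_bkg])

# Sketch: m samples of the empirical characteristic function; its size is independent
# of the photon count and of the time-stamp resolution.
m = 4
k = np.arange(1, m + 1)
sketch = np.exp(1j * 2 * np.pi * np.outer(k, timestamps) / T).mean(axis=1)

# For a single return, the phase of the first sketch component encodes the delay.
tof_estimate = (np.angle(sketch[0]) % (2 * np.pi)) * T / (2 * np.pi)
print(f"estimated ToF: {tof_estimate * 1e9:.2f} ns (true {true_tof * 1e9:.2f} ns)")
```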

Read more
Signal Processing

A Survey of Deep Learning Architectures for Intelligent Reflecting Surfaces

Intelligent reflecting surfaces (IRSs) have recently received significant attention for wireless communications because they reduce the hardware complexity, physical size, weight, and cost of conventional large arrays. However, deployment of an IRS entails dealing with multiple channel links between the base station (BS) and the users. Further, the BS and IRS beamformers require a joint design, wherein the IRS elements must be rapidly reconfigured. Data-driven techniques, such as deep learning (DL), are critical in addressing these challenges. The lower computation time and model-free nature of DL make it robust against data imperfections and environmental changes. At the physical layer, DL has been shown to be effective for IRS signal detection, channel estimation and active/passive beamforming, using supervised, unsupervised and reinforcement learning architectures. This article provides a synopsis of these techniques for designing DL-based IRS-assisted wireless systems.

Read more
Signal Processing

A Tutorial on 5G NR V2X Communications

The Third Generation Partnership Project (3GPP) has recently published its Release 16, which includes the first Vehicle-to-Everything (V2X) standard based on the 5G New Radio (NR) air interface. 5G NR V2X introduces advanced functionalities on top of the 5G NR air interface to support connected and automated driving use cases with stringent requirements. This paper presents an in-depth tutorial of the 3GPP Release 16 5G NR V2X standard for V2X communications, with a particular focus on the sidelink, since it is the most significant part of 5G NR V2X. The main part of the paper is an in-depth treatment of the key aspects of 5G NR V2X: the physical layer, resource allocation, quality-of-service management, the enhancements introduced to the Uu interface and the mobility management for V2N (Vehicle-to-Network) communications, as well as the co-existence mechanisms between 5G NR V2X and LTE V2X. We also review the use cases and the system architecture, and describe the evaluation methodology and simulation assumptions for 5G NR V2X. Finally, we provide an outlook on possible 5G NR V2X enhancements, including those identified within Release 17.

Read more
Signal Processing

A Two-Stage Wavelet Decomposition Method for Instantaneous Power Quality Indices Estimation Considering Interharmonics and Transient Disturbances

As the complexity of modern power systems increases, power quality analysis considering interharmonics has become a challenging and important task. This paper proposes a novel decomposition and estimation method for monitoring instantaneous power quality indices (PQIs) in single-phase and three-phase systems with interharmonics and transient disturbances. To separate the interharmonic components, a new set of scaling and wavelet filters with narrow transition bands is designed for the undecimated wavelet packet transform (UWPT). Further, a two-stage decomposition method for multi-tone voltage and current signals is proposed. The Hilbert transform (HT) is applied to calculate the instantaneous amplitude and phase of each frequency component, which in turn allows the monitoring of different PQI parameters. Numerical tests are conducted to verify the performance of the proposed method. The test results show that, compared to conventional approaches, the instantaneous PQIs estimated by the proposed method offer significant advantages in tracking transitory changes in power systems, and the method can be considered a helpful tool for high-accuracy PQ detection.
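
To illustrate the flavour of such a pipeline, the sketch below decomposes a synthetic voltage waveform with an undecimated wavelet transform (PyWavelets' stationary wavelet transform stands in for the paper's custom UWPT filters) and applies the Hilbert transform to obtain the instantaneous amplitude and frequency of one band. The sampling rate, signal content and wavelet choice are illustrative assumptions.

```python
# Undecimated wavelet decomposition + Hilbert transform sketch (assumed signal/settings).
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 3200                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)
# 50 Hz fundamental plus a 182 Hz interharmonic.
v = 230 * np.sqrt(2) * np.cos(2 * np.pi * 50 * t) + 10 * np.cos(2 * np.pi * 182 * t)

# Undecimated (stationary) wavelet decomposition into 4 levels.
# (The signal length must be divisible by 2**level for pywt.swt.)
coeffs = pywt.swt(v, wavelet="db8", level=4)          # list of (approx, detail) pairs
fundamental_band = coeffs[0][0]                       # coarsest approximation band

# Hilbert transform -> analytic signal -> instantaneous amplitude and frequency.
analytic = hilbert(fundamental_band)
inst_amplitude = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
print(inst_amplitude.mean(), inst_freq.mean())
```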

Read more
Signal Processing

A Wavelet-CNN-LSTM Model for Tailings Pond Risk Prediction

Tailings ponds are facilities for storing industrial waste. Once a tailings pond collapses, nearby villages can be destroyed and harmful chemicals cause serious environmental pollution. There is therefore an urgent need for a reliable forecast model that can investigate the variation trend of the stability coefficient of the tailings dam and issue early warnings. To fill this gap, this work presents a hybrid network combining a wavelet transform with a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), namely the Wavelet-CNN-LSTM network, for predicting tailings pond risk. Firstly, we construct a dedicated nonlinear data-processing method to impute missing values with a numerical inversion (NI) method, which combines correlation analysis, sensitivity analysis, and Random Forest (RF) algorithms. Secondly, a new forecasting model is proposed to monitor the saturation line, which is the lifeline of the tailings pond and directly reflects its stability. After using the discrete wavelet transform (DWT) to decompose the original saturation line data into 4-layer wavelets and de-noise the data, a CNN is used to identify and learn the spatial structures in the time series, followed by LSTM cells for capturing the long- and short-term dependencies. Finally, different experiments were conducted to evaluate the effectiveness of our model by comparing it with other state-of-the-art algorithms. The results show that Wavelet-CNN-LSTM achieves the best scores in mean absolute percentage error (MAPE), root-mean-square error (RMSE) and R².
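
A compact sketch of the Wavelet-CNN-LSTM idea is given below: DWT-based denoising of a synthetic saturation-line series, followed by a 1-D CNN and an LSTM built with Keras. Layer sizes, wavelet, threshold and window length are guesses for illustration, not the configuration reported in the paper.

```python
# Wavelet denoising + CNN + LSTM sketch on synthetic data (illustrative settings only).
import numpy as np
import pywt
import tensorflow as tf

rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0, 0.1, 2000)) + rng.normal(0, 0.3, 2000)  # noisy series

# 4-level DWT, soft-threshold the detail coefficients, reconstruct a denoised series.
coeffs = pywt.wavedec(raw, "db4", level=4)
threshold = 0.3
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(raw)]

# Sliding windows: predict the next value from the previous 32 samples.
window = 32
X = np.stack([denoised[i : i + window] for i in range(len(denoised) - window)])
y = denoised[window:]
X = X[..., np.newaxis]                                    # (samples, timesteps, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(window, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.predict(X[-1:]))
```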

Read more
Signal Processing

A Wideband Sliding Correlator based Channel Sounder in 65 nm CMOS: An Evaluation Board Design

Wide swaths of bandwidth at millimeter-wave (mmWave) and Terahertz (THz) frequencies stimulate diverse applications in wireless sensing, imaging, position location, cloud computing, and much more. These emerging applications motivate wireless communications hardware that operates with multi-gigahertz (GHz) bandwidth at nominal cost, minimal size, and low power consumption. Channel sounding systems currently used to study and measure wireless channels utilize numerous commercially available components from multiple manufacturers, resulting in a complex and large assembly with many costly and fragile cable interconnections between the constituents, and commonly achieve a system bandwidth under one GHz. This paper presents an evaluation board (EVB) design that features a sliding correlator-based channel sounder with 2 GHz null-to-null RF bandwidth in a single monolithic integrated circuit (IC) fabricated in 65 nm CMOS technology. The EVB provides the necessary peripherals for signal interfacing, amplification, and buffering, and enables integration into both the transmitter and receiver of a channel sounding system, thereby reducing complexity, size, and cost through integrated design. The channel sounder IC on the EVB is the world's first to report gigabit-per-second baseband operation using low-cost CMOS technology, giving the global research community an inexpensive and compact channel sounder system with nanosecond time resolution for the detection of multipath signals in a wireless channel.
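
The correlation principle such a sounder relies on can be sketched in software: a maximal-length PN sequence is passed through a toy two-path channel and cross-correlated with a local replica to recover the power delay profile. The snippet below uses a direct digital correlation for clarity; the actual IC performs an analog sliding (time-dilated) correlation, and the sequence length, delays and gains here are invented.

```python
# Correlation-based channel sounding sketch (direct digital correlation, toy channel).
import numpy as np
from scipy.signal import max_len_seq

pn = 2.0 * max_len_seq(11)[0] - 1.0           # length-2047 m-sequence, mapped to +/-1

# Toy multipath channel: a direct path and one delayed, attenuated echo.
delays = [0, 37]                              # in chips
gains = [1.0, 0.4]
rx = np.zeros_like(pn)
for d, g in zip(delays, gains):
    rx += g * np.roll(pn, d)
rx += np.random.default_rng(0).normal(0, 0.05, pn.size)   # receiver noise

# Circular cross-correlation of the received signal with the local PN replica.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(pn))).real / pn.size
pdp = corr ** 2                               # power delay profile
print(np.argsort(pdp)[-2:])                   # indices of the two strongest taps
```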

Read more
Signal Processing

A bi-atrial statistical shape model for large-scale in silico studies of human atria: model development and application to ECG simulations

Large-scale electrophysiological simulations to obtain electrocardiograms (ECGs) carry the potential to produce extensive datasets for training machine learning classifiers to, e.g., discriminate between different cardiac pathologies. The adoption of simulations for these purposes is limited by a lack of ready-to-use models covering atrial anatomical variability. We built a bi-atrial statistical shape model (SSM) of the endocardial wall based on 47 segmented human CT and MRI datasets using Gaussian process morphable models. Generalization, specificity, and compactness metrics were evaluated. The SSM was applied to simulate atrial ECGs in 100 random volumetric instances. The first eigenmode of our SSM reflects a change in the total volume of both atria, the second the asymmetry between left and right atrial volume, and the third a change in the prominence of the atrial appendages. The SSM generalizes well to unseen geometries, and 95% of the total shape variance is covered by its first 23 eigenvectors. The P waves in the 12-lead ECG of 100 random instances showed a duration of 104 ms, in accordance with large cohort studies. The novel bi-atrial SSM itself, as well as 100 exemplary instances with rule-based augmentation of atrial wall thickness, fiber orientation, inter-atrial bridges and tags for anatomical structures, have been made publicly available. The novel, openly available bi-atrial SSM can in the future be employed to generate large sets of realistic atrial geometries as a basis for in silico big data approaches.
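
The PCA machinery that underlies point-based statistical shape models can be sketched as follows; tiny synthetic point sets stand in for the segmented atrial geometries, and plain PCA stands in for the Gaussian process morphable model framework used in the paper. Shape counts and noise levels are illustrative only.

```python
# PCA-based statistical shape model sketch on toy point sets (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 47, 200                     # 47 mirrors the paper's training set size
base = rng.normal(0, 1, (n_points, 3))
shapes = np.stack([base * (1 + 0.1 * rng.normal()) + 0.05 * rng.normal(0, 1, (n_points, 3))
                   for _ in range(n_shapes)])    # toy anatomical variability

X = shapes.reshape(n_shapes, -1)                 # each shape as a flat vector
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
eigenvalues = s ** 2 / (n_shapes - 1)

# Variance explained by the leading modes (cf. the 95% / 23-eigenvector figure; the toy
# numbers here will of course differ).
explained = np.cumsum(eigenvalues) / eigenvalues.sum()
print("modes for 95% variance:", np.searchsorted(explained, 0.95) + 1)

# Sample a new random instance: mean + sum_k b_k * sqrt(lambda_k) * v_k.
b = rng.normal(0, 1, eigenvalues.size)
instance = mean_shape + (b * np.sqrt(eigenvalues)) @ Vt
new_shape = instance.reshape(n_points, 3)
```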

Read more
Signal Processing

A comparison of three heart rate detection algorithms over ballistocardiogram signals

Heart rate (HR) detection from ballistocardiogram (BCG) signals is challenging because the signal morphology can vary between and within subjects. It also differs from one sensor to another. Hence, it is essential to evaluate HR detection algorithms across several datasets and under different experimental setups. In this paper, we studied the potential of three HR detection algorithms across four independent BCG datasets. The three algorithms are: multiresolution analysis of the maximal overlap discrete wavelet transform (MODWT-MRA), the continuous wavelet transform (CWT), and template matching (TM). The four datasets were obtained using a microbend fiber optic sensor, a fiber Bragg grating sensor, electromechanical films, and load cells, respectively. The datasets were gathered from: a) 10 patients during a polysomnography study, b) 50 subjects in a sitting position, c) 10 subjects in a sleeping position, and d) 40 subjects in a sleeping position. Overall, the CWT with a derivative-of-Gaussian wavelet provided superior results compared with the MODWT-MRA, the CWT with a frequency B-spline wavelet, and the CWT with a Shannon wavelet. For template matching, a BCG template was constructed from DataSet1 and then used for HR detection in the other datasets. The TM method achieved satisfactory results for DataSet2 and DataSet3, but it did not detect the HR of two subjects in DataSet4. The proposed methods were implemented on a Raspberry Pi, and the average time required to analyze a 30-second BCG signal was less than one second for all methods, with the MODWT-MRA being the fastest at an average of 0.04 seconds.
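
As a rough illustration of the CWT route, the sketch below applies a derivative-of-Gaussian wavelet ('gaus1' in PyWavelets) to a synthetic BCG-like signal and reads the beat-to-beat period off the autocorrelation of the transformed signal. The signal model, scales and plausible heart-rate range are assumptions for the example, not the paper's settings or data.

```python
# CWT-based heart rate estimation sketch on a synthetic BCG-like signal (assumed settings).
import numpy as np
import pywt

fs = 50.0                                           # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)                        # 30-second window
beat_times = np.arange(0.5, 30, 60 / 72)            # 72 bpm
bcg = sum(np.exp(-0.5 * ((t - bt) / 0.05) ** 2) for bt in beat_times)  # J-peak-like pulses
bcg += 0.2 * np.sin(2 * np.pi * 0.25 * t)           # respiration drift
bcg += 0.05 * np.random.default_rng(0).normal(size=t.size)

# CWT with a derivative-of-Gaussian wavelet, aggregated over a range of scales.
scales = np.arange(2, 20)
coefs, _ = pywt.cwt(bcg, scales, "gaus1", sampling_period=1 / fs)
detection = np.abs(coefs).mean(axis=0)
detection -= detection.mean()

# Beat period = lag of the autocorrelation maximum within a plausible HR range (40-180 bpm).
acf = np.correlate(detection, detection, mode="full")[detection.size - 1:]
lags = np.arange(acf.size) / fs
mask = (lags >= 60 / 180) & (lags <= 60 / 40)
beat_period = lags[mask][np.argmax(acf[mask])]
print(f"estimated HR: {60.0 / beat_period:.1f} bpm")
```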

Read more
Signal Processing

A generalized efficiency mismatch attack to bypass detection-scrambling countermeasure

The ability of an eavesdropper to compromise the security of a quantum communication system by changing the angle of the incoming light is well-known. Randomizing the role of the detectors has been proposed to be an efficient countermeasure to this type of attack. Here we show that the proposed countermeasure can be bypassed if the attack is generalized by including more attack variables. Using the experimental data from existing literature, we show how randomization effectively prevents the initial attack but fails to do so when Eve generalizes her attack strategy. Our result and methodology could be used to security-certify a free-space quantum communication receiver against all types of detector-efficiency-mismatch type attacks.

Read more
