Featured Researches

Signal Processing

Are wave union methods still suitable for 20 nm FPGA-based high-resolution (< 2 ps) time-to-digital converters?

This paper presents several new structures to achieve high-resolution (< 2 ps) time-to-digital converters (TDCs) in Xilinx 20 nm UltraScale field-programmable gate arrays (FPGAs). The proposed TDCs combine the advantages of 1) our newly proposed sub-tapped delay line (sub-TDL) architecture, which is effective in removing bubbles and zero-bins, and 2) the wave union (WU) A method, which improves the resolution and reduces the impact of ultrawide bins. We also compared the proposed WU/sub-TDL TDC with a TDC combining the dual sampling (DS) structure and the sub-TDL technique. Moreover, we introduced a binning method to improve the linearity and derived a formula for the total measurement uncertainty of a single-stage TDL-TDC to obtain its root-mean-square (RMS) resolution. The results show that the proposed designs are economical in logic resources and have the potential for multiple-channel implementations. In contrast to the conclusions of a previous study, we found that the wave union method is still effective in UltraScale devices when combined with our sub-TDL structure. We also compared the proposed TDCs with other published TDCs to show where they stand.
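As a concrete reference point for the bin-width and linearity issues discussed above, the following sketch shows the standard code density test used to calibrate a TDL-TDC: uniformly random hits are histogrammed per bin, and each bin's share of the hits gives its width and differential nonlinearity (DNL). The hit counts, clock period, and function names below are illustrative, not taken from the paper.

```python
# Code density test sketch for a TDL-TDC: hit counts per bin are
# converted into bin widths and differential nonlinearity (DNL).
# The hit counts below are illustrative, not measured data.

def bin_widths_ps(hits, clock_period_ps):
    """Estimate each bin's width from its share of uniformly random hits."""
    total = sum(hits)
    return [clock_period_ps * h / total for h in hits]

def dnl(hits):
    """DNL per bin in LSBs: deviation of each bin from the ideal width."""
    total = sum(hits)
    ideal = total / len(hits)
    return [h / ideal - 1.0 for h in hits]

hits = [120, 80, 0, 210, 95, 105]   # includes a zero-bin and an ultrawide bin
widths = bin_widths_ps(hits, clock_period_ps=400.0)
deviations = dnl(hits)              # a DNL of -1.0 marks a zero-bin
```

A zero-bin shows up as a DNL of exactly -1.0, and an ultrawide bin as a large positive DNL; a binning method such as the one in the paper would redistribute these before computing timestamps.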

Signal Processing

Artificial Intelligence Driven UAV-NOMA-MEC in Next Generation Wireless Networks

Driven by the unprecedented high throughput and low latency requirements of next-generation wireless networks, this paper introduces an artificial intelligence (AI) enabled framework in which unmanned aerial vehicles (UAVs) use non-orthogonal multiple access (NOMA) and mobile edge computing (MEC) techniques to serve terrestrial mobile users (MUs). The proposed framework enables the terrestrial MUs to offload their computational tasks simultaneously, intelligently, and flexibly, thus enhancing their connectivity while reducing their transmission latency and energy consumption. To this end, the fundamentals of this framework are first introduced. Then, a number of communication and AI techniques are proposed to improve the quality of experience of terrestrial MUs. In particular, federated learning and reinforcement learning are introduced for intelligent task offloading and computing resource allocation. For each learning technique, the motivations, challenges, and representative results are presented. Finally, several key technical challenges and open research issues of the proposed framework are summarized.
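To make the NOMA ingredient of the framework concrete, the following is a minimal sketch of the achievable rates of a two-user power-domain NOMA pair with successive interference cancellation (SIC): the strong user cancels the weak user's signal, while the weak user treats the strong user's signal as interference. The power split, channel gains, and noise level are illustrative assumptions, not values from the paper.

```python
import math

# Toy two-user downlink NOMA pair with SIC. All numbers are illustrative.

def noma_rates(p_total, alpha_weak, g_strong, g_weak, noise=1.0):
    """Achievable rates (bits/s/Hz) for a two-user power-domain NOMA pair."""
    p_weak = alpha_weak * p_total            # larger power share to the weak user
    p_strong = p_total - p_weak
    # Weak user decodes first: the strong user's signal acts as interference.
    r_weak = math.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    # Strong user cancels the weak user's signal (SIC): only noise remains.
    r_strong = math.log2(1 + p_strong * g_strong / noise)
    return r_strong, r_weak

r_strong, r_weak = noma_rates(p_total=10.0, alpha_weak=0.8, g_strong=4.0, g_weak=0.5)
```

The asymmetric power split is the defining design choice of power-domain NOMA: both users transmit in the same resource block, and the SIC order follows the channel gains.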

Signal Processing

Artificial Intelligence based Sensor Data Analytics Framework for Remote Electricity Network Condition Monitoring

Rural electrification demands the use of inexpensive technologies such as single wire earth return (SWER) networks. Energy demand from remote consumers is steadily growing, and the capacity of existing lines may soon become inadequate. Furthermore, high impedance arcing faults (HIFs) on SWER lines can cause catastrophic bushfires, such as the 2009 Black Saturday event. As a solution, reliable remote electricity networks can be established by breaking the existing systems down into microgrids, with existing SWER lines used to interconnect them. The development of such reliable networks with better energy demand management will rely on an integrated network-wide condition monitoring system. As the first contribution of this thesis, a distributed online monitoring platform is developed that incorporates power quality monitoring, real-time HIF identification, and transient classification in SWER networks. Artificial intelligence (AI) based techniques are developed to classify faults and transients. The proposed approach demonstrates high HIF detection accuracy (98.67%) and low detection latency (115.2 ms). Secondly, a remote consumer load identification methodology is developed to detect the load type from its transients. An edge computing-based architecture is proposed to facilitate the high-frequency analysis required for load identification. The proposed approach is evaluated in real time and achieves an average accuracy of 98% in identifying different loads. Finally, a deep neural network-based energy disaggregation framework is developed to separate load-specific energy usage from an aggregated signal. The proposed framework is evaluated on a real-world data set and improves the signal aggregate error by 44% and the mean aggregate error by 19% compared with state-of-the-art techniques.
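To illustrate the kind of per-window signal features a transient/HIF classifier in such a platform might consume, the sketch below computes windowed RMS and zero-crossing rate on a synthetic waveform (a 50 Hz tone with a high-frequency burst standing in for arcing). The signal, window length, and feature choice are illustrative assumptions; the thesis's actual AI models and 98.67%-accuracy pipeline are not reproduced here.

```python
import math

# Per-window features for transient detection: RMS and zero-crossing rate.
# The signal below is synthetic, not measured SWER data.

def window_features(signal, window):
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        rms = math.sqrt(sum(x * x for x in w) / window)
        zcr = sum(1 for a, b in zip(w, w[1:]) if a * b < 0) / (window - 1)
        feats.append((rms, zcr))
    return feats

# 50 Hz tone sampled at 1 kHz, with a 400 Hz "arcing" burst in the middle.
sig = [math.sin(2 * math.pi * 50 * t / 1000)
       + (0.5 * math.sin(2 * math.pi * 400 * t / 1000) if 400 <= t < 600 else 0.0)
       for t in range(1000)]
feats = window_features(sig, window=200)
```

The window containing the burst stands out in RMS (and typically in zero-crossing rate), which is the kind of separation a downstream classifier exploits.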

Signal Processing

Artificial Intelligence for Satellite Communication: A Review

Satellite communication offers the prospect of service continuity over uncovered and under-covered areas, service ubiquity, and service scalability. However, several challenges must first be addressed to realize these benefits, as the resource management, network control, network security, spectrum management, and energy usage of satellite networks are more challenging than those of terrestrial networks. Meanwhile, artificial intelligence (AI), including machine learning, deep learning, and reinforcement learning, has been steadily growing as a research field and has shown successful results in diverse applications, including wireless communication. In particular, the application of AI to a wide variety of satellite communication aspects has demonstrated excellent potential, including beam hopping, anti-jamming, network traffic forecasting, channel modeling, telemetry mining, ionospheric scintillation detection, interference management, remote sensing, behavior modeling, space-air-ground integration, and energy management. This work thus provides a general overview of AI, its diverse sub-fields, and its state-of-the-art algorithms. Several challenges facing diverse aspects of satellite communication systems are then discussed, and their proposed and potential AI-based solutions are presented. Finally, an outlook on the field is drawn, and future steps are suggested.

Signal Processing

Artificial Intelligence for UAV-enabled Wireless Networks: A Survey

Unmanned aerial vehicles (UAVs) are considered one of the most promising technologies for next-generation wireless communication networks. Their mobility and their ability to establish line-of-sight (LOS) links with users make them key enablers for many potential applications. In the same vein, artificial intelligence (AI) is growing rapidly and has been very successful, particularly due to the massive amount of available data. As a result, a significant part of the research community has started to integrate intelligence at the core of UAV networks by applying AI algorithms to various problems related to drones. In this article, we provide a comprehensive overview of some potential applications of AI in UAV-based networks. We also highlight the limits of existing works and outline some potential future applications of AI for UAV networks.

Signal Processing

Asymptotic Analysis of ADMM for Compressed Sensing

In this paper, we analyze the asymptotic behavior of the alternating direction method of multipliers (ADMM) for compressed sensing, where an unknown structured signal is reconstructed from its underdetermined linear measurements. The analytical tool used in this paper is the recently developed convex Gaussian min-max theorem (CGMT), which can be applied to various convex optimization problems to obtain their asymptotic error performance. In our analysis, we examine the convex subproblem in the ADMM update and characterize the asymptotic distribution of the tentative estimate obtained at each iteration. The result shows that, in the large-system limit, the update equations of ADMM decouple into a scalar-valued stochastic process. From this asymptotic result, we can predict the evolution of the error (e.g., the mean-square error (MSE) and the symbol error rate (SER)) of ADMM for large-scale compressed sensing problems. Simulation results show that the empirical performance of ADMM closely matches its theoretical prediction for both sparse vector reconstruction and binary vector reconstruction.
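For readers who want a concrete reference point for the algorithm being analyzed, the following is a minimal textbook ADMM for the LASSO formulation of compressed sensing, min_x 0.5||Ax - y||² + λ||x||₁. This is a generic sketch, not the paper's CGMT analysis; the problem sizes and the parameters `lam` and `rho` are illustrative assumptions.

```python
import numpy as np

# Textbook ADMM for LASSO: x-update (quadratic subproblem),
# z-update (soft-thresholding), dual update. Sizes are illustrative.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.01, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached x-update factor
    Aty = A.T @ y
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))             # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)      # proximal (shrinkage) step
        u = u + x - z                             # dual variable update
    return z

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                              # underdetermined: m < n
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                                    # noiseless measurements
x_hat = admm_lasso(A, y)
mse = np.mean((x_hat - x_true) ** 2)
```

The per-iteration tentative estimate `z` is exactly the quantity whose asymptotic distribution the paper characterizes via the CGMT.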

Signal Processing

Auction Based Approach For Resource Allocation In D2D Communication

Device-to-device (D2D) communication has emerged as an important issue for small-cell networks. Here we implement a new scheme that improves the spectral efficiency of mobiles communicating with each other (peer-to-peer) in a downlink cellular network. Previously, spectrum sharing was handled by a reverse iterative combinatorial auction mechanism, in which cellular users bid for D2D links. We compare the reverse iterative combinatorial auction (R-ICA) with the new auction method using plots of sum rate versus SINR and versus the number of D2D users.
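To illustrate the flavor of auction-based resource allocation, the sketch below lets each D2D pair bid its achievable rate on each cellular channel and assigns channels greedily from the highest remaining bid down. This is a deliberately simplified one-shot auction, not the R-ICA algorithm; the SINR values and the one-channel-per-pair constraint are illustrative assumptions.

```python
import math

# Toy auction: each D2D pair bids its achievable rate per channel;
# channels go to the highest remaining bidders. Not the R-ICA itself.

def greedy_auction(bids):
    """bids[d][c]: rate of D2D pair d on channel c. One channel per pair."""
    assigned, taken = {}, set()
    offers = sorted(((bids[d][c], d, c)
                     for d in range(len(bids))
                     for c in range(len(bids[0]))), reverse=True)
    for r, d, c in offers:
        if d not in assigned and c not in taken:
            assigned[d] = c
            taken.add(c)
    return assigned

def rate(sinr_db):
    """Shannon rate in bits/s/Hz from SINR in dB."""
    return math.log2(1 + 10 ** (sinr_db / 10))

bids = [[rate(10), rate(3)],    # pair 0's rate on channels 0 and 1
        [rate(8),  rate(12)]]   # pair 1's rate on channels 0 and 1
alloc = greedy_auction(bids)
sum_rate = sum(bids[d][c] for d, c in alloc.items())
```

Sweeping the SINR values and the number of D2D pairs in such a loop is how sum-rate-versus-SINR curves like those in the paper are typically produced.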

Signal Processing

AutoBCS: Block-based Image Compressive Sensing with Data-driven Acquisition and Non-iterative Reconstruction

Block compressive sensing is a well-known signal acquisition and reconstruction paradigm with widespread application prospects in science, engineering, and cybernetic systems. However, state-of-the-art block-based image compressive sensing (BCS) methods generally suffer from two issues. First, the sparsifying domain and the sensing matrices widely used for image acquisition are not data-driven, so both the features of the image and the relationships among sub-block images are ignored. Second, image reconstruction requires solving high-dimensional optimization problems of extensive computational complexity. In this paper, we provide a deep learning strategy for BCS, called AutoBCS, which takes prior knowledge of images into account in the acquisition step and establishes a subsequent reconstruction model that performs fast image reconstruction at low computational cost. More precisely, we present a learning-based sensing matrix (LSM), derived from training data, to accomplish image acquisition, thereby capturing and preserving more image characteristics than existing methods. In particular, the generated LSM is proven to satisfy the theoretical requirements of compressive sensing, such as the restricted isometry property. Additionally, our AutoBCS architecture includes a non-iterative reconstruction network that provides an end-to-end BCS reconstruction framework to eliminate blocking artifacts and maximize image reconstruction accuracy. Furthermore, we present comprehensive comparison studies against both traditional BCS approaches and newly developed deep learning methods. Compared with these approaches, our AutoBCS framework not only provides superior performance in terms of image quality metrics (SSIM and PSNR) and visual perception, but also achieves faster reconstruction.
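The block-based acquisition step itself is easy to sketch: the image is split into B×B blocks and each block is sensed with the same measurement matrix Phi (y = Phi @ x_block). AutoBCS learns Phi and reconstructs with a network; below, purely to show the data flow, Phi is random Gaussian and "reconstruction" is the minimum-norm (pseudo-inverse) estimate. The block size, sampling ratio, and image are illustrative assumptions.

```python
import numpy as np

# Block-based CS acquisition and a placeholder min-norm reconstruction.
# AutoBCS replaces Phi with a learned matrix and the pinv step with a
# non-iterative network; this sketch only shows the data flow.

B = 8                        # block size
ratio = 0.25                 # sampling ratio
m = int(ratio * B * B)       # measurements per block

rng = np.random.default_rng(1)
image = rng.random((32, 32))
Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)

blocks = [image[r:r + B, c:c + B].ravel()
          for r in range(0, 32, B) for c in range(0, 32, B)]
measurements = [Phi @ b for b in blocks]          # acquisition, block by block
Phi_pinv = np.linalg.pinv(Phi)
recon = [Phi_pinv @ y for y in measurements]      # min-norm per-block estimate
```

Reconstructing each block independently is what causes the blocking artifacts mentioned above; AutoBCS's end-to-end network operates on the assembled image to remove them.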

Signal Processing

Automated Stroke Rehabilitation Assessment using Wearable Accelerometers in Free-Living Environments

Stroke is a major global health problem, and for stroke survivors it is key to monitor recovery levels. However, traditional stroke rehabilitation assessment methods (such as the popular clinical assessment) can be subjective and expensive, and it is inconvenient for patients to visit clinics frequently. To address this issue, we developed an automated system, based on wearable sensing and machine learning techniques, that can predict the assessment score in an objective manner. Using wrist-worn sensors, accelerometer data were collected from 59 stroke survivors in free-living environments for a duration of 8 weeks, and we aim to map the week-wise accelerometer data (3 days per week) to the assessment score by developing a signal processing and predictive modeling pipeline. To achieve this, we propose two new types of features, which encode rehabilitation information from both the paralysed and non-paralysed sides while suppressing high-level noise such as irrelevant daily activities. Based on the proposed features, we further developed a longitudinal mixed-effects model with a Gaussian process prior (LMGP), which can model the random effects caused by different subjects and time slots (during the 8 weeks). Comprehensive experiments were conducted to evaluate our system on both acute and chronic patients, and the results suggest its effectiveness.
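The paralysed/non-paralysed contrast can be made concrete with a toy asymmetry feature: an activity intensity per wrist, computed as the mean deviation of the acceleration magnitude from 1 g, and the ratio between the two sides. The synthetic samples and the specific feature below are illustrative assumptions; the paper's actual features and the LMGP model are not reproduced here.

```python
import math

# Toy wrist-asymmetry feature. Samples are synthetic (x, y, z in g);
# the affected side is simulated as moving less.

def activity_intensity(samples):
    """Mean deviation of acceleration magnitude from 1 g (gravity)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return sum(abs(m - 1.0) for m in mags) / len(mags)

unaffected = [(0.10 * math.sin(t / 3), 0.20 * math.cos(t / 5), 1.0)
              for t in range(300)]
affected = [(0.02 * math.sin(t / 3), 0.03 * math.cos(t / 5), 1.0)
            for t in range(300)]

ratio = activity_intensity(affected) / activity_intensity(unaffected)
```

A ratio well below 1 indicates reduced use of the affected arm; tracking how such a ratio evolves week by week is the kind of longitudinal signal the LMGP is designed to model.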

Signal Processing

B-HAR: an open-source baseline framework for in-depth study of human activity recognition datasets and workflows

Human Activity Recognition (HAR), based on machine and deep learning algorithms, is considered one of the most promising technologies for monitoring the professional and daily-life activities of different categories of people (e.g., athletes, the elderly, kids, employers) in order to provide a variety of services related, for example, to well-being, improvement of technical performance, prevention of risky situations, and education. However, analysis of the effectiveness and efficiency of HAR methodologies suffers from the lack of a standard workflow that could serve as a baseline for estimating the quality of the developed pattern recognition models. This makes comparison among different approaches a challenging task. In addition, researchers can make mistakes that, when not detected, inevitably affect the reported results. To mitigate such issues, this paper proposes B-HAR, an open-source, automatic, and highly configurable framework for the definition, standardization, and development of a baseline workflow for evaluating and comparing HAR methodologies. It implements the most popular data processing methods for data preparation and the most commonly used machine and deep learning pattern recognition models.
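The workflow that frameworks like B-HAR standardize can be sketched end to end in a few lines: fixed-length windows, per-window features, and a classifier (here a trivial nearest-centroid one). The synthetic streams, feature choice, and classifier below are illustrative assumptions; B-HAR's actual processing methods and models are not reproduced.

```python
import math

# Minimal HAR-style pipeline: windowed signal -> features -> classifier.
# Data and labels are synthetic.

def features(window):
    """Mean and variance of a 1-D window of accelerometer magnitudes."""
    mean = sum(window) / len(window)
    var = sum((v - mean) ** 2 for v in window) / len(window)
    return (mean, var)

def nearest_centroid(feat, centroids):
    """Return the label whose centroid is closest in feature space."""
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))

# Synthetic magnitude streams: "still" is nearly flat, "walk" oscillates.
still = [1.0 + 0.01 * math.sin(t) for t in range(128)]
walk = [1.0 + 0.50 * math.sin(t / 2) for t in range(128)]

centroids = {"still": features(still), "walk": features(walk)}
test_window = [1.0 + 0.45 * math.sin(t / 2 + 0.3) for t in range(128)]
label = nearest_centroid(features(test_window), centroids)
```

Every stage in this sketch (windowing policy, feature set, model choice, evaluation protocol) is a point where undetected mistakes can creep in, which is exactly the kind of variability a standardized baseline workflow is meant to control.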

