Publication


Featured research published by Mario Cordina.


wireless communications and networking conference | 2009

Maximizing the Lifetime of Wireless Sensor Networks through Intelligent Clustering and Data Reduction Techniques

Mario Cordina; Carl James Debono

Wireless sensor networks are generally deployed in remote areas where no infrastructure is available. This imposes the use of battery-operated devices, which seriously limits the lifetime of the network. In this paper we present a cluster-based routing algorithm built on Fuzzy-ART neural networks to maximize the life span of such networks. Results show that the energy saving obtained improves the network lifetime by 79.6%, 17.1% and 22.4% (in terms of first node dies) when compared to LEACH, a centralised version of LEACH, and a self-organizing map (SOM) neural network-based clustering algorithm, respectively. Furthermore, this paper explores the use of a base-station-centric predictive filtering algorithm to reduce the amount of transmitted data, leading to a further increase in network lifetime.
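
As a rough sketch of the clustering stage (assuming standard Fuzzy-ART dynamics rather than the authors' exact parameterization), the following Python snippet groups node positions into clusters using complement coding, the category choice function, and a vigilance test; the vigilance value and the toy 100-node deployment are illustrative.

```python
import numpy as np

def fuzzy_art(points, rho=0.75, alpha=0.001, beta=1.0):
    """Cluster 2-D node positions with standard Fuzzy-ART dynamics."""
    x = np.asarray(points, dtype=float)
    x = (x - x.min(axis=0)) / (np.ptp(x, axis=0) + 1e-12)  # scale to [0, 1]
    coded = np.hstack([x, 1.0 - x])                        # complement coding
    weights, labels = [], []
    for I in coded:
        chosen = -1
        if weights:
            W = np.array(weights)
            match = np.minimum(I, W).sum(axis=1)           # fuzzy AND, |I ^ w_j|
            choice = match / (alpha + W.sum(axis=1))       # category choice T_j
            for j in np.argsort(-choice):                  # search by choice value
                if match[j] / I.sum() >= rho:              # vigilance test
                    weights[j] = beta * np.minimum(I, weights[j]) \
                        + (1 - beta) * weights[j]          # fast/slow learning
                    chosen = int(j)
                    break
        if chosen < 0:                                     # no match: new category
            weights.append(I.copy())
            chosen = len(weights) - 1
        labels.append(chosen)
    return labels

# Hypothetical deployment: 100 nodes over a 100 m x 100 m field.
rng = np.random.default_rng(0)
labels = fuzzy_art(rng.uniform(0.0, 100.0, size=(100, 2)))
print(len(set(labels)), "clusters")
```

Raising the vigilance rho yields more, tighter clusters; lowering it yields fewer, coarser ones, which is the main knob such a scheme exposes to trade cluster-head overhead against intra-cluster transmission distance.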


international symposium on communications, control and signal processing | 2008

Increasing wireless sensor network lifetime through the application of SOM neural networks

Mario Cordina; Carl James Debono

Wireless sensor networks are an emerging technology that has garnered significant research attention due to its ability to monitor the physical world and its applicability to a wide range of applications. These networks are generally battery-powered, making the lifetime of the network a major concern. This energy consumption problem can be mitigated to some extent through the use of energy-aware cluster-based routing algorithms. This work presents a novel cluster-based routing algorithm based on self-organizing map (SOM) neural networks. The solution organizes the network into clusters in an attempt to balance the energy and reduce the transmission power required by the nodes. Results show that through the application of this optimization technique the system's lifetime is increased by 57% (in terms of first node dies) when compared to LEACH.
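
A minimal sketch of the underlying idea, assuming a 1-D SOM whose trained units serve as cluster-head positions and with each node attaching to the nearest unit; the grid size, learning schedule, and node layout below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def train_som(nodes, n_units=5, epochs=200, lr0=0.5, sigma0=2.0):
    """Train a 1-D SOM whose units act as candidate cluster-head positions."""
    rng = np.random.default_rng(1)
    units = nodes[rng.choice(len(nodes), n_units, replace=False)].astype(float)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)  # shrinking neighbourhood
        for x in nodes[rng.permutation(len(nodes))]:
            bmu = np.linalg.norm(units - x, axis=1).argmin()  # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            units += lr * h[:, None] * (x - units)     # pull BMU and neighbours
    return units

nodes = np.random.default_rng(2).uniform(0.0, 100.0, size=(100, 2))
heads = train_som(nodes)
membership = np.linalg.norm(nodes[:, None] - heads[None], axis=2).argmin(axis=1)
print("nodes per cluster:", np.bincount(membership, minlength=len(heads)))
```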


international conference on multimedia and expo | 2012

A Novel View-Level Target Bit Rate Distribution Estimation Technique for Real-Time Multi-view Video Plus Depth

Mario Cordina; Carl James Debono

This paper presents a novel view-level target bit rate distribution estimation technique for real-time multi-view video plus depth using a statistical model based on the prediction mode distribution. Experiments using various standard test sequences show the efficacy of the technique, as the model manages to estimate, online, the view-level target bit rate distribution with an absolute mean estimation error of 2% and a standard deviation of 0.9%. Moreover, the technique adapts the view-level bit rate distribution over time, providing scene-change handling capability.
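
The flavour of such a statistical model can be sketched as follows; the linear form, the coefficients, and the per-view mode fractions are hypothetical placeholders standing in for the fitted model in the paper.

```python
# Map each view's macroblock prediction-mode distribution to a share of the
# total bit budget. Intra MBs are weighted highest since they cost most bits.
def view_bit_share(mode_fractions, a=1.0, b=0.35, c=0.1):
    """mode_fractions: per-view dicts with fractions of intra/inter/skip MBs."""
    raw = [a * m["intra"] + b * m["inter"] + c * m["skip"] for m in mode_fractions]
    total = sum(raw)
    return [r / total for r in raw]     # normalise so the shares sum to 1

views = [
    {"intra": 0.20, "inter": 0.55, "skip": 0.25},   # centre view (hypothetical)
    {"intra": 0.08, "inter": 0.60, "skip": 0.32},   # side view
    {"intra": 0.07, "inter": 0.58, "skip": 0.35},
]
print([f"{s:.2%}" for s in view_bit_share(views)])
```

Because the mode distribution is recomputed every frame or GOP, a scene change that floods a view with intra macroblocks automatically shifts bit budget toward that view, which is the adaptation behaviour the abstract describes.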


IEEE Transactions on Broadcasting | 2016

Quality Monitor for 3-D Video Over Hybrid Broadcast Networks

Luís Alberto da Silva Cruz; Mario Cordina; Carl James Debono; Pedro A. Amado Assunção

Hybrid broadcast networks are envisaged to merge broadcast TV with broadband Internet and to act as a key enabler for new and better video services in the near future. This is expected to contribute to the evolution of 3-D and multiview video services due to the inherent diversity of their coded data, comprising several complementary streams. Using the multiview video-plus-depth format, at least two independent streams may be delivered through different channels over a hybrid network, that is, broadcasting backward-compatible 2-D video in one channel and delivering its corresponding depth stream through complementary channels such as LTE-based broadband Internet access. This article addresses the problem of monitoring the quality of 3-D video (color plus depth) delivered in such hybrid networking environments, proposing a novel scheme to estimate the impact of visual quality degradation resulting from packet losses in the broadband Internet channel carrying only the depth stream, without relying on the texture component of the video or any other reference data. A novel no-reference (NR) approach is described, operating as a cascade of two estimators and using only header information of the packets carrying the depth stream through IP broadband. The two-stage cascaded estimator comprises an NR packet-layer model based on an artificial neural network followed by a logistic model, with each stage outputting a separate quality estimate. Performance evaluations, done by comparing the actual and estimated scores for the structural similarity index and the subjective differential mean opinion score, reveal high accuracy for both of these estimates, with Pearson linear correlation coefficient values greater than 0.89. Since only packet-layer information is used, the algorithmic complexity of this monitoring tool is low, making it suitable for standalone implementation at arbitrary network nodes.
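
A minimal sketch of the cascade's structure, assuming three header-derived features and placeholder (untrained) weights; the real model's feature set, network size, and logistic coefficients are not specified here.

```python
import numpy as np

class PacketLayerMonitor:
    """Stage 1: small neural network -> SSIM estimate.
    Stage 2: logistic mapping of that estimate -> DMOS-like score."""

    def __init__(self, w1, b1, w2, b2, logit_a=8.0, logit_b=0.7):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2
        self.a, self.b = logit_a, logit_b

    def ssim_estimate(self, features):
        h = np.tanh(features @ self.w1 + self.b1)          # hidden layer
        return float(h @ self.w2 + self.b2)                # stage-1 output

    def dmos_estimate(self, features):
        s = self.ssim_estimate(features)
        return 100.0 / (1.0 + np.exp(self.a * (s - self.b)))  # stage-2 logistic

# Untrained placeholder weights: in practice both stages are fitted offline.
rng = np.random.default_rng(3)
monitor = PacketLayerMonitor(rng.normal(0, 0.1, (3, 4)), np.zeros(4),
                             rng.normal(0, 0.1, 4), 0.9)
# Hypothetical features from packet headers: [loss rate, mean burst, Mbit/s].
print(monitor.dmos_estimate(np.array([0.02, 1.5, 4.0])))
```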


picture coding symposium | 2013

An adaptive Lagrange multiplier technique for multi-view video plus depth coding

Mario Cordina; Carl James Debono

The characteristics of the depth map video differ from those of the texture, and thus the empirical function of the Lagrange multiplier λ normally used in the rate-distortion optimization (RDO) of texture views might not be suitable for depth map coding. In this paper, we propose a technique whereby the Lagrange multiplier used to select the macroblock (MB) mode is adapted based on whether the MB lies in a discontinuity region of the depth map and on the frame type. This technique, which was tested on various standard test sequences, preserves the depth discontinuities, leading to an average depth map bit rate saving of 12.5% without incurring significant quality degradation in the synthesized views compared to using the fixed Lagrange multiplier of the respective multi-view texture videos.
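
The shape of such an adaptation rule can be illustrated as below, starting from the well-known H.264/AVC empirical relation λ = 0.85 · 2^((QP−12)/3); the discontinuity and frame-type scaling factors are hypothetical, not the paper's values.

```python
def adaptive_lambda(qp, on_discontinuity, frame_type):
    lam = 0.85 * 2 ** ((qp - 12) / 3.0)   # standard H.264/AVC empirical lambda
    if on_discontinuity:
        lam *= 0.25   # hypothetical: spend more bits where depth edges
                      # drive view synthesis quality
    if frame_type == "B":
        lam *= 2.0    # hypothetical per-frame-type weighting
    return lam

for disc in (False, True):
    print(f"QP 32, P-frame, discontinuity={disc}: "
          f"lambda = {adaptive_lambda(32, disc, 'P'):.2f}")
```

Lowering λ on discontinuity macroblocks biases the RDO toward higher-fidelity modes exactly where depth errors would distort the synthesized views most.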


visual communications and image processing | 2013

An adaptive texture-depth rate allocation estimation technique for low latency multi-view video plus depth transmission

Mario Cordina; Carl James Debono

This paper presents an adaptive texture-depth target bit rate allocation estimation technique for low-latency multi-view video plus depth transmission using a multi-regression model. The proposed technique employs the prediction mode distribution of the macroblocks at the discontinuity regions of the depth map video to estimate the optimal texture-depth target bit rate allocation given the total available bit rate. The technique was tested using various standard test sequences and has shown efficacy, as the model is able to estimate, in real time, the optimal texture-depth rate allocation with an absolute mean estimation error of 2.5% and a standard deviation of 2.2%. Moreover, it allows the texture-depth rate allocation to be adapted to the video sequence with good tracking performance, allowing the correct handling of scene changes.
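
A sketch of what a multi-regression allocator of this kind might look like; the regressors and coefficients are illustrative placeholders that would be fitted offline in the real system.

```python
import numpy as np

# Hypothetical regression: texture share of the total budget as a function of
# the intra/inter MB fractions inside depth-discontinuity regions.
COEF = np.array([0.62, 0.30, -0.15])    # [intercept, intra frac, inter frac]

def texture_share(intra_frac, inter_frac):
    s = COEF @ np.array([1.0, intra_frac, inter_frac])
    return float(np.clip(s, 0.5, 0.9))  # keep the split in a sane band

total_kbps = 4000
share = texture_share(intra_frac=0.18, inter_frac=0.55)
print(f"texture: {share * total_kbps:.0f} kbit/s, "
      f"depth: {(1 - share) * total_kbps:.0f} kbit/s")
```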


Archive | 2010

Applying an SOM Neural Network to Increase the Lifetime of Battery-Operated Wireless Sensor Networks

Mario Cordina; Carl James Debono

Wireless sensor networks have garnered significant attention in recent years. According to (The Mobile Internet, 2004), more than half a billion nodes will be shipped for wireless sensor applications in 2010, for an end-user market worth at least 7 billion. Wireless sensor networks are one of the first real-world examples of pervasive computing, the notion that small, smart, cheap computing and sensing devices will eventually permeate the environment (Bulusu & Jha, 2005). The combination of distributed sensing, low-power processors and wireless communication enables such technology to be used in a wide array of applications, such as habitat and environmental monitoring, military solutions, such as battlefield surveillance, and commercial applications, such as monitoring material fatigue and managing inventory. A wireless sensor network consists of hundreds or thousands of low-power, low-cost multifunctional sensor nodes operating in an unattended environment with a limited supply of energy. The latter is one of the main constraints of each sensor node, together with the limited processing power. These limitations, coupled with the deployment of a large number of sensor nodes, pose a number of challenges to the design and management of these networks, requiring energy-awareness at all layers of the networking protocol stack. The issues related to the physical and link layers are generally common to all sensor applications, and therefore research in these areas has focused on system-level energy awareness such as dynamic voltage scaling (Heinzelman et al., 2000a), radio communication hardware (Min et al., 2000), low duty-cycle issues (Woo & Culler, 2001), system partitioning (Ye et al., 2002) and energy-aware MAC protocols (Shih et al., 2001). At the network layer, energy-efficient route setup protocols are necessary to reliably relay data from the sensor nodes to the sink whilst maximising the lifetime of the network. This chapter focuses on such a solution based on sensor node clustering, whereby the topology is decided through an SOM neural network.


international symposium on wireless communication systems | 2017

A support vector machine based sub-band CQI feedback compression scheme for 3GPP LTE systems

Mario Cordina; Carl James Debono

Contemporary wireless communication standards, such as the long term evolution (LTE) standard, exploit several techniques, including link adaptation and frequency selective scheduling (FSS), to offer high data rate services. The efficacy of these techniques relies on the evolved Node B (eNB) having accurate channel state information, obtained through a high-signaling-overhead process whereby channel quality indicator (CQI) feedback reports are sent by the user equipment (UE) to the eNB. In this work, we exploit a machine learning technique to address this problem and propose a novel sub-band CQI feedback compression scheme based on support vector machines to reduce this signaling overhead. The proposed compression scheme was implemented and tested in an LTE system-level simulator and has shown efficacy, with an overall CQI feedback signaling reduction of up to 88.7% whilst maintaining stable sector throughput when compared to the standard third generation partnership project (3GPP) CQI feedback mechanism.
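
The compression idea can be sketched as follows, assuming both the UE and the eNB hold the same trained model so that only poorly predicted sub-bands need to be reported; the features, training data, and reporting threshold are illustrative, not the scheme's actual design.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
# Hypothetical training set: features = [wideband CQI, sub-band index],
# target = sub-band CQI (LTE CQI values lie in 1..15).
X = rng.uniform([1, 0], [15, 12], size=(500, 2))
y = np.clip(X[:, 0] + rng.normal(0, 1, 500) - 0.1 * X[:, 1], 1, 15)
model = SVR(kernel="rbf", C=10.0).fit(X, y)

def compress_report(wideband_cqi, subband_cqis, threshold=1.0):
    """Return only the (index, value) pairs the shared model cannot predict."""
    sent = []
    for i, cqi in enumerate(subband_cqis):
        pred = model.predict([[wideband_cqi, i]])[0]
        if abs(cqi - round(pred)) >= threshold:
            sent.append((i, cqi))
    return sent

report = compress_report(9, [9, 9, 10, 8, 9, 12, 9, 9, 7, 9, 9, 9])
print(f"sent {len(report)} of 12 sub-bands:", report)
```

The feedback saving comes from the sub-bands that never appear in the report; the threshold trades signaling reduction against CQI accuracy at the scheduler.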


international conference on multimedia and expo | 2017

A cross-layer MV-HEVC depth-texture rate allocation estimation technique in 3GPP LTE systems

Mario Cordina; Carl James Debono

This paper presents a cross-layer depth-texture target bit rate allocation estimation technique for the transmission of Multiview High Efficiency Video Coding (MV-HEVC) texture-plus-depth content over 3GPP LTE systems. The proposed technique is based on a statistical model which exploits the texture and depth map image characteristics to estimate the optimal depth-texture rate allocation to be used by the codec's rate control algorithm. Experiments using standard test sequences show the effectiveness of the proposed technique, as the model is able to estimate, online, the optimal depth-texture rate allocation with a mean absolute estimation error of 3.3% and a standard deviation of 2.2%. In addition, the proposed cross-layer architecture allows the depth-texture rate allocation to be adapted to both the video content characteristics and the available bandwidth offered by the wireless network, making it suitable for the transmission of multiview 3D video to mobile devices.
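
A sketch of the cross-layer loop implied above: each rate-control interval, the bandwidth reported by the radio layers is split between the texture and depth encoders by a content model; the model form, names, and numbers are illustrative assumptions.

```python
def depth_share(edge_density):
    """Content model (hypothetical): more depth edges -> larger depth share."""
    return min(0.15 + 0.5 * edge_density, 0.4)

def set_targets(available_kbps, edge_density):
    d = depth_share(edge_density)
    return {"texture_kbps": round((1 - d) * available_kbps),
            "depth_kbps": round(d * available_kbps)}

# Simulated per-interval reports: (radio bandwidth in kbit/s, depth-edge
# density of the current content).
for bw, edges in [(5000, 0.10), (3200, 0.10), (3200, 0.35)]:
    print(bw, "->", set_targets(bw, edges))
```

Driving both targets from the same loop is what lets the allocation track a fading radio link and a scene change at the same time.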


IET Communications | 2017

Robust predictive filtering schemes for sub-band CQI feedback compression in 3GPP LTE systems

Mario Cordina; Carl James Debono

