Publications


Featured research published by Shabab Bazrafkan.


IEEE Access | 2017

Smart Augmentation Learning an Optimal Data Augmentation Strategy

Joseph Lemley; Shabab Bazrafkan; Peter Corcoran

A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation, and we show how to use it to increase the accuracy and reduce overfitting on a target network. Smart augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.
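
The joint-training idea lends itself to a short sketch. The following is a minimal PyTorch-style illustration under stated assumptions, not the paper's exact architecture: `augmenter` and `target` are hypothetical modules supplied by the caller, the augmenter blends two same-class samples into a new one, and its parameters are updated through the target network's loss.

    import torch
    import torch.nn.functional as F

    def joint_training_step(augmenter, target, opt_aug, opt_tgt, x1, x2, y):
        # x1, x2: two batches of same-class images (N, C, H, W); y: class labels.
        x_aug = augmenter(torch.cat([x1, x2], dim=1))   # synthesize augmented samples
        logits_real = target(x1)
        logits_aug = target(x_aug)
        # The target is trained on real and augmented data; the augmenter's
        # gradient flows through logits_aug, so it learns augmentations that
        # reduce the target network's loss.
        loss = F.cross_entropy(logits_real, y) + F.cross_entropy(logits_aug, y)
        opt_aug.zero_grad()
        opt_tgt.zero_grad()
        loss.backward()
        opt_aug.step()
        opt_tgt.step()
        return loss.item()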


IEEE Consumer Electronics Magazine | 2017

Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision.

Joe Lemley; Shabab Bazrafkan; Peter Corcoran

In the last few years, we have witnessed an exponential growth in research activity into the advanced training of convolutional neural networks (CNNs), a field that has become known as deep learning. This has been triggered by a combination of the availability of significantly larger data sets, thanks in part to a corresponding growth in big data, and the arrival of new graphics-processing-unit (GPU)-based hardware that enables these large data sets to be processed in reasonable timescales. Suddenly, a wide variety of long-standing problems in machine learning, artificial intelligence, and computer vision have seen significant improvements, often sufficient to break through long-standing performance barriers. Across multiple fields, these achievements have inspired the development of improved tools and methodologies, leading to even broader applicability of deep learning. The new generation of smart assistants, such as Alexa, Hello Google, and others, has its roots and learning algorithms tied to deep learning. In this article, we review the current state of deep learning, explain what it is, why it has managed to improve on the long-standing techniques of conventional neural networks, and, most importantly, how you can get started with adopting deep learning into your own research activities to solve both new and old problems and build better, smarter consumer devices and services.


IEEE Consumer Electronics Magazine | 2015

Eye Gaze for Consumer Electronics: Controlling and commanding intelligent systems.

Shabab Bazrafkan; Anuradha Kar; Claudia Costache

Over the last several years, there has been much research and investigation into finding new ways to interact with the smart systems that currently form an integral part of our lives. One of the most widely researched fields in human-computer interaction (HCI) has been the use of human eye gaze as an input modality to control and command intelligent systems. For example, gaze-based schemes for hands-free input to computers for text entry/scrolling/pointing were proposed as early as 1989 for disabled persons [1]. In the field of commercial applications, gaze-based interactions have brought immersive experiences to the world of virtual gaming and multimedia entertainment [2]. Eye gaze is also a significant feature for detecting the attention and intent of an individual. For example, gaze tracking could be implemented in a car to detect driver consciousness or in a smartphone to switch operations by sensing user attentiveness. In this article, we discuss some existing applications that use human eye gaze as a vital cue for various consumer platforms. For each of the use cases, the utility and advantages as well as the inherent limitations are discussed.


International Conference on Consumer Electronics | 2017

Deep learning for facial expression recognition: A step closer to a smartphone that knows your moods

Shabab Bazrafkan; Tudor Nedelcu; Pawel Filipczuk; Peter Corcoran

With the growing capacity and processing power of today's handheld devices, a wide range of capabilities can be implemented to make them more intelligent and user-friendly. Determining the user's mood allows the device to provide suitable reactions in different conditions. One of the most studied approaches to mood detection is facial expression recognition, which remains a challenging problem in pattern recognition and machine learning. Deep Neural Networks (DNNs) have been widely used to overcome the difficulties of facial expression classification. In this paper it is shown that classification accuracy is significantly lower when the network is trained with one database and tested with a different database. A solution for obtaining a general and robust network is given as well.
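
To make the reported cross-database gap concrete, a hypothetical evaluation helper is sketched below in PyTorch; the model and data-loader names are placeholders, not artifacts from the paper.

    import torch

    def accuracy(model, loader, device="cpu"):
        # Fraction of correctly classified expressions over a data loader.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in loader:
                preds = model(images.to(device)).argmax(dim=1)
                correct += (preds == labels.to(device)).sum().item()
                total += labels.size(0)
        return correct / total

    # acc_same  = accuracy(model, test_loader_A)  # held-out split of the training database
    # acc_cross = accuracy(model, test_loader_B)  # a different expression database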


International Conference on Consumer Electronics | 2016

Eye-gaze systems — An analysis of error sources and potential accuracy in consumer electronics use cases

Anuradha Kar; Shabab Bazrafkan; Claudia Costache; Peter Corcoran

Several generic CE use cases and corresponding techniques for eye gaze estimation (EGE) are reviewed. The optimal approaches for each use case are determined from a review of recent literature. In addition, the most probable error sources for EGE are identified and their impact is quantified. A discussion and analysis of the research outcomes are given, and future work is outlined.


International Conference on Consumer Electronics | 2016

Finger vein biometric: Smartphone footprint prototype with vein map extraction using computational imaging techniques

Shabab Bazrafkan; Tudor Nedelcu; Claudia Costache; Peter Corcoran

A new vein-structure-based biometric approach is introduced in this paper. The idea is to use the finger vein structure in the intermediate phalange to identify or authenticate individuals. The hardware concept is designed to be implementable in handheld devices. The hardware configuration and software implementation are presented. A temporal median filter is used to fuse the finger images, and a Gabor filter bank is applied to extract the finger vein maps.
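
As a rough illustration of the two processing steps named above, the OpenCV/NumPy sketch below fuses a stack of grayscale finger frames with a temporal median and extracts a vein response map with a Gabor filter bank; all parameter values are illustrative and not taken from the paper.

    import cv2
    import numpy as np

    def fuse_frames(frames):
        # Temporal median across a list of grayscale frames of identical shape (H, W).
        return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)

    def vein_map(image, num_orientations=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
        # Maximum response over a bank of Gabor filters at evenly spaced orientations,
        # which emphasizes the dark, elongated vein structures.
        img = image.astype(np.float32)
        responses = []
        for theta in np.linspace(0, np.pi, num_orientations, endpoint=False):
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
        return np.max(np.stack(responses, axis=0), axis=0)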


Pattern Recognition Letters | 2018

Latent space mapping for generation of object elements with corresponding data annotation

Shabab Bazrafkan; Hossein Javidnia; Peter Corcoran

Deep neural generative models such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs) give promising results in estimating the data distribution across a range of machine learning applications. Recent results have been especially impressive in image synthesis, where learning the spatial appearance information is a key goal; this enables the generation of intermediate spatial data that corresponds to the original dataset. In the training stage, these models learn to decrease the distance between their output distribution and the actual data distribution and, in the test phase, they map a latent space to the data space. Since these models have already learned their latent space mapping, one question is whether there is a function mapping the latent space to any aspect of the database for the given generator. In this work, it is shown that this mapping is relatively straightforward to learn using small neural network models and by minimizing the mean square error. As a demonstration of this technique, two example use cases have been implemented: first, the generation of facial images with corresponding landmark data, and second, the generation of low-quality iris images (as would be captured with a smartphone's user-facing camera) with a corresponding ground-truth segmentation contour.
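
The core idea admits a compact sketch. Assuming a pre-trained, frozen generator G(z) and a hypothetical `annotate` function that supplies the annotation used during fitting (e.g. landmark coordinates for a generated face), a small PyTorch network can be trained to map z directly to that annotation by minimizing the mean square error; none of the names or sizes below come from the paper.

    import torch
    import torch.nn as nn

    def fit_latent_to_annotation(G, annotate, latent_dim, annotation_dim,
                                 steps=10_000, batch_size=64, lr=1e-3):
        # Small mapper network from the latent space to the annotation space.
        mapper = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, annotation_dim),
        )
        opt = torch.optim.Adam(mapper.parameters(), lr=lr)
        for _ in range(steps):
            z = torch.randn(batch_size, latent_dim)
            with torch.no_grad():
                target = annotate(G(z))          # annotation of the generated sample
            loss = nn.functional.mse_loss(mapper(z), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return mapper                            # maps z -> annotation directly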


Neural Networks | 2018

An end to end Deep Neural Network for iris segmentation in unconstrained scenarios

Shabab Bazrafkan; Shejin Thavalengal; Peter Corcoran

With the increasing imaging and processing capabilities of today's mobile devices, user authentication using iris biometrics has become feasible. However, as the acquisition conditions become more unconstrained and as image quality is typically lower than that of dedicated iris acquisition systems, the accurate segmentation of iris regions is crucial for these devices. In this work, an end-to-end Fully Convolutional Deep Neural Network (FCDNN) design is proposed to perform the iris segmentation task for lower-quality iris images. The network design process is explained in detail, and the resulting network is trained and tuned using several large public iris datasets. A set of methods to generate and augment suitable lower-quality iris images from the high-quality public databases is provided. The network is trained on Near InfraRed (NIR) images initially and later tuned on additional datasets derived from visible images. Comprehensive inter-database comparisons are provided, together with results from a selection of experiments detailing the effects of different tunings of the network. Finally, the proposed model is compared with SegNet-basic, and a near-optimal tuning of the network is compared to a selection of other state-of-the-art iris segmentation algorithms. The results show very promising performance from the optimized Deep Neural Network design when compared with state-of-the-art techniques applied to the same lower-quality datasets.
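
The sketch below is not the paper's FCDNN design; it is only a minimal fully convolutional encoder-decoder in PyTorch, included to illustrate the kind of end-to-end pixel-wise iris segmentation network the abstract describes.

    import torch
    import torch.nn as nn

    class TinyIrisSegNet(nn.Module):
        # Toy encoder-decoder producing a per-pixel iris/non-iris logit map.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2),
            )

        def forward(self, x):                      # x: (N, 1, H, W) eye image
            return self.decoder(self.encoder(x))   # (N, 1, H, W) segmentation logits

    # Training would pair this with a pixel-wise loss such as nn.BCEWithLogitsLoss()
    # against binary iris masks.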


IEEE Consumer Electronics Magazine | 2018

Pushing the AI Envelope: Merging Deep Networks to Accelerate Edge Artificial Intelligence in Consumer Electronics Devices and Systems

Shabab Bazrafkan; Peter Corcoran


arXiv:eess.IV | 2017

Semi-Parallel Deep Neural Networks (SPDNN), Convergence and Generalization

Shabab Bazrafkan; Peter Corcoran

Collaboration


Dive into Shabab Bazrafkan's collaborations.

Top Co-Authors

Peter Corcoran (National University of Ireland)
Hossein Javidnia (National University of Ireland)
Joseph Lemley (National University of Ireland)
Anuradha Kar (National University of Ireland)
Claudia Costache (National University of Ireland)
Tudor Nedelcu (National University of Ireland)
Adrian-Stefan Ungureanu (National University of Ireland)
Shejin Thavalengal (National University of Ireland)