Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Emmanuel Okafor is active.

Publication


Featured research published by Emmanuel Okafor.


international conference on pattern recognition applications and methods | 2017

Comparing Local Descriptors and Bags of Visual Words to Deep Convolutional Neural Networks for Plant Recognition

Pornntiwa Pawara; Emmanuel Okafor; Olarik Surinta; Lambertus Schomaker; Marco Wiering

The use of machine learning and computer vision methods for recognizing different plants from images has attracted considerable attention from the community. This paper compares local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks (CNNs) on three plant datasets: AgrilPlant, LeafSnap, and Folio. To achieve this, we study the use of both scratch and fine-tuned versions of the GoogleNet and AlexNet architectures and compare them to a local feature descriptor with k-nearest neighbors, and to the bag of visual words with the histogram of oriented gradients combined with either support vector machines or multi-layer perceptrons. The results show that the deep CNN methods outperform the hand-crafted features. The CNN techniques can also learn well on a relatively small dataset, Folio.
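The local-descriptor baseline above classifies an image by comparing its feature vector to the training set with k-nearest neighbors. A minimal sketch of that classifier, assuming precomputed feature vectors (names and toy data are illustrative, not the paper's code):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify one feature vector by majority vote among its
    k nearest training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = train_labels[nearest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# Toy example: two well-separated classes in 2-D feature space.
feats = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels = np.array([0, 0, 1, 1])
print(knn_predict(feats, labels, np.array([0.2, 0.1])))  # → 0
```

In practice the feature vectors would come from a local descriptor (e.g. HOG), and k is tuned on a validation set.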


2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA) | 2017

Operational data augmentation in classifying single aerial images of animals

Emmanuel Okafor; Rik Smit; Lambertus Schomaker; Marco Wiering

In deep learning, data augmentation is important for increasing the number of training images and thereby obtaining higher classification accuracies. Most data-augmentation methods rely on cropping, mirroring, color casting, scaling, and rotation to create additional training images. In this paper, we propose a novel data-augmentation method that transforms an image into a new image containing multiple rotated copies of the original image in the operational classification stage. The proposed method creates a grid of n×n cells, in which each cell contains a different randomly rotated image, and introduces a natural background in the newly created image. This algorithm is used for creating new training and testing images, and it enhances the amount of information in an image. For the experiments, we created a novel dataset with aerial images of cows and natural scene backgrounds using an unmanned aerial vehicle, resulting in a binary classification problem. To classify the images, we used a convolutional neural network (CNN) architecture and compared two loss functions (hinge loss and cross-entropy loss). Additionally, we compared the CNN to classical feature-based techniques combined with a k-nearest neighbor classifier or a support vector machine. The results show that the pre-trained CNN with our proposed data-augmentation technique yields significantly higher accuracies than all other approaches.
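The grid construction above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: for simplicity it uses 90-degree rotations (`np.rot90`) on a square, single-channel image and a constant gray background, whereas the paper uses arbitrary random angles and natural backgrounds:

```python
import numpy as np

def grid_augment(image, n=2, background=128, rng=None):
    """Build an (n*h) x (n*w) image whose n x n cells each hold a
    randomly rotated copy of `image`, placed on a plain background.
    Assumes a square, single-channel image so rotated copies fit."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    out = np.full((n * h, n * w), background, dtype=image.dtype)
    for row in range(n):
        for col in range(n):
            k = int(rng.integers(4))       # random multiple of 90 degrees
            cell = np.rot90(image, k)
            out[row * h:(row + 1) * h, col * w:(col + 1) * w] = cell
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
aug = grid_augment(img, n=2)
print(aug.shape)  # → (8, 8)
```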


ieee symposium series on computational intelligence | 2016

Comparative study between deep learning and bag of visual words for wild-animal recognition

Emmanuel Okafor; Pornntiwa Pawara; Faik Karaaba; Olarik Surinta; Valeriu Codreanu; Lambert Schomaker; Marco Wiering

Most research in image classification has focused on applications such as face, object, scene and character recognition. This paper presents a comparative study of deep convolutional neural networks (CNNs) and bag of visual words (BOW) variants for recognizing animals. We developed two variants of the bag of visual words (BOW and HOG-BOW) and examined the use of gray and color information as well as different spatial pooling approaches. We combined the final feature vectors extracted from these BOW variants with a regularized L2 support vector machine (L2-SVM) to distinguish between the classes within our datasets. We modified two existing deep CNN architectures, AlexNet and GoogleNet, by reducing the number of neurons in each fully connected layer and in the last inception layer, for both scratch and pre-trained versions. Finally, we compared the existing CNN methods, our modified CNN architectures, and the proposed BOW variants on our novel wild-animal dataset (Wild-Anim). The results show that the CNN methods significantly outperform the BOW techniques.
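The bag-of-visual-words pipeline above assigns each local descriptor to its nearest codebook center and summarizes an image as a histogram of those assignments. A minimal sketch, assuming a precomputed codebook (the paper's actual feature extraction and spatial pooling are more involved):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the L1-normalized histogram of word counts."""
    # Pairwise distances, shape (num_descriptors, num_words)
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # two toy visual words
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.2]])
print(bow_histogram(descs, codebook))  # → [0.5 0.5]
```

The resulting fixed-length histogram is what gets fed to the L2-SVM; in the real pipeline the codebook is learned (e.g. by k-means) from training descriptors.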


Journal of Information and Telecommunication | 2018

An analysis of rotation matrix and colour constancy data augmentation in classifying images of animals

Emmanuel Okafor; Lambert Schomaker; Marco Wiering

In this paper, we examine a novel data augmentation (DA) method that transforms an image into a new image containing multiple rotated copies of the original image. The DA method creates a grid of cells, in which each cell contains a different randomly rotated image and introduces a natural background in the newly created image. We investigate the use of deep learning to assess the classification performance on the rotation matrix or original dataset with colour constancy versions of the datasets. For the colour constancy methods, we use two well-known retinex techniques, the multi-scale retinex and the multi-scale retinex with colour restoration, for enhancing both original (ORIG) and rotation matrix (ROT) images. We perform experiments on three datasets containing images of animals, the first of which we collected ourselves; it contains aerial images of cows or non-cow backgrounds. To classify the Aerial UAV images, we use a convolutional neural network (CNN) architecture and compare two loss functions (hinge loss and cross-entropy loss). Additionally, we compare the CNN to classical feature-based techniques combined with a k-nearest neighbour classifier or a support vector machine. The best approach is then used to examine the colour constancy DA variants, ORIG and ROT-DA alone for three datasets (Aerial UAV, Bird-600 and Croatia fish). The results show that the rotation matrix data augmentation is very helpful for the Aerial UAV dataset. Furthermore, the colour constancy data augmentation is helpful for the Bird-600 dataset. Finally, the results show that the fine-tuned CNNs significantly outperform the CNNs trained from scratch on the Croatia fish and the Bird-600 datasets, and obtain very high accuracies on the Aerial UAV and Bird-600 datasets.
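The multi-scale retinex used above computes, at each pixel, the log of the image minus the log of its Gaussian-blurred surround, averaged over several scales. A compact single-channel sketch (the sigmas and blur implementation are illustrative; the colour-restoration variant, MSRCR, adds a restoration factor not shown here, and scales must stay well below the image size):

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions (zero-padded edges)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    conv = lambda v: np.convolve(v, kernel, mode="same")
    blurred = np.apply_along_axis(conv, 0, img)
    return np.apply_along_axis(conv, 1, blurred)

def multi_scale_retinex(img, sigmas=(1, 2, 4)):
    """MSR: average over scales of log(I) - log(Gaussian_sigma * I)."""
    img = img.astype(float) + 1.0            # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        out += np.log(img) - np.log(_gaussian_blur(img, sigma) + 1e-8)
    return out / len(sigmas)

img = np.ones((16, 16)) * 50.0
img[4:12, 4:12] = 200.0                      # bright patch on dark background
out = multi_scale_retinex(img, sigmas=(1, 2))
print(out.shape)  # → (16, 16)
```

The output is roughly illumination-invariant: uniform regions map near zero while local contrast is preserved, which is why it is useful as a colour constancy preprocessing step.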


international conference on pattern recognition applications and methods | 2017

Using Deep Convolutional Neural Networks to Predict Goal-Scoring Opportunities in Soccer

Martijn Wagenaar; Emmanuel Okafor; Wouter Frencken; Marco Wiering

Deep learning approaches have successfully been applied to several image recognition tasks, such as face, object, animal and plant classification. However, almost no research has examined how to use machine learning to predict goal-scoring opportunities in soccer from position data. In this paper, we propose the use of deep convolutional neural networks (DCNNs) for this problem. This aim is actualized in the following steps: 1) development of novel algorithms for finding goal-scoring opportunities and ball possession, which are used to obtain positive and negative examples. The dataset consists of position data from 29 matches played by a German Bundesliga team. 2) These examples are used to create original and enhanced images (which contain object trails of soccer positions) with a resolution of 256×256 pixels. 3) Both the original and enhanced images are fed independently as input to two DCNN methods: instances of both GoogLeNet and a 3-layered CNN architecture. A k-nearest neighbor classifier trained and evaluated on ball positions serves as a baseline. The results show that the GoogLeNet architecture outperforms all other methods with an accuracy of 67.1%.


advanced concepts for intelligent vision systems | 2017

Data Augmentation for Plant Classification

Pornntiwa Pawara; Emmanuel Okafor; Lambertus Schomaker; Marco Wiering

Data augmentation plays a crucial role in increasing the number of training images, which often helps to improve the classification performance of deep learning techniques for computer vision problems. In this paper, we employ a deep learning framework and determine the effects of several data-augmentation (DA) techniques for plant classification problems. For this, we use two convolutional neural network (CNN) architectures, AlexNet and GoogleNet, trained from scratch or using pre-trained weights. These CNN models are then trained and tested on both original and data-augmented image datasets for three plant classification problems: Folio, AgrilPlant, and the Swedish leaf dataset. We evaluate the utility of six individual DA techniques (rotation, blur, contrast, scaling, illumination, and projective transformation) and several combinations of these techniques, resulting in a total of 12 data-augmentation methods. The results show that the CNN methods with particular data-augmented datasets yield the highest accuracies, which also surpass previous results on the three datasets. Furthermore, the CNN models trained from scratch profit considerably from data augmentation, whereas the fine-tuned CNN models profit much less. Finally, we observed that data augmentation using combinations of rotation with different illuminations or different contrasts helped most in obtaining high performance with the scratch CNN models.


Archive | 2018

Detection and Recognition of Badgers Using Deep Learning

Emmanuel Okafor; Gerard Berendsen; Lambert Schomaker; Marco Wiering

This paper describes the use of two different deep learning object-detection frameworks for recognizing individual badgers. We use recordings of four different badgers under varying background illuminations. In total, four object-detection algorithms based on deep neural networks are compared: the single-shot multi-box detector (SSD) with Inception-V2 or MobileNet as a backbone, and the faster region-based convolutional neural network (Faster R-CNN) combined with Inception-V2 or residual networks. Furthermore, two activation functions for computing the probability that a badger is in the detected region are compared: the softmax and sigmoid functions. The results of all eight models show that SSD obtains higher recognition accuracies (97.8%–98.6%) than Faster R-CNN (84.8%–91.7%). However, the training time of Faster R-CNN is much shorter than that of SSD. The choice of output activation function appears to matter little.


document analysis systems | 2018

Deep Learning for Classification and as Tapped-Feature Generator in Medieval Word-Image Recognition

Sukalpa Chanda; Emmanuel Okafor; Sébastien Hamel; Dominique Stutzmann; Lambertus Schomaker


Archive | 2018

Integrated Dimensionality Reduction and Sequence Prediction using LSTM

Emmanuel Okafor; Lambertus Schomaker
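The detection pipelines mentioned in the entries above (SSD and Faster R-CNN) both end by keeping only the best-scoring box among heavily overlapping candidates. A minimal sketch of that non-maximum-suppression step using intersection-over-union, illustrative only and not taken from any of these papers:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes overlapping it by more than `threshold` IoU, repeat."""
    order = [int(i) for i in np.argsort(scores)[::-1]]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the near-duplicate box 1 is suppressed
```

The per-box scores themselves come from the detector's output activation, which is where the softmax-versus-sigmoid comparison in the badger paper applies.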


The 29th Benelux Conference on Artificial Intelligence | 2017

Deep Colorization for Facial Gender Recognition

Jonathan Hogervorst; Emmanuel Okafor; Marco Wiering

Collaboration


Dive into Emmanuel Okafor's collaborations.

Top Co-Authors

Faik Karaaba

University of Groningen

Rik Smit

University of Groningen


Sukalpa Chanda

Gjøvik University College
