Kofi Appiah
Nottingham Trent University
Publications
Featured research published by Kofi Appiah.
Field-Programmable Technology | 2005
Kofi Appiah; Andrew Hunter
This paper demonstrates the use of a single-chip FPGA for the extraction of highly accurate background models in real time. The models are based on 24-bit RGB values and 8-bit grayscale intensity values. Three background models are presented, all using a camcorder, a single FPGA chip, four blocks of RAM and a display unit. The architectures have been implemented and tested using a Panasonic NV-DS60B digital video camera connected to a Celoxica RC300 prototyping platform with a Xilinx Virtex II XC2V6000 FPGA and four banks of onboard RAM. The novel FPGA architecture presented minimizes latency and the movement of large datasets by conducting time-critical processes on BlockRAM. The systems operate at clock rates ranging from 57 MHz to 65 MHz and can perform preprocessing functions such as temporal low-pass filtering on a standard frame size of 640 × 480 pixels at up to 210 frames per second.
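The temporal low-pass filtering and background differencing described above can be sketched in software as a running-average model. This is a minimal illustration, not the paper's hardware design; the blending factor and threshold values are assumptions:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Temporal low-pass filter: blend each new frame into the
    running background model; alpha sets the adaptation rate."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Flag pixels that differ from the background model by more
    than the threshold as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Example on an 8-bit grayscale frame of the paper's 640 x 480 size
bg = np.full((480, 640), 100, dtype=np.uint8)
frame = bg.copy()
frame[100:120, 200:220] = 200            # a bright moving object
mask = foreground_mask(bg, frame)
bg_f = update_background(bg.astype(np.float32), frame)
```

On FPGA the same update would run in fixed point, one pixel per clock, with the model held in BlockRAM; the floating-point form here is only for clarity.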
Computer Vision and Image Understanding | 2010
Kofi Appiah; Andrew Hunter; Patrick Dickinson; Hongying Meng
This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized, since the number of run-lengths is typically smaller than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
Field-Programmable Technology | 2008
Kofi Appiah; Andrew Hunter; Patrick Dickinson; Jonathan D. Owens
This paper introduces a real-time connected component labelling algorithm designed for field programmable gate array (FPGA) implementation. The algorithm run-length encodes the image and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized, since the number of runs is typically smaller than the number of pixels. The architecture is built mainly on the Block RAM (i.e. internal RAM) of the FPGA. A comparison with the multi-pass algorithm in hardware and software is presented to show the advantages of the algorithm. The algorithm runs comfortably in real time with reasonably low resource utilization, making integration with other real-time algorithms feasible.
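The idea of labelling on runs rather than pixels can be sketched as follows. This is a software illustration of the general technique (run-length encoding plus union-find merging of overlapping runs), not a reconstruction of the paper's hardware architecture:

```python
def run_length_encode(row):
    """Encode one binary image row as (start, end) runs of 1-pixels."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def label_runs(image):
    """Connected component labelling on run-lengths (4-connectivity).
    Sequential work scales with the number of runs, not pixels."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    labelled, prev, next_label = [], [], 0
    for row in image:
        cur = []
        for s, e in run_length_encode(row):
            lbl = None
            for ps, pe, pl in prev:
                if ps <= e and s <= pe:      # column ranges overlap
                    if lbl is None:
                        lbl = pl
                    parent[find(pl)] = find(lbl)   # merge labels
            if lbl is None:
                lbl = next_label
                parent[lbl] = lbl
                next_label += 1
            cur.append((s, e, lbl))
        labelled.extend(cur)
        prev = cur
    # resolve labels merged by the union-find structure
    return [(s, e, find(l)) for s, e, l in labelled]

# Example: a U-shaped region resolves to a single component
image = [[1, 0, 1],
         [1, 0, 1],
         [1, 1, 1]]
labels = label_runs(image)
```

The hardware version processes runs as they stream out of the background subtraction stage; the per-row merging step above is what makes the two stages pipeline cleanly.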
International Symposium on Neural Networks | 2009
Kofi Appiah; Andrew Hunter; Hongying Meng; Shigang Yue; Mervyn Hobden; Nigel Priestley; Peter Hobden; Cy Pettit
A binary Self Organizing Map (SOM) has been designed and implemented on a Field Programmable Gate Array (FPGA) chip. A novel learning algorithm which takes binary inputs and maintains tri-state weights is presented. The binary SOM has the capability of recognizing binary input sequences after training. A novel tri-state rule is used in updating the network weights during the training phase. The rule implementation is highly suited to the FPGA architecture, and allows extremely rapid training. This architecture may be used in real-time for fast pattern clustering and classification of binary features.
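A tri-state matching and update scheme of the kind described can be sketched in a few lines. The specific rule below, in which a disagreeing weight becomes a don't-care state and a don't-care weight adopts the input bit, is an illustrative assumption rather than the paper's exact rule:

```python
import numpy as np

DONT_CARE = 2  # third weight state; 0 and 1 match input bits directly

def similarity(weights, x):
    """Count positions where a tri-state weight matches the binary
    input; don't-care positions match anything."""
    return int(np.sum((weights == x) | (weights == DONT_CARE)))

def update(weights, x):
    """Illustrative tri-state rule: agreeing weights are kept,
    disagreeing binary weights become don't-care, and don't-care
    weights adopt the input bit."""
    out = weights.copy()
    disagree = (weights != x) & (weights != DONT_CARE)
    out[disagree] = DONT_CARE
    out[weights == DONT_CARE] = x[weights == DONT_CARE]
    return out

def best_matching_unit(som, x):
    """Winner is the unit with the highest bitwise similarity."""
    return max(range(len(som)), key=lambda i: similarity(som[i], x))

# Train the winner on one binary input
som = [np.array([0, 0, 0, 0]), np.array([1, 1, 1, 1])]
x = np.array([1, 1, 0, 1])
bmu = best_matching_unit(som, x)
som[bmu] = update(som[bmu], x)
```

Because matching and updating are pure bitwise operations with no multiplications, a rule of this shape maps directly onto FPGA logic, which is what makes training so fast in hardware.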
Image and Vision Computing | 2009
Patrick Dickinson; Andrew Hunter; Kofi Appiah
Foreground segmentation is a fundamental first processing stage for vision systems which monitor real-world activity. In this paper, we consider the problem of achieving robust segmentation in scenes where the appearance of the background varies unpredictably over time. Variations may be caused by processes such as moving water, or foliage moved by wind, and typically degrade the performance of standard per-pixel background models. Our proposed approach addresses this problem by modeling homogeneous regions of scene pixels as an adaptive mixture of Gaussians in color and space. Model components are used to represent both the scene background and moving foreground objects. Newly observed pixel values are probabilistically classified, such that the spatial variance of the model components supports correct classification even when the background appearance is significantly distorted. We evaluate our method over several challenging video sequences, and compare our results with both per-pixel and Markov Random Field based models. Our results show the effectiveness of our approach in reducing incorrect classifications.
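The key idea above, modelling components as Gaussians jointly over colour and image position so that spatial variance absorbs background distortion, can be sketched as a per-pixel posterior comparison. The component parameters and diagonal-covariance simplification here are illustrative assumptions:

```python
import numpy as np

def log_gaussian(z, mean, var):
    """Log-density of a diagonal-covariance Gaussian over the joint
    (x, y, r, g, b) feature vector."""
    return float(-0.5 * np.sum((z - mean) ** 2 / var + np.log(2 * np.pi * var)))

def classify_pixel(x, y, colour, components):
    """Assign a pixel to the background or foreground component with
    the highest posterior. Each component has a 5-D mean, variance
    vector, mixture weight, and a label."""
    z = np.array([x, y, *colour], dtype=float)
    best = max(components,
               key=lambda c: np.log(c["weight"]) + log_gaussian(z, c["mean"], c["var"]))
    return best["label"]

# One broad background component and one foreground component
components = [
    {"mean": np.array([10., 10., 100., 100., 100.]),
     "var":  np.array([50., 50., 100., 100., 100.]),
     "weight": 0.7, "label": "background"},
    {"mean": np.array([10., 10., 200., 50., 50.]),
     "var":  np.array([25., 25., 100., 100., 100.]),
     "weight": 0.3, "label": "foreground"},
]
```

A background pixel displaced slightly by, say, rippling water still lands inside the spatial spread of its component, so it is classified correctly where a strictly per-pixel model would flag it as foreground.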
Computer Vision and Image Understanding | 2010
Hongying Meng; Kofi Appiah; Shigang Yue; Andrew Hunter; Mervyn Hobden; Nigel Priestley; Peter Hobden; Cy Pettit
Bio-inspired vision sensors are particularly appropriate candidates for navigation of vehicles or mobile robots due to their computational simplicity, allowing compact hardware implementations with low power dissipation. The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the Lobula layer of the Locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions, and it has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional information about the direction of movement in depth. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
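The LGMD family of models shares a common excitation/inhibition structure that can be sketched simply: excitation is the rectified frame difference, inhibition is a laterally spread, delayed copy of earlier excitation, and the neuron's potential sums whatever excitation survives. The blur kernel, inhibition weight, and single-step delay below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def blur3(img):
    """3x3 box blur modelling lateral spreading of inhibition."""
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def lgmd_step(prev_frame, frame, prev_excitation, w_i=0.6):
    """One simplified LGMD-style step: excitation is the absolute
    frame difference; inhibition is the blurred excitation from the
    previous step; the membrane potential sums the rectified
    difference of the two."""
    excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
    inhibition = w_i * blur3(prev_excitation)
    potential = float(np.maximum(excitation - inhibition, 0.0).sum())
    return potential, excitation

# A looming (expanding) square drives the potential upward
f0 = np.zeros((20, 20))
f1 = f0.copy(); f1[8:12, 8:12] = 255
f2 = f0.copy(); f2[6:14, 6:14] = 255
p1, e1 = lgmd_step(f0, f1, np.zeros((20, 20)))
p2, e2 = lgmd_step(f1, f2, e1)
```

Every pixel's excitation and inhibition can be computed independently, which is the inherent parallelism the FPGA implementation exploits.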
IEEE Transactions on Circuits and Systems for Video Technology | 2012
Kofi Appiah; Andrew Hunter; Patrick Dickinson; Hongying Meng
This paper introduces a tri-state logic self-organizing map (bSOM) designed and implemented on a field programmable gate array (FPGA) chip. The bSOM takes binary inputs and maintains tri-state weights. A novel training rule is presented. The bSOM is well suited to FPGA implementation, trains quicker than the original self-organizing map (SOM), and can be used in clustering and classification problems with binary input data. Two practical applications, character recognition and appearance-based object identification, are used to illustrate the performance of the implementation. The appearance-based object identification forms part of an end-to-end surveillance system implemented wholly on FPGA. In both applications, binary signatures extracted from the objects are processed by the bSOM. The system performance is compared with a traditional SOM with real-valued weights and a strictly binary weighted SOM.
Computer Vision and Pattern Recognition | 2011
Hongying Meng; Kofi Appiah; Andrew Hunter; Patrick Dickinson
In this paper, a Naive Bayes classifier was simplified and implemented as a multi-class classifier for binary feature vectors. It was designed on an FPGA using very limited hardware resources and runs quickly and efficiently in both the training and testing phases. It was first tested on a handwritten digit dataset, and then applied to visual object recognition in a single-FPGA-based visual surveillance system. It was compared with a binary Self-Organizing Map (bSOM) using tri-state operations on the FPGA, and the experimental results demonstrated both its higher performance and its lower resource usage on the FPGA chip.
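For binary feature vectors, Naive Bayes reduces to the Bernoulli form, which needs only per-class bit counts at training time and log-probability accumulation at test time. The sketch below shows that general technique (with Laplace smoothing as an assumption), not the paper's specific simplification:

```python
import numpy as np

def train_nb(X, y, n_classes, smoothing=1.0):
    """Bernoulli Naive Bayes for binary feature vectors: estimate
    per-class bit probabilities from counts, with Laplace smoothing."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    log_prior, log_p, log_q = [], [], []
    for c in range(n_classes):
        Xc = X[y == c]
        p = (Xc.sum(axis=0) + smoothing) / (len(Xc) + 2 * smoothing)
        log_prior.append(np.log(len(Xc) / len(X)))
        log_p.append(np.log(p))        # P(bit = 1 | class)
        log_q.append(np.log(1 - p))    # P(bit = 0 | class)
    return np.array(log_prior), np.array(log_p), np.array(log_q)

def predict_nb(x, model):
    """Pick the class maximizing the log joint probability."""
    log_prior, log_p, log_q = model
    x = np.asarray(x, dtype=float)
    scores = log_prior + log_p @ x + log_q @ (1 - x)
    return int(np.argmax(scores))

# Tiny example: two classes distinguished by which half of the bits is set
X = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
y = [0, 0, 1, 1]
model = train_nb(X, y, n_classes=2)
```

Because training is just counting and testing is just accumulation of precomputed log terms, both phases map naturally onto limited FPGA resources.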
International Symposium on Neural Networks | 2009
Hongying Meng; Kofi Appiah; Andrew Hunter; Shigang Yue; Mervyn Hobden; Nigel Priestley; Peter Hobden; Cy Pettit
The Sparse Distributed Memory (SDM) proposed by Kanerva provides a simple model for human long-term memory, with a strong underlying mathematical theory. However, there are problematic features in the original SDM model that affect its efficiency and performance in real world applications and for hardware implementation. In this paper, we propose modifications to the SDM model that improve its efficiency and performance in pattern recall. First, the address matrix is built using training samples rather than random binary sequences. This improves the recall performance significantly. Second, the content matrix is modified using a simple tri-state logic rule. This reduces the storage requirements of the SDM and simplifies the implementation logic, making it suitable for hardware implementation. The modified model has been tested using pattern recall experiments. It is found that the modified model can recall clean patterns very well from noisy inputs.
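The two modifications above can be sketched together: hard-location addresses come from the training samples themselves, and contents use a small saturating counter as an illustrative stand-in for the paper's tri-state rule. The activation radius and counter range below are assumptions:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary vectors."""
    return int(np.sum(a != b))

class SDM:
    """Sketch of the modified Sparse Distributed Memory: addresses
    are training samples (not random sequences), and contents are
    saturating tri-state counters in {-1, 0, +1}."""
    def __init__(self, addresses, radius):
        self.A = np.asarray(addresses)
        self.C = np.zeros(self.A.shape, dtype=int)
        self.radius = radius

    def _active(self, x):
        """Hard locations within the activation radius of x."""
        return [i for i, a in enumerate(self.A)
                if hamming(a, x) <= self.radius]

    def write(self, x):
        """Store x: count up where x is 1, down where x is 0,
        clipped to the tri-state range."""
        for i in self._active(x):
            self.C[i] = np.clip(self.C[i] + np.where(x == 1, 1, -1), -1, 1)

    def read(self, x):
        """Recall: sum counters of active locations and threshold."""
        s = self.C[self._active(x)].sum(axis=0)
        return (s > 0).astype(int)

# Store two patterns, then recall one from a noisy (1-bit-flipped) cue
patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 1, 1]])
sdm = SDM(patterns, radius=2)
for p in patterns:
    sdm.write(p)
recalled = sdm.read(np.array([1, 1, 1, 0, 0, 0, 0, 0]))
```

Clipping the counters to three states is what shrinks the content matrix to two bits per cell, which is the storage saving that makes the hardware implementation practical.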
IEEE International Conference on Fuzzy Systems | 2014
Kofi Appiah; Andrew Hunter; Ahmad Lotfi; Chris Waltham; Patrick Dickinson
This paper presents a system for automatically classifying the resting locations of a moving object in an indoor environment. The system uses an unsupervised neural network (a Self Organising Feature Map, SOFM) fully implemented on a low-cost, low-power automated home-based surveillance system, capable of monitoring the activity levels of elderly people living alone. The proposed system runs on an embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, and to detect specific events such as potential falls. First-order motion information, including first-order moving-average smoothing, is generated from the 2D image coordinates (trajectories). A novel edge-based object detection algorithm, capable of running at a reasonable speed on the embedded platform, has been developed. Classification is dynamic and achieved in real time, using the SOFM together with a probabilistic model. Experimental results show less than 20% classification error, demonstrating the robustness of our approach compared with others in the literature, with minimal power consumption. The head location of the subject is also estimated by a novel approach capable of running on any resource-limited platform with power constraints.
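The first-order moving-average smoothing of the 2D trajectories mentioned above can be sketched as a causal sliding-window mean, suitable for streaming on an embedded platform. The window length is an assumption:

```python
def smooth_trajectory(points, window=5):
    """First-order moving-average smoothing of 2D image coordinates:
    each point is replaced by the mean of the last `window`
    observations, so the filter is causal and streaming-friendly."""
    smoothed, buf = [], []
    for x, y in points:
        buf.append((x, y))
        if len(buf) > window:
            buf.pop(0)
        sx = sum(p[0] for p in buf) / len(buf)
        sy = sum(p[1] for p in buf) / len(buf)
        smoothed.append((sx, sy))
    return smoothed

# Example: a noisy horizontal track is smoothed in the y direction
track = [(0, 10), (1, 12), (2, 8), (3, 11), (4, 9)]
out = smooth_trajectory(track, window=3)
```

Smoothing before computing first-order motion (frame-to-frame displacement) suppresses detection jitter, which matters when classifying low-motion states such as resting.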