
Publications


Featured research published by Noel Lopes.


Pattern Recognition | 2014

Towards adaptive learning with improved convergence of deep belief networks on graphics processing units

Noel Lopes; Bernardete Ribeiro

In this paper we focus on two complementary approaches to significantly decrease the pre-training time of a deep belief network (DBN). First, we propose an adaptive step size technique to enhance the convergence of the contrastive divergence (CD) algorithm, thereby reducing the number of epochs needed to train the restricted Boltzmann machines (RBMs) that support the DBN infrastructure. Second, we present a highly scalable graphics processing unit (GPU) parallel implementation of the CD-k algorithm, which notably boosts the training speed. Additionally, extensive experiments are conducted on the MNIST and HHreco databases. The results suggest that the maximum useful depth of a DBN is related to the number and quality of the training samples. Moreover, it was found that the lower-level layer plays a fundamental role in building successful DBN models. Furthermore, the results contradict the preconceived idea that all the layers should be pre-trained. Finally, it is shown that by incorporating multiple back-propagation (MBP) layers, the DBNs' generalization capability is remarkably improved.

Highlights:
- Adaptive step size technique that enhances the convergence of RBMs and DBNs.
- GPU parallel implementation of the RBMs and DBNs.
- Extensive experiments involving training hundreds of DBNs (MNIST and HHreco datasets).
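The adaptive step size idea can be sketched in plain NumPy. The rule below (grow a per-weight step while the CD gradient keeps its sign, shrink it when the sign flips) and all of its constants are illustrative assumptions, not the paper's exact technique; bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny RBM: 6 visible, 4 hidden units (biases omitted for brevity).
n_vis, n_hid = 6, 4
W = rng.normal(0, 0.1, (n_vis, n_hid))
lr = np.full_like(W, 0.1)        # per-weight adaptive step sizes
prev_sign = np.zeros_like(W)
up, down = 1.2, 0.5              # illustrative step adaptation factors

X = (rng.random((20, n_vis)) > 0.5).astype(float)  # toy binary data

for epoch in range(50):
    # CD-1: one Gibbs step away from the data.
    h_prob = sigmoid(X @ W)
    h_state = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h_state @ W.T)
    h_recon = sigmoid(v_recon @ W)

    grad = (X.T @ h_prob - v_recon.T @ h_recon) / len(X)

    # Adapt each weight's step: grow while the gradient sign is stable,
    # shrink when it oscillates.
    sign = np.sign(grad)
    lr *= np.where(sign * prev_sign > 0, up, down)
    lr = np.clip(lr, 1e-4, 1.0)
    prev_sign = sign
    W += lr * grad
```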


International Conference on Hybrid Intelligent Systems | 2010

GPUMLib: A new Library to combine Machine Learning algorithms with Graphics Processing Units

Noel Lopes; Bernardete Ribeiro; Ricardo Quintas

The Graphics Processing Unit (GPU) is a highly parallel, many-core device with enormous computational power, especially well suited to Machine Learning (ML) problems that can be expressed as data-parallel computations. As problems become increasingly demanding, parallel implementations of ML algorithms become critical for developing hybrid intelligent real-world applications. The relatively low cost of GPUs, combined with the unprecedented computational power they offer, makes them particularly well positioned to automatically analyze and capture relevant information from large amounts of data. In this paper, we propose the creation of an open-source GPU Machine Learning Library (GPUMLib) that aims to provide the building blocks for the scientific community to develop GPU ML algorithms. Experimental results on benchmark datasets demonstrate that the GPUMLib components already implemented achieve significant savings over their counterpart CPU implementations.


Intelligent Data Engineering and Automated Learning | 2009

GPU implementation of the multiple back-propagation algorithm

Noel Lopes; Bernardete Ribeiro

Graphics Processing Units (GPUs) can provide remarkable performance gains when compared to CPUs for computationally intensive applications. This makes them very attractive as dedicated hardware in many fields, such as machine learning. In particular, implementing neural networks (NNs) on GPUs can enormously decrease the long training times of the learning process. In this paper, we describe a parallel implementation of the Multiple Back-Propagation (MBP) algorithm and present the results obtained when running the algorithm on two well-known benchmarks. We show that for both classification and regression problems our implementation reduces the computational cost when compared with the standalone CPU version.


International Journal of Neural Systems | 2011

An Evaluation of Multiple Feed-Forward Networks on GPUs

Noel Lopes; Bernardete Ribeiro

The Graphics Processing Unit (GPU), originally designed for rendering graphics and difficult to program for other tasks, has since evolved into a device suitable for general-purpose computations. As a result, graphics hardware has become progressively more attractive, yielding unprecedented performance at a relatively low cost. This makes it an ideal candidate to accelerate a wide variety of data-parallel tasks in many fields, such as Machine Learning (ML). As problems become more and more demanding, parallel implementations of learning algorithms are crucial for useful applications. In particular, implementing Neural Networks (NNs) on GPUs can significantly reduce the long training times of the learning process. In this paper we present a GPU parallel implementation of the Back-Propagation (BP) and Multiple Back-Propagation (MBP) algorithms, and describe the GPU kernels needed for this task. The results obtained on well-known benchmarks show faster training times and improved performance compared to the implementation on traditional hardware, due to maximized floating-point throughput and memory bandwidth. Moreover, a preliminary GPU-based Autonomous Training System (ATS) is developed, which aims at automatically finding high-quality NN-based solutions for a given problem.
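The forward and backward passes that such kernels implement reduce to dense matrix products, which is what makes back-propagation map so well onto GPUs. Below is a minimal CPU sketch of one-hidden-layer BP in NumPy (sigmoid units, squared error, XOR as toy data); the layer sizes and learning rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer; every step below is a dense matrix operation,
# i.e. exactly the kind of data-parallel work a GPU kernel batches.
n_in, n_hid, n_out, lr = 2, 8, 1, 0.5
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

for _ in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    # Backward pass (squared error; sigmoid derivative is o * (1 - o)).
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)
```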


Intelligent Data Engineering and Automated Learning | 2010

Non-negative matrix factorization implementation using graphic processing units

Noel Lopes; Bernardete Ribeiro

Non-Negative Matrix Factorization (NMF) algorithms decompose a matrix, containing only non-negative coefficients, into the product of two matrices, usually with reduced ranks. The resulting matrices are constrained to have only non-negative coefficients. NMF can be used to reduce the number of characteristics in a dataset, while preserving the relevant information that allows for the reconstruction of the original data. Since negative coefficients are not allowed, the original data is reconstructed through additive combinations of the parts-based factorized matrix representation. A Graphics Processing Unit (GPU) implementation of the NMF algorithms, using both the multiplicative and the additive (gradient descent) update rules, is presented for the Euclidean distance as well as for the divergence cost function. The performance results on an image database demonstrate extremely high speedups, with the GPU implementations far outperforming their CPU counterparts.
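The multiplicative update rules for the Euclidean cost are compact enough to sketch directly. The NumPy version below follows the standard Lee and Seung updates, with a small epsilon to avoid division by zero; the matrix sizes and iteration count are illustrative assumptions. Each update is element-wise, which is the property that makes NMF so amenable to data-parallel GPU execution.

```python
import numpy as np

rng = np.random.default_rng(2)

def nmf(V, rank, iters=500, eps=1e-9):
    """Euclidean-distance NMF via Lee and Seung multiplicative updates."""
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        # Element-wise multiplicative updates: non-negativity is
        # preserved automatically given a positive initialization.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative data with an exact rank-2 structure.
V = rng.random((10, 2)) @ rng.random((2, 8))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```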


International Symposium on Neural Networks | 2001

Hybrid learning in a multi-neural network architecture

Noel Lopes; Bernardete Ribeiro

This paper describes a new class of neural networks (multiple feedforward networks (MFFNs)) obtained by integrating two feedforward networks in a novel manner. A new multiple backpropagation (MBP) algorithm that can be seen as a generalization of the backpropagation (BP) algorithm is also presented. The MFFNs and MBP algorithm together form a new neural architecture that is in most cases preferable to the use of multilayer perceptron networks trained with the BP algorithm. Experimental results on benchmarks show that the advantages offered by the new architecture are shorter training times for online learning and better generalization and function approximation capabilities.
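The key structural idea, hidden activations of a main network multiplied by importance factors produced by a second (space) network, can be sketched as a forward pass. The layer sizes and the use of a single sigmoid layer for the space network are illustrative assumptions, and the MBP training rule itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A multiple feed-forward network sketch: a "space" network computes
# importance factors that modulate (multiply) the hidden activations of
# the main network. With all factors fixed at 1 this reduces to a
# standard MLP, which is why MBP can be seen as generalizing BP.
n_in, n_hid = 3, 5
W_main = rng.normal(0, 0.5, (n_in, n_hid))
w_out = rng.normal(0, 0.5, (n_hid, 1))
W_space = rng.normal(0, 0.5, (n_in, n_hid))  # space network (hypothetical sizing)

def mffn_forward(x):
    m = sigmoid(x @ W_space)      # importance factors in (0, 1)
    h = sigmoid(x @ W_main) * m   # selective actuation of hidden neurons
    return sigmoid(h @ w_out)

X = rng.random((4, n_in))
y_hat = mffn_forward(X)
```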


International Symposium on Neural Networks | 2012

Restricted Boltzmann Machines and Deep Belief Networks on multi-core processors

Noel Lopes; Bernardete Ribeiro; João Gonçalves

Deep learning models, in contrast with shallow models, draw on insights from biological inspiration, which has been a challenge since the inception of the idea of simulating the brain. In particular, their (many) hierarchical levels of composition motivate parallel implementations in an attempt to become acceptably fast. When it comes to performance enhancement, Graphics Processing Units (GPUs) have carved out their own strength in machine learning. In this paper, we present an approach that relies mainly on three kernels for implementing both the Restricted Boltzmann Machine (RBM) and Deep Belief Network (DBN) algorithms. Instead of considering the neuron as the smallest unit of computation, each thread represents the connection between two neurons (one visible and one hidden). Although this may seem counter-intuitive, the rationale is to think of a connection as performing a simple function that multiplies the clamped input by its weight. Thus, we maximize the GPU workload, avoiding idle cores. Moreover, we placed great emphasis on designing the kernels to avoid uncoalesced memory accesses and to take advantage of shared memory to reduce global memory accesses. Additionally, our approach uses a step adaptive learning rate procedure which accelerates convergence. The approach yields very good speedups (up to 46×) compared with a straightforward implementation when both GPU and CPU implementations are tested on the MNIST database.


International Conference on Neural Information Processing | 2011

Deep Belief Networks for Financial Prediction

Bernardete Ribeiro; Noel Lopes

Financial business prediction has lately raised great interest due to the recent world crisis events. Despite the many advanced shallow computational methods that have been extensively proposed, most algorithms have not yet attained a desirable level of applicability. All show good performance for a given financial setup but fail in general to create better and reliable models. The main focus of this paper is to present a deep learning model with a strong ability to generate high-level feature representations for accurate financial prediction. The proposed Deep Belief Network (DBN) approach, tested on a real dataset of French companies, compares favorably to shallow architectures such as Support Vector Machines (SVM) and a single Restricted Boltzmann Machine (RBM). We show that the underlying financial model with deep learning technology has strong potential, thus empowering the finance industry.


Iberoamerican Congress on Pattern Recognition | 2013

Extreme Learning Classifier with Deep Concepts

Bernardete Ribeiro; Noel Lopes

This paper gives a short introduction to Extreme Learning Machines (ELM), illustrated by newly developed applications. It also includes an introduction to Deep Belief Networks (DBN), tuned to pattern recognition problems. Essentially, deep belief networks learn to extract invariant characteristics of an object; in other words, a DBN shows the ability to simulate how the brain recognizes patterns, via the contrastive divergence algorithm. Moreover, the paper contains a strategy based on both kernel-based and neural extreme learning of the deep features. Finally, it shows that the DBN-ELM recognition rate is competitive and often better than other successful approaches on well-known benchmarks. The results also show that the method is extremely fast when the neural-based ELM is used.
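The extreme learning part is straightforward to sketch: a random, untrained hidden layer followed by output weights solved in closed form by least squares, which is why ELM classifiers train so fast. The toy example below uses raw 2-D inputs rather than DBN-extracted deep features, purely for illustration; all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, Y, n_hidden=50):
    """ELM: random fixed hidden layer, output weights via least squares
    (no iterative training of the hidden layer)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = sigmoid(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta  # linear output layer

# Toy binary problem: points above/below the diagonal.
X = rng.random((200, 2))
Y = (X[:, 0] > X[:, 1]).astype(float).reshape(-1, 1)
W, b, beta = elm_fit(X, Y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (Y > 0.5))
```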


Iberoamerican Congress on Pattern Recognition | 2009

Fast Pattern Classification of Ventricular Arrhythmias Using Graphics Processing Units

Noel Lopes; Bernardete Ribeiro

Graphics Processing Units (GPUs) can provide remarkable performance gains when compared to CPUs for computationally intensive applications. In the biomedical area, most previous studies have focused on using Neural Networks (NNs) for pattern recognition of biomedical signals. However, the long training times prevent them from being used in real time. This is critical for the fast detection of Ventricular Arrhythmias (VAs), which may cause cardiac arrest and sudden death. In this paper, we present a parallel implementation of the Back-Propagation (BP) and the Multiple Back-Propagation (MBP) algorithms which allowed significant training speedups. In our proposal, we explicitly specify data-parallel computations by defining special functions (kernels); therefore, we can use a fast evaluation strategy for reducing the computational cost without wasting memory resources. The performance of the pattern classification implementation is compared against other reported algorithms.

Collaboration


Noel Lopes's main collaborators.

Top Co-Authors

Shafaatunnur Hasan
Universiti Teknologi Malaysia

Catarina Silva
Polytechnic Institute of Leiria