Richard Labib
École Polytechnique de Montréal
Publications
Featured research published by Richard Labib.
HVAC&R Research | 2004
Michel Bernier; Patrice Pinel; Richard Labib; Raphaël Paillot
This article presents a technique to aggregate heating/cooling loads when using the cylindrical heat source method (CHS) to perform annual hourly energy simulations of ground-coupled heat pump (GCHP) systems. The technique, referred to as “multiple load aggregation algorithm” (or MLAA), uses two major thermal history periods, referred to as “past” and “immediate.” In addition, the MLAA accounts for thermal interference among boreholes by numerically solving the two-dimensional temperature field in the borefield. Results of a comparison between the MLAA and the duct storage (DST) model are presented. Several cases are examined with two different borefields and several load profiles. Results obtained for one- and ten-year simulations show that the MLAA is in very good agreement with the DST model. In the worst case, the maximum difference in fluid temperature is of the order of 2 K (3.6°F). This level of precision is more than adequate to perform accurate hourly simulations of GCHP systems.
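The core of a two-period aggregation scheme can be sketched in a few lines. This is an illustrative sketch only: the function name, the 12-hour "immediate" window, and the weekly averaging blocks are assumptions for illustration, not the MLAA's actual parameters.

```python
def mlaa_aggregate(hourly_loads, immediate_hours=12, block_hours=168):
    """Split a borehole load history into a detailed 'immediate' period and
    averaged 'past' blocks. A toy sketch of two-period load aggregation:
    window sizes and names are illustrative, not the MLAA's values."""
    immediate = hourly_loads[-immediate_hours:]      # kept hour by hour
    past = hourly_loads[:-immediate_hours]
    past_blocks = []
    for i in range(0, len(past), block_hours):
        block = past[i:i + block_hours]
        past_blocks.append(sum(block) / len(block))  # average load per block
    return past_blocks, immediate
```

Aggregating distant thermal history this way is what makes hourly multi-year simulations tractable: the superposition sum shrinks from thousands of hourly terms to a handful of block averages plus the recent hours.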
International Symposium on Neural Networks | 1999
Richard Labib
Feedforward multilayer neural networks are widely used for pattern recognition in diverse fields of application. However, their inherent structural element, the perceptron, cannot perform classification of nonlinearly separable patterns. These severe limitations motivated us to investigate the validity of a new structure for a single neuron capable of recognizing nonlinear patterns such as the XOR problem. This new architecture is inspired by biological assumptions involving stochastic processes. It is clearly established that only six parameters are necessary to solve the XOR problem. Higher-order problems are also investigated.
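The limitation that motivates this work is easy to verify numerically: no single linear-threshold perceptron reproduces XOR. A minimal check (the grid sweep and function names are ours, not the paper's):

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def solves_xor(w1, w2, b):
    """A classical perceptron fires iff w1*x1 + w2*x2 + b > 0."""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(t)
               for (x1, x2), t in XOR.items())

# Sweep a coarse weight grid: no linear-threshold unit reproduces XOR,
# which is the limitation motivating richer single-neuron models.
grid = [i / 2 for i in range(-8, 9)]
assert not any(solves_xor(w1, w2, b)
               for w1, w2, b in itertools.product(grid, repeat=3))
```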
IEEE Transactions on Neural Networks | 2009
Jean-François Connolly; Richard Labib
Finding an accurate approximation of a discriminating function in order to evaluate its extrema is a common problem in the field of machine learning. A new type of neural network, the Quantron, generates a complicated wave function whose global maximum value is crucial for classifying patterns. To obtain an analytical approximation of this maximum, we present a multiscale scheme based on compactly supported inverted parabolas. Motivated by the Quantron's architecture as well as Laplace's method, this scheme stems from the multiresolution analysis (MRA) developed in the theory of wavelets. This approximation method will be performed, first, one scale at a time and, second, as a global approach. Convergence will be proved and results analyzed.
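The Laplace-style ingredient of such schemes is the local quadratic fit: near a peak, a function looks like an inverted parabola, whose vertex gives the maximum analytically. A single-scale sketch of that idea (the paper's multiscale MRA construction is more elaborate; names and step size here are ours):

```python
def parabola_peak(f, x0, h=1e-3):
    """Approximate a local maximum of f near x0 by fitting an inverted
    parabola through three samples (a Laplace-style quadratic fit).
    An illustrative single-scale stand-in for the multiscale scheme."""
    f0, fp, fm = f(x0), f(x0 + h), f(x0 - h)
    d1 = (fp - fm) / (2 * h)           # central first derivative
    d2 = (fp - 2 * f0 + fm) / h ** 2   # central second derivative
    x_star = x0 - d1 / d2              # vertex of the fitted parabola
    return x_star, f(x_star)
```

For an exactly parabolic peak the fit recovers the maximum to machine precision; the multiscale version refines this across scales to handle the Quantron's complicated wave shapes.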
Neural Computing and Applications | 2010
Richard Labib; Karim Khattar
In this paper, we present a thorough mathematical analysis of the use of neural networks to solve a specific classification problem consisting of a bilinear boundary. The network under consideration is a three-layered perceptron with two hidden neurons whose activation function is the sigmoid. The analysis of the hidden space created by the outputs of the hidden neurons provides results on the network’s capacity to isolate two classes of data in a bilinear fashion, and the importance of the value of the sigmoid parameter is highlighted. We obtain an explicit analytical function describing the boundary generated by the network, thus providing information on the effect each parameter has on the network’s behavior. Generalizations of the results are obtained with additional neurons, and a theorem concerned with analytical reproducibility of the boundary function is established.
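The network under study is small enough to write out in full. A hedged sketch, with an illustrative parameter layout of our own choosing: as the sigmoid steepness parameter grows, the two hidden units approach step functions and the decision region approaches an intersection of two half-planes, i.e. a bilinear boundary.

```python
import math

def sigmoid(z, beta=1.0):
    """beta is the sigmoid steepness parameter highlighted in the analysis."""
    return 1.0 / (1.0 + math.exp(-beta * z))

def mlp_output(x1, x2, params, beta=4.0):
    """Three-layer perceptron, two hidden sigmoid neurons.
    params = (w11, w12, b1, w21, w22, b2, v1, v2, c) -- illustrative layout,
    not the paper's notation."""
    w11, w12, b1, w21, w22, b2, v1, v2, c = params
    h1 = sigmoid(w11 * x1 + w12 * x2 + b1, beta)
    h2 = sigmoid(w21 * x1 + w22 * x2 + b2, beta)
    return v1 * h1 + v2 * h2 + c

# With v1 = v2 = 1 and c = -1.5, the region mlp_output > 0 requires both
# hidden units to be active: for large beta this is the intersection of the
# half-planes w1.x + b1 > 0 and w2.x + b2 > 0.
```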
International Journal of Revenue Management | 2013
Shadi Sharif Azadeh; Richard Labib; Gilles Savard
This study analyses the use of neural networks to produce accurate forecasts of total bookings and cancellations before departure for a major European rail operator. Effective forecasting models can significantly improve the revenue performance of transportation companies. The prediction model used in this research is an improved multi-layer perceptron (MLP) describing the relationship between the number of passengers and the factors affecting this quantity, based on historical data. Relevant pre-processing approaches have been employed to make learning more efficient. The generalisation of the network is tested to evaluate the prediction accuracy of the regression model for future trends of reservations and cancellations using actual railroad data. The results show that it is a promising approach to railway demand forecasting, with a low prediction error.
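The kind of model involved can be sketched compactly. This is a minimal one-input, one-hidden-layer MLP regressor trained by stochastic gradient descent; it is an illustrative sketch of MLP regression in general, not the paper's improved architecture or its pre-processing pipeline.

```python
import math, random

def train_mlp(xs, ys, hidden=4, lr=0.1, epochs=500, seed=0):
    """Minimal one-hidden-layer MLP regressor (tanh hidden units, linear
    output), trained by SGD on squared error. Illustrative only."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            pred = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = pred - y                     # d(MSE/2)/d(pred)
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict
```

In practice the inputs (departure date, days before departure, fares, seasonality indicators, and so on) would be normalized first, which is the role of the pre-processing step the abstract mentions.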
Parallel Problem Solving from Nature | 2004
Frédéric Ratle; Benoît Lecarpentier; Richard Labib; F. Trochu
A multi-objective evolutionary algorithm is applied to optimize the design of a helical spring made out of a composite material. The criteria considered are the minimization of the mass along with the maximization of the stiffness of the spring. Considering the computation time required for finite element analyses, the optimization is performed using approximate relations between design parameters. Dual kriging interpolation improves the accuracy of the classical model of spring stiffness by estimating the error between the model and the results of finite element analyses. This error is taken into account by adding a correction function to the stiffness function. The NSGA-II algorithm is applied and shows satisfactory results, while using the correction function induces a displacement of the Pareto front.
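The correction-function idea is simple to state in code: evaluate the cheap analytical model, then add an interpolated estimate of its error against the expensive FEA samples. In this sketch a piecewise-linear interpolant stands in for dual kriging, and all names are illustrative.

```python
def corrected_stiffness(analytical, fea_samples, x):
    """Analytical spring-stiffness model plus an interpolated correction for
    the model-vs-FEA error. A piecewise-linear interpolant stands in here
    for the dual kriging used in the paper."""
    pts = sorted(fea_samples)  # (design_param, fea_stiffness) pairs
    errs = [(p, k - analytical(p)) for p, k in pts]
    for (p0, e0), (p1, e1) in zip(errs, errs[1:]):
        if p0 <= x <= p1:
            t = (x - p0) / (p1 - p0)
            return analytical(x) + (1 - t) * e0 + t * e1
    return analytical(x)  # outside the sampled range: no correction
```

Because the corrected model only needs the FEA at a handful of sample designs, the evolutionary search can evaluate thousands of candidates cheaply.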
IEEE Transactions on Neural Networks | 2013
Pierre-Yves L'Espérance; Richard Labib
We present a mathematical model of a biological synapse based on stochastic processes to establish the temporal behavior of the postsynaptic potential following a quantal synaptic transmission. This potential form is the basis of the neural code. We suppose that the release of neurotransmitters in the synaptic cleft follows a Poisson process, and that they diffuse according to integrated Ornstein-Uhlenbeck processes in 3-D with random initial positions and velocities. The diffusion occurs in an isotropic environment between two infinite parallel planes representing the pre- and postsynaptic membranes. We assume that the presynaptic membrane is perfectly reflecting and that the other is perfectly absorbing. The activation of the receptors polarizes the postsynaptic membrane according to a parallel RC circuit scheme. We present the results obtained by simulations according to a Gillespie algorithm and we show that our model exhibits realistic postsynaptic behaviors from a simple quantal occurrence.
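A stripped-down version of the model keeps only two of its ingredients: Poisson-timed quantal arrivals and RC-circuit decay of the membrane potential. This fixed-step sketch omits the Ornstein-Uhlenbeck diffusion and uses Euler stepping rather than the paper's Gillespie simulation; every parameter value below is an illustrative assumption.

```python
import math, random

def simulate_psp(rate=5.0, t_end=2.0, tau=0.05, q=1.0, dt=1e-3, seed=1):
    """Toy postsynaptic potential: quanta arrive as a Poisson process and
    each absorbed quantum charges a leaky RC membrane that decays
    exponentially with time constant tau. Illustrative parameters only."""
    rng = random.Random(seed)
    # Poisson arrival times via exponential inter-arrival sampling.
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= t_end:
            break
        arrivals.append(t)
    # Integrate dV/dt = -V/tau with a jump of size q at each arrival.
    v, trace, k = 0.0, [], 0
    for i in range(int(t_end / dt)):
        now = i * dt
        while k < len(arrivals) and arrivals[k] <= now:
            v += q
            k += 1
        v *= math.exp(-dt / tau)  # exact RC decay over one step
        trace.append(v)
    return trace
```

Even this crude version produces the characteristic sawtooth-and-decay traces of postsynaptic potentials; the full model shapes each jump through the diffusion and absorption of individual neurotransmitters.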
International Journal of Neural Systems | 2012
Richard Labib; Simon de Montigny
The quantron is a hybrid neuron model related to perceptrons and spiking neurons. The activation of the quantron is determined by the maximum of a sum of input signals, which is difficult to use in classical learning algorithms. Thus, training the quantron to solve classification problems requires heuristic methods such as direct search. In this paper, we present an approximation of the quantron trainable by gradient search. We show this approximation improves the classification performance of direct search solutions. We also compare the quantron's and the perceptron's performance in solving the IRIS classification problem.
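The obstacle and the remedy can be illustrated with a standard smooth surrogate for the maximum. The paper's actual approximant differs; log-sum-exp is simply a familiar example of making a hard max gradient-friendly.

```python
import math

def smooth_max(values, beta=10.0):
    """Differentiable log-sum-exp surrogate for max(): as beta -> infinity,
    smooth_max -> max. Illustrates the idea of replacing the quantron's
    hard maximum with a gradient-friendly approximation; the paper's own
    approximant is different."""
    m = max(values)  # shift for numerical stability
    return m + math.log(sum(math.exp(beta * (v - m)) for v in values)) / beta
```

Unlike max(), this surrogate has well-defined gradients with respect to every input, so the whole activation can be trained by ordinary gradient descent.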
Neural Computing and Applications | 2007
Richard Labib; Reza Assadi
Prior knowledge of the input–output problems often leads to supervised learning restrictions that can hamper the multi-layered perceptron’s (MLP) capacity to find an optimal solution. Restrictions such as fixing weights and modifying input variables may influence the potential convergence of the back-propagation algorithm. This paper will show mathematically how to handle such constraints in order to obtain a modified version of the traditional MLP capable of solving targeted problems. More specifically, it will be shown that fixing particular weights according to prior information as well as transforming incoming inputs can enable the user to limit the MLP search to a desired type of solution. The ensuing modifications pertaining to the learning algorithm will be established. Moreover, four supervised improvements will offer insight on how to control the convergence of the weights towards an optimal solution. Finally, applications involving packing and covering problems will be used to illustrate the potential and performance of this modified MLP.
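One common way to keep chosen weights fixed during back-propagation is to mask their gradients, so the update simply skips them. A minimal sketch, assuming this masking scheme (the paper develops the constrained learning rules analytically rather than by masking):

```python
def masked_update(weights, grads, mask, lr=0.01):
    """Gradient step that leaves 'fixed' weights untouched: mask[i] = 0
    freezes weights[i] (encoding prior knowledge), mask[i] = 1 trains it.
    A minimal sketch of constrained supervised learning."""
    return [w - lr * g * m for w, g, m in zip(weights, grads, mask)]
```

The frozen entries carry the prior knowledge through training unchanged, steering the search toward the desired family of solutions.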
Communications in Statistics-theory and Methods | 2017
Richard Labib; Simon de Montigny
In this article, we obtain simplified as well as original closed-form expressions for integrals involving the gamma function. These explicit results are found neither in the literature nor using symbolic computation software. Subsequent results follow, giving rise to explicit expressions for integrals involving the error function, with application in order statistics. In particular, these results can be used within the framework of reliability theory.
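The paper's closed forms are not reproduced here, but the flavor of such identities is easy to check numerically. For instance, Γ(1/2) = ∫₀^∞ t^(−1/2) e^(−t) dt = √π, the identity that (via t = x²) links the gamma function to the Gaussian integral and hence to the error function. A crude quadrature check, with our own function name and parameters:

```python
import math

def gamma_integral(s, upper=60.0, n=200000):
    """Midpoint-rule quadrature of integral_0^upper t^(s-1) e^(-t) dt,
    a numerical stand-in for the gamma function Gamma(s). The truncation
    point and grid size are illustrative choices."""
    h = upper / n
    return sum(((i + 0.5) * h) ** (s - 1) * math.exp(-(i + 0.5) * h) * h
               for i in range(n))
```

The midpoint rule sidesteps the integrable singularity at t = 0 for s < 1, so even this naive scheme lands close to √π ≈ 1.7725 for s = 1/2.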