Sam Leroux
Ghent University
Publications
Featured research published by Sam Leroux.
international symposium on neural networks | 2015
Sam Leroux; Steven Bohez; Tim Verbelen; Bert Vankeirsbilck; Pieter Simoens; Bart Dhoedt
Deep neural networks are the state-of-the-art technique for a wide variety of classification problems. Although deeper networks are able to make more accurate classifications, the value brought by an additional hidden layer diminishes rapidly. Even shallow networks are able to achieve relatively good results on various classification problems. Only for a small subset of the samples do the deeper layers make a significant difference. We describe an architecture in which only the samples that cannot be classified with sufficient confidence by a shallow network have to be processed by the deeper layers. Instead of training a network with one output layer at the end of the network, we train several output layers, one for each hidden layer. When an output layer is sufficiently confident in its result, we stop propagating at this layer and the deeper layers need not be evaluated. The choice of a threshold confidence value allows us to trade off accuracy and speed.
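The early-exit mechanism the abstract describes can be sketched in a few lines. This is a minimal numpy illustration with random stand-in weights (not the paper's trained model): each hidden layer has its own hypothetical output head, and propagation stops at the first head whose top-class softmax confidence exceeds the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: three hidden layers, each with an auxiliary
# output head. All weights are random stand-ins for trained parameters.
hidden = [rng.normal(size=(16, 16)) for _ in range(3)]
heads = [rng.normal(size=(16, 10)) for _ in range(3)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_forward(x, threshold=0.9):
    """Stop at the first output head whose confidence exceeds the threshold."""
    h = x
    for i, (W, head) in enumerate(zip(hidden, heads)):
        h = np.maximum(h @ W, 0.0)   # ReLU hidden layer
        p = softmax(h @ head)        # this layer's output head
        if p.max() >= threshold:     # confident enough: skip deeper layers
            return i, int(p.argmax()), p
    return len(hidden) - 1, int(p.argmax()), p  # fell through to the last head
```

Lowering the threshold makes more samples exit at shallow layers (faster, possibly less accurate); raising it pushes hard samples through the full depth.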
Journal of Systems and Software | 2018
Elias De Coninck; Steven Bohez; Sam Leroux; Tim Verbelen; Bert Vankeirsbilck; Pieter Simoens; Bart Dhoedt
Deep learning has shown tremendous results on various machine learning tasks, but the nature of the problems being tackled and the size of state-of-the-art deep neural networks often require training and deploying models on distributed infrastructure. DIANNE is a modular framework designed for dynamic (re)distribution of deep learning models and procedures. Besides providing elementary network building blocks as well as various training and evaluation routines, DIANNE focuses on dynamic deployment on heterogeneous distributed infrastructure, abstraction of Internet of Things (IoT) sensors, integration with external systems and graphical user interfaces to build and deploy networks, while retaining the performance of similar deep learning frameworks. In this paper the DIANNE framework is proposed as an all-in-one solution for deep learning, enabling data and model parallelism through a modular design, offloading to local compute power, and the ability to abstract between simulation and real environments.
Knowledge and Information Systems | 2017
Sam Leroux; Steven Bohez; Elias De Coninck; Tim Verbelen; Bert Vankeirsbilck; Pieter Simoens; Bart Dhoedt
Most of the research on deep neural networks so far has been focused on obtaining higher accuracy levels by building increasingly large and deep architectures. Training and evaluating these models is only feasible when large amounts of resources such as processing power and memory are available. Typical applications that could benefit from these models are, however, executed on resource-constrained devices. Mobile devices such as smartphones already use deep learning techniques, but they often have to perform all processing on a remote cloud. We propose a new architecture called a cascading network that is capable of distributing a deep neural network between a local device and the cloud while keeping the required network traffic to a minimum. The network begins processing on the constrained device, and only relies on the remote part when the local part does not provide an accurate enough result. The cascading network allows for an early-stopping mechanism during the recall phase of the network. We evaluated our approach in an Internet of Things context where a deep neural network adds intelligence to a large number of heterogeneous connected devices. This technique enables a whole variety of autonomous systems where sensors, actuators and computing nodes can work together. We show that the cascading architecture allows for a substantial improvement in evaluation speed on constrained devices while the loss in accuracy is kept to a minimum.
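The routing logic of such a cascade can be sketched as follows. This is a schematic numpy example with made-up placeholder models, not the paper's architecture: a cheap local model handles every sample first, and only low-confidence samples are offloaded to the (expensive, assumed more accurate) remote model.

```python
import numpy as np

def local_predict(x):
    # Hypothetical shallow on-device model: cheap, returns (class, confidence).
    p = np.abs(x) / np.abs(x).sum()
    return int(p.argmax()), float(p.max())

def remote_predict(x):
    # Hypothetical deep cloud model: expensive, assumed more accurate.
    return int(np.argmax(x))

def cascade(batch, threshold=0.5):
    """Run the local model first; offload only low-confidence samples."""
    results, offloaded = [], 0
    for x in batch:
        cls, conf = local_predict(x)
        if conf < threshold:      # local model unsure: send to the cloud
            cls = remote_predict(x)
            offloaded += 1
        results.append(cls)
    return results, offloaded
```

Only the samples below the confidence threshold generate network traffic, which is how the cascade keeps communication to a minimum.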
international symposium on neural networks | 2016
Sam Leroux; Steven Bohez; Elias De Coninck; Tim Verbelen; Bert Vankeirsbilck; Pieter Simoens; Bart Dhoedt
Using deep neural networks on resource-constrained devices is a trending topic in neural network research. Various techniques for compressing neural networks have been proposed that allow evaluating a large neural network on a device with limited memory and processing power. These approaches usually generate a single compressed student network based on a larger teacher network. In some cases a more dynamic trade-off may be desired. In this paper we train a sequence of increasingly large networks where each network is constrained to contain the unmodified features of all smaller networks. The weight matrix of the largest network has submatrices that correspond to the weight matrices of each of the smaller networks. This technique allows us to keep the parameters of several networks in memory while having the same memory footprint as the single largest network. A trade-off between accuracy and speed can be made at runtime. The proposed approach is validated on two image classification tasks running on a real-world Internet-of-Things (IoT) device.
ieee international conference on cloud networking | 2016
Elias De Coninck; Steven Bohez; Sam Leroux; Tim Verbelen; Bert Vankeirsbilck; Bart Dhoedt; Pieter Simoens
Cyber-physical systems (CPS) in the factory of the future will consist of cloud-hosted software governing an agile production process that is executed by mobile robots and that is controlled by analyzing the data from a vast number of sensors. CPSs thus operate on a distributed production floor infrastructure and the set-up continuously changes with each new manufacturing task. In this paper, we present our OSGi-based middleware that abstracts the deployment of service-based CPS software components on a distributed platform comprising robots, actuators, sensors and the cloud. Moreover, our middleware provides specific support to develop components based on artificial neural networks, a technique that recently became very popular for sensor data analytics and robot control. We demonstrate a system where a robot takes actions based on the input from sensors in its vicinity.
Proceedings of the 2nd Workshop on Middleware for Context-Aware Applications in the IoT | 2015
Elias De Coninck; Tim Verbelen; Bert Vankeirsbilck; Steven Bohez; Sam Leroux; Pieter Simoens
international conference on machine learning | 2016
Sam Leroux; Steven Bohez; Cedric De Boom; Elias De Coninck; Tim Verbelen; Bert Vankeirsbilck; Pieter Simoens; Bart Dhoedt
network operations and management symposium | 2018
Sam Leroux; Steven Bohez; Pieter-Jan Maenhaut; Nathan Meheus; Pieter Simoens; Bart Dhoedt
international conference on learning representations | 2018
Sam Leroux; Pavlo Molchanov; Pieter Simoens; Bart Dhoedt; Thomas M. Breuel; Jan Kautz
arXiv: Learning | 2018
Sam Leroux; Tim Verbelen; Pieter Simoens; Bart Dhoedt