Julien N. P. Martel
University of Zurich
Publication
Featured research published by Julien N. P. Martel.
International Symposium on Biomedical Imaging | 2016
Fabian Tschopp; Julien N. P. Martel; Srinivas C. Turaga; Matthew Cook; Jan Funke
With recent advances in high-throughput Electron Microscopy (EM) imaging, it is now possible to image an entire nervous system of organisms like Drosophila melanogaster. One of the bottlenecks in reconstructing a connectome from these large volumes (≈ 100 TiB) is the pixel-wise prediction of membranes. The time it would typically take to process such a volume using a convolutional neural network (CNN) with a sliding-window approach is on the order of years on a current GPU. With sliding windows, however, a lot of redundant computation is carried out. In this paper, we present an extension to the Caffe library that increases throughput by predicting many pixels at once. On a sliding-window network successfully used for membrane classification, we show that our method achieves a speedup of up to 57× while maintaining identical prediction results.
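The idea behind the speedup can be illustrated with a small NumPy sketch of my own (not the authors' Caffe extension): a convolutional filter evaluated once over the whole image yields exactly the responses that a sliding window recomputes patch by patch, so the overlapping work can be shared.

```python
# Toy illustration of sliding-window redundancy vs. dense prediction.
# Names and sizes are illustrative; this is not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))    # toy EM section
kernel = rng.random((5, 5))     # toy learned filter

def sliding_window(image, kernel):
    """Recompute the filter response patch by patch (redundant work)."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def dense(image, kernel):
    """Compute all responses in one pass, sharing the overlapping work."""
    kh, kw = kernel.shape
    patches = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
    return np.tensordot(patches, kernel, axes=([2, 3], [0, 1]))

# Identical predictions, analogous to the paper's claim for full networks.
assert np.allclose(sliding_window(image, kernel), dense(image, kernel))
```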
Medical Image Computing and Computer-Assisted Intervention | 2014
Jan Funke; Julien N. P. Martel; Stephan Gerhard; Björn Andres; Dan C. Ciresan; Alessandro Giusti; Luca Maria Gambardella; Jürgen Schmidhuber; Hanspeter Pfister; Albert Cardona; Matthew Cook
The automatic reconstruction of neurons from stacks of electron microscopy sections is an important computer vision problem in neuroscience. Recent advances are based on a two-step approach: first, a set of possible 2D neuron candidates is generated for each section independently, based on membrane predictions of a local classifier; second, the candidates of all sections of the stack are fed to a neuron tracker that selects and connects them in 3D to yield a reconstruction. The accuracy of the result is currently limited by the quality of the generated candidates. In this paper, we propose to replace the heuristic set of candidates used in previous methods with samples drawn from a conditional random field (CRF) that is trained to label sections of neural tissue. We show on a stack of Drosophila melanogaster neural tissue that neuron candidates generated with our method produce 30% fewer reconstruction errors than current candidate generation methods. Two properties of our CRF are crucial for the accuracy and applicability of our method: (1) the CRF models the orientation of membranes to produce more plausible neuron candidates; (2) the interactions in the CRF are restricted to form a bipartite graph, which allows a great sampling speed-up without loss of accuracy.
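Property (2) is what enables blocked sampling: in a bipartite interaction graph, all variables in one part are conditionally independent given the other part and can be resampled simultaneously. Below is a toy checkerboard Gibbs sampler of my own on a simple agreement (Ising-like) potential; it only illustrates the speed-up mechanism, not the paper's CRF.

```python
# Blocked Gibbs sampling on a bipartite (checkerboard) grid: each half of the
# pixels is resampled in one vectorised step. Toy potential, not the paper's CRF.
import numpy as np

rng = np.random.default_rng(0)
H, W, beta = 32, 32, 0.8
labels = rng.integers(0, 2, (H, W))                # binary membrane labels
checker = (np.add.outer(np.arange(H), np.arange(W)) % 2).astype(bool)

def neighbour_sum(x):
    s = np.zeros(x.shape)
    s[1:, :] += x[:-1, :]; s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]; s[:, :-1] += x[:, 1:]
    return s

n_nb = neighbour_sum(np.ones((H, W)))              # 2..4 neighbours per pixel

for _ in range(100):                               # blocked Gibbs sweeps
    for block in (checker, ~checker):
        # Log-odds of label 1 given the other (fixed) block: agreement potential.
        field = beta * (2 * neighbour_sum(labels) - n_nb)
        p_one = 1.0 / (1.0 + np.exp(-field))
        draw = (rng.random((H, W)) < p_one).astype(int)
        labels = np.where(block, draw, labels)
```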
International Symposium on Circuits and Systems | 2015
Julien N. P. Martel; Miguel Chau; Piotr Dudek; Matthew Cook
The interacting visual maps (IVM) algorithm introduced in [1] is able to perform the joint approximate inference of several visual quantities, such as optic flow, gray-level intensities, and ego-motion, using sparse input coming from a neuromorphic dynamic vision sensor (DVS). We show that features of the model, such as the intrinsic parallelism and distributed nature of its computation, make it a natural candidate to benefit from the cellular processor array (CPA) hardware architecture. We have now implemented the IVM algorithm on a general-purpose CPA simulator, and here we present results of our simulations and demonstrate that the IVM algorithm indeed fits the CPA architecture naturally. Our work indicates that extended versions of the IVM algorithm could benefit greatly from a dedicated hardware implementation, eventually yielding a high-speed, low-power visual odometry chip.
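For readers unfamiliar with the CPA model: it is a SIMD array in which every pixel holds a few registers and all pixels execute the same instruction, with access to their 4-neighbours. The sketch below is a toy simulator of my own (not the general-purpose simulator used in the paper), showing the style of computation that a distributed algorithm like IVM maps onto.

```python
# Toy model of a cellular processor array: per-pixel registers, one shared
# instruction stream, neighbour access via shifts. Illustrative only.
import numpy as np

class ToyCPA:
    def __init__(self, height, width, n_regs=4):
        self.reg = np.zeros((n_regs, height, width))   # register planes

    def load(self, r, image):
        self.reg[r] = image                            # load into register r

    def add(self, dst, a, b):
        self.reg[dst] = self.reg[a] + self.reg[b]      # same op in every pixel

    def scale(self, dst, src, k):
        self.reg[dst] = k * self.reg[src]

    def shift(self, dst, src, dy, dx):
        # Read a neighbour's register: the whole plane moves by (dy, dx).
        self.reg[dst] = np.roll(self.reg[src], (dy, dx), axis=(0, 1))

# Example: one 4-neighbour averaging (diffusion) step, a typical building
# block of distributed map updates, expressed purely in CPA operations.
cpa = ToyCPA(64, 64)
cpa.load(0, np.random.default_rng(0).random((64, 64)))
cpa.shift(1, 0, 1, 0); cpa.shift(2, 0, -1, 0); cpa.add(3, 1, 2)
cpa.shift(1, 0, 0, 1); cpa.shift(2, 0, 0, -1); cpa.add(1, 1, 2)
cpa.add(3, 3, 1); cpa.scale(3, 3, 0.25)
```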
International Symposium on Circuits and Systems | 2016
Julien N. P. Martel; Lorenz K. Müller; Stephen J. Carey; Piotr Dudek
To improve computational efficiency, it may be advantageous to transfer part of the intelligence lying in the core of a system to its sensors. Vision sensors equipped with a small programmable processor at each pixel allow us to follow this principle through so-called near-focal-plane processing, which is performed on-chip, directly where light is collected. Such devices then only need to communicate relevant pre-processed visual information to other parts of the system. In this work, we demonstrate how two classical problems, namely high dynamic range imaging and auto-focus, can be solved efficiently using two simple parallel algorithms implemented on such a chip. We illustrate with these two examples that embedding uncomplicated algorithms on-chip, directly where information acquisition takes place, can replace more complex dedicated post-processing. Adapting data acquisition by bringing processing to the sensor level allows us to explore solutions that would not be feasible in a conventional sensor-ADC-processor pipeline.
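As a rough illustration of the two tasks (my own simplified, off-chip versions, not the algorithms implemented on the device): auto-focus can be cast as maximising a local-contrast focus measure over lens settings, and a basic HDR composite can be built by combining, per pixel, the exposures that are not clipped.

```python
# Simplified sketches of auto-focus and HDR fusion. Thresholds and weights are
# arbitrary choices of mine; the chip versions operate on the focal plane.
import numpy as np

def focus_measure(img):
    # Sum of squared neighbour differences: a contrast score that needs only
    # local (neighbour) access, hence cheap on a pixel-parallel processor.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.sum(gx ** 2) + np.sum(gy ** 2)

def autofocus(frames_per_setting):
    """Return the index of the lens setting with the sharpest frame."""
    return max(range(len(frames_per_setting)),
               key=lambda i: focus_measure(frames_per_setting[i]))

def simple_hdr(exposures, times):
    """Per pixel, average radiance estimates from non-clipped exposures.
    Assumes intensities are normalised to [0, 1]."""
    radiance = np.stack([e / t for e, t in zip(exposures, times)])
    usable = np.stack([(e > 0.05) & (e < 0.95) for e in exposures])
    weight = np.maximum(usable.sum(axis=0), 1)      # avoid division by zero
    return (radiance * usable).sum(axis=0) / weight
```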
European Conference on Circuit Theory and Design | 2015
Julien N. P. Martel; Miguel Chau; Matthew Cook; Piotr Dudek
Recently, several low- and mid-level vision algorithms have been successfully demonstrated at high frame rates on a low power budget using compact programmable CPA (Cellular Processor Array) vision-chips that embed a Processing Element (PE) at each pixel. Because of the inherent constraints in the VLSI design of these devices, the algorithms they run are limited by scarce resources, in particular memory, that is, the number of registers available per pixel. In this work, we propose an algorithmic procedure to trade off the pixel resolution of a programmable CPA vision-chip against the number of its registers. By grouping pixels into “super-pixels” in which pixel registers are interlaced, we virtually expose more registers in software, allowing more sophisticated algorithms to be run. We implement and demonstrate on an actual device an algorithm that could not have been executed on an existing CPA at full resolution due to its memory requirements.
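The resolution-for-registers trade can be pictured with a small sketch (my own illustrative layout, not necessarily the exact interlacing used in the paper): in each 2×2 super-pixel, the single physical register plane of the four member pixels stores four distinct virtual register planes at half the resolution.

```python
# Interlacing sketch: one physical register plane <-> four virtual planes at
# half resolution. The 2x2 grouping is an illustrative choice of mine.
import numpy as np

def to_virtual(physical):
    """(H, W) physical register plane -> (4, H//2, W//2) virtual registers."""
    H, W = physical.shape
    v = physical.reshape(H // 2, 2, W // 2, 2).transpose(1, 3, 0, 2)
    return v.reshape(4, H // 2, W // 2)

def to_physical(virtual):
    """Inverse mapping: pack the four virtual planes back into one plane."""
    _, h, w = virtual.shape
    p = virtual.reshape(2, 2, h, w).transpose(2, 0, 3, 1)
    return p.reshape(2 * h, 2 * w)

plane = np.arange(16.0).reshape(4, 4)
assert np.allclose(to_physical(to_virtual(plane)), plane)   # lossless round trip
```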
Proceedings of the 10th International Conference on Distributed Smart Camera | 2016
Julien N. P. Martel; Yulia Sandamirskaya
In artificial vision applications, such as tracking, a large amount of data captured by sensors is transferred to processors to extract information relevant for the task at hand. Smart vision sensors offer a means to reduce the computational burden of visual processing pipelines by placing more processing capabilities next to the sensor. In this work, we use a vision-chip in which a small processor with memory is located next to each photosensitive element. The architecture of this device is optimized to perform local operations. To perform a task like tracking, we implement a neuromorphic approach using a Dynamic Neural Field (DNF), which allows us to segregate, memorize, and track objects. Our system, consisting of the vision-chip running the DNF, outputs only the activity that corresponds to the tracked objects. These outputs reduce the bandwidth needed to transfer information as well as the need for further post-processing, since computation happens at the pixel level.
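For context, a Dynamic Neural Field is a recurrent sheet of activity governed by Amari-type dynamics: local excitation plus broader (here, global) inhibition stabilises a localised bump of activity that locks onto a stimulus and follows it. The sketch below uses toy, untuned parameters of my own and runs off-chip; it is not the field implemented on the vision-chip.

```python
# Minimal 2-D Dynamic Neural Field: a localised activity bump forms over a
# moving input blob and follows it. Parameters are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

H = W = 64
u = np.full((H, W), -5.0)                 # field potential, below threshold
tau, dt, h = 10.0, 1.0, -5.0              # time constant, step, resting level

def step(u, stimulus):
    f = 1.0 / (1.0 + np.exp(-u))                      # sigmoidal output
    excitation = 10.0 * gaussian_filter(f, sigma=2)   # local excitation
    inhibition = 10.0 * f.mean()                      # global inhibition
    return u + dt * (-u + h + stimulus + excitation - inhibition) / tau

yy, xx = np.mgrid[0:H, 0:W]
for t in range(200):                                  # blob drifting to the right
    cx = 10 + 0.2 * t
    stimulus = 8.0 * np.exp(-((xx - cx) ** 2 + (yy - 32) ** 2) / (2 * 3.0 ** 2))
    u = step(u, stimulus)

peak = np.unravel_index(np.argmax(u), u.shape)        # current tracked position
```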
IEEE Transactions on Circuits and Systems I: Regular Papers | 2018
Julien N. P. Martel; Lorenz K. Müller; Stephen J. Carey; Jonathan Müller; Yulia Sandamirskaya; Piotr Dudek
Visual input can be used to recover the 3-D structure of a scene by estimating distances (depth) to the observer. Depth estimation is performed in various applications, such as robotics, autonomous driving, or surveillance. We present a low-power, compact, passive, and static imaging system that computes a semi-dense depth map in real time for a wide range of depths. This is achieved by using a focus-tunable liquid lens to sweep the optical power of the system at a high frequency, computing depth from focus on a mixed-signal programmable focal-plane processor. The use of local and highly parallel processing directly on the focal plane removes the sensor-processor bandwidth limitations typical in conventional imaging and processor technologies and allows real-time performance to be achieved.
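The depth-from-focus principle behind the system can be summarised in a short sketch (my own offline simplification, not the focal-plane implementation): during one sweep of the lens's optical power, each pixel records the sweep index at which its local contrast peaks, and that index is its depth level; pixels with too little texture are left unlabelled, which is what makes the map semi-dense.

```python
# Offline depth-from-focus sketch: per pixel, pick the frame of the focal sweep
# with maximal local sharpness. Thresholds and filter sizes are my own choices.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(sweep):
    """sweep: (n_focus, H, W) frames captured during one focal sweep."""
    # Per-frame, per-pixel focus measure: squared Laplacian, locally averaged.
    sharpness = np.stack([uniform_filter(laplace(f) ** 2, size=5) for f in sweep])
    depth_index = np.argmax(sharpness, axis=0)        # (H, W) depth levels
    confidence = sharpness.max(axis=0)
    depth_index[confidence < 1e-4] = -1               # semi-dense: drop flat regions
    return depth_index
```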
International Symposium on Circuits and Systems | 2017
Julien N. P. Martel; Lorenz K. Müller; Stephen J. Carey; Jonathan Müller; Yulia Sandamirskaya; Piotr Dudek
We demonstrate a 3D imaging system that produces sparse depth maps. It consists of a liquid focus-tunable lens, whose focal power can be changed at high speed, placed in front of a SCAMP5 vision-chip that embeds processing capabilities in each pixel. The focus-tunable lens performs focal sweeps with a shallow depth of field. These are sampled by the vision-chip, which takes multiple images at different focus settings and analyses them on-chip to produce a single depth frame. The combination of the focus-tunable lens with the vision-chip, enabling near-focal-plane processing, allows us to present a compact passive system that is static, monocular, real-time (> 25 FPS), and low-power (< 1.6 W).
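As background (my own addition, not taken from the paper): the reason a sweep of optical power maps to a sweep of in-focus depth is the thin-lens relation. For optical power $P = 1/f$ and a fixed lens-to-sensor distance $d_i$, the object distance $d_o$ that is in focus satisfies

$$
\frac{1}{d_o} + \frac{1}{d_i} = P
\quad\Longrightarrow\quad
d_o = \frac{1}{P - 1/d_i},
$$

so stepping $P$ through a sweep selects a sequence of depth planes, each sampled by one frame.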
International Symposium on Circuits and Systems | 2017
Julien N. P. Martel; Lorenz K. Müller; Stephen J. Carey; Piotr Dudek
In this paper, we present a 3D imaging system providing a semi-dense depth map, using a passive, low-power, compact, static, monocular camera. The demonstrated depth estimation system reconstructs 32 depth levels in real time at 25 FPS, drawing less than 1.9 W of power. This is achieved by performing computation on an analog focal-plane processor that analyses frames captured through a vibrating liquid focus-tunable lens. The optical system provides shallow depth-of-focus images and fast sweeps of optical power, while the use of pixel-level processing removes the sensor-processor bandwidth limitations of depth-from-focus systems built using conventional imaging and processor technologies. All-in-focus images are also obtained.
Frontiers in Computational Neuroscience | 2018
Peter U. Diehl; Julien N. P. Martel; Jakob Buhmann; Matthew Cook
In ancient Greece, our brains were presumed to be mainly important for cooling our bodies. When humanity started to understand that our brains are important for thinking, the way this would be explained was with water-pump systems, as these were among the most sophisticated models at the time. In the nineteenth century, when we started to utilize electricity, it became apparent that our brains also use electrical signals. Then, in the twentieth century, we defined algorithms, improved electrical engineering, and invented the computer. Those inventions prevail as some of the most common comparisons for how our brains might work. When taking a step back and comparing what we know from electrophysiology, anatomy, psychology, and medicine to current computational models of the neocortex, it becomes apparent that our traditional definition of an algorithm, and of what it means to “compute,” needs to be adjusted to be more applicable to the neocortex. More specifically, the traditional conversion from “input” to “output” is not as well defined when considering brain areas representing different aspects of the same scene. Consider, for example, reading this paper: while the input is quite clearly visual, it is not obvious what the desired output is, besides perhaps turning to the next page, though this should not be the goal in itself. Instead, the more interesting aspect is the change of state in different areas of the brain and the corresponding changes in the states of neurons. There are many types of models that have the interaction of modules as their central aspect. Among those are: