Network


External collaborations at the country level.

Hotspot


Research topics in which Aurelien Lucchi is active.

Publication


Featured research published by Aurelien Lucchi.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

SLIC Superpixels Compared to State-of-the-Art Superpixel Methods

Radhakrishna Achanta; A. Shaji; Kevin Smith; Aurelien Lucchi; Pascal Fua; Sabine Süsstrunk

Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
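As a rough illustration of the idea, the following is a minimal NumPy sketch of SLIC-style clustering, not the authors' reference implementation. It assumes a floating-point Lab image (e.g., from skimage.color.rgb2lab); k is the target superpixel count and m a compactness weight trading color similarity against spatial proximity.

```python
import numpy as np

def slic_sketch(lab, k=100, m=10.0, n_iters=10):
    """Cluster a Lab image (H, W, 3) into roughly k superpixels."""
    lab = np.asarray(lab, dtype=float)
    h, w, _ = lab.shape
    s = max(1, int(np.sqrt(h * w / k)))          # grid spacing between seeds
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = np.mgrid[s // 2:h:s, s // 2:w:s]
    centers = np.column_stack([cy.ravel(), cx.ravel()]).astype(float)
    colors = lab[cy.ravel(), cx.ravel()]
    labels = np.zeros((h, w), dtype=int)

    for _ in range(n_iters):
        dists = np.full((h, w), np.inf)
        for i, (py, px) in enumerate(centers):
            # Restrict the search to a 2S x 2S window around each center;
            # this locality is what makes the method linear in image size.
            y0, y1 = max(int(py) - s, 0), min(int(py) + s + 1, h)
            x0, x1 = max(int(px) - s, 0), min(int(px) + s + 1, w)
            dc = np.linalg.norm(lab[y0:y1, x0:x1] - colors[i], axis=2)
            ds = np.hypot(yy[y0:y1, x0:x1] - py, xx[y0:y1, x0:x1] - px)
            d = np.sqrt(dc ** 2 + (m * ds / s) ** 2)  # joint color+space distance
            win = dists[y0:y1, x0:x1]
            better = d < win
            win[better] = d[better]
            labels[y0:y1, x0:x1][better] = i
        for i in range(len(centers)):                 # k-means style update
            mask = labels == i
            if mask.any():
                centers[i] = yy[mask].mean(), xx[mask].mean()
                colors[i] = lab[mask].mean(axis=0)
    return labels
```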


IEEE Transactions on Medical Imaging | 2012

Supervoxel-Based Segmentation of Mitochondria in EM Image Stacks With Learned Shape Features

Aurelien Lucchi; Kevin Smith; Radhakrishna Achanta; Graham Knott; Pascal Fua

It is becoming increasingly clear that mitochondria play an important role in neural function. Recent studies show mitochondrial morphology to be crucial to cellular physiology and synaptic function, and a link between mitochondrial defects and neurodegenerative diseases is strongly suspected. Electron microscopy (EM), with its very high resolution in all three directions, is one of the key tools for looking more closely into these issues, but the huge amounts of data it produces make automated analysis necessary. State-of-the-art computer vision algorithms designed to operate on natural 2-D images tend to perform poorly when applied to EM data for a number of reasons. First, the sheer size of a typical EM volume renders most modern segmentation schemes intractable. Furthermore, most approaches ignore important shape cues, relying only on local statistics that easily become confused when confronted with the noise and textures inherent in the data. Finally, the conventional assumption that strong image gradients always correspond to object boundaries is violated by the clutter of distracting membranes. In this work, we propose an automated graph partitioning scheme that addresses these issues. It reduces computational complexity by operating on supervoxels instead of voxels, incorporates shape features capable of describing the 3-D shape of the target objects, and learns to recognize the distinctive appearance of true boundaries. Our experiments demonstrate that our approach segments mitochondria at a performance level close to that of a human annotator and outperforms a state-of-the-art 3-D segmentation technique.
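To make the graph-partitioning step concrete, here is a minimal sketch of binary min-cut segmentation over supervoxels using networkx. The inputs (unary_fg, unary_bg, edges, smooth_w) are hypothetical stand-ins for the learned shape and appearance potentials described in the paper, and the off-the-shelf min-cut substitutes for the specialized solvers typically used in practice.

```python
import networkx as nx

def cut_supervoxels(unary_fg, unary_bg, edges, smooth_w):
    """Binary fg/bg labeling of supervoxels by s-t min-cut.

    unary_fg[i] / unary_bg[i]: non-negative cost of labeling supervoxel i as
    mitochondrion or background (e.g., negative classifier log-likelihoods).
    edges: (i, j) pairs of adjacent supervoxels; smooth_w[(i, j)]: penalty
    for assigning i and j different labels.
    """
    g = nx.DiGraph()
    for i in range(len(unary_fg)):
        g.add_edge('s', i, capacity=unary_bg[i])  # paid if i ends up background
        g.add_edge(i, 't', capacity=unary_fg[i])  # paid if i ends up foreground
    for i, j in edges:
        w = smooth_w[(i, j)]
        g.add_edge(i, j, capacity=w)              # paid if the cut separates i and j
        g.add_edge(j, i, capacity=w)
    _, (source_side, _) = nx.minimum_cut(g, 's', 't')
    return source_side - {'s'}                    # supervoxels labeled foreground
```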


Medical Image Computing and Computer-Assisted Intervention | 2010

A fully automated approach to segmentation of irregularly shaped cellular structures in EM images

Aurelien Lucchi; Kevin Smith; Radhakrishna Achanta; Vincent Lepetit; Pascal Fua

While there has been substantial progress in segmenting natural images, state-of-the-art methods that perform well in such tasks unfortunately tend to underperform when confronted with the different challenges posed by electron microscope (EM) data. For example, in EM imagery of neural tissue, numerous cells and subcellular structures appear within a single image, they exhibit irregular shapes that cannot be easily modeled by standard techniques, and confusing textures clutter the background. We propose a fully automated approach that handles these challenges by using sophisticated cues that capture global shape and texture information, and by learning the specific appearance of object boundaries. We demonstrate that our approach significantly outperforms state-of-the-art techniques and closely matches the performance of human annotators.


International Conference on Computer Vision | 2011

Are spatial and global constraints really necessary for segmentation?

Aurelien Lucchi; Yunpeng Li; Xavier Boix; Kevin Smith; Pascal Fua

Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and MSRC datasets, we were unable to find evidence of a significant performance increase attributable to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This simpler approach uses the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.
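A minimal scikit-learn sketch of the simpler design argued for here: each pixel is classified independently from its local features concatenated with an image-level global descriptor, with no pairwise or global consistency terms. The feature inputs are hypothetical placeholders for the descriptors used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_unary_model(features, labels):
    """features: (n_pixels, d) rows of per-pixel descriptors; labels: class ids."""
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(features, labels)

def predict_image(clf, local_feats, global_feat):
    """Independent per-pixel prediction; the global descriptor biases every pixel."""
    # Append the same image-level descriptor to each pixel's local features.
    x = np.hstack([local_feats, np.tile(global_feat, (len(local_feats), 1))])
    return clf.predict(x)        # per-pixel MAP, no pairwise smoothing terms
```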


Tangible and Embedded Interaction | 2010

An empirical evaluation of touch and tangible interfaces for tabletop displays

Aurelien Lucchi; Patrick Jermann; Guillaume Zufferey; Pierre Dillenbourg

Tabletop systems have become quite popular in recent years, spurring considerable enthusiasm for the development of new interfaces. In this paper, we compare touch and tangible interfaces. We set up an experiment involving several actions, such as translation and rotation, recruited 40 participants for a user study, and present our results together with a discussion of the design of touch and tangible interfaces. Our contribution is an empirical study showing that the tangible interface is much faster overall, but that under certain conditions the touch interface can gain the upper hand.


Tangible and Embedded Interaction | 2009

TinkerSheets: using paper forms to control and visualize tangible simulations

Guillaume Zufferey; Patrick Jermann; Aurelien Lucchi; Pierre Dillenbourg

This paper describes TinkerSheets, a paper-based interface to tangible simulations. The proposed interface combines the advantages of form-based input and paper. Form-based input allows an arbitrary number of parameters to be set, and using paper as the interface medium keeps the interaction modality consistently physical. TinkerSheets also serve as an output screen, displaying summarized information about the simulation. A user study conducted in an authentic context shows how the characteristics of the interface shape real-world usage. We also describe how the affordances of this control and visualization interface support the co-design of interaction with end users.


International World Wide Web Conference | 2016

Probabilistic Bag-Of-Hyperlinks Model for Entity Linking

Octavian-Eugen Ganea; Marina Ganea; Aurelien Lucchi; Carsten Eickhoff; Thomas Hofmann

Many fundamental problems in natural language processing rely on determining what entities appear in a given text. Commonly referred to as entity linking, this step is a fundamental component of many NLP tasks such as text understanding, automatic summarization, semantic search, and machine translation. Name ambiguity, word polysemy, context dependencies, and a heavy-tailed distribution of entities all contribute to the complexity of this problem. We propose a probabilistic approach that uses an effective graphical model to perform collective entity disambiguation. Input mentions (i.e., linkable token spans) are disambiguated jointly across an entire document by combining a document-level prior of entity co-occurrences with local information captured from mentions and their surrounding context. The model is based on simple sufficient statistics extracted from data and thus relies on few learned parameters. Our method requires neither extensive feature engineering nor an expensive training procedure. We use loopy belief propagation to perform approximate inference, and the low complexity of our model makes this step fast enough for real-time usage. We demonstrate the accuracy of our approach on a wide range of benchmark datasets, showing that it matches, and in many cases outperforms, existing state-of-the-art methods.
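The sketch below illustrates message passing of this flavor on a fully connected pairwise model. It uses the max-product variant to produce a single joint assignment, whereas the paper performs sum-product approximate inference; the unary and pair arrays are hypothetical log-scores standing in for the paper's mention-context and entity co-occurrence statistics.

```python
import numpy as np

def loopy_bp_map(unary, pair, n_iters=20):
    """Joint disambiguation by max-product loopy BP in the log domain.

    unary: (n, C) log-scores of C candidate entities per mention.
    pair:  (n, n, C, C) log co-occurrence scores; pair[i, j, a, b] scores
           candidate a for mention i together with candidate b for mention j.
    """
    n, C = unary.shape
    msg = np.zeros((n, n, C))                  # msg[i, j]: message from i to j
    for _ in range(n_iters):
        new = np.zeros_like(msg)
        for i in range(n):
            incoming = msg[:, i, :].sum(axis=0)
            for j in range(n):
                if i == j:
                    continue
                # All evidence at i except what j itself sent, then maximize
                # over i's candidates for each candidate of j.
                h = unary[i] + incoming - msg[j, i]
                new[i, j] = (pair[i, j] + h[:, None]).max(axis=0)
                new[i, j] -= new[i, j].max()   # rescale for numerical stability
        msg = new
    beliefs = unary + msg.sum(axis=0)          # node beliefs after message passing
    return beliefs.argmax(axis=1)              # chosen candidate index per mention
```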


Computer Vision and Pattern Recognition | 2013

Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets

Aurelien Lucchi; Yunpeng Li; Pascal Fua

We propose a working-set-based approximate subgradient descent algorithm to minimize the margin-sensitive hinge loss arising from the soft constraints in max-margin learning frameworks such as the structured SVM. We focus on the setting of general graphical models, such as the loopy MRFs and CRFs commonly used in image segmentation, where exact inference is intractable and the most violated constraints can only be approximated, voiding the optimality guarantees of the structured SVM's cutting-plane algorithm and reducing the robustness of existing subgradient-based methods. We show that the proposed method obtains better approximate subgradients through the use of working sets, leading to improved convergence properties and increased reliability. Furthermore, our method allows new constraints to be randomly sampled instead of computed using more expensive approximate inference techniques such as belief propagation and graph cuts, which can reduce learning time at only a small cost in performance. We demonstrate the strength of our method empirically on the segmentation of a new publicly available electron microscopy dataset as well as the popular MSRC dataset, and show state-of-the-art results.
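A minimal sketch of such a training loop under stated assumptions: feat (joint feature map), loss (task loss), and sample_labeling (a cheap randomized constraint generator) are hypothetical problem-specific callbacks, and the schedule is simplified relative to the paper.

```python
import numpy as np

def train_ssvm(data, feat, loss, sample_labeling, dim,
               lam=1e-3, lr=0.1, epochs=10):
    """Subgradient descent on the structured hinge loss with working sets.

    data: list of (x, y_true) pairs. Returns the learned weight vector.
    """
    w = np.zeros(dim)
    working = [[] for _ in data]          # working set of labelings per example
    for _ in range(epochs):
        for i, (x, y) in enumerate(data):
            # Grow the working set with a cheap randomly sampled labeling
            # instead of running expensive approximate inference every step.
            working[i].append(sample_labeling(x))
            # Most violated constraint *within the working set*.
            y_hat = max(working[i],
                        key=lambda yc: loss(y, yc) + w @ feat(x, yc))
            margin = loss(y, y_hat) + w @ feat(x, y_hat) - w @ feat(x, y)
            g = lam * w                    # subgradient of the regularizer
            if margin > 0:                 # hinge term is active
                g = g + feat(x, y_hat) - feat(x, y)
            w -= lr * g
    return w
```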


North American Chapter of the Association for Computational Linguistics | 2016

SwissCheese at SemEval-2016 Task 4: Sentiment Classification Using an Ensemble of Convolutional Neural Networks with Distant Supervision

Jan Deriu; Maurice Gonzenbach; Fatih Uzdilli; Aurelien Lucchi; Valeria De Luca; Martin Jaggi

In this paper, we propose a classifier for predicting message-level sentiment of English micro-blog messages from Twitter. Our method builds upon the convolutional sentence embedding approach of Severyn and Moschitti (2015a; 2015b). We leverage large amounts of data with distant supervision to train an ensemble of 2-layer convolutional neural networks whose predictions are combined using a random forest classifier. Our approach was evaluated on the datasets of the SemEval-2016 competition (Task 4), outperforming all other approaches on the Message Polarity Classification task.
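A minimal scikit-learn sketch of the ensembling step: the class-probability outputs of the individually trained networks become meta-features for a random forest. cnn_probs is a hypothetical list of (n_messages, 3) probability arrays over {negative, neutral, positive}; the CNNs themselves are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_meta(cnn_probs, labels):
    """Stack per-network probabilities into meta-features and fit the forest."""
    meta_x = np.hstack(cnn_probs)          # (n_messages, 3 * n_networks)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return rf.fit(meta_x, labels)

def predict_meta(rf, cnn_probs):
    """Combine the networks' probabilities into final sentiment labels."""
    return rf.predict(np.hstack(cnn_probs))
```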


International World Wide Web Conference | 2017

Leveraging Large Amounts of Weakly Supervised Data for Multi-Language Sentiment Classification

Jan Milan Deriu; Aurelien Lucchi; Valeria De Luca; Aliaksei Severyn; Simone Müller; Mark Cieliebak; Thomas Hofmann; Martin Jaggi

This paper presents a novel approach to multi-lingual sentiment classification in short texts. This is a challenging task, as the amount of training data in languages other than English is very limited. Previously proposed multi-lingual approaches typically require establishing a correspondence to English, for which powerful classifiers are already available. In contrast, our method does not require such supervision. We leverage large amounts of weakly supervised data in various languages to train a multi-layer convolutional network, and demonstrate the importance of pre-training such networks. We thoroughly evaluate our approach on various multi-lingual datasets, including the recent SemEval-2016 sentiment prediction benchmark (Task 4), where we achieved state-of-the-art performance. We also compare the performance of our model trained individually for each language to a variant trained for all languages at once, and show that the latter reaches slightly lower, but still acceptable, performance than the single-language models while benefiting from better generalization across languages.
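A minimal sketch of the two-stage schedule this suggests: pre-train on a huge weakly labeled corpus (e.g., with emoticon-inferred sentiment labels), then fine-tune on the small human-labeled set. The model is assumed to expose a Keras-style fit(); all names here are illustrative, not the paper's code.

```python
def train_with_distant_supervision(model, weak_x, weak_y, sup_x, sup_y):
    """Two-stage schedule: weakly supervised pre-training, then fine-tuning."""
    # Stage 1: a pass over millions of weakly labeled texts, where labels
    # come from a cheap heuristic such as emoticons.
    model.fit(weak_x, weak_y, epochs=1)
    # Stage 2: fine-tune on the small human-labeled set; the pre-trained
    # weights act as an initialization that transfers across languages.
    model.fit(sup_x, sup_y, epochs=10)
    return model
```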

Collaboration


Dive into Aurelien Lucchi's collaborations.

Top Co-Authors

Pascal Fua

École Polytechnique Fédérale de Lausanne

Kevin Smith

École Polytechnique Fédérale de Lausanne

Graham Knott

École Polytechnique Fédérale de Lausanne
