Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeremy E. Cohen is active.

Publications


Featured research published by Jeremy E. Cohen.


IEEE Signal Processing Letters | 2015

Fast Decomposition of Large Nonnegative Tensors

Jeremy E. Cohen; Rodrigo Cabral Farias; Pierre Comon

In signal processing, tensor decompositions have gained popularity over the last decade. In the meantime, the volume of data to be processed has drastically increased. This calls for novel methods to handle Big Data tensors. Since most of these huge datasets arise from physical measurements, which are intrinsically real and nonnegative, being able to compress nonnegative tensors has become essential. Following recent works on HOSVD compression for Big Data, we detail solutions to decompose a nonnegative tensor into decomposable terms in a compressed domain.
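The compress-then-decompose idea can be sketched in a few lines. The block below is a hypothetical illustration using TensorLy, not the authors' algorithm: compress the tensor with a truncated Tucker/HOSVD model, run a CP decomposition on the small core, and expand the factors back, clipping to restore nonnegativity.

```python
# Sketch of compress-then-decompose for a nonnegative 3-way tensor.
# Illustration only: the paper enforces nonnegativity in the compressed
# domain; here we simply project back and clip, which is a simplification.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker, parafac

rng = np.random.default_rng(0)
# Synthetic nonnegative tensor of low CP rank (sizes are arbitrary).
A, B, C = (rng.random((n, 5)) for n in (60, 50, 40))
T = tl.cp_to_tensor((np.ones(5), [A, B, C]))

# 1) Compress with a truncated Tucker/HOSVD model.
core, U = tucker(T, rank=[10, 10, 10])

# 2) Decompose the small core instead of the full tensor.
weights, core_factors = parafac(core, rank=5)

# 3) Expand the compressed-domain factors back to the original space,
#    then clip to restore nonnegativity (a crude surrogate for the
#    constrained compressed-domain updates of the paper).
factors = [np.maximum(Ui @ Fi, 0) for Ui, Fi in zip(U, core_factors)]
T_hat = tl.cp_to_tensor((weights, factors))
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```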


IEEE Transactions on Geoscience and Remote Sensing | 2016

Nonnegative tensor CP decomposition of hyperspectral data

Miguel Angel Veganzones; Jeremy E. Cohen; Rodrigo Cabral Farias; Jocelyn Chanussot; Pierre Comon

New hyperspectral missions will collect huge amounts of hyperspectral data. In addition, it is now possible to acquire time series and multiangular hyperspectral images. The processing and analysis of these big data collections will require common hyperspectral techniques to be adapted or reformulated. Tensor decomposition, also known as multiway analysis, is a technique to decompose multiway arrays, i.e., hypermatrices with more than two dimensions (ways). Hyperspectral time series and multiangular acquisitions can be represented as three-way tensors. Here, we apply canonical polyadic (CP) tensor decomposition techniques to the blind analysis of hyperspectral big data. To do so, we use a novel compression-based nonnegative CP decomposition. We show that the proposed methodology can be interpreted as multilinear blind spectral unmixing, i.e., a higher-order extension of the widely known spectral unmixing. In the proposed approach, the big hyperspectral tensor is decomposed into three sets of factors, which can be interpreted as spectral signatures, their spatial distributions, and temporal/angular changes. We provide experimental validation using a case study of snow coverage in the French Alps during the snow season.
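As a hypothetical illustration of the unmixing interpretation, the sketch below builds a synthetic (pixels x bands x times) tensor from nonnegative factors and recovers them with TensorLy's generic nonnegative CP solver; sizes and data are made up, and this is not the compression-based algorithm of the paper.

```python
# Sketch: nonnegative CP of a (pixels x bands x times) hyperspectral
# time series, read as multilinear spectral unmixing.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(1)
n_pixels, n_bands, n_times, R = 500, 100, 12, 3
abund = rng.random((n_pixels, R))   # spatial abundances
spec  = rng.random((n_bands, R))    # endmember spectra
temp  = rng.random((n_times, R))    # temporal evolution
Y = tl.cp_to_tensor((np.ones(R), [abund, spec, temp]))

weights, (abund_hat, spec_hat, temp_hat) = non_negative_parafac(Y, rank=R)
# Each column r of (abund_hat, spec_hat, temp_hat) describes one material:
# its abundance map, spectral signature, and temporal profile.
```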


IEEE Transactions on Signal Processing | 2016

Exploring Multimodal Data Fusion Through Joint Decompositions with Flexible Couplings

Rodrigo Cabral Farias; Jeremy E. Cohen; Pierre Comon

A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple datasets. Under this framework, a natural formulation of the data fusion problem is to cast it in terms of a joint maximum a posteriori (MAP) estimator. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator. We also show how this framework can be adapted to tackle the problem of joint decompositions of large datasets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models, ranging from simple additive Gaussian models to Gamma-type models with positive variables, and for the coupling of datasets that are inherently of different sizes due to the different resolutions of the measurement devices.
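In the simplest Gaussian setting, the joint MAP estimator reduces to a penalized least-squares problem. The sketch below is a hypothetical matrix analogue, not the paper's tensor algorithm: two factorizations share a softly coupled factor, updated by alternating ridge-type solves.

```python
# Sketch of a flexibly coupled factorization in the Gaussian case:
#   min ||X1 - A1 B1'||^2 + ||X2 - A2 B2'||^2 + lam * ||A1 - A2||^2
# Alternating updates; sizes, rank, and coupling strength are made up.
import numpy as np

rng = np.random.default_rng(2)
R, lam = 4, 1.0
A_true = rng.standard_normal((30, R))
X1 = A_true @ rng.standard_normal((R, 25)) + 0.05 * rng.standard_normal((30, 25))
X2 = (A_true + 0.1 * rng.standard_normal((30, R))) @ rng.standard_normal((R, 20))

A1, A2 = rng.standard_normal((30, R)), rng.standard_normal((30, R))
B1, B2 = rng.standard_normal((25, R)), rng.standard_normal((20, R))
I = np.eye(R)
for _ in range(100):
    # Plain least-squares updates for the uncoupled factors.
    B1 = np.linalg.solve(A1.T @ A1, A1.T @ X1).T
    B2 = np.linalg.solve(A2.T @ A2, A2.T @ X2).T
    # MAP updates for the coupled factors: data fit plus Gaussian coupling.
    A1 = (X1 @ B1 + lam * A2) @ np.linalg.inv(B1.T @ B1 + lam * I)
    A2 = (X2 @ B2 + lam * A1) @ np.linalg.inv(B2.T @ B2 + lam * I)

print("fit X1:", np.linalg.norm(X1 - A1 @ B1.T) / np.linalg.norm(X1))
```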


International Conference on Latent Variable Analysis and Signal Separation | 2015

Joint Decompositions with Flexible Couplings

Rodrigo Cabral Farias; Jeremy E. Cohen; Christian Jutten; Pierre Comon

A Bayesian framework is proposed to define flexible coupling models for joint decompositions of data sets. Under this framework, a solution to the joint decomposition can be cast in terms of a maximum a posteriori estimator. Examples of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. Simulations are then reported, showing the effectiveness of this approach for fusing information from data sets that are inherently of different sizes due to the different time resolutions of the measurement devices.


European Signal Processing Conference | 2015

Multilinear spectral unmixing of hyperspectral multiangle images

Miguel Angel Veganzones; Jeremy E. Cohen; R. Cabral Farias; Ruben Marrero; Jocelyn Chanussot; Pierre Comon

Spectral unmixing is one of the most important and studied topics in hyperspectral image analysis. By means of spectral unmixing it is possible to decompose a hyperspectral image into its spectral components, the so-called endmembers, and their respective fractional spatial distributions, the so-called abundance maps. New hyperspectral missions will make it possible to acquire hyperspectral images in new ways, for instance, as temporal series or as multi-angular acquisitions. Working with these incoming huge databases of multi-way hyperspectral images will raise new challenges for the hyperspectral community. Here, we propose the use of compression-based non-negative tensor canonical polyadic (CP) decompositions to analyze this kind of dataset. Furthermore, we show that the non-negative CP decomposition can be understood as a multi-linear spectral unmixing technique. We evaluate the proposed approach by means of synthetic Mars datasets built upon multi-angular in-lab hyperspectral acquisitions.


International Conference on Acoustics, Speech, and Signal Processing | 2017

Very low bitrate spatial audio coding with dimensionality reduction

Christian Rohlfing; Jeremy E. Cohen; Antoine Liutkus

In this paper, we show that tensor compression techniques based on randomization and partial observations are very useful for spatial audio object coding. In this application, we aim at transmitting several audio signals, called objects, from a coder to a decoder. A common strategy is to transmit only the downmix of the objects along with a small amount of side information permitting reconstruction at the decoder. In practice, this is done by transmitting compressed versions of the objects' spectrograms and separating the mix with Wiener filters. Previous research used nonnegative tensor factorizations in this context, with bitrates as low as 1 kbps per object. Building on recent advances in tensor compression, we show that the computation time for encoding can be drastically reduced. Then, we demonstrate how the mixture can be exploited at the decoder to avoid the transmission of many parameters, permitting bitrates as low as 0.1 kbps per object for comparable performance.
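The decoder-side separation step can be sketched as follows: given per-object power-spectrogram models (for example, decoded from a compressed nonnegative model), each object is recovered from the downmix STFT with a Wiener-like soft mask. The sketch below is a single-channel simplification of the paper's spatial setting, with made-up shapes and data.

```python
# Sketch of Wiener-filter recovery of audio objects from a downmix,
# given per-object power-spectrogram models V[j].
import numpy as np

rng = np.random.default_rng(3)
n_obj, n_freq, n_frames = 3, 513, 200
V = rng.random((n_obj, n_freq, n_frames))          # object PSD models
S = rng.standard_normal((n_obj, n_freq, n_frames)) \
    + 1j * rng.standard_normal((n_obj, n_freq, n_frames))
X = S.sum(axis=0)                                   # downmix STFT

eps = 1e-12
mask = V / (V.sum(axis=0, keepdims=True) + eps)     # Wiener gains in [0, 1]
S_hat = mask * X[None, :, :]                        # estimated object STFTs
```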


International Conference on Acoustics, Speech, and Signal Processing | 2015

Performance estimation for tensor CP decomposition with structured factors

Maxime Boizard; Rémy Boyer; Gérard Favier; Jeremy E. Cohen; Pierre Comon

The canonical polyadic tensor decomposition (CPD), also known as Candecomp/Parafac, is very useful in numerous scientific disciplines. Structured CPDs, i.e., those with Toeplitz, circulant, or Hankel factor matrices, are often encountered in signal processing applications. Specialized algorithms have recently been proposed for estimating the deterministic parameters of structured CP decompositions. A closed-form expression of the Cramér-Rao bound (CRB) is derived for the problem of estimating CPD parameters when the observed tensor is corrupted with additive circular i.i.d. Gaussian noise. This CRB is provided for arbitrary tensor rank and sizes. Finally, the proposed CRB expression is used to assess the statistical efficiency of the existing algorithms by means of simulation results, in the cases of third-order tensors having three circulant factors on the one hand, and a Hankel factor on the other.
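For orientation, such bounds take the standard Gaussian form below, where θ stacks the factor parameters and J is the Jacobian of the vectorized model; the paper's contribution is a closed-form evaluation of this expression for structured factors.

```latex
% Generic CRB under additive i.i.d. Gaussian noise (textbook form).
F(\theta) = \frac{1}{\sigma^2}\, J^{\top} J,
\qquad
J = \frac{\partial\, \operatorname{vec}\!\big(\mathcal{X}(\theta)\big)}{\partial \theta^{\top}},
\qquad
\mathrm{CRB}(\theta) = F(\theta)^{-1}.
```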


Sensor Array and Multichannel Signal Processing Workshop | 2016

Modeling time warping in tensor decomposition

Bertrand Rivet; Jeremy E. Cohen

Taking subject variability into account in data mining is one of the great challenges of modern biomedical engineering. In EEG recordings, the assumption that time sources are exactly shared by multiple subjects, multiple recordings of the same subject, or even multiple instances of the sources within one recording is particularly unrealistic. In this paper, we propose to deal with shared underlying sources expressed through time warping in multiple EEG recordings, in the context of ocular artifact removal. Diffeomorphisms are used to model the time warping operators. We derive an algorithm that extracts all sources and diffeomorphisms in the model and show successful simulations, giving a proof of concept that subject variability can be tackled with tensor modeling.
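A toy version of the model: each recording observes a shared source composed with a smooth, monotone time warp. The one-parameter warp family and the grid-search fit below are illustrative stand-ins for the parametrized diffeomorphisms and the algorithm of the paper.

```python
# Toy sketch of the warped-source model: recording k observes the
# shared source s composed with a diffeomorphic time warp phi_k.
import numpy as np

t = np.linspace(0.0, 1.0, 500)
source = np.exp(-((t - 0.5) ** 2) / 0.005)          # shared waveform

def warp(t, a):
    # Monotone on [0, 1] for |a| < 1, with fixed endpoints:
    # a simple one-parameter diffeomorphism.
    return t + a * np.sin(2 * np.pi * t) / (2 * np.pi)

y = np.interp(warp(t, 0.08), t, source)             # one warped recording

# Fit the warp parameter by grid search on the residual.
grid = np.linspace(-0.1, 0.1, 201)
errs = [np.sum((y - np.interp(warp(t, a), t, source)) ** 2) for a in grid]
a_hat = grid[int(np.argmin(errs))]
```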


European Signal Processing Conference | 2016

Joint tensor compression for coupled canonical polyadic decompositions

Jeremy E. Cohen; Rodrigo Cabral Farias; Pierre Comon

To deal with large multimodal datasets, coupled canonical polyadic decompositions are used as an approximation model. In this paper, a joint compression scheme is introduced to reduce the dimensions of the dataset. Joint compression makes it possible to solve the approximation problem in a compressed domain using standard coupled decomposition algorithms. The computational complexity required to obtain the coupled decomposition is therefore reduced. We also propose to approximate the update of the coupled factor by a simple weighted average of the independent updates of the coupled factors. The proposed approach and its simplified version are tested on synthetic data, and we show that neither incurs a substantial loss in approximation performance.
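The simplified update mentioned in the abstract can be sketched directly: fuse the independent estimates of the coupled factor by a weighted average, with weights that would typically reflect the noise levels of the two datasets. Everything below is illustrative, and CP column matching and scaling issues are ignored for brevity.

```python
import numpy as np

def fuse_coupled_factor(A1, A2, w1=1.0, w2=1.0):
    """Weighted average of two independent estimates of the coupled CP
    factor; w1, w2 are hypothetical reliability weights (e.g. inverse
    noise variances of the two datasets)."""
    return (w1 * A1 + w2 * A2) / (w1 + w2)

# Example: fuse two noisy estimates of the same 30 x 4 factor.
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 4))
A_fused = fuse_coupled_factor(A + 0.1 * rng.standard_normal(A.shape),
                              A + 0.3 * rng.standard_normal(A.shape),
                              w1=1 / 0.1**2, w2=1 / 0.3**2)
```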


International Conference on Latent Variable Analysis and Signal Separation | 2018

Curve Registered Coupled Low Rank Factorization

Jeremy E. Cohen; Rodrigo Cabral Farias; Bertrand Rivet

We propose an extension of the canonical polyadic (CP) tensor model where one of the latent factors is allowed to vary across data slices in a constrained way. The components of the latent factors, which we want to retrieve from the data, can vary from one slice to another up to a diffeomorphism. We suppose that the diffeomorphisms are also unknown, thus merging curve registration and tensor decomposition into one model, which we call registered CP. We present an algorithm to retrieve both the latent factors and the diffeomorphisms, which are assumed to be in a parametrized form. At the end of the paper, we show simulation results comparing registered CP with other models from the literature.

Collaboration


Dive into Jeremy E. Cohen's collaborations.

Top Co-Authors

Pierre Comon, Centre national de la recherche scientifique
Rodrigo Cabral Farias, Centre national de la recherche scientifique
Jocelyn Chanussot, Centre national de la recherche scientifique
Miguel Angel Veganzones, Centre national de la recherche scientifique
Bertrand Rivet, Centre national de la recherche scientifique
Konstantin Usevich, Centre national de la recherche scientifique
Christian Jutten, Centre national de la recherche scientifique