
Publications

Featured research published by Christine De Mol.


Proceedings of the National Academy of Sciences of the United States of America | 2009

Sparse and stable Markowitz portfolios

Joshua Brodie; Ingrid Daubechies; Christine De Mol; Domenico Giannone; Ignace Loris

We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only a few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolio as a special case, while still allowing a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by the Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
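The idea can be sketched numerically as follows. This is not the authors' code: the l1-penalized least-squares problem is solved here by generic iterative soft-thresholding, the data are synthetic, and the budget constraint (weights summing to one) is imposed by a simple a-posteriori rescaling rather than by the paper's constrained formulation.

```python
import numpy as np

def soft_threshold(v, t):
    # Component-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_portfolio(R, rho, tau, n_iter=5000):
    # Minimize ||rho*1 - R w||^2 + tau*||w||_1 by iterative
    # soft-thresholding (ISTA), then rescale so the weights sum to one
    # (a simplification of the paper's constrained treatment).
    T, N = R.shape
    y = np.full(T, rho)                       # target return each period
    step = 1.0 / np.linalg.norm(R, 2) ** 2    # step size from the spectral norm
    w = np.zeros(N)
    for _ in range(n_iter):
        grad = R.T @ (R @ w - y)
        w = soft_threshold(w - step * grad, step * tau / 2)
    return w / w.sum()

# Toy example: 200 periods of synthetic returns on 50 assets.
rng = np.random.default_rng(0)
R = 0.001 + 0.02 * rng.standard_normal((200, 50))
w = sparse_portfolio(R, rho=0.001, tau=1e-4)
print("active positions:", int(np.sum(np.abs(w) > 1e-8)))
```

Larger values of `tau` drive more weights exactly to zero, trading tracking accuracy for sparser portfolios and lower transaction costs.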


Journal of Computational Biology | 2009

A regularized method for selecting nested groups of relevant genes from microarray data.

Christine De Mol; Sofia Mosci; Magali Traskine; Alessandro Verri

Gene expression analysis aims at identifying the genes able to accurately predict biological parameters such as disease subtype or progression. While accurate prediction can be achieved by means of many different techniques, gene identification, due to gene correlation and the limited number of available samples, is a much more elusive problem. Small changes in the expression values often produce different gene lists, and solutions that are both sparse and stable are difficult to obtain. We propose a two-stage regularization method able to learn linear models characterized by high prediction performance. By varying a suitable parameter, these linear models allow one to trade sparsity for the inclusion of correlated genes and to produce gene lists that are almost perfectly nested. Experimental results on synthetic and microarray data confirm the interesting properties of the proposed method and its potential as a starting point for further biological investigation.
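The sparsity-versus-correlation trade-off can be illustrated with a small elastic-net-style sketch. This is illustrative only: the penalty form, parameter names, and toy data below are assumptions, not the paper's exact two-stage procedure. Increasing the l2 weight `mu` lets strongly correlated features enter the selection together.

```python
import numpy as np

def elastic_net_ista(A, y, tau, mu, n_iter=3000):
    # Minimize 0.5*||A x - y||^2 + tau*||x||_1 + 0.5*mu*||x||^2
    # by iterative soft-thresholding (a generic solver, not the
    # paper's exact algorithm).
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + mu * x
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * tau, 0.0)
    return x

# Toy "expression matrix": gene 1 is a noisy copy of gene 0,
# and the outcome depends on gene 0 only.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))
A[:, 1] = A[:, 0] + 0.05 * rng.standard_normal(60)   # strongly correlated pair
y = 2.0 * A[:, 0] + 0.1 * rng.standard_normal(60)

for mu in (0.0, 50.0):
    sel = np.flatnonzero(np.abs(elastic_net_ista(A, y, tau=10.0, mu=mu)) > 1e-8)
    print(f"mu={mu}: selected genes {sel.tolist()}")
```

With a substantial l2 term, the grouping effect forces the two correlated genes to receive nearly equal weights, so both appear in the selected list.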


Journal of Modern Optics | 1984

Resolution in diffraction-limited imaging, a singular value analysis. IV: The case of uncertain localization or non-uniform illumination of the object

M. Bertero; Christine De Mol; E. R. Pike; John G. Walker

Previous work in this series, in which the theory of singular systems has been used to discuss the problem of diffraction-limited imaging, is generalized to allow the reconstruction of an object over a region with ‘soft’ edges. The generalization is introduced to deal with Gaussian-beam or other illumination, or with uncertain a priori knowledge of position. The solutions are developed in a weighted L2 space. Examples of singular functions and vectors with both Gaussian and sine illumination are given. The analysis is applied to determine the amplitude response functions of the scanning optical or acoustic microscope systems proposed by Bertero and Pike in the first paper of the series, and it is found that the performance of a scanning microscope of the new type should approach exactly that of a conventional coherent microscope of twice the resolving power.


Computational Management Science | 2009

Feature selection for high-dimensional data

Augusto Destrero; Sofia Mosci; Christine De Mol; Alessandro Verri; Francesca Odone

This paper focuses on feature selection for problems dealing with high-dimensional data. We discuss the benefits of adopting a regularized approach with L1 or L1–L2 penalties in two different applications—microarray data analysis in computational biology and object detection in computer vision. We describe general algorithmic aspects as well as architecture issues specific to the two domains. The very promising results obtained show how the proposed approach can be useful in quite different fields of application.


Progress in Optics | 1996

III Super-Resolution by Data Inversion

M. Bertero; Christine De Mol

This chapter explains super-resolution using case examples. It discusses the classical Rayleigh resolution limit and defines it in terms of the overlap between the images of two point sources. It is shown that, in the framework of modern Fourier optics, the resolving power is instead characterized by specifying the spatial-frequency band associated with the instrument. Some important features of inversion methods are discussed to the extent that they relate to the assessment of resolution limits. The main difficulty encountered in solving inverse problems is their sensitivity to noise in the data, which can be the source of major instabilities in the solutions; the role of regularization is to prevent such instabilities from occurring. The chapter examines the problem of extrapolating the object spectrum outside the band, or the effective band, under the assumption that the object vanishes outside some finite known domain. The case of scanning microscopy is considered. The chapter also focuses on confocal microscopy and shows how the use of data-inversion techniques allows the resolving power of such microscopes to be enhanced. Finally, the problem of inverse diffraction from plane to plane, which consists of back-propagating toward the source plane a field propagating in free space, is considered.


International Journal of Computer Vision | 2009

A Regularized Framework for Feature Selection in Face Detection and Authentication

Augusto Destrero; Christine De Mol; Francesca Odone; Alessandro Verri

This paper proposes a general framework for selecting features in the computer vision domain—i.e., learning descriptions from data—where the prior knowledge related to the application is confined to the early stages. The main building block is a regularization algorithm based on a penalty term enforcing sparsity. The overall strategy we propose is also effective for training sets of limited size and achieves performance competitive with the state of the art. To show the versatility of the proposed strategy, we apply it to both face detection and authentication, implementing two modules of a monitoring system working in real time in our lab. Aside from the choice of the feature dictionary and the training data, which require prior knowledge of the problem, the proposed method is fully automatic. The very good results obtained in different applications speak for the generality and robustness of the framework.


Asian Conference on Computer Vision | 2007

A regularized approach to feature selection for face detection

Augusto Destrero; Christine De Mol; Francesca Odone; Alessandro Verri

In this paper we present a trainable method for selecting features from an overcomplete dictionary of measurements. The starting point is a thresholded version of the Landweber algorithm, which provides a sparse solution to a linear system of equations. We consider the problem of face detection and adopt rectangular features as an initial representation to allow straightforward comparison with existing techniques. For reasons of computational efficiency and memory requirements, instead of running the full optimization scheme on tens of thousands of features, we propose to first solve a number of smaller optimization problems obtained by randomly sub-sampling the feature vector, and then to recombine the selected features. The resulting set is still highly redundant, so we apply feature selection a second time. The final system is an efficient two-stage architecture. Experimental results of an optimized version of the method on face images and image sequences indicate that it is a serious competitor to other feature selection schemes recently popularized in computer vision for real-time object detection.
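The two ingredients described above—a thresholded Landweber iteration and random sub-sampling of the feature set—can be sketched as follows. This is a generic reimplementation on made-up data: the penalty weight, block sizes, and selection threshold are illustrative assumptions, not the paper's settings, and `select_by_subsampling` is a hypothetical helper, not the authors' pipeline.

```python
import numpy as np

def thresholded_landweber(A, y, tau, n_iter=500):
    # Landweber iteration with component-wise soft-thresholding, which
    # converges to a sparse (l1-regularized) solution of A x ≈ y.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))          # Landweber step
        x = np.sign(x) * np.maximum(np.abs(x) - step * tau, 0.0)  # threshold
    return x

def select_by_subsampling(A, y, tau, n_blocks=10, rng=None):
    # Run the thresholded scheme on random subsets of the feature
    # columns, then pool the indices of the surviving features.
    rng = np.random.default_rng(rng)
    n_features = A.shape[1]
    selected = set()
    for _ in range(n_blocks):
        cols = rng.choice(n_features, size=n_features // 2, replace=False)
        x = thresholded_landweber(A[:, cols], y, tau)
        selected.update(cols[np.abs(x) > 1e-8].tolist())
    return sorted(selected)

# Toy problem: y depends on only 3 of 40 features.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 40))
x_true = np.zeros(40)
x_true[[2, 7, 30]] = [3.0, -2.0, 4.0]
y = A @ x_true + 0.01 * rng.standard_normal(100)
selected = select_by_subsampling(A, y, tau=5.0, rng=2)
print("pooled features:", selected)
```

As the abstract notes, the pooled set is still redundant—subsets missing a relevant feature tend to admit correlated proxies—which is why a second selection stage is applied.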


Annals of Physics | 1980

Sufficient Conditions for the Existence of Bound States in a Potential without Spherical Symmetry

Khosrow Chadan; Christine De Mol

We give in this paper several sufficient conditions for the existence of negative-energy bound states in a purely attractive potential without spherical symmetry. These conditions generalize the condition obtained recently by K. Chadan and A. Martin (C. R. Acad. Sci. Paris 290 (1980), 151), and can ensure the existence of n bound states. For the spherically symmetric case, one gets simple formulae which are also new.


Journal of Physics: Conference Series | 2013

Regularized Blind Deconvolution with Poisson Data

Loïc Lecharlier; Christine De Mol

We propose easy-to-implement algorithms to perform blind deconvolution of nonnegative images in the presence of noise of Poisson type. Alternate minimization of a regularized Kullback-Leibler cost function is achieved via multiplicative update rules. The scheme allows us to prove convergence of the iterates to a stationary point of the cost function. Numerical examples are reported to demonstrate the feasibility of the proposed method.


Trends in Optics: Research, Developments and Applications | 1996

Resolution enhancement by data inversion techniques

Christine De Mol

This chapter describes resolution enhancement by data-inversion techniques. The development of micro-informatics and computer-assisted devices has deeply modified the classical concept of the resolving power of an optical instrument. In modern instruments, numerical algorithms can be implemented to process—that is, invert—the recorded data to obtain estimates of the probed object with enhanced resolution. The resolution limits then arise as practical limitations due to the noise amplification inherent in all inversion procedures. The chapter illustrates how such limits can be assessed and the circumstances under which a significant amount of super-resolution is achievable. Moreover, following the development of Fourier methods in optics, it has become usual to characterize the resolution of an optical system in terms of its bandwidth. For an object of finite extent, the Fourier transform of the object is an entire analytic function and hence could, in principle, be uniquely recovered from the piece of its spectrum transmitted by the optical system. Finally, the chapter illustrates how to assess the amount of achievable super-resolution when the object is uniformly illuminated over some finite slit. Before analyzing the conditions under which super-resolution can be achieved, the chapter briefly outlines the required framework, describing how a mathematical model is defined for the imaging process and how the associated inverse problem is solved.

Collaboration

An overview of Christine De Mol's collaborations and top co-authors.

Top Co-Authors

Michel Defrise
Vrije Universiteit Brussel

F. Gori
Sapienza University of Rome

G. Guattari
Sapienza University of Rome

Giovanni Alberto Viano
Istituto Nazionale di Fisica Nucleare