Publications


Featured research published by Eve Leconte.


Papers in Regional Science | 2003

Explaining the pattern of regional unemployment: The case of the Midi-Pyrénées region

Yves Aragon; Dominique Haughton; Jonathan Haughton; Eve Leconte; Eric Malin; Anne Ruiz-Gazen; Christine Thomas-Agnan

Abstract. Unemployment rates vary widely at the sub-regional level. We seek to explain why such variation occurs, using data for 174 districts in the Midi-Pyrénées region of France for 1990–1991. A set of explanatory variables is derived from theory and the voluminous literature. The best model includes a correction for spatially autocorrelated errors. Unemployment rates are higher in urban areas and where per capita income is higher, consistent with the view that unemployment differences largely reflect variations in “amenities.” Together with the lack of evidence of housing market rigidities, these findings suggest that subregional variations in unemployment are not mainly the result of labor market disequilibrium.
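The abstract does not specify the spatial model; a standard specification for "a correction for spatially autocorrelated errors" (an assumption on our part, not stated in the paper) is the spatial error model

\[ y = X\beta + u, \qquad u = \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n), \]

where \(W\) is a spatial weights matrix over the 174 districts and \(\lambda\) measures the strength of the error autocorrelation.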


Lifetime Data Analysis | 2002

Smooth Conditional Distribution Function and Quantiles under Random Censorship

Eve Leconte; Sandrine Poiraud-Casanova; Christine Thomas-Agnan

We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional α-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean squared error performance than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
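For reference, the generalized Kaplan-Meier (Beran) estimator of the conditional survival function, which the paper uses as a benchmark, can be written (in notation of our choosing, not the paper's) as

\[ \hat{S}(t \mid x) = \prod_{i:\ Y_i \le t,\ \delta_i = 1} \left( 1 - \frac{W_i(x)}{\sum_{j:\ Y_j \ge Y_i} W_j(x)} \right), \]

where \(Y_i\) is the observed, possibly censored response, \(\delta_i\) the censoring indicator, and \(W_i(x)\) are kernel weights (e.g. Nadaraya-Watson) in the covariate; the conditional distribution function is then \(\hat{F}(t \mid x) = 1 - \hat{S}(t \mid x)\).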


2015 IEEE 10th International Symposium on Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED) | 2015

Variable importance assessment in lifespan models of insulation materials: A comparative study

Farah Salameh; Antoine Picot; Marie Chabert; Eve Leconte; Anne Ruiz-Gazen; Pascal Maussion

This paper presents and compares different methods for evaluating the relative importance of variables involved in insulation lifespan models. Parametric and non-parametric models are derived from accelerated aging tests on twisted pairs covered with an insulating varnish under different stress constraints (voltage, frequency and temperature). Parametric models establish a simple stress-lifespan relationship, and variable importance can be evaluated from the estimated parameters. As an alternative approach, non-parametric models explain the stress-lifespan relationship by means of, for instance, regression trees or random forests (RF). Regression trees naturally provide a hierarchy between the variables; however, they are highly dependent on the training set. This paper shows that RF provide a more robust model while allowing a quantitative assessment of variable importance. Comparisons of the different models are performed on different training and test sets obtained through experiments.
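As an illustration of the random-forest side of this comparison, the sketch below computes permutation-based variable importance on synthetic data. The stress-factor names follow the abstract, but the data-generating process, value ranges, and library choice (scikit-learn) are ours, not the paper's.

```python
# Minimal sketch (not the paper's code): permutation-based variable
# importance with a random forest on synthetic "aging test" data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1, 3, n),     # voltage  (illustrative range)
    rng.uniform(5, 20, n),    # frequency
    rng.uniform(20, 180, n),  # temperature
])
# Hypothetical log-lifespan: decreasing in each stress factor, plus noise.
y = 10 - 1.5 * X[:, 0] - 0.1 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.5, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=30, random_state=0)
for name, m in zip(["voltage", "frequency", "temperature"], imp.importances_mean):
    print(f"{name}: {m:.3f}")
```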


Statistical Methods in Medical Research | 2016

Optimal scheduling of post-therapeutic follow-up of patients treated for cancer for early detection of relapses.

Serge M.A. Somda; Eve Leconte; Jean-Marie Boher; Bernard Asselain; Andrew Kramar; Thomas Filleron

Post-therapeutic surveillance is an important component of cancer care. However, there are still no evidence-based strategies for scheduling patients' follow-up examinations. Our approach is based on modeling the probability of the onset of relapse at an early asymptomatic or preclinical stage and its transition to a clinical stage. To that end, we consider a homogeneous multistate Markov model that includes the natural history of relapse. The model also handles the different types of possible relapses separately. The optimal schedule is provided by the visit calendar that maximizes a utility function. The methodology has been applied to laryngeal cancer. The resulting follow-up strategies proved more efficient than those proposed by different scientific societies.
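As a toy illustration of the homogeneous multistate Markov machinery mentioned above (the state structure and intensity values below are ours, not the paper's), transition probabilities over a horizon t follow from the matrix exponential of the transition intensity matrix Q:

```python
# Illustrative sketch only: transition probabilities of a homogeneous
# continuous-time Markov chain with three states,
# 0 = remission, 1 = preclinical (asymptomatic) relapse, 2 = clinical relapse.
# The intensity values are made up for demonstration.
import numpy as np
from scipy.linalg import expm

Q = np.array([
    [-0.10,  0.10, 0.00],   # remission -> preclinical relapse
    [ 0.00, -0.50, 0.50],   # preclinical -> clinical relapse
    [ 0.00,  0.00, 0.00],   # clinical relapse treated as absorbing here
])

for t in (0.5, 1.0, 2.0):    # years
    P = expm(t * Q)          # P[i, j] = probability of being in state j at time t from state i
    print(f"t={t}: P(preclinical at t | remission at 0) = {P[0, 1]:.3f}")
```

A scheduling rule can then score candidate visit calendars by a utility built from such probabilities and keep the calendar with the highest value.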


Medical Decision Making | 2014

Determining the Length of Posttherapeutic Follow-up for Cancer Patients Using Competing Risks Modeling

Serge M.A. Somda; Eve Leconte; Andrew Kramar; Nicolas Penel; Christine Chevreau; Martine Delannes; Maria Rios; Thomas Filleron

Background/Objective. After a curative treatment for cancer, patients enter into a posttherapeutic surveillance phase. This phase aims to detect relapses as soon as possible in order to improve the outcome. Using a parametric mixture cure model, Mould and others derived a simple formula predicting how long early-stage breast cancer patients should be followed after treatment. However, patients in the posttherapeutic surveillance phase are at risk of different event types, with different responses according to their prognostic factors and different probabilities of being cured. This paper presents an adaptation of the method proposed by Mould and others that takes competing risks into account. Our loss function estimates, when follow-up is stopped at a given time, the proportion of patients who will fail after this time and who could have been treated successfully. Method. We use the direct approach for cumulative incidence modeling in the presence of competing risks, with an improper Gompertz probability distribution as proposed by Jeong and Fine. Prognostic factors can be taken into account, leading to a proportional hazards model. In a second step, the estimates of the Gompertz model are combined with the probability for a patient to be treated successfully in case of relapse, for each event type. The method is applied to 2 examples: a fictitious numerical example and a real data set on soft tissue sarcoma. Results and Conclusion. The model presented is a good tool for decision making to determine the total length of posttherapeutic surveillance. It can be applied to all cancers regardless of localization.
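In the usual notation (ours, not the article's), the improper Gompertz cumulative incidence used in Jeong and Fine's direct approach for event type k is

\[ F_k(t) = 1 - \exp\!\left\{ \frac{\alpha_k}{\beta_k}\left(1 - e^{\beta_k t}\right) \right\}, \qquad \alpha_k > 0, \]

which for \(\beta_k < 0\) plateaus at \(F_k(\infty) = 1 - e^{\alpha_k/\beta_k} < 1\), so that a fraction of patients never experiences event k; prognostic factors enter through a proportional hazards structure as described above.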


Computers in Biology and Medicine | 2018

Designing phase II clinical trials to target subgroup of interest in a heterogeneous population: A case study using an R package

Bastien Cabarrou; Patrick Sfumato; Eve Leconte; Jean-Marie Boher; Thomas Filleron

Phase II trials that evaluate targeted therapies based on a biomarker must be well designed in order to assess both the anti-tumor activity of the treatment and the clinical utility of the biomarker. Classical phase II designs do not deal with this molecular heterogeneity and can lead to an erroneous conclusion in the whole population, whereas a subgroup of patients may well benefit from the new therapy. Moreover, the target population to be evaluated in a phase III trial may be incorrectly specified. Alternative approaches proposed in the literature make it possible to include two subgroups, defined by biomarker status (negative/positive), in the same study. Jones, Parashar and Tournoux et al. propose different stratified adaptive two-stage designs to identify, at the end of the first or second stage, a subgroup of interest in a heterogeneous population that could benefit from the experimental treatment. Nevertheless, these designs are rarely used in oncology research. After introducing these stratified adaptive designs, we present an R package (ph2hetero) implementing these methods. A case study is provided to illustrate both the designs and the use of the R package. These stratified adaptive designs provide a useful alternative to classical two-stage designs and may also provide options in contexts other than biomarker studies.
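The sketch below is not the stratified adaptive designs of Jones, Parashar, or Tournoux, and it is not code from ph2hetero; it only illustrates the kind of binomial calculation on which single-arm two-stage designs are built, with purely illustrative design parameters.

```python
# Illustrative sketch only: operating characteristics of a classical
# single-arm two-stage (Simon-type) design, NOT the stratified adaptive
# designs or the ph2hetero package discussed in the paper.
from scipy.stats import binom

def two_stage_reject_prob(p, n1, r1, n, r):
    """P(declare the treatment active | true response rate p) for a design
    that stops at stage 1 if responses <= r1 out of n1, and concludes
    activity at the end if total responses > r out of n."""
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):        # outcomes that continue to stage 2
        needed = r + 1 - x1                 # extra responses needed in stage 2
        p2 = 1.0 if needed <= 0 else binom.sf(needed - 1, n - n1, p)
        prob += binom.pmf(x1, n1, p) * p2
    return prob

# Example design (illustrative numbers only): n1=15, r1=3, n=43, r=12
print("P(conclude activity | p=0.20):", round(two_stage_reject_prob(0.20, 15, 3, 43, 12), 3))
print("P(conclude activity | p=0.40):", round(two_stage_reject_prob(0.40, 15, 3, 43, 12), 3))
```

A stratified adaptive design applies this type of calculation per biomarker subgroup, with decision rules linking the strata.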


Computers in Biology and Medicine | 2018

Focus on an infrequently used quantity in the context of competing risks: The conditional probability function

Bastien Cabarrou; Florence Dalenc; Eve Leconte; Jean-Marie Boher; Thomas Filleron

In clinical studies of hematologic and oncologic diseases, the outcomes of interest are generally composite time-to-event endpoints, usually defined by the occurrence of different event types. Nonetheless, clinicians are often interested in studying only one event type, which leads to a competing risks situation. In this context, Pepe and Mori presented a quantity directly derived from the cumulative incidence: the conditional probability. This function gives the probability that a given event occurs by a given time, conditionally on not having had a competing event by that time. The objective of this paper is to present this conditional probability function and to compare its use to the cumulative incidence on different data sets. Different scenarios highlight the influence of the competing event on the interpretation of the conditional probability. The conditional probability needs to be interpreted jointly with the cumulative incidence. This quantity can be of particular interest when the risk of the competing event is large and strongly precludes the event of interest, in which case it provides useful additional information.
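With two competing event types and the usual notation (ours), the Pepe and Mori conditional probability for the event of interest (type 1) is

\[ \mathrm{CP}_1(t) = \Pr\big(T \le t,\ \text{cause}=1 \ \big|\ \text{no type-2 event by } t\big) = \frac{F_1(t)}{1 - F_2(t)}, \]

where \(F_1\) and \(F_2\) are the cumulative incidence functions of the event of interest and of the competing event, respectively.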


Computers in Biology and Medicine | 2017

Comparison of variable selection methods for high-dimensional survival data with competing events

Julia Gilhodes; Christophe Zemmour; Soufiane Ajana; Alejandra Martinez; Jean-Pierre Delord; Eve Leconte; Jean-Marie Boher; Thomas Filleron

BACKGROUND In the era of personalized medicine, it is essential to identify gene signatures for each event type in the context of competing risks in order to improve risk stratification and treatment strategy. Until recently, little attention was paid to the performance of high-dimensional selection methods for deriving molecular signatures in this context. In this paper, we investigate the performance of two selection methods developed in the framework of high-dimensional data and competing risks: random survival forest and a boosting approach for fitting proportional subdistribution hazards models. METHODS Using data from bladder cancer patients (GSE5479) and simulated datasets, the stability and prognostic performance of the two methods were evaluated using a resampling strategy. Each data set was split 100 times into training and validation sets. Molecular signatures were developed on the training sets by the two selection methods and then applied to the corresponding validation sets. RESULTS Random survival forest and the boosting approach have comparable performance for the prediction of survival data, with few selected genes in common. Nevertheless, many different sets of genes are identified by the resampling approach, with each gene occurring in only a small fraction of the signatures. Also, the smaller the training sample size, the lower the stability of the signatures. CONCLUSION Random survival forest and the boosting approach give good predictive performance, but the gene signatures are very unstable. Further work is needed to propose adequate strategies for the analysis of high-dimensional data in the context of competing risks.
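The resampling strategy described above can be sketched as follows. Here `select_genes` is a hypothetical stand-in for the random survival forest or boosting selection, and a numeric outcome replaces the competing-risks survival outcome for brevity; none of this is the paper's code.

```python
# Minimal sketch (assumptions, not the paper's code): stability of a
# gene-selection procedure over repeated training/validation splits.
import numpy as np
from collections import Counter
from sklearn.model_selection import ShuffleSplit

def select_genes(X_train, y_train, n_keep=20):
    # Hypothetical placeholder: rank genes by absolute correlation with the
    # outcome and keep the top n_keep.
    scores = np.abs([np.corrcoef(X_train[:, j], y_train)[0, 1]
                     for j in range(X_train.shape[1])])
    return set(np.argsort(scores)[-n_keep:])

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1000))              # 120 samples, 1000 genes (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(size=120)

counts = Counter()
splitter = ShuffleSplit(n_splits=100, train_size=0.7, random_state=0)
for train_idx, _ in splitter.split(X):
    counts.update(select_genes(X[train_idx], y[train_idx]))

# Selection frequency of each gene across the 100 signatures:
freq = {int(g): c / 100 for g, c in counts.most_common(10)}
print(freq)
```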


Clinical Genitourinary Cancer | 2017

A Statistical Approach to Determine the Optimal Duration of Post-Treatment Follow-Up: Application to Metastatic Nonseminomatous Germ Cell Tumors.

Serge M.A. Somda; Stéphane Culine; Christine Chevreau; Karim Fizazi; Eve Leconte; Andrew Kramar; Thomas Filleron

Micro-Abstract: As the number of patients in cancer remission increases every year, an economically attractive option is to reduce the duration of follow-up according to prognostic factors. In the present study we propose a statistical method to define an optimal duration of follow-up, for the detection of recurrences, for patients in remission after treatment for cancer. Background: The objective of this study was to present a statistical method to define an optimal duration of follow-up, for the detection of recurrences, for patients in remission after treatment for cancer. Patients and Methods: Surveillance duration was estimated using the 2-step approach proposed by Mould et al. The relapse-free interval was modeled using the parametric cure model proposed by Boag. The optimal length of follow-up was then estimated as the minimal elapsed time after which the probability that a patient relapses and can still be treated successfully falls below a given threshold value. The method is applied to 2 real data sets of patients treated for metastatic nonseminomatous germ cell tumors: T93BP and T93MP. Results: For T93BP, the cure rate was estimated at 91.3%, and the proportions of patients who relapsed after 3 and 5 years were estimated at 0.5% and 0.2%. With a probability of success of salvage treatment equal to 80% and 50%, the numbers of delayed cases after 5 years were 2 and 1, respectively. For T93MP, the proportions of patients who relapsed after 5 and 10 years were estimated at 5.2% and 2.6%. Considering a probability of success of salvage treatment equal to 20%, the numbers of delayed cases after 5 and 10 years were 10 and 5. Conclusion: Using this methodology, the duration of post-therapeutic follow-up can be tailored according to an objective criterion: the number of patients who relapse after the end of follow-up and who could have been treated successfully in case of early detection.
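In the standard notation of mixture cure models (ours, not the article's), the Boag model writes the relapse-free survival as

\[ S(t) = \pi + (1 - \pi)\, S_u(t), \]

where \(\pi\) is the cured fraction and \(S_u\) the survival function of the uncured patients; the follow-up length is then chosen as the smallest time after which the probability of a relapse that could still be salvaged falls below the chosen threshold.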


Journal of Official Statistics | 2005

Conditional ordering using nonparametric expectiles

Yves Aragon; Sandrine Casanova; Robert G. Chambers; Eve Leconte

Collaboration


Dive into Eve Leconte's collaborations.

Top Co-Authors

Yves Aragon

University of Toulouse
