
Publications


Featured research published by Pedro Delicado.


Computational Statistics & Data Analysis | 2011

Dimensionality reduction when data are density functions

Pedro Delicado

Functional Data Analysis deals with samples where a whole function is observed for each individual. A relevant case of FDA is when the observed functions are density functions. Among the particular characteristics of density functions, the key one exploited here is that they are an example of infinite-dimensional compositional data (parts of some whole which carry only relative information). Several dimensionality reduction methods for this particular type of data are compared: functional principal components analysis, with or without a previous data transformation, and multidimensional scaling for different inter-density distances, one of them taking into account the compositional nature of density functions. The emphasis is on the steps previous and posterior to the application of a particular dimensionality reduction method: care must be taken in choosing the right density function transformation and/or the appropriate distance between densities before performing dimensionality reduction; subsequently, the graphical representation of the dimensionality reduction results must take into account that the observed objects are density functions. The different methods are applied to artificial and real data (population pyramids for 223 countries in the year 2000). As a global conclusion, the use of multidimensional scaling based on the compositional distance is recommended.
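The compositional route the abstract recommends can be sketched in a few lines: transform each discretized density with a centered log-ratio (so Euclidean distance becomes an Aitchison-type compositional distance) and then run classical multidimensional scaling on the pairwise distances. This is a minimal numpy illustration on toy Dirichlet-generated densities, not the paper's implementation; the helper names `clr` and `classical_mds` are assumptions for the example.

```python
import numpy as np

def clr(p, eps=1e-12):
    """Centered log-ratio transform of a discretized density (compositional data)."""
    p = np.asarray(p, float) + eps          # guard against zero cells
    logp = np.log(p)
    return logp - logp.mean()

def classical_mds(D, k=2):
    """Classical (metric) MDS from a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]      # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy example: six discretized densities on a common 50-point grid
rng = np.random.default_rng(0)
dens = rng.dirichlet(np.ones(50), size=6)   # each row sums to 1
Z = np.array([clr(d) for d in dens])        # compositional -> Euclidean geometry
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)  # Aitchison distances
coords = classical_mds(D, k=2)              # 2-D configuration for plotting
```

The clr step is what makes the distance respect the relative-information nature of densities; plain Euclidean MDS on the raw densities would ignore it.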


Computational Statistics & Data Analysis | 2010

Distance-based local linear regression for functional predictors

Eva Boj; Pedro Delicado; Josep Fortiana

The problem of nonparametrically predicting a scalar response variable from a functional predictor is considered. A sample of pairs (functional predictor and response) is observed. When predicting the response for a new functional predictor value, a semi-metric is used to compute the distances between the new and the previously observed functional predictors. Then each pair in the original sample is weighted according to a decreasing function of these distances. A Weighted (Linear) Distance-Based Regression is fitted, where the weights are as above and the distances are given by a possibly different semi-metric. This approach can be extended to nonparametric predictions from other kinds of explanatory variables (e.g., data of mixed type) in a natural way.
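The weighting scheme described above can be sketched with a kernel-weighted prediction: compute semi-metric distances from the new curve to each observed curve, turn them into decreasing weights, and combine the responses. For brevity this replaces the paper's weighted distance-based linear fit with a weighted mean; the L2 semi-metric, the Gaussian weight function, the bandwidth `h`, and the toy sine curves are all assumptions for the example.

```python
import numpy as np

def l2_semimetric(x, y, t):
    """L2 distance between two curves discretized on a common grid t."""
    dt = t[1] - t[0]
    return np.sqrt(np.sum((x - y) ** 2) * dt)

def kernel_predict(X_curves, y, x_new, t, h=1.0):
    """Weight each observed response by a decreasing (Gaussian) function of the
    semi-metric distance between its curve and the new curve, then average."""
    d = np.array([l2_semimetric(x, x_new, t) for x in X_curves])
    w = np.exp(-0.5 * (d / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Toy functional sample: curves a*sin(t) with the amplitude a as the response
t = np.linspace(0, np.pi, 100)
amps = np.array([0.5, 1.0, 1.5, 2.0])
X = np.array([a * np.sin(t) for a in amps])
pred = kernel_predict(X, amps, 1.2 * np.sin(t), t, h=0.5)  # close to 1.2
```

Swapping `l2_semimetric` for another semi-metric (e.g. distances between derivative curves) changes the method without touching the weighting code, which mirrors the two-semi-metric flexibility the abstract describes.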


Computational Statistics & Data Analysis | 2008

A small sample comparison of maximum likelihood, moments and L-moments methods for the asymmetric exponential power distribution

Pedro Delicado; M.N. Goria

Three methods of estimation, namely maximum likelihood, moments and L-moments, are considered when data come from an asymmetric exponential power distribution. This is a very flexible four-parameter family exhibiting a variety of tail and shape behaviours. The analytical expressions of the first four L-moments of these distributions are derived, allowing for the use of L-moments estimators. A simulation study compares the three estimation methods in small samples.
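L-moments estimation matches the first sample L-moments to their analytical expressions. The sample side is standard: compute probability-weighted moments from the order statistics and combine them linearly (Hosking's unbiased estimators). The sketch below computes the first four sample L-moments; the uniform toy data is illustrative only, not the asymmetric exponential power family of the paper.

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                                  # location
    l2 = 2 * b1 - b0                         # scale
    l3 = 6 * b2 - 6 * b1 + b0                # ~ skewness (unscaled)
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0    # ~ kurtosis (unscaled)
    return l1, l2, l3, l4

# Sanity check on Uniform(0,1): population L-moments are 1/2, 1/6, 0, 0
rng = np.random.default_rng(0)
l1, l2, l3, l4 = sample_lmoments(rng.uniform(size=100_000))
```

An L-moments estimator then solves the four equations matching these sample values to the family's analytical L-moments in the four parameters.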


Statistics and Computing | 2009

Measuring non-linear dependence for two random variables distributed along a curve

Pedro Delicado; Marcelo Smrekar

We propose new dependence measures for two real random variables not necessarily linearly related. Covariance and linear correlation are expressed in terms of principal components and are generalized for variables distributed along a curve. Properties of these measures are discussed. The new measures are estimated using principal curves and are computed for simulated and real data sets. Finally, we present several statistical applications for the new dependence measures.


Journal of Regulatory Economics | 2002

Desperately Seeking Theta's: Estimating the Distribution of Consumers under Increasing Block Rates

Fidel Castro-Rodriguez; José-María Da-Rocha; Pedro Delicado

This paper shows that increasing block rate pricing schedules usually applied by water utilities can reduce the efficiency and equity levels. To do this, we first present a two step method to estimate the demand and to recover the distribution of consumer tastes when increasing block rate pricing is used. We show that in this case the tariff induces a pooling equilibrium and customers with different taste parameters will be observed to choose the same consumption level. Second, we show that a two-part tariff that neither reduces the revenue for the firm nor increases the aggregate level of water consumption increases the welfare and equity levels in relation to an increasing block rates schedule.


Investigative Radiology | 1999

Validation procedures in radiologic diagnostic models. Neural network and logistic regression.

Estanislao Arana; Pedro Delicado; Luis Martí-Bonmatí

OBJECTIVE To compare the performance of two predictive radiologic models, logistic regression (LR) and neural network (NN), with five different resampling methods. METHODS One hundred sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for the LR and NN models. Both models were developed with cross-validation, leave-one-out, and three different bootstrap algorithms. The final results of each model were compared using the error rate and the area under the receiver operating characteristic curve (Az). RESULTS The NN obtained statistically higher Az values than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between the LR and NN rules. CONCLUSIONS The NN classifier performs better than the one based on LR. This advantage is well detected by three-fold cross-validation but remains unnoticed when leave-one-out or bootstrap algorithms are used.
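The cross-validation half of this comparison can be sketched with a plain logistic regression scored by k-fold error rate; running the same loop with a neural network gives the paired estimates the paper compares. The NN side, the ROC/Az analysis, and the real clinical data are omitted here; the synthetic two-class data and all function names are assumptions for the example.

```python
import numpy as np

def fit_logreg(X, y, lr=0.1, steps=500):
    """Logistic regression by plain gradient descent (intercept included)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)    # gradient of the mean log-loss
    return w

def predict_logreg(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (Xb @ w > 0).astype(int)

def kfold_error(X, y, k=3, seed=0):
    """k-fold cross-validated error rate for the logistic rule."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)      # everything outside the held-out fold
        w = fit_logreg(X[train], y[train])
        errs.append(np.mean(predict_logreg(w, X[fold]) != y[fold]))
    return float(np.mean(errs))

# Synthetic two-class data with well-separated means
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(2.5, 1, (60, 2))])
y = np.repeat([0, 1], 60)
err = kfold_error(X, y, k=3)                 # three-fold, as in the abstract
```

Leave-one-out is the special case k = n, which is one way to see why its error estimates are noisier for model comparison than three-fold splits.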


Annals of the Institute of Statistical Mathematics | 1999

Goodness of Fit Tests in Random Coefficient Regression Models

Pedro Delicado; Juan Romo

Random coefficient regressions have been applied in a wide range of fields, from biology to economics, and constitute a common frame for several important statistical models. A nonparametric approach to inference in random coefficient models was initiated by Beran and Hall. In this paper we introduce and study goodness of fit tests for the coefficient distributions; their asymptotic behavior under the null hypothesis is obtained. We also propose bootstrap resampling strategies to approach these distributions and prove their asymptotic validity using results by Giné and Zinn on bootstrap empirical processes. A simulation study illustrates the properties of these tests.


Protein Journal | 2013

A dynamic model of the proteins that form the initial iron-sulfur cluster biogenesis machinery in yeast mitochondria.

Isaac Amela; Pedro Delicado; Antonio Gómez; Enrique Querol; Juan Cedano

The assembly of iron-sulfur clusters (ISCs) in eukaryotes involves the protein Frataxin. Deficits in this protein have been associated with iron accumulation inside the mitochondria and impaired ISC biogenesis, as it is postulated to act as the iron donor for ISC assembly in this organelle. A pronounced lack of Frataxin causes Friedreich's Ataxia, a human neurodegenerative and hereditary disease mainly affecting the equilibrium, coordination, muscles and heart. Moreover, it is the most common autosomal recessive ataxia. High similarities between the human and yeast molecular mechanisms that involve Frataxin have been suggested, making yeast a good model to study the process. In yeast, the protein complex that forms the central assembly platform for the initial step of ISC biogenesis is composed of the yeast frataxin homolog, Nfs1–Isd11 and Isu. In general, it is commonly accepted that protein function involves interaction with other protein partners, but in this case not enough is known about the structure of the protein complex and, therefore, how exactly it functions. The objective of this work is to model the protein complex in order to gain insight into the structural details that underlie its biological function. To achieve this goal, several bioinformatics tools, modeling techniques and protein docking programs have been used. As a result, the structure of the protein complex and the dynamic behavior of its components, along with that of the iron and sulfur atoms required for ISC assembly, have been modeled. This model will help to better understand the function and molecular properties of Frataxin, as well as those of its ISC assembly protein partners.


Connection Science | 2009

Analysing musical performance through functional data analysis: rhythmic structure in Schumann's Träumerei

Josué Almansa; Pedro Delicado

Functional data analysis (FDA) is a relatively new branch of statistics devoted to describing and modelling data that are complete functions. Many relevant aspects of musical performance and perception can be understood and quantified as dynamic processes evolving as functions of time. In this paper, we show that FDA is a statistical methodology well suited for research into the field of quantitative musical performance analysis. To demonstrate this suitability, we consider tempo data for 28 performances of Schumann's Träumerei and analyse them by means of functional principal component analysis (one of the most powerful descriptive tools included in FDA). Specifically, we investigate the commonalities and differences between different performances regarding (expressive) timing, and we cluster similar performances together. We conclude that musical data considered as functional data reveal performance structures that might otherwise go unnoticed.
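When the tempo curves are discretized to a common grid, functional PCA reduces to an SVD of the centered data matrix: the right singular vectors are the principal component functions and the scaled left singular vectors are the per-performance scores used for clustering. The sketch below uses simulated "tempo curves" (a shared shape with varying amplitude plus noise); the Träumerei data and the helper name `fpca` are not from the paper.

```python
import numpy as np

def fpca(curves, k=2):
    """Functional PCA for curves on a common grid via SVD of the centered
    data matrix (rows = performances, columns = time points)."""
    mean = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    scores = U[:, :k] * s[:k]                # PC scores per performance
    harmonics = Vt[:k]                       # principal component functions
    var_explained = s[:k] ** 2 / np.sum(s ** 2)
    return mean, harmonics, scores, var_explained

# Simulated sample of 28 curves: one dominant mode of variation plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
base = np.sin(2 * np.pi * t)
amps = rng.normal(1.0, 0.3, size=28)
curves = amps[:, None] * base[None, :] + rng.normal(0, 0.05, (28, 200))
mean, harmonics, scores, ve = fpca(curves, k=2)
```

Plotting `scores[:, 0]` against `scores[:, 1]` is the usual map on which similar performances cluster together.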


Computational Statistics & Data Analysis | 2010

Confidence intervals for median survival time with recurrent event data

Juan R. González; Edsel A. Peña; Pedro Delicado

Several methods of constructing confidence intervals for the median survival time with recurrent event data are developed. One of them is based on asymptotic variances estimated using some transformations. Others are based on bootstrap techniques. Two types of recurrent event models are considered: the first one is a model where the inter-event times are independent and identically distributed, and the second one is a model where the inter-event times are associated, with the association arising from a gamma frailty model. Bootstrap and asymptotic confidence intervals are studied through simulation. These methods are applied and compared, using an available R package, on two real data sets arising in biomedical and public health settings. The first example comes from a study concerning small bowel motility, where an independent model may be assumed. The second example involves hospital readmissions in patients diagnosed with colorectal cancer; in this example the inter-occurrence times are correlated.
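For the i.i.d. inter-event-time model, the bootstrap route amounts to resampling the observed times, recomputing the median, and taking percentile quantiles; the gamma-frailty case would instead resample whole subjects to preserve the within-subject association. This is a generic percentile-bootstrap sketch on exponential toy data, not the authors' R package; the function name and all parameters are assumptions.

```python
import numpy as np

def bootstrap_median_ci(times, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the median inter-event
    time (i.i.d. case; correlated data needs block resampling of subjects)."""
    rng = np.random.default_rng(seed)
    meds = np.array([np.median(rng.choice(times, size=len(times), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(meds, alpha / 2), np.quantile(meds, 1 - alpha / 2)

# Toy inter-event times: Exponential(scale=10), true median = 10*ln 2 ~ 6.93
rng = np.random.default_rng(42)
times = rng.exponential(scale=10.0, size=200)
lo, hi = bootstrap_median_ci(times)
```

The percentile interval is the simplest bootstrap CI; the variance-transformation intervals the abstract mentions are an alternative when an asymptotic variance estimate is available.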

Collaboration


Pedro Delicado's top co-authors and their affiliations.

Top Co-Authors

Ramón Giraldo (National University of Colombia)
Enrique Querol (Autonomous University of Barcelona)
Eva Boj (University of Barcelona)
Juan Cedano (Autonomous University of Barcelona)
Isaac Amela (Autonomous University of Barcelona)
Adrià Caballé (Polytechnic University of Catalonia)
Ana Justel (Autonomous University of Madrid)
Antonio Gómez (Autonomous University of Barcelona)