
Publication


Featured research published by Raydonal Ospina.


Computational Statistics & Data Analysis | 2012

A general class of zero-or-one inflated beta regression models

Raydonal Ospina; Silvia L. P. Ferrari

This paper proposes a general class of regression models for continuous proportions when the data contain zeros or ones. The proposed class of models assumes that the response variable has a mixed continuous-discrete distribution with probability mass at zero or one. The beta distribution is used to describe the continuous component of the model, since its density has a wide range of different shapes depending on the values of the two parameters that index the distribution. We use a suitable parameterization of the beta law in terms of its mean and a precision parameter. The parameters of the mixture distribution are modeled as functions of regression parameters. We provide inference, diagnostic, and model selection tools for this class of models. A practical application that employs real data is presented.
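The mixed continuous-discrete distribution described above can be sketched numerically. In this illustrative Python sketch, `alpha` is the total probability mass at the boundary, `gamma` splits that mass between one and zero, and the continuous component uses the mean/precision parameterization of the beta law; the parameter names are ours, not necessarily the paper's notation.

```python
import math

def beta_pdf(y, mu, phi):
    """Beta density parameterized by its mean mu and a precision phi."""
    a, b = mu * phi, (1.0 - mu) * phi
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))

def zoib_density(y, alpha, gamma, mu, phi):
    """Mixed continuous-discrete density: probability mass alpha at the
    boundary (split by gamma between one and zero), and a beta density
    on the open interval (0, 1) for the continuous component."""
    if y == 0.0:
        return alpha * (1.0 - gamma)   # point mass at zero
    if y == 1.0:
        return alpha * gamma           # point mass at one
    return (1.0 - alpha) * beta_pdf(y, mu, phi)
```

In a regression setting, `alpha`, `gamma`, and `mu` would each be linked to covariates; here they are held fixed for clarity.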


Computational Statistics & Data Analysis | 2006

Improved point and interval estimation for a beta regression model

Raydonal Ospina; Francisco Cribari-Neto; Klaus L. P. Vasconcellos

In this paper we consider the beta regression model recently proposed by Ferrari and Cribari-Neto [2004. Beta regression for modeling rates and proportions. J. Appl. Statist. 31, 799-815], which is tailored to situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. We derive the second order biases of the maximum likelihood estimators and use them to define bias-adjusted estimators. As an alternative to the two analytically bias-corrected estimators discussed, we consider a bias correction mechanism based on the parametric bootstrap. The numerical evidence favors the bootstrap-based estimator and also one of the analytically corrected estimators. Several different strategies for interval estimation are also proposed. We present an empirical application.
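The parametric-bootstrap bias correction mentioned above can be sketched on a toy model. Here the MLE of an exponential rate (which is upward biased in small samples) stands in for the beta regression estimators; the adjustment `2*theta_hat - mean(bootstrap replicates)` is the standard bootstrap bias correction, used here as an assumption about the mechanism rather than a quote of the paper.

```python
import random
import statistics

def mle_rate(sample):
    """Maximum likelihood estimate of an exponential rate: 1 / sample mean."""
    return 1.0 / statistics.fmean(sample)

def bootstrap_bias_corrected(sample, n_boot=2000, seed=0):
    """Parametric bootstrap bias correction: simulate from the fitted
    model, estimate the bias of the MLE as mean(theta*) - theta_hat,
    and subtract it, giving 2*theta_hat - mean(theta*)."""
    rng = random.Random(seed)
    theta_hat = mle_rate(sample)
    n = len(sample)
    boot = [mle_rate([rng.expovariate(theta_hat) for _ in range(n)])
            for _ in range(n_boot)]
    return 2.0 * theta_hat - statistics.fmean(boot)
```

Since the rate MLE over-estimates in small samples, the corrected value pulls the estimate back toward the truth.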


IEEE Transactions on Reliability | 2014

Goodness-of-Fit Tests for the Birnbaum-Saunders Distribution With Censored Reliability Data

Michelli Barros; Víctor Leiva; Raydonal Ospina; Aline Tsuyuguchi

We propose goodness-of-fit tests for Birnbaum-Saunders distributions with type-II right censored data. Classical goodness-of-fit tests based on the empirical distribution function, such as the Anderson-Darling, Cramér-von Mises, and Kolmogorov-Smirnov tests, are adapted to censored data and evaluated by means of a simulation study. The resulting tests are applied to real-world censored reliability data.
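A minimal sketch of how an empirical-distribution test statistic can be adapted to type-II right censoring, where only the r smallest order statistics of a sample of size n are observed: the Kolmogorov-Smirnov sup-distance is restricted to the observed portion of the empirical distribution function. The exact adaptation used in the paper may differ; this is only the general idea.

```python
def ks_censored(order_stats, n, cdf):
    """Kolmogorov-Smirnov statistic under type-II right censoring:
    `order_stats` holds the r smallest order statistics of a sample of
    size n, and the sup-distance between the empirical and hypothesized
    CDFs is evaluated only over that observed portion."""
    d = 0.0
    for i, x in enumerate(order_stats, start=1):
        f = cdf(x)
        # compare the CDF against the EDF just before and after each jump
        d = max(d, abs(i / n - f), abs((i - 1) / n - f))
    return d
```

For a Birnbaum-Saunders null, `cdf` would be the fitted Birnbaum-Saunders distribution function; a uniform `cdf` suffices to exercise the statistic.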


Remote Sensing | 2017

Unassisted quantitative evaluation of despeckling filters

Luis Gómez Déniz; Raydonal Ospina; Alejandro C. Frery

SAR (Synthetic Aperture Radar) imaging plays a central role in Remote Sensing due to, among other important features, its ability to provide high-resolution, day-and-night, and almost weather-independent images. SAR images are affected by a granular contamination, speckle, that can be described by a multiplicative model. Many despeckling techniques have been proposed in the literature, as well as measures of the quality of the results they provide. Assuming the multiplicative model, the observed image Z is the product of two independent fields: the backscatter X and the speckle Y. The result of any speckle filter is X̂, an estimator of the backscatter X, based solely on the observed data Z. An ideal estimator would be one for which the ratio of the observed image to the filtered one, I = Z/X̂, is only speckle: a collection of independent identically distributed samples from Gamma variates. We then assess the quality of a filter by the closeness of I to the statistical properties of pure speckle. We analyze filters through the ratio image they produce with regard to first- and second-order statistics: the former check marginal properties, while the latter verify lack of structure. A new quantitative image-quality index is then defined and applied to state-of-the-art despeckling filters. This new measure provides results consistent with commonly used quality measures (equivalent number of looks, PSNR, MSSIM, β edge correlation, and preservation of the mean), and ranks the filters' results in agreement with their visual analysis. We conclude our study by showing that the proposed measure can be successfully used to optimize the (often many) parameters that define a speckle filter.
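The first-order part of such a check can be sketched directly from the ratio-image idea: under an ideal filter, I = Z/X̂ is pure speckle, which for an L-look intensity image has unit mean and variance 1/L. This toy Python version works on flat lists of pixels and omits the second-order (lack-of-structure) checks that the full index also requires.

```python
import statistics

def ratio_image(observed, filtered):
    """Pixelwise ratio of the observed image Z to the filtered image;
    under an ideal filter this should contain only speckle."""
    return [z / x for z, x in zip(observed, filtered)]

def first_order_residual(ratio, looks):
    """First-order check on the ratio image: L-look intensity speckle
    has unit mean and variance 1/L. Returns the absolute deviations of
    the sample mean and variance from those targets (smaller is better)."""
    m = statistics.fmean(ratio)
    v = statistics.pvariance(ratio)
    return abs(m - 1.0), abs(v - 1.0 / looks)
```

A real evaluation would compute these deviations over the whole ratio image and combine them with second-order statistics into a single quality index.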


PLOS ONE | 2016

Classification and Verification of Handwritten Signatures with Time Causal Information Theory Quantifiers

Osvaldo A. Rosso; Raydonal Ospina; Alejandro C. Frery

We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
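The Bandt and Pompe symbolization underlying those descriptors maps each short window of a series to the permutation that sorts it; the normalized Shannon entropy of the resulting pattern distribution is one of the features. A self-contained sketch, with the embedding delay fixed at 1 for simplicity:

```python
import math
from itertools import permutations

def permutation_entropy(series, order=3):
    """Normalized Shannon entropy of the Bandt-Pompe ordinal-pattern
    distribution: each window of `order` consecutive values is mapped
    to the permutation that sorts it, pattern frequencies are tallied,
    and the entropy is normalized by its maximum, log(order!)."""
    counts = {p: 0 for p in permutations(range(order))}
    n_windows = len(series) - order + 1
    for i in range(n_windows):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    probs = [c / n_windows for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))
```

A monotone series produces a single pattern and entropy 0; a series visiting all patterns equally often gives entropy 1. Applied to a signature, the series would be its x (or y) coordinate trace.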


IEEE Transactions on Geoscience and Remote Sensing | 2015

Interval Edge Estimation in SAR Images

Laercio Dias; Francisco Cribari-Neto; Raydonal Ospina

This paper considers interval estimation of the edge between two regions of a synthetic aperture radar (SAR) image that differ in texture. This is a difficult task because SAR images are contaminated with speckle noise. Different point estimation strategies under multiplicative noise are discussed in the literature. It is important to assess the quality of such point estimates and also to perform inference under a given confidence level. This can be achieved through interval parameter estimation. To that end, we propose bootstrap-based edge confidence intervals. The relative merits of the different inference strategies are compared using Monte Carlo simulation. The results show that interval edge estimation can be used to assess the accuracy of an edge point estimate. They also show that interval estimates can be quite accurate and that they can indicate the absence of an edge. To illustrate interval edge estimation, we also analyze a real data set.
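A generic percentile-bootstrap confidence interval, of the kind built here for the edge position, can be sketched as follows. The resampling scheme and the estimator are placeholders: the paper's bootstrap operates on image data with an edge-position estimator, while this sketch accepts any estimator over a one-dimensional sample.

```python
import random

def percentile_ci(estimate_fn, sample, level=0.95, n_boot=1000, seed=0):
    """Percentile bootstrap confidence interval for a point estimator:
    resample the data with replacement, re-estimate on each resample,
    and take empirical quantiles of the bootstrap replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(estimate_fn([sample[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((1 - level) / 2 * n_boot)]
    hi = reps[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```

An interval that spans (or nearly spans) the whole image strip is one way such an interval can signal the absence of a detectable edge.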


Frontiers in Psychology | 2018

Manipulating the alpha level cannot cure significance testing

David Trafimow; Valentin Amrhein; Corson N. Areshenkoff; Carlos Barrera-Causil; Eric J. Beh; Yusuf K. Bilgic; Roser Bono; Michael T. Bradley; William M. Briggs; Héctor A. Cepeda-Freyre; Sergio E. Chaigneau; Daniel R. Ciocca; Juan Carlos Correa; Denis Cousineau; Michiel R. de Boer; Subhra Sankar Dhar; Igor Dolgov; Juana Gómez-Benito; Marian Grendar; James W. Grice; Martin E. Guerrero-Gimenez; Andrés Gutiérrez; Tania B. Huedo-Medina; Klaus Jaffe; Armina Janyan; Ali Karimnezhad; Fränzi Korner-Nievergelt; Koji Kosugi; Martin Lachmair; Rubén Ledesma

We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is detrimental to new discoveries and the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of the statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else is not acceptable.


Educational and Psychological Measurement | 2017

Three Strategies for the Critical Use of Statistical Methods in Psychological Research

Guillermo Campitelli; Guillermo Macbeth; Raydonal Ospina; Fernando Marmolejo-Ramos

We present three strategies to replace the null hypothesis statistical significance testing approach in psychological research: (1) visual representation of cognitive processes and predictions, (2) visual representation of data distributions and choice of the appropriate distribution for analysis, and (3) model comparison. The three strategies have been proposed earlier, so we do not claim originality. Here we propose to combine the three strategies and use them not only as analytical and reporting tools but also to guide the design of research. The first strategy involves a visual representation of the cognitive processes involved in solving the task at hand in the form of a theory or model together with a representation of a pattern of predictions for each condition. The second approach is the GAMLSS approach, which consists of providing a visual representation of distributions to fit the data, and choosing the best distribution that fits the raw data for further analyses. The third strategy is the model comparison approach, which compares the model of the researcher with alternative models. We present a worked example in the field of reasoning, in which we follow the three strategies.
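The third strategy, model comparison, is often operationalized with an information criterion. A minimal sketch that ranks candidate fitted distributions by AIC; the specific criterion is our choice for illustration, since the paper discusses model comparison more broadly:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*logL. Lower is better,
    penalizing extra parameters to discourage overfitting."""
    return 2 * n_params - 2 * log_likelihood

def compare_models(fits):
    """Rank fitted models, given as (name, log-likelihood, #parameters)
    triples, by AIC from best to worst."""
    ranked = sorted(fits, key=lambda f: aic(f[1], f[2]))
    return [(name, aic(ll, k)) for name, ll, k in ranked]
```

In the GAMLSS workflow sketched in the abstract, each candidate response distribution would be fitted to the raw data and the triples fed to such a comparison.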


Proceeding Series of the Brazilian Society of Computational and Applied Mathematics | 2015

A case study of integer programming for automating the course timetable of the statistics department of the Universidade Federal de Pernambuco

Abel Borges; André Leite; Raydonal Ospina; Geiza Cristina da Silva

Timetable optimization is a problem of great interest, and many algorithms have been developed in recent years [1, 2, 3, 4], mainly for the particular case of undergraduate programs. This work describes an application of combinatorial optimization to automate the teaching timetable of the statistics department of the Universidade Federal de Pernambuco. We consider the semestral problem of assigning professors to courses according to their individual preferences while satisfying a series of constraints. The decision variables range over the following sets (with respective sizes): professor (40), course (50), day (5), shift (3), period (2), and room (15). The department offers courses of two kinds: internal and external. Internal courses belong to the undergraduate program and are the department's responsibility, so there is flexibility in choosing times and rooms. External courses can be of two types: those taught to other programs, with predefined times, where only the professor can be chosen; and those taught to the statistics undergraduate program by other departments, with predefined times to which no other course can be assigned. Professors' preferences over courses are elicited through an (ordinal) preference relation obtained from a questionnaire in which each professor selects a proper subset of the possible courses, of predefined cardinality much smaller than the total number of courses, and ranks that subset. The binary decision variable x[i, j, k, l, m, n] takes value 1 if professor i is assigned to course j on day k, shift l, period m, and room n, and value 0 otherwise. The following structures are needed:
- p[i, j] (parameter): ordinal utility of professor i for course j, representing the professor's preference over courses; only a proper subset of courses needs a nonzero value.
- y[i, j] (auxiliary variable): binary variable indicating whether professor i is assigned to course j.
- z[j, k] (auxiliary variable): binary variable indicating whether course j is taught on day k.
Regarding the professors, there are three cases: (i) professors who must be assigned to a single course, owing to administrative positions, graduate teaching, or other situations provided for in the legislation; (ii) professors who must be assigned to two courses (the lower bound for professors with research or extension activities); (iii) substitute professors on 20h and 40h contracts. The main constraints include: the number of courses per professor; every offered course must be assigned to a professor; professors are not assigned to extreme shifts; a course is not assigned to different shifts; every course occupies two periods; class meetings are spaced two or three days apart; external courses keep their fixed times; and times occupied by other courses are respected. Finally, the objective function should account for: (i) maximizing the course offering to students; (ii) minimizing the number of teaching days per professor (or maximizing it, if the professor prefers classes spread over the week); (iii) maximizing professors' preferences over courses.
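For tiny instances, the assignment core of the problem (professors to courses, maximizing ordinal preference, with zero marking an unlisted course) can be solved by exhaustive search. This toy sketch ignores days, shifts, periods, and rooms, which at the real scale (40 professors, 50 courses) require an integer-programming solver:

```python
from itertools import permutations

def best_assignment(pref):
    """Exhaustive search over one-to-one professor-to-course assignments
    maximizing total ordinal preference. pref[i][j] > 0 only for courses
    professor i listed, so assignments using an unlisted course are
    infeasible. A stand-in for the binary-variable integer program."""
    n = len(pref)
    best, best_val = None, -1
    for perm in permutations(range(n)):
        if all(pref[i][perm[i]] > 0 for i in range(n)):
            val = sum(pref[i][perm[i]] for i in range(n))
            if val > best_val:
                best, best_val = perm, val
    return best, best_val
```

Factorial search is fine for a handful of professors; the full formulation replaces it with the constrained maximization of the x[i, j, k, l, m, n] variables described above.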


Revista Brasileira De Economia | 2014

Replacing the Employer Payroll Contribution with a Tax on Revenue: Macroeconomic, Progressivity, and Income Distribution Effects in Brazil

Wilton Bernardino da Silva; Nelson Leitão Paes; Raydonal Ospina

At the national level, Provisional Measures 563/2012, 582/2012, 601/2012, and 612/2013 propose replacing the employer social security (INSS) payroll rate with a tax rate of 1% or 2% on gross revenue, a change applied to labor-intensive economic sectors. This study evaluates the economic impacts of these changes on aggregates, sectors, and households in the Brazilian economy. The results suggest that the effects of the substitution are due exclusively to the reduction in the tax burden embedded in the proposal, with no relevant impact on reducing the distortions of the Brazilian tax system.

Collaboration


Dive into Raydonal Ospina's collaboration.

Top Co-Authors

Alejandro C. Frery
Federal University of Alagoas

Francisco Cribari-Neto
Federal University of Pernambuco

Klaus L. P. Vasconcellos
Federal University of Pernambuco

Guillermo Macbeth
National Scientific and Technical Research Council

Nelson Leitão Paes
Federal University of Pernambuco

Wilton Bernardino da Silva
Federal University of Pernambuco