Akio Namba
Kobe University
Publications
Featured research published by Akio Namba.
Econometric Theory | 2002
Akio Namba
In this paper, we consider a linear regression model in which relevant regressors are omitted. We derive explicit formulae for the predictive mean squared errors (PMSEs) of the Stein-rule (SR) estimator, the positive-part Stein-rule (PSR) estimator, the minimum mean squared error (MMSE) estimator, and the adjusted minimum mean squared error (AMMSE) estimator. It is shown analytically that the PSR estimator dominates the SR estimator in terms of PMSE even when relevant regressors are omitted. Our numerical results also show that the PSR and AMMSE estimators have much smaller PMSEs than the ordinary least squares estimator in this situation.
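For reference, in this literature the SR estimator shrinks the OLS estimator b = (X'X)^{-1}X'y toward the origin. A minimal sketch in standard notation (assumed here, not quoted from the paper), with e = y - Xb the OLS residual vector:

    \hat{\beta}_{SR} = \Bigl(1 - \frac{a\,e'e}{b'X'Xb}\Bigr) b, \qquad
    \hat{\beta}_{PSR} = \max\Bigl\{0,\; 1 - \frac{a\,e'e}{b'X'Xb}\Bigr\} b,

where a >= 0 is a shrinkage constant; in the correctly specified model, dominance of SR over OLS under quadratic loss requires 0 < a <= 2(k - 2)/(n - k + 2) with k >= 3 regressors. Truncating the shrinkage factor at zero is what allows the PSR estimator to improve on the SR estimator.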
Communications in Statistics - Simulation and Computation | 2004
Akio Namba
In this paper, we consider testing a hypothesis using the empirical likelihood. To calculate the critical value of the test, two bootstrap methods are applied. Our simulation results indicate that the bootstrap methods improve the small-sample properties of the test.
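As a rough illustration of bootstrap calibration of an empirical likelihood (EL) test, here is a sketch for the simplest case, testing a scalar mean (the setup, function names, and tuning values are our assumptions; the paper's hypothesis and bootstrap schemes may differ):

    import numpy as np
    from scipy.optimize import brentq

    def el_stat(x, mu):
        """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen-type EL)."""
        z = x - mu
        if z.min() >= 0 or z.max() <= 0:
            return np.inf                      # mu lies outside the convex hull
        # The Lagrange multiplier lam must keep every implied weight positive.
        lo = (-1 + 1e-10) / z.max()
        hi = (-1 + 1e-10) / z.min()
        lam = brentq(lambda t: np.mean(z / (1 + t * z)), lo, hi)
        return 2.0 * np.sum(np.log1p(lam * z))

    def bootstrap_critical_value(x, alpha=0.05, B=999, seed=None):
        """Bootstrap critical value: resample, recentre at the sample mean."""
        rng = np.random.default_rng(seed)
        xbar = x.mean()
        stats = [el_stat(rng.choice(x, size=x.size, replace=True), xbar)
                 for _ in range(B)]
        return np.quantile([s for s in stats if np.isfinite(s)], 1 - alpha)

Recentring at the sample mean makes the null hypothesis hold in the bootstrap world, so the (1 - alpha) quantile of the resampled statistics can replace the asymptotic chi-squared critical value.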
Journal of Statistical Planning and Inference | 2000
Akio Namba
In this paper, we consider a heterogeneous pre-test estimator which combines the two-stage hierarchical information (2SHI) estimator and the Stein-rule (SR) estimator. This estimator is called the pre-test 2SHI (PT-2SHI) estimator. It is shown analytically that the PT-2SHI estimator dominates the SR estimator in terms of mean squared error (MSE) if the parameter values in the PT-2SHI estimator are chosen appropriately. Moreover, our numerical results show that the appropriate PT-2SHI estimator dominates the positive-part Stein-rule (PSR) estimator.
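Schematically, a pre-test estimator of this kind selects between its two components according to a preliminary test. A generic sketch (our notation; the paper's exact construction and critical value are not given in the abstract):

    \hat{\beta}_{PT} = I(F \le c)\,\hat{\beta}_{2SHI} + I(F > c)\,\hat{\beta}_{SR},

where F is the pre-test statistic, c the critical value, and I(.) the indicator function; the dominance result depends on choosing c and the 2SHI tuning parameters appropriately.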
Statistics & Probability Letters | 2003
Akio Namba
In this paper, we consider a regression model with omitted relevant regressors and a general family of shrinkage estimators of regression coefficients. We derive the formula for the predictive mean squared error (PMSE) of the estimators. It is shown analytically that the positive-part shrinkage estimator dominates the ordinary shrinkage estimator even when there are omitted relevant regressors. Also, as an example, our result is applied to the double k-class estimator.
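For concreteness, the double k-class estimator used as the example is commonly written as (a sketch from the general literature, with b the OLS estimator and e the OLS residual vector):

    \hat{\beta}(k_1, k_2) = \Bigl(1 - \frac{k_1\,e'e}{y'y - k_2\,e'e}\Bigr) b,

which reduces to the Stein-rule form when k_2 = 1, since y'y - e'e = b'X'Xb; the positive-part version truncates the factor in parentheses at zero.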
Journal of Statistical Computation and Simulation | 2010
Akio Namba; Kazuhiro Ohtani
In this paper, we consider a linear regression model and propose a pre-test ridge regression estimator which is obtained by incorporating a pre-test into a ridge regression estimator proposed by Huang [J.-C. Huang, Improving the estimation precision for a selected parameter in multiple regression analysis: An algebraic approach, Econ. Lett. 62 (1999), pp. 261–264]. We derive the exact formula for the risk of the estimator under the asymmetric LINEX loss function. Our numerical results show that the risk performance of the estimator is improved by incorporating the pre-test.
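The LINEX loss referred to here is the standard asymmetric loss of Varian and Zellner. For a scalar estimation error Delta (the paper's exact scaling may differ),

    L(\Delta) = \exp(a\Delta) - a\Delta - 1, \qquad a \neq 0,

which is approximately quadratic for small |a Delta| but penalizes errors whose sign matches that of a roughly exponentially, and errors of the opposite sign only roughly linearly.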
Journal of Statistical Computation and Simulation | 2003
Fikri Akdeniz; Akio Namba
In this paper, using the asymmetric LINEX loss function, we derive and numerically evaluate the risk function of a new feasible ridge regression estimator. We also examine the risk performance of this estimator under the LINEX loss function.
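For reference, the ordinary ridge regression estimator with ridge parameter k >= 0 is

    \hat{\beta}(k) = (X'X + kI)^{-1} X'y,

and a feasible version replaces k with a data-based estimate. One common choice in the ridge literature (whether it is the one adopted here cannot be read off the abstract) is the Hoerl-Kennard-Baldwin value k-hat = p s^2 / (b'b), where p is the number of regressors, s^2 the usual error-variance estimate, and b the OLS estimator.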
Communications in Statistics-theory and Methods | 2018
Akio Namba; Kazuhiro Ohtani
In this paper, we consider a regression model and propose estimators that are weighted averages of two of the following three estimators: the Stein-rule (SR), the minimum mean squared error (MMSE), and the adjusted minimum mean squared error (AMMSE) estimators. It is shown that one of the proposed estimators has smaller mean squared error (MSE) than the positive-part Stein-rule (PSR) estimator over a moderate region of the parameter space when the number of regression coefficients is small (i.e., three), and that its MSE performance is comparable to that of the PSR estimator even when the number of regression coefficients is not so small.
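Schematically, each proposed estimator has the weighted-average form (our notation; the data-dependent choice of weight is the paper's contribution):

    \hat{\beta}_{WA} = w\,\hat{\beta}_{(1)} + (1 - w)\,\hat{\beta}_{(2)}, \qquad 0 \le w \le 1,

with the pair (\hat{\beta}_{(1)}, \hat{\beta}_{(2)}) taken from the SR, MMSE, and AMMSE estimators.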
Journal of Statistical Computation and Simulation | 2018
Akio Namba; Haifeng Xu
In this paper, assuming that there exist omitted explanatory variables in the specified model, we derive the exact formula for the mean squared error (MSE) of a general family of shrinkage estimators for each individual regression coefficient. It is shown analytically that, when the concern is to estimate each individual regression coefficient, the positive-part shrinkage estimators have smaller MSE than the original shrinkage estimators under some conditions, even when relevant regressors are omitted. By numerical evaluations, we also illustrate the theorem for several specific cases: the positive-part shrinkage estimators have smaller MSE than the original shrinkage estimators over a wide region of the parameter space even when there exist omitted variables in the specified model.
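Two pieces of standard background make the setting concrete (our notation). If the fitted model omits relevant regressors Z from the true model y = X beta + Z gamma + epsilon, the OLS estimator b is biased, E[b] = beta + (X'X)^{-1}X'Z gamma; and "MSE for each individual coefficient" refers to the scalar risk

    \mathrm{MSE}(\hat{\beta}_i) = E\bigl[(\hat{\beta}_i - \beta_i)^2\bigr],

rather than the joint weighted risk used in much of the shrinkage literature.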
Journal of Statistical Computation and Simulation | 2018
Akio Namba; Haifeng Xu
Consider a linear regression model in which some relevant regressors are unobservable. In such a situation, the model is estimated either by using proxy variables as regressors or by simply omitting the relevant regressors. In this paper, we derive the explicit formula for the predictive mean squared error (PMSE) of a general family of shrinkage estimators of the regression coefficients. It is shown analytically that the positive-part shrinkage estimator dominates the ordinary shrinkage estimator even when proxy variables are used in place of the unobserved variables. Also, as an example, our result is applied to the double k-class estimator proposed by Ullah and Ullah (Double k-class estimators of coefficients in linear regression. Econometrica. 1978;46:705–722). Our numerical results show that the positive-part double k-class estimator with proxy variables has preferable PMSE performance.
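A minimal sketch of the proxy-variable setting (our notation; the abstract does not spell out the measurement structure): the true model is

    y = X\beta + Z\gamma + u,

with Z unobservable, and one either omits Z or replaces it with a proxy such as W = Z + V, where V is measurement error; the PMSE formula is then derived for shrinkage estimators of beta in the resulting misspecified regression.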
Communications in Statistics-theory and Methods | 2018
Haifeng Xu; Akio Namba
In this paper, we analytically derive the exact formula for the mean squared error (MSE) of two weighted-average (WA) estimators for each individual regression coefficient. Further, we carry out numerical evaluations to investigate the small-sample properties of the WA estimators and compare their MSE performance with that of other shrinkage estimators and the usual OLS estimator. Our numerical results show that (1) the WA estimators have smaller MSE than the other shrinkage estimators and the OLS estimator over a wide region of the parameter space, and (2) the range over which the relative MSE of the WA estimator is smaller than that of the OLS estimator narrows as the number of explanatory variables k increases.
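The comparisons above rest on exact risk formulae evaluated numerically. A cruder but self-contained way to reproduce their flavour is a small Monte Carlo experiment; the sketch below (design, parameter values, and the use of the PSR estimator rather than the paper's WA estimators are illustrative assumptions) contrasts the unweighted squared-error risk of OLS with that of the positive-part Stein-rule estimator:

    import numpy as np

    def psr(b, XtX, ee, n, k):
        """Positive-part Stein-rule shrinkage applied to the OLS estimator b."""
        a = (k - 2) / (n - k + 2)           # a standard shrinkage constant; needs k >= 3
        shrink = 1.0 - a * ee / (b @ XtX @ b)
        return max(shrink, 0.0) * b

    rng = np.random.default_rng(0)
    n, k, sigma = 30, 5, 1.0                # illustrative design
    X = rng.standard_normal((n, k))
    beta = np.full(k, 0.3)                  # illustrative true coefficients
    sse_ols = sse_psr = 0.0
    R = 5000                                # Monte Carlo replications
    for _ in range(R):
        y = X @ beta + sigma * rng.standard_normal(n)
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ b
        bp = psr(b, X.T @ X, e @ e, n, k)
        sse_ols += np.sum((b - beta) ** 2)
        sse_psr += np.sum((bp - beta) ** 2)
    print(f"MSE(OLS) = {sse_ols / R:.4f},  MSE(PSR) = {sse_psr / R:.4f}")

Replacing psr with any other shrinkage rule, and moving beta toward or away from the origin, traces out the regions of the parameter space where shrinkage beats OLS.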