Publication


Featured research published by Hirokuni Tamura.


Applied Statistics | 1970

Use of Orthogonal Factors for Selection of Variables in a Regression Equation – An Illustration

Janet R. Daling; Hirokuni Tamura

Summary: Selection of explanatory variables for a regression equation is a long-standing problem in constructing prediction equations. This paper describes and illustrates a selection technique that makes use of the orthogonality among factors extracted from the correlation matrix. Using the factors not as new variables, but merely as a reference frame, we can identify a near-orthogonal subset of explanatory variables. This approach gives the model builder flexibility that is not available in conventional, purely mechanical selection methods.

Selection of explanatory variables in multiple regression analysis has been a prime problem in the analysis of unplanned data. Interdependency among the explanatory variables makes it difficult to determine empirically the contribution of each independent variable to the observed variation of the dependent variable. Various alternative selection techniques have been proposed, but the criterion employed by each seems quite arbitrary and is known to give different solutions for the same problem (Draper and Smith, 1966, p. 163). Attempts to include more variables in the equation are often frustrated by near singularity of the normal equation system, which makes estimates of the regression coefficients highly sensitive to small changes in the original data. The resulting equations often contain coefficients with theoretically incorrect signs, restricting the use of the equation as a functional relationship explaining the system under study. This paper describes an approach to the selection of variables in regression analysis that aims to produce a prediction equation obeying the principle of parsimony, in the sense of minimum interdependency among variables. The technique can be viewed as a use of the principal components regression proposed by Kendall (1957). In the procedure described here, however, we exploit the orthogonality among components, using them not as new variables but merely as a reference frame to identify a near-orthogonal subset of explanatory variables. Selecting such variables minimizes the overlap of information supplied by the explanatory variables in the regression.
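
The Python sketch below shows one way such a factor-based selection can be read: extract the orthogonal factors (principal components) of the correlation matrix and, for each leading factor, keep the candidate variable that loads most heavily on it. The function name, the selection rule, and the toy data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_near_orthogonal(X, k):
    """Pick k explanatory variables with small mutual correlation, using
    the principal components of the correlation matrix as a reference
    frame (one illustrative reading of the factor-based selection idea)."""
    # Correlation matrix of the candidate explanatory variables.
    R = np.corrcoef(X, rowvar=False)
    # Eigen-decomposition of R; the eigenvectors are the orthogonal factors.
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    selected = []
    for j in order[:k]:
        # The variable loading most heavily on factor j serves as its
        # observable proxy; distinct factors give near-orthogonal proxies.
        loadings = np.abs(eigvecs[:, j])
        for idx in np.argsort(loadings)[::-1]:
            if idx not in selected:
                selected.append(int(idx))
                break
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # near-duplicate variable
print(select_near_orthogonal(X, 3))              # avoids picking both 0 and 1
```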


Journal of Accounting Research | 1982

Jackknifed Ratio Estimation in Statistical Auditing

Peter A. Frost; Hirokuni Tamura

In auditing an accounting population, auditors are concerned with the precision of both a point estimate and an interval estimate of the true value of the population total. A ratio estimator is efficient for obtaining a point estimate but does not always produce a reliable interval estimate. It is well known that the estimated variance of the ratio estimator is biased. Since this is a possible cause of the unreliable confidence interval estimates, a jackknifed ratio estimator is suggested and tested. With the jackknifed ratio estimator, the variance of the estimator is obtained directly from the sample by means of sample splitting. Our conclusion, based on an extensive Monte Carlo study, is that the jackknife improves the accuracy of the point estimate and the reliability of the interval estimate. The improvement is not great enough to overcome the primary deficiency of the ratio estimator: the unreliability of the confidence interval when the population has either low error rates or one-sided errors. However, the jackknife estimator dominates the ratio estimator in those populations that are most favorable for applying the ratio estimator, that is, populations with high error rates and both over- and understatement errors.
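
A minimal Python sketch of the general jackknife recipe follows: leave-one-out ratio estimates are turned into pseudo-values whose mean gives a bias-reduced point estimate and whose spread gives a variance taken directly from the sample. It assumes simple random sampling and made-up audit data; it is not the exact Frost-Tamura implementation.

```python
import numpy as np

def jackknife_ratio(book, audit):
    """Jackknifed estimate of the audited/book ratio, with a variance
    obtained from the sample itself via leave-one-out splitting."""
    book = np.asarray(book, float)
    audit = np.asarray(audit, float)
    n = len(book)
    sb, sa = book.sum(), audit.sum()
    r_full = sa / sb
    # Leave-one-out ratio estimates.
    r_loo = np.array([(sa - audit[i]) / (sb - book[i]) for i in range(n)])
    # Pseudo-values remove the first-order bias of the ratio estimator.
    pseudo = n * r_full - (n - 1) * r_loo
    r_jack = pseudo.mean()
    var_jack = pseudo.var(ddof=1) / n
    return r_jack, var_jack

rng = np.random.default_rng(1)
book = rng.uniform(100, 1000, size=60)
# ~10% of items carry an overstatement error of up to 50% of book value.
audit = book * (1 - (rng.random(60) < 0.1) * rng.uniform(0, 0.5, 60))
r, v = jackknife_ratio(book, audit)
print(f"ratio = {r:.4f}, s.e. = {v ** 0.5:.4f}")
```

Scaling the ratio and its standard error by the known population book value then gives the point and interval estimates of the population total.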


Journal of Accounting Research | 1986

Tightening CAV (DUS) Bounds by Using a Parametric Model

Hirokuni Tamura; Peter A. Frost

This paper presents a new method of setting an upper bound for the aggregate error of an accounting population using Dollar Unit Sampling (DUS; see Anderson and Teitlebaum [1973]). The most widely used DUS bound-setting procedure is the Stringer bound (Felix, Leslie, and Neter [1982]). This nonparametric procedure is popular because it produces a reliable bound for a wide variety of accounting populations. A disadvantage of the Stringer bound is that it is not statistically efficient, particularly when the population has low taintings. Other CAV (Combined Attributes and Variables) bounds have demonstrated the same problem (Reneau [1978]). A statistically inefficient bound may cause the auditor to conclude erroneously that the aggregate error may exceed a given material size, thereby promoting costly overauditing. The classical bound, an alternative procedure that relies on the normality of the sampling distribution of the point estimator, is asymptotically efficient but unreliable in sample sizes used by auditors (Frost and Tamura [1986]). A simple parametric procedure is feasible if the underlying distribution of taintings is identified. We present an example of a parametric procedure using the power function density as a model and demonstrate the …
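
For context, the sketch below computes the standard Stringer bound that the abstract takes as its benchmark: upper confidence limits for the error rate at 0, 1, ..., m errors (Clopper-Pearson limits via the beta quantile) are combined with the observed taintings sorted in descending order. Variable names and the example figures are illustrative, and this is a textbook sketch rather than audit-grade software.

```python
import numpy as np
from scipy.stats import beta

def stringer_bound(taintings, n, book_value, conf=0.95):
    """Stringer (CAV) upper bound on the aggregate overstatement error
    under Dollar Unit Sampling. `taintings` are the observed error
    fractions (error / book amount) of the erroneous sampled dollar units."""
    t = np.sort(np.asarray(taintings, float))[::-1]   # largest tainting first
    m = len(t)
    # p_u[i]: upper confidence limit for the error rate given i errors in n
    # sampled dollar units (Clopper-Pearson, via the beta quantile).
    p_u = np.array([beta.ppf(conf, i + 1, n - i) for i in range(m + 1)])
    # Stringer combination: bound at zero errors plus tainting-weighted
    # increments of the successive upper limits.
    frac = p_u[0] + np.sum((p_u[1:] - p_u[:-1]) * t)
    return frac * book_value

# Three errors found in a DUS sample of 100 dollar units.
print(stringer_bound([1.0, 0.4, 0.1], n=100, book_value=1_000_000))
```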


Journal of Accounting Research | 1986

Accuracy of Auxiliary Information Interval Estimation in Statistical Auditing

Peter A. Frost; Hirokuni Tamura

Auxiliary information estimators, such as the ratio and difference estimators, are widely used in statistical auditing because they produce more precise point estimates than estimators (e.g., the mean-per-unit estimator) that do not use as much information. However, these auxiliary information estimators do not always yield reliable confidence intervals; that is, their true confidence levels can differ drastically from the stated levels. Studies by Kaplan [1973], Neter and Loebbecke [1975; 1977], and Frost and Tamura [1982] have documented this problem but have not explained why it occurs. The purpose of this paper is to show that the skewness of the distribution of these estimators largely accounts for their failure to yield reliable confidence intervals in statistical auditing. The skewness is induced by the low error rates in accounts. Other conjectures have been made about the underlying causes of this problem. These conjectures center on the statistic T, defined as: …
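
The skewness mechanism can be seen directly in a small Monte Carlo experiment, sketched below with made-up population parameters: when errors are rare and one-sided, the sampling distribution of the ratio estimator of the population total is visibly skewed, which is what undermines normal-theory confidence intervals.

```python
import numpy as np
from scipy.stats import skew

# Monte Carlo illustration: a population with a low rate of one-sided
# (overstatement-only) errors yields a skewed sampling distribution for
# the ratio estimator of the population total.
rng = np.random.default_rng(2)
N, n, reps, error_rate = 10_000, 100, 5_000, 0.02
book = rng.uniform(100, 1000, size=N)
audit = book.copy()
err = rng.random(N) < error_rate                 # rare errors
audit[err] *= rng.uniform(0.0, 0.5, err.sum())   # overstatements only

estimates = []
for _ in range(reps):
    idx = rng.choice(N, size=n, replace=False)
    # Ratio estimate of the total audited value.
    estimates.append(audit[idx].sum() / book[idx].sum() * book.sum())

print(f"skewness of the estimator: {skew(estimates):.2f}")
```

With only about two expected errors per sample, many samples contain no errors at all, piling mass at one end of the distribution; a symmetric normal approximation then misstates the interval's true coverage.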


Naval Research Logistics Quarterly | 1970

Goal programming in econometrics

W. Allen Spivey; Hirokuni Tamura


Biometrika | 1988

Estimation of rare errors using expert judgement

Hirokuni Tamura


International Economic Review | 1970

Generalized Simultaneous Equation Models

W. Allen Spivey; Hirokuni Tamura


Health Services Research | 1989

Linear programming models for cost reimbursement

G. Diehr; Hirokuni Tamura


International Statistical Review | 2007

Foundational Value of Statistics Education for Management Curriculum

Hirokuni Tamura


Teaching Statistics | 1994

Model Comparison in Regression

Hirokuni Tamura
