Yoshio Takane
University of Victoria
Publications
Featured research published by Yoshio Takane.
Psychometrika | 1977
Yoshio Takane; Forrest W. Young; Jan de Leeuw
A new procedure is discussed which fits either the weighted or simple Euclidean model to data that may (a) be defined at the nominal, ordinal, interval, or ratio levels of measurement; (b) have missing observations; (c) be symmetric or asymmetric; (d) be conditional or unconditional; (e) be replicated or unreplicated; and (f) be continuous or discrete. Various special cases of the procedure include the most commonly used individual differences multidimensional scaling models, the familiar nonmetric multidimensional scaling model, and several other previously undiscussed variants. The procedure optimizes the fit of the model directly to the data (not to scalar products determined from the data) by an alternating least squares procedure which is convergent, very quick, and relatively free from local minimum problems. The procedure is evaluated via both Monte Carlo and empirical data. It is found to be robust in the face of measurement error, capable of recovering the true underlying configuration in the Monte Carlo situation, and capable of obtaining structures equivalent to those obtained by other, less general procedures in the empirical situation.
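The paper's alternating least squares procedure (ALSCAL) is not reproduced here, but the core idea of iteratively improving a configuration so that its interpoint distances match the dissimilarities can be illustrated with a minimal SMACOF-style stress-majorization sketch; the data, function names, and parameter choices below are hypothetical, not the paper's algorithm.

```python
import numpy as np

def fit_mds(delta, p=2, iters=300, seed=0):
    """Fit the simple Euclidean model to a symmetric (n, n) dissimilarity
    matrix by iterative majorization (SMACOF), a relative of the
    alternating least squares idea described in the abstract."""
    n = delta.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))              # random starting configuration
    for _ in range(iters):
        D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        with np.errstate(divide="ignore", invalid="ignore"):
            B = np.where(D > 0, -delta / D, 0.0)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))  # rows of B must sum to zero
        X = B @ X / n                        # Guttman transform: stress never increases
    return X

def stress(delta, X):
    """Raw stress: sum of squared differences between data and distances."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return ((delta - D) ** 2).sum() / 2
```

Each Guttman transform is the least squares update of the configuration given the current distances, which is what makes the iteration monotonically convergent in stress.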
Psychometrika | 1987
Yoshio Takane; Jan de Leeuw
The equivalence of the marginal likelihood of the two-parameter normal ogive model in item response theory (IRT) and factor analysis of dichotomized variables (FA) was formally proved. The basic result on dichotomous variables was extended to multicategory cases, covering both ordered and unordered categorical data. Pair comparison data arising from multiple-judgment sampling were discussed as a special case of the unordered categorical data. A taxonomy of data for the IRT and FA models was also attempted.
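The item-level link behind this equivalence can be checked numerically: under a one-factor model with loading λ, dichotomizing the latent response at threshold τ reproduces the two-parameter normal ogive curve with discrimination a = λ/√(1−λ²) and intercept b = −τ/√(1−λ²). The following Monte Carlo sketch uses hypothetical parameter values to compare the two routes.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

lam, tau = 0.7, 0.3            # hypothetical FA loading and threshold
a = lam / sqrt(1 - lam**2)     # implied IRT discrimination
b = -tau / sqrt(1 - lam**2)    # implied IRT intercept

theta = 1.2                    # fix the latent trait at one value
rng = np.random.default_rng(0)
N = 200_000
# FA route: simulate the latent response y = lam*theta + e, then dichotomize
y = lam * theta + sqrt(1 - lam**2) * rng.normal(size=N)
p_emp = (y > tau).mean()
# IRT route: evaluate the two-parameter normal ogive directly
p_irt = Phi(a * theta + b)
```

The empirical proportion from the dichotomized factor model and the normal ogive probability agree up to Monte Carlo error, which is the item-level content of the formal result.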
Psychometrika | 1976
Forrest W. Young; Jan de Leeuw; Yoshio Takane
A method is discussed which extends canonical regression analysis to the situation where the variables may be measured at a variety of levels (nominal, ordinal, or interval), and where they may be either continuous or discrete. There is no restriction on the mix of measurement characteristics (i.e., some variables may be discrete-ordinal, others continuous-nominal, and yet others discrete-interval). The method, which is purely descriptive, scales the observations on each variable, within the restrictions imposed by the variables' measurement characteristics, so that the canonical correlation is maximal. The alternating least squares algorithm is discussed. Several examples are presented. It is concluded that the method is very robust. Inferential aspects of the method are not discussed.
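The optimal scaling of nominal and ordinal variables is the paper's contribution; its metric, purely linear core is ordinary canonical correlation analysis, which can be computed by a QR-then-SVD factorization as in this sketch (all data simulated for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                        # first variable set
Y = X @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(200, 2))  # related set

Xc, Yc = X - X.mean(0), Y - Y.mean(0)                # center each set
Qx, _ = np.linalg.qr(Xc)                             # orthonormal basis for span(Xc)
Qy, _ = np.linalg.qr(Yc)                             # orthonormal basis for span(Yc)
rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)     # canonical correlations
```

The singular values of `Qx.T @ Qy` are the canonical correlations; an optimal-scaling method alternates this linear step with rescaling of the observed categories.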
Psychometrika | 1976
Jan de Leeuw; Forrest W. Young; Yoshio Takane
A method is developed to investigate the additive structure of data that (a) may be measured at the nominal, ordinal or cardinal levels, (b) may be obtained from either a discrete or continuous source, (c) may have known degrees of imprecision, or (d) may be obtained in unbalanced designs. The method also permits experimental variables to be measured at the ordinal level. It is shown that the method is convergent, and includes several previously proposed methods as special cases. Both Monte Carlo and empirical evaluations indicate that the method is robust.
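In the metric, complete-design special case the additive model is y_ij ≈ a_i + b_j, and the alternating least squares idea reduces to alternating row and column updates, as in this minimal sketch (effects and noise level are hypothetical; the actual method additionally handles ordinal measurement, known imprecision, and unbalanced designs).

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = np.array([0.0, 1.0, 2.0])             # hypothetical row effects
b_true = np.array([0.0, 0.5, 1.0, 1.5])        # hypothetical column effects
Y = a_true[:, None] + b_true[None, :] + 0.01 * rng.normal(size=(3, 4))

# Alternating least squares for the additive model y_ij ≈ a_i + b_j
a, b = np.zeros(3), np.zeros(4)
for _ in range(50):
    a = (Y - b[None, :]).mean(axis=1)          # LS update of row effects given b
    b = (Y - a[:, None]).mean(axis=0)          # LS update of column effects given a
fit = a[:, None] + b[None, :]                  # fitted additive structure
```

Each update solves a least squares subproblem exactly, so the overall loss cannot increase, which is the sense in which such methods are convergent.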
Psychometrika | 1978
Forrest W. Young; Yoshio Takane; Jan de Leeuw
A method is discussed which extends principal components analysis to the situation where the variables may be measured at a variety of scale levels (nominal, ordinal, or interval), and where they may be either continuous or discrete. There are no restrictions on the mix of measurement characteristics and there may be any pattern of missing observations. The method scales the observations on each variable within the restrictions imposed by the variables' measurement characteristics, so that the deviation from the principal components model for a specified number of components is minimized in the least squares sense. An alternating least squares algorithm is discussed. An illustrative example is given.
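The optimal-scaling computations are not reproduced here, but the alternating least squares idea as it applies to the missing-data aspect can be sketched: alternate between a truncated SVD fit and re-imputing the missing cells from the current model. A minimal illustration on simulated exact low-rank data (all names and values hypothetical):

```python
import numpy as np

def als_pca_complete(X, r, iters=1000):
    """Least squares fit of an r-component model to a matrix with missing
    entries (np.nan), alternating between a rank-r SVD fit and refilling
    the missing cells from the current reconstruction."""
    mask = ~np.isnan(X)
    Z = np.where(mask, X, 0.0)                 # crude initial fill
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Zhat = (U[:, :r] * s[:r]) @ Vt[:r]     # rank-r least squares fit
        Z = np.where(mask, X, Zhat)            # keep observed, update missing
    return Zhat
```

Both steps are least squares subproblems solved exactly (Eckart-Young for the SVD step), so the residual sum of squares over observed cells decreases monotonically.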
Psychometrika | 1991
Yoshio Takane; Tadashi Shibayama
A method for structural analysis of multivariate data is proposed that combines features of regression analysis and principal component analysis. In this method, the original data are first decomposed into several components according to external information. The components are then subjected to principal component analysis to explore structures within the components. It is shown that this requires the generalized singular value decomposition of a matrix with certain metric matrices. A numerical method based on the QR decomposition is described, which simplifies the computation considerably. The proposed method includes a number of interesting special cases, whose relations to existing methods are discussed. Examples are given to demonstrate practical uses of the method.
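In the simplest case (identity metric matrices) the two stages reduce to an orthogonal projection followed by an SVD. A minimal sketch with simulated data and a hypothetical external information matrix G:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 5))              # data matrix
G = rng.normal(size=(20, 2))              # external information on the rows

# External analysis: split Z into the part explained by G and a residual
P = G @ np.linalg.solve(G.T @ G, G.T)     # projector onto the column space of G
Z_ext, Z_res = P @ Z, Z - P @ Z           # decomposition Z = Z_ext + Z_res

# Internal analysis: principal components (plain SVD here) of each part
U, s, Vt = np.linalg.svd(Z_ext, full_matrices=False)
```

The projector confines `Z_ext` to a 2-dimensional row structure, so its PCA explores only the variation attributable to the external information; the general method replaces the plain SVD with a GSVD under nontrivial metric matrices.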
Psychometrika | 2004
Heungsun Hwang; Yoshio Takane
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method avoids the principal limitation of partial least squares (i.e., the lack of a global optimization procedure) while fully retaining all the advantages of partial least squares (e.g., less restricted distributional assumptions and no improper solutions). The method is also versatile enough to capture complex relationships among variables, including higher-order components and multi-group comparisons. A straightforward estimation algorithm is developed to minimize the criterion.
Neuroscience | 2006
Todd S. Woodward; Tara A. Cairo; Christian C. Ruff; Yoshio Takane; Michael A. Hunter; Elton T.C. Ngan
One of the main challenges in working memory research has been to understand the degree of separation and overlap between the neural systems involved in encoding and maintenance. In the current study we used a variable load version of the Sternberg item recognition test (two, four, six, or eight letters) and a functional connectivity method based on constrained principal component analysis to extract load-dependent neural systems underlying encoding and maintenance, and to characterize their anatomical overlap and functional interaction. Based on the pattern of functional connectivity, constrained principal component analysis identified a load-dependent encoding system comprising bilateral occipital (Brodmann's area (BA) 17, 18), bilateral superior parietal (BA 7), bilateral dorsolateral prefrontal (BA 46), and dorsal anterior cingulate (BA 24, 32) regions. For maintenance, in contrast, constrained principal component analysis identified a system characterized by both load-dependent increases and decreases in activation. The structures in this system jointly activated by maintenance load involved left posterior parietal (BA 40), left inferior prefrontal (BA 44), left premotor and supplementary motor areas (BA 6), and dorsal cingulate regions (BA 24, 32), while the regions displaying maintenance-load-dependent activity decreases involved bilateral occipital (BA 17, 18), posterior cingulate (BA 23), and rostral anterior cingulate/orbitofrontal (BA 10, 11, 32) regions. The correlation between the encoding and maintenance systems was strong and negative (Pearson's r = -.55), indicating that some regions important for visual processing during encoding displayed reduced activity during maintenance, while subvocal rehearsal and phonological storage regions important for maintenance showed a reduction in activity during encoding.
In summary, our analyses suggest that separable and complementary subsystems underlie encoding and maintenance in verbal working memory, and they demonstrate how constrained principal component analysis can be employed to characterize neuronal systems and their functional contributions to higher-level cognition.
Psychometrika | 1981
Yoshio Takane
A single-step maximum likelihood estimation procedure is developed for multidimensional scaling of dissimilarity data measured on rating scales. The procedure can fit the Euclidean distance model to the data under various assumptions about category widths and under two distributional assumptions. The scoring algorithm for parameter estimation has been developed and implemented in the form of a computer program. Practical uses of the method are demonstrated with an emphasis on various advantages of the method as a statistical procedure.
Applicable Algebra in Engineering, Communication and Computing | 2001
Yoshio Takane; Michael A. Hunter
Constrained principal component analysis (CPCA) incorporates external information into principal component analysis (PCA) of a data matrix. CPCA first decomposes the data matrix according to the external information (external analysis), and then applies PCA to the decomposed matrices (internal analysis). The external analysis amounts to projections of the data matrix onto the spaces spanned by matrices of external information, while the internal analysis involves the generalized singular value decomposition (GSVD). Since its original proposal, CPCA has evolved both conceptually and methodologically; it is now founded on firmer mathematical ground, allows a greater variety of decompositions, and includes a wider range of interesting special cases. In this paper we present a comprehensive theory and various extensions of CPCA, which were not fully envisioned in the original paper. The new developments we discuss include least squares (LS) estimation under possibly singular metric matrices, two useful theorems concerning the GSVD, decompositions of data matrices into finer components, and the fitting of higher-order structures. We also discuss four special cases of CPCA: (1) CCA (canonical correspondence analysis) and CALC (canonical analysis with linear constraints); (2) GMANOVA (generalized MANOVA); (3) Lagrange's theorem; and (4) CANO (canonical correlation analysis) and related methods. We conclude with brief remarks on the advantages and disadvantages of CPCA relative to other competitors.
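The GSVD used in the internal analysis can, for nonsingular metrics, be reduced to an ordinary SVD via Cholesky factors. A minimal sketch, assuming symmetric positive definite row and column metric matrices K and L (the singular-metric case treated in the paper requires more care):

```python
import numpy as np

def gsvd(Z, K, L):
    """GSVD of Z under SPD metrics K (rows) and L (columns):
    returns U, s, V with Z = U diag(s) V',  U'KU = I,  V'LV = I."""
    Rk = np.linalg.cholesky(K)     # K = Rk Rk'
    Rl = np.linalg.cholesky(L)     # L = Rl Rl'
    Ut, s, Vt = np.linalg.svd(Rk.T @ Z @ Rl, full_matrices=False)
    U = np.linalg.solve(Rk.T, Ut)      # back-transform: U = Rk^{-T} Ut
    V = np.linalg.solve(Rl.T, Vt.T)    # back-transform: V = Rl^{-T} Vt'
    return U, s, V
```

Transforming into the metric coordinates, taking the plain SVD there, and transforming back preserves the K- and L-orthonormality constraints exactly, which is why the ordinary SVD routine suffices.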