Shizuhiko Nishisato
University of Toronto
Publication
Featured research published by Shizuhiko Nishisato.
Psychometrika | 1984
Shizuhiko Nishisato
This study formulates a property of a quantification method, the principle of equivalent partitioning (PEP). When the PEP is used together with Guttman's principle of internal consistency (PIC) in a simple way, the combination offers an interesting way of analyzing categorical data in terms of the variate(s) chosen by the investigator, a type of canonical analysis. The study discusses applications of the technique to multiple-choice, rank-order, and paired comparison data.
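The quantification the abstract refers to can be made concrete with a small sketch. What follows is a minimal numpy illustration of dual scaling of multiple-choice data via its indicator matrix, computed here through the standard singular value decomposition route; the data, option counts, and the Cronbach's-alpha relation in the comments are illustrative assumptions, not material from the paper, and the PEP itself is not implemented.

```python
import numpy as np

# Hypothetical multiple-choice data: 6 respondents answer 2 items;
# item 1 has 3 options, item 2 has 2 options (numbers are made up).
responses = np.array([[0, 0],
                      [0, 0],
                      [1, 0],
                      [1, 1],
                      [2, 1],
                      [2, 1]])
n_options = [3, 2]

# Indicator (response-pattern) matrix Z: respondents x total options.
Z = np.zeros((responses.shape[0], sum(n_options)))
offset = 0
for item, k in enumerate(n_options):
    Z[np.arange(responses.shape[0]), offset + responses[:, item]] = 1
    offset += k

# Dual scaling of multiple-choice data, computed here as correspondence
# analysis of the indicator matrix.
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sig, Vt = np.linalg.svd(S, full_matrices=False)

# First (optimal) solution: option weights and respondent scores.
option_weights = Vt[0] / np.sqrt(c)
respondent_scores = U[:, 0] / np.sqrt(r)
eigenvalue = sig[0] ** 2   # information captured by the optimal solution

# Internal consistency of the optimally weighted items: the usual relation
# between the leading eigenvalue and Cronbach's alpha for Q items.
Q = len(n_options)
alpha = (Q / (Q - 1)) * (1 - 1 / (Q * eigenvalue))
print(option_weights.round(3), respondent_scores.round(3), round(alpha, 3))
```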
Psychometrika | 1996
Shizuhiko Nishisato
Some historical background and preliminary technical information are first presented, and then a number of hidden, but important, methodological aspects of dual scaling are illustrated and discussed: normed versus projected weights, the amount of information accounted for by each solution, a perfect solution to the problem of multidimensional unfolding, multidimensional quantification space, graphical display, number-of-option problems, option standardization versus item standardization, and asymmetry of symmetric (dual) scaling. Contrary to the common perception that dual scaling and similar quantification methods are now mathematically transparent, the present study demonstrates how much more needs to be clarified for routine use of the method to arrive at valid conclusions. Data analysis must be carried out in such a way that common sense, intuition and sound logic will prevail.
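As a companion to the points about normed versus projected weights and the amount of information per solution, here is a minimal sketch of the standard quantification of a small contingency table, assuming numpy; the table is made up, and reading "normed" as standard coordinates and "projected" as principal coordinates is my interpretation rather than a quotation from the paper.

```python
import numpy as np

# Made-up 4x3 contingency table (illustrative only).
F = np.array([[20,  5,  3],
              [10, 15,  6],
              [ 4, 12, 20],
              [ 2,  8, 18]], dtype=float)

P = F / F.sum()
r, c = P.sum(axis=1), P.sum(axis=0)

# Residual matrix standardized by row/column masses, then SVD.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sig, Vt = np.linalg.svd(S, full_matrices=False)
K = min(F.shape) - 1                       # number of nontrivial solutions

# "Normed" weights: unit-variance (standard) coordinates.
row_normed = U[:, :K] / np.sqrt(r)[:, None]
col_normed = Vt[:K].T / np.sqrt(c)[:, None]

# "Projected" weights: normed weights shrunk by the singular value
# (principal coordinates), the usual plotting coordinates.
row_projected = row_normed * sig[:K]
col_projected = col_normed * sig[:K]

# Information accounted for by each solution (share of total inertia).
inertia = sig[:K] ** 2
print("per-solution share:", (inertia / inertia.sum()).round(3))
```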
Archive | 2014
Shizuhiko Nishisato; Yasumasa Baba; Hamparsum Bozdogan; Koji Kanefuji
Diversity is characteristic of the information age and also of statistics. To date, the social sciences have contributed greatly to the development of methods for handling data under the rubric of measurement, while the statistical sciences have made phenomenal advances in theory and algorithms. Measurement and Multivariate Analysis promotes an effective interplay between those two realms of research: diversity with unity. The union and the intersection of those two areas of interest are reflected in the papers in this book, drawn from an international conference in Banff, Canada, with participants from 15 countries. In five major categories (scaling, structural analysis, statistical inference, algorithms, and data analysis), readers will find a rich variety of topics of current interest in the extended statistical community.
Archive | 1990
Shizuhiko Nishisato
Quantification of categorical data with external criteria is considered. The focal point of the study is to propose and discuss a graphical method for elucidating the structure of data that involve experimental designs. Two examples are presented to illustrate some advantages of the method over the traditional joint row/column display.
Archive | 2014
Shizuhiko Nishisato
Representation of categorical data by nominal measurement leaves the information entirely intact, which is not the case with widely used numerical or pseudo-numerical representations such as Likert-type scoring. This aspect is first explained, and then we turn our attention to the analysis of nominally represented data. For the analysis of a large number of variables, one typically resorts to dimension reduction, and its necessity is often greater with categorical data than with continuous data. In spite of this, Nishisato S, Clavel JG (Behaviormetrika 37:15–32, 2010) proposed an approach that is diametrically opposite to the dimension-reduction approach, for they advocate the use of doubled hyper-space to accommodate both row variables and column variables of two-way data in a common space. The rationale of doubled space can be used to vindicate the validity of the Carroll-Green-Schaffer scaling (Carroll JD, Green PE, Schaffer CM (1986) J Mark Res 23(3):271–280). The current paper will then introduce a simple procedure for the analysis of a hyper-dimensional configuration of data, called cluster analysis through filters. A numerical example will be presented to show a clear contrast between the dimension-reduction approach and total information analysis by cluster analysis. There is no doubt that our approach is preferable to the dimension-reduction approach on two grounds: our results are a factual summary of a multidimensional data configuration, and our procedure is simple and practical.
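One common reading of the doubled-space idea is that each solution contributes a row axis and a column axis that subtend the angle arccos(ρ_k), where ρ_k is the k-th singular value, so the rows and columns of a two-way table can be embedded exactly in twice as many dimensions. Under that assumption (mine, not a statement from the paper), a sketch with a made-up table looks like this:

```python
import numpy as np

# Illustrative contingency table (not from the paper).
F = np.array([[20,  5,  3],
              [10, 15,  6],
              [ 4, 12, 20]], dtype=float)

P = F / F.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sig, Vt = np.linalg.svd(S, full_matrices=False)
K = min(F.shape) - 1

# Principal coordinates of rows (y) and columns (x).
y = (U[:, :K] / np.sqrt(r)[:, None]) * sig[:K]
x = (Vt[:K].T / np.sqrt(c)[:, None]) * sig[:K]

# Doubled space: solution k gets its own 2-D plane; the row axis and the
# column axis of that plane are assumed to subtend arccos(rho_k).
theta = np.arccos(sig[:K])

def doubled_coordinates(y, x, theta):
    """Embed rows and columns exactly in 2K dimensions."""
    n_row, n_col, n_dim = y.shape[0], x.shape[0], y.shape[1]
    R = np.zeros((n_row, 2 * n_dim))
    C = np.zeros((n_col, 2 * n_dim))
    for k in range(n_dim):
        R[:, 2 * k] = y[:, k]                      # row axis of plane k
        C[:, 2 * k] = x[:, k] * np.cos(theta[k])   # column axis, rotated
        C[:, 2 * k + 1] = x[:, k] * np.sin(theta[k])
    return R, C

R, C = doubled_coordinates(y, x, theta)

# Exact row-to-column distances in the doubled space; equivalently
# sum_k (y_ik^2 + x_jk^2 - 2 * rho_k * y_ik * x_jk) for the squared values.
d2 = ((R[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
print(np.sqrt(d2).round(3))
```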
GfKl | 2012
José G. Clavel; Shizuhiko Nishisato
In most multidimensional analyses, dimension reduction is a key concept and reduced-space analysis is routinely used. Contrary to this traditional approach, total information analysis (TIA) (Nishisato and Clavel, Behaviormetrika 37:15–32, 2010) places its focal point on tapping every piece of information in the data. The present paper demonstrates that the time-honored practice of reduced-space analysis may have to be reconsidered, as its grasp of the data structure may be compromised by ignoring intricate details of the data. The paper presents numerical examples to make our point.
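A generic way to see the contrast the abstract describes is to compare inter-point distances computed from the usual two-dimensional reduced-space plot with distances computed from all solutions; the sketch below uses an invented table and is an illustration of the argument, not the authors' own example.

```python
import numpy as np

# Invented 6x5 contingency table, illustrative only.
rng = np.random.default_rng(0)
F = rng.integers(1, 30, size=(6, 5)).astype(float)

P = F / F.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sig, Vt = np.linalg.svd(S, full_matrices=False)
K = min(F.shape) - 1

# Principal coordinates of the rows.
Y = (U[:, :K] / np.sqrt(r)[:, None]) * sig[:K]

def pairwise(coords):
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

d_full = pairwise(Y)          # distances using every solution
d_2d = pairwise(Y[:, :2])     # distances in the usual 2-D plot

# The 2-D plot can only understate distances; the gap is the information
# a reduced-space display silently discards.
print("largest understatement:", (d_full - d_2d).max().round(3))
print("share of inertia in 2-D:",
      round((sig[:2] ** 2).sum() / (sig[:K] ** 2).sum(), 3))
```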
Archive | 2016
Shizuhiko Nishisato
The basic premise of dual scaling/correspondence analysis lies in the simultaneous or symmetric analysis of the rows and columns of a data matrix, a task that resembles carrying out principal component analysis of both the person-by-person correlation matrix and the item-by-item correlation matrix together. Our main question is whether or not we can represent both analyses in the same Euclidean space. The traditional graphical methods are very problematic: the symmetric display, or French plot, suffers from the discrepancy between the row space and the column space; the non-symmetric display involves projecting the data onto standardized space, which does not contain the coordinate information in the data; and the variety of biplots, which are rarely criticized, involve operations that typically do not keep row and column measurements on the same metric, or, when they do, the resulting points are not the coordinates of the data. Thus, none of these provides a precise description of the complex information in the data, hence failing in the basic objective of symmetric data analysis. This paper identifies logical problems in the current practice and offers a justifiable alternative to joint graphical display. "Graphing is believing" may in reality remain wishful thinking.
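The displays under criticism can be spelled out in a small sketch: the symmetric "French plot" overlays row and column principal coordinates even though the two sets of axes are not identically oriented, while the non-symmetric display mixes principal with standard coordinates. The per-dimension discrepancy angle arccos(ρ_k) used below follows the same reading as the earlier sketch, and the table is invented.

```python
import numpy as np

# Invented contingency table for illustration.
F = np.array([[25,  5,  5],
              [ 5, 20, 10],
              [ 5, 10, 25]], dtype=float)

P = F / F.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sig, Vt = np.linalg.svd(S, full_matrices=False)
K = min(F.shape) - 1

row_std = U[:, :K] / np.sqrt(r)[:, None]      # standard (normed) coordinates
col_std = Vt[:K].T / np.sqrt(c)[:, None]
row_prin = row_std * sig[:K]                  # principal (projected) coordinates
col_prin = col_std * sig[:K]

# Symmetric ("French") plot: row_prin and col_prin overlaid, although the
# row space and the column space are not the same space.
# Non-symmetric plot: e.g. row_prin together with col_std, i.e. data
# coordinates for one set and standardized axes for the other.

# Per-dimension discrepancy between row space and column space,
# expressed as an angle (0 degrees would mean the spaces coincide).
discrepancy_deg = np.degrees(np.arccos(sig[:K]))
print("singular values:", sig[:K].round(3))
print("space discrepancy (degrees):", discrepancy_deg.round(1))
```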
GfKl | 2005
Shizuhiko Nishisato
Our common sense tells us that continuous data contain more information than categorized data. To prove it, however, is not straightforward, because continuous variables are typically subjected to linear analysis and categorized data to nonlinear analysis. This discrepancy prompts us to put the two data types on a comparable basis, which leads to a number of problems, in particular how to define information and how to capture both linear and nonlinear relations between variables, whether continuous or categorical. This paper proposes a general framework for both types of data so that we may re-examine the original statement about information.
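One way to see the comparable-basis problem is to measure the same relationship twice: Pearson correlation treats a continuous pair linearly, whereas categorizing one variable and computing the correlation ratio η² also picks up nonlinear structure. The sketch below uses simulated data and is a generic illustration, not the framework proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=500)
y = x ** 2 + rng.normal(0, 0.2, size=500)   # strong but nonlinear relation

# Linear view of the continuous data: Pearson correlation squared.
r2 = np.corrcoef(x, y)[0, 1] ** 2

# Categorize x into bins, then compute the correlation ratio
# eta^2 = between-category sum of squares / total sum of squares of y.
bins = np.quantile(x, np.linspace(0, 1, 9)[1:-1])   # 8 categories
cat = np.digitize(x, bins)
grand_mean = y.mean()
ss_between = sum(len(y[cat == g]) * (y[cat == g].mean() - grand_mean) ** 2
                 for g in np.unique(cat))
ss_total = ((y - grand_mean) ** 2).sum()
eta2 = ss_between / ss_total

# r2 is near 0 (the linear view misses the relation); eta2 is near 1,
# showing what categorized, nonlinearly analyzed data can recover.
print(round(r2, 3), round(eta2, 3))
```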
Archive | 2003
Shizuhiko Nishisato
Dual scaling offers us an invaluable opportunity to have another look at the total information contained in data. This paper sheds some light on the necessity of developing a tool for full-information analysis and the potential of dual scaling as such a tool.
Archive | 1996
Shizuhiko Nishisato
Dual scaling quantifies such categorical data as contingency tables, multiple-choice data, sorting data, paired comparison data, rank-order data, and successive-categories data. These data can be classified into two types: incidence data and dominance data. The present study is an overview of some key formulas and of several conceptual problems that require further investigation. Most of these problems are peculiar to particular data types, and some remedial procedures are suggested for them as interim measures. Awareness of these difficulties in dual scaling and other related methods seems to be the most notable recent development.
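The incidence/dominance distinction can be illustrated with the rank-order case. An object ranked K-th among n objects dominates n − K objects and is dominated by K − 1, so its dominance number is (n − K) − (K − 1) = n + 1 − 2K; a minimal sketch with made-up rankings follows.

```python
import numpy as np

# Made-up rank-order data: 4 judges rank 5 objects (1 = best).
ranks = np.array([[1, 2, 3, 4, 5],
                  [2, 1, 3, 5, 4],
                  [1, 3, 2, 4, 5],
                  [3, 1, 2, 5, 4]])
n_objects = ranks.shape[1]

# Dominance numbers: e = n + 1 - 2 * rank, so each row sums to zero
# (every judge distributes the same total amount of "dominance").
E = n_objects + 1 - 2 * ranks
print(E)
print("row sums:", E.sum(axis=1))   # all zeros; this is dominance data
```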