Publication


Featured research published by A. Susan M. Niessen.


Frontiers in Psychology | 2015

A trial studying approach to predict college achievement

Rob R. Meijer; A. Susan M. Niessen

We argue that trial studying is a reliable and valid way to select students for higher education. The method is based on the work sample approach often used in personnel selection contexts. We discuss evidence that it has predictive validity for study success, is highly accepted by stakeholders, and measures self-regulation in a high-stakes testing context, something that cannot be captured through self-report questionnaires. We suggest further research on implementing this method to select students.


Assessment | 2016

A Practical Guide to Check the Consistency of Item Response Patterns in Clinical Research Through Person-Fit Statistics: Examples and a Computer Program

Rob R. Meijer; A. Susan M. Niessen; Jorge N. Tendeiro

Although many studies have been devoted to person-fit statistics for detecting inconsistent item score patterns, most of these studies are difficult for nonspecialists to understand. The aim of this tutorial is to explain the principles of these statistics for researchers and clinicians who are interested in applying them. In particular, we first explain how invalid test scores can be detected using person-fit statistics; second, we provide the reader with practical examples of existing studies that used person-fit statistics to detect and interpret inconsistent item score patterns; and third, we discuss a new R package that can be used to identify and interpret inconsistent score patterns.
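
To give a concrete sense of how such statistics work, here is a minimal, illustrative sketch of one widely used person-fit statistic, the standardized log-likelihood l_z. This is not the R package discussed in the paper; the data and variable names are invented, and the sketch assumes item response probabilities are already available from a fitted IRT model.

    import numpy as np

    def lz_statistic(x, p):
        """Standardized log-likelihood person-fit statistic (l_z).
        x: vector of 0/1 item responses for one respondent;
        p: model-implied probabilities of a correct response per item.
        Large negative values indicate an inconsistent (misfitting) pattern."""
        x, p = np.asarray(x, float), np.asarray(p, float)
        l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))   # observed log-likelihood
        e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # its expectation
        v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)     # its variance
        return (l0 - e) / np.sqrt(v)

    # Hypothetical 6-item test, ordered from easy to hard
    p = np.array([0.90, 0.85, 0.70, 0.50, 0.30, 0.15])
    print(lz_statistic([1, 1, 1, 1, 0, 0], p))  # consistent pattern: l_z > 0
    print(lz_statistic([0, 0, 0, 1, 1, 1], p))  # easy items wrong, hard items right: strongly negative

Patterns with l_z well below zero (for instance, below -1.65 when the standard normal is used as a rough reference) would be flagged for closer inspection.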


PLOS ONE | 2016

Predicting Performance in Higher Education Using Proximal Predictors

A. Susan M. Niessen; Rob R. Meijer; Jorge N. Tendeiro

We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach, and specific skills tests in English and math. Test scores were used to predict academic achievement and progress after the first year, achievement in specific course types, enrollment, and dropout after the first year. All tests showed significant positive correlations with the criteria. The trial-studying test was consistently the best predictor in the admission procedure. We found no significant differences between the predictive validity of the trial-studying test and prior educational performance, and substantial shared explained variance between the two predictors. Only applicants with lower trial-studying scores were significantly less likely to enroll in the program. In conclusion, the trial-studying test yielded predictive validities similar to those of prior educational performance and possibly enabled self-selection. In admissions aimed at student-program fit, or in admissions in which past educational performance is difficult to use, a trial-studying test is a good instrument for predicting academic performance.
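
In this literature, predictive validity is typically quantified as the correlation between an admission test score and a later criterion such as first-year achievement. A minimal sketch on simulated data; the variable names and effect sizes are hypothetical, not values from the study:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 500
    trial_test = rng.normal(size=n)                          # trial-studying score
    first_year_gpa = 0.45 * trial_test + rng.normal(size=n)  # criterion, made-up effect size

    r, p_value = stats.pearsonr(trial_test, first_year_gpa)
    print(f"predictive validity: r = {r:.2f} (p = {p_value:.3g})")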


Perspectives on Psychological Science | 2017

On the use of broadened admission criteria in higher education

A. Susan M. Niessen; Rob R. Meijer

There is increasing interest in the use of broadened criteria for admission to higher education, often assessed through noncognitive instruments. We argue that there are several reasons why, despite some significant progress, the use of noncognitive predictors to select students is problematic in high-stakes educational selection, and why their incremental validity will often be modest even when studied in low-stakes contexts. Furthermore, we comment on the use of broadened admission criteria in relation to reducing the adverse impact of testing on some groups, and we extend the literature by discussing an approach based on behavioral sampling that has shown promising results in Europe. Finally, we provide some suggestions for future research.


International Journal of Selection and Assessment | 2017

Applying organizational justice theory to admission into higher education: Admission from a student perspective

A. Susan M. Niessen; Rob R. Meijer; Jorge N. Tendeiro

Applicant perceptions of methods used in admission procedures to higher education were investigated using organizational justice theory. Applicants to a psychology study program completed a questionnaire about several admission methods. General favorability, ratings on justice dimensions, relationships between general favorability and these dimensions, and differences in perceptions based on gender and on the aim of the admission procedure (selection or matching) were studied. In addition, the relationships between favorability and test performance and between favorability and behavioral outcomes were investigated. Applicants rated interviews and trial-studying tests most favorably. Contrary to expectations based on the existing literature, high school grades were perceived least favorably, and there was no relationship between applicant perceptions and enrollment decisions. In line with previous research in the employment literature, general favorability was most strongly related to face validity, study-relatedness, applicant differentiation, the chance to show skills, perceived scientific evidence, and perceived widespread use. We found no differences in applicant perceptions based on gender, and small differences based on the aim of admission procedures. These results extend the applicant perceptions literature to educational admission and are useful for administrators when choosing methods to admit students.


Sports Medicine | 2018

Comment on: "Talent Identification in Sport: A Systematic Review"

Tom L. G. Bergkamp; A. Susan M. Niessen; Ruud J. R. Den Hartigh; Wouter Frencken; Rob R. Meijer

We read the recent systematic review by Johnston et al. [1] with great interest, and we compliment the authors on providing an overview of the empirical studies regarding talent identification programs in sports. The talent identification literature contains many studies that relate one or multiple performance components to athletes' skill levels in order to find prerequisites for excellent athletic performance. Although other critical nonsystematic reviews of talent identification programs have been published [2–6], a systematic review synthesizing the available evidence on the predictive validity of the performance components was timely. Accordingly, the review by Johnston et al. [1] can be used to highlight several gaps in the talent identification research field, as we elaborate below.

We think that research from selection psychology can offer valuable insights with respect to the conclusion by Johnston et al. [1] that large inconsistencies exist in the relationship between predictor variables and skilled performance. Moreover, we argue that future empirical research may benefit from reconsidering the operationalization of elite performance, to better evaluate the predictors of sport-specific talent and of sports talent in general.

In the majority of talent identification studies, including those incorporated in the review by Johnston et al. [1], the manifestation of sports talent is sports performance. In selection psychology, this to-be-predicted behavior is referred to as the criterion [7]. Johnston et al. [1] distinguished three predictor categories examined in the included studies: cognitive/psychological capabilities, physical profile, and previous performance/experience. However, the predictor–performance relationships comprised all sports: the sport to which each identified relationship pertained was not explicitly indicated. Aggregating predictors within each category across sports led to the conclusion that “in general, no variables within the studies examined uniformly predicted skill level” (p. 8), which the authors explained through inconsistent study designs and diverse definitions of what a talented athlete is. However, a more straightforward interpretation of this finding is that the included empirical studies examined many different sports domains. Psychological research has demonstrated that when the criterion consists of multiple factors, different patterns of predictor–criterion relationships can emerge [8]. This also applies to the concept of sports talent.

This is a letter concerning the original article, available at https://doi.org/10.1007/s40279-017-0803-2.


PLOS ONE | 2018

Admission testing for higher education: A multi-cohort study on the validity of high-fidelity curriculum-sampling tests

A. Susan M. Niessen; Rob R. Meijer; Jorge N. Tendeiro

We investigated the validity of curriculum-sampling tests for admission to higher education in two studies. Curriculum-sampling tests mimic representative parts of an academic program to predict future academic achievement. In the first study, we investigated the predictive validity of a curriculum-sampling test for first-year academic achievement across three cohorts of undergraduate psychology applicants, and for academic achievement after three years in one cohort. We also studied the relationship between the test scores and enrollment decisions. In the second study, we examined the cognitive and noncognitive construct saturation of curriculum-sampling tests in a sample of psychology students. The curriculum-sampling tests showed high predictive validity for first-year and third-year academic achievement, mostly comparable to the predictive validity of high school GPA. In addition, curriculum-sampling test scores showed incremental validity over high school GPA. Applicants who scored low on the curriculum-sampling tests more often decided not to enroll in the program, indicating that curriculum-sampling admission tests may also promote self-selection. Contrary to expectations, the curriculum-sampling test scores did not show any relationship with cognitive ability, but there were some indications of noncognitive saturation, mostly for perceived test competence. So, curriculum-sampling tests can serve as efficient admission tests that yield high predictive validity. Furthermore, when self-selection or student-program fit is a major objective of an admission procedure, curriculum-sampling tests may be preferred over, or used in addition to, high school GPA.
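
Incremental validity over high school GPA, as reported here, is conventionally assessed as the gain in explained variance (ΔR²) when the test is added to a regression model that already contains high school GPA. A sketch on simulated data; all effect sizes and names are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    hs_gpa = rng.normal(size=n)
    sampling_test = 0.6 * hs_gpa + 0.8 * rng.normal(size=n)   # curriculum-sampling score
    achievement = 0.4 * hs_gpa + 0.25 * sampling_test + rng.normal(size=n)

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    r2_gpa_only = r_squared(hs_gpa[:, None], achievement)
    r2_both = r_squared(np.column_stack([hs_gpa, sampling_test]), achievement)
    print(f"delta R^2 = {r2_both - r2_gpa_only:.3f}")  # incremental validity over HSGPA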


European Journal of Sport Science | 2018

Selection procedures in sports: Improving predictions of athletes’ future performance

Ruud J. R. Den Hartigh; A. Susan M. Niessen; Wouter Frencken; Rob R. Meijer

The selection of athletes has been a central topic in sports sciences for decades. Yet, little consideration has been given to the theoretical underpinnings and predictive validity of the procedures. In this paper, we evaluate current selection procedures in sports given what we know from the selection psychology literature. We contrast the popular clinical method (predictions based on the overall impressions of experts) with the actuarial approach (predictions based on pre-defined decision rules), and we discuss why the latter approach often leads to superior performance predictions. Furthermore, we discuss the “signs” and “samples” approaches. Under the prevailing signs approach, athletes’ technical, tactical, physical, and psychological skills are often assessed separately in controlled settings. However, for predicting later sport performance, taking samples of athletes’ behaviours in their sports environment may result in more valid assessments. We discuss the possible advantages and implications of making selection procedures in sports more actuarial and sample-based.
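
To make the contrast concrete: an actuarial rule is a pre-defined formula applied identically to every candidate, rather than an expert's holistic impression. A hypothetical sketch; the weights, cutoff, and assessment names are invented, not taken from the paper:

    def actuarial_select(scores, weights, cutoff):
        """Apply a fixed, pre-registered decision rule: a weighted sum of
        assessment scores compared against a cutoff. Every candidate is
        scored by the same formula, which is what makes the rule actuarial."""
        total = sum(weights[k] * scores[k] for k in weights)
        return total, total >= cutoff

    # Weights could, for example, be regression weights estimated on past cohorts.
    weights = {"match_performance": 0.5, "sprint_time_s": -0.2, "game_insight": 0.3}
    candidate = {"match_performance": 7.5, "sprint_time_s": 4.1, "game_insight": 6.0}
    total, selected = actuarial_select(candidate, weights, cutoff=4.0)
    print(total, selected)  # 4.73 True: above the pre-defined cutoff

Note the negative weight on sprint time, since lower times indicate better performance; the point is that the rule, once fixed, leaves no room for case-by-case expert overrides.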


Perspectives on Psychological Science | 2017

College admissions, diversity, and performance-based assessment: Reply to Stemler (2017)

A. Susan M. Niessen; Rob R. Meijer

Stemler (2017, this issue) provided a constructive comment on our article on broadened admission criteria in higher education (Niessen & Meijer, 2017, this issue); we thank Steven Stemler and provide a short response. Stemler's main criticism of our article was that it lacked a theoretical framework. Let us clarify our theoretical orientation toward college admission: in our view, there are admission criteria and desired outcomes. In selection or admission research we try to show that admission criteria (predictors) are positively related to desired outcomes (often also denoted as criteria). This is common research practice in educational admission testing and in personnel selection (e.g., Ployhart, Schneider, & Schmitt, 2006).

Stemler (2017) proposed a mission, implementation, and assessment framework (the MIA model) to guide the discussion about college admission. Although we use different terminology, we are on the same page here. We agree that colleges differ in the skills they aim to develop and that this calls for different admission procedures, adhering to the skills that are valued at an institution. As pointed out in our article (see p. 437), we also agree that taking into account mission (in our terminology, the outcomes), implementation (in our terminology, the curriculum), and assessment of both admission criteria and educational outcomes is essential for good admission procedures, and that these elements should be closely related. It is precisely this relation that often seems to be lacking in practice and in many publications about college admission.

The main problem is the assessment component. As argued before (Niessen & Meijer, 2017), regardless of the mission (or desired outcomes), outcome assessment is crucial. Stemler (2017) stated that we should not hold on to the classroom as the only place where important competencies are developed, and that a large portion of the acquisition of core competencies happens outside the classroom through informal interactions. Of course, we acknowledge that learning can and does take place outside formal classroom environments, and we agree that this type of learning can be very beneficial. However, if we are dealing with core competencies at the heart of a university mission, simply assuming that these competencies are being developed through informal interactions is not enough. To evaluate university missions and implementation, we must first be explicit about which competencies we expect students to acquire, and we need to assess those competencies. That is precisely what is lacking, as Stemler (2017) also acknowledged.

Indeed, as Niessen and Meijer (2017) discussed, the acquired knowledge and skills reflected by a degree should be in line with the university's mission statement. If the mission is knowledge acquisition in certain domains, then outcomes can be specified as grades in domain-specific courses. If the objective is to acquire skills such as leadership or problem solving, these skills should be incorporated and assessed in the curriculum, either through formal instruction or through more informal learning. Most often, however, university missions are multidimensional. Therefore, contrary to Stemler (2017), we think it is necessary for GPA to become a more multidimensional indicator; if not, the MIA model will not hold. Furthermore, if we are interested in the acquisition of specific types of competencies, we can always use components of GPA that represent specific aspects of the mission. This approach is also common in the context of medical school admissions (Lievens, Buyse, & Sackett, 2005; Reiter, Eva, Rosenfeld, & Norman, 2007). So, with respect to outcomes, we argue that all competencies at the core of the university mission should be assessed formally and validly, but not necessarily in one GPA that represents classroom learning.

Stemler (2017) also discussed that admission officers strive for a diverse class of students to aid learning through peer interaction.

