Michael S. Trevisan
Washington State University
Publications
Featured research published by Michael S. Trevisan.
American Journal of Evaluation | 2007
Michael S. Trevisan
This article presents the state of practice of evaluability assessment (EA) as represented in the published literature from 1986 to 2006. Twenty-three EA studies were located, showing that EA was conducted in a wide variety of programs, disciplines, and settings. Most studies employed document reviews, site visits, and interviews, common methodologies previously recommended in the EA literature. Less common methodologies, such as standardized instruments and statistical modeling, were also found in the studies obtained for this review. The most common rationales for conducting EA mentioned in these studies were determining program readiness for impact assessment, program development, and formative evaluation. Outcomes found in these studies include the construction of a program logic model, development of goals and objectives, and modification of program components. The findings suggest that EA is practiced and published more widely than previously known. Recommendations to enhance EA practice are offered.
American Journal of Evaluation | 2004
Michael S. Trevisan
This paper provides the results of a literature review on the use of practical, hands-on training experiences in evaluation course work and training programs. The review spans the years 1965-2003. I identified 18 articles that encompass four basic approaches to practical evaluation training: simulation, role-play, single course projects, and practicum experiences. The articles are summarized, documenting strengths, challenges, and unique features of each strategy. Findings from this review indicate that substantial resources are often needed for effective practical training experiences. Authors of the reviewed articles illustrate a variety of options for incorporating methodology and/or evaluation theory into the training experiences. The few articles that adhere to a pedagogical framework employ learning models consonant with the adult education literature and structure the practical experience accordingly. The literature reveals a lack of formal research on practical evaluation training. Faculty and students consistently speak to the benefits of these training experiences.
Educational and Psychological Measurement | 1991
Michael S. Trevisan; Gilbert Sax; William B. Michael
Reliability and validity of multiple-choice examinations were computed as a function of the number of options per item and student ability for junior-class parochial high school students administered the verbal section of the Washington Pre-College Test Battery. The least discriminating options were deleted to create 3- and 4-option test formats from the original 5-option item test. Students were placed into ability groups by using noncontiguous grade point average (GPA) cutoffs. The GPAs were the criteria for the validity coefficients. Significant differences (p < .05) were found between reliability coefficients for low-ability students. The optimum number of options was three when the ability groups were combined. None of the validity coefficients followed the hypothesized trend. These results are part of the mounting evidence that suggests the efficacy of the 3-option item. An explanation is provided.
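To make the option-deletion methodology concrete, the sketch below shows the two item-analysis quantities the study turns on: KR-20 reliability for a dichotomously scored form, and an upper-lower discrimination index for a single option. This is an illustrative sketch, not the authors' code; the function names and the 27% upper-lower split are assumptions.

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """KR-20 reliability for a 0/1 item-score matrix (rows = examinees, columns = items)."""
    k = scores.shape[1]
    p = scores.mean(axis=0)                     # proportion correct per item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * (1.0 - p)).sum() / total_var)

def option_discrimination(choices: np.ndarray, option: str,
                          total_scores: np.ndarray) -> float:
    """Upper-lower discrimination index for one option of one item:
    the proportion of the top 27% of scorers choosing the option minus
    the proportion of the bottom 27% choosing it."""
    cut = max(1, int(round(0.27 * len(total_scores))))
    order = np.argsort(total_scores)
    low, high = order[:cut], order[-cut:]
    return float((choices[high] == option).mean() - (choices[low] == option).mean())
```

A working distractor comes out negative on this index (low scorers choose it more often); options near zero or positive discriminate least, and deleting them item by item yields the 3- and 4-option formats compared above.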
American Journal of Evaluation | 2002
Michael S. Trevisan
There is wide agreement on the value of practical experiences in evaluation training programs. The argument is that practical training gives students the real-world experiences needed to prepare them as professional evaluators, experiences that are not possible in didactic course work. Several strategies for providing practical evaluation experiences can be found in the literature, but they are limited by their short-term nature, typically bound by a semester time frame. This paper offers another possibility for practical evaluation training: university-supported, long-term funded evaluation projects. These projects are managed through a university center that provides assistance to clients in student assessment and program evaluation. After a description of the center, the benefits and challenges of providing these experiences for students are discussed.
American Journal of Evaluation | 2002
Michael S. Trevisan
The K-12 school counseling field continues to develop and implement comprehensive developmental guidance and counseling (CDGC) programs to organize the services provided by school counselors. Because program evaluation is required to guide refinement and renewal efforts, the evaluation infrastructure of CDGC programs needs strengthening. This paper presents the results of a literature review, organized by the Milstein and Cotton (2000) evaluation capacity framework, detailing and analyzing the contextual factors and system features that affect the evaluation capacity of school counseling programs. The paper fills a need in the literature for more substantive work on evaluation capacity, particularly with respect to the school counseling field. The analytic approach is a test case that could be applied to the analysis of evaluation capacity in other settings.
American Journal of Evaluation | 2000
Michael S. Trevisan
State-level school counselor certification requirements are discussed with respect to program evaluation expectations. Certification offices of all 50 states and Washington, DC, were asked to supply current school counselor knowledge and skill requirements. Nineteen states and Washington, DC, require some form of program evaluation knowledge and skills. Only Colorado and Washington specifically require program evaluation standards recommended by the Council for Accreditation of Counseling and Related Educational Programs for their school counseling programs. These findings suggest that the nation may be producing school counselors deficient in the program evaluation skills needed to meet their professional responsibilities. Several recommendations are made for ameliorating the current training deficiencies of pre-service school counselors and for using in-service training to remediate the program evaluation knowledge and skill gap of practicing school counselors. Implications for the larger evaluation community are discussed.
Educational and Psychological Measurement | 1994
Michael S. Trevisan; Gilbert Sax; William B. Michael
Previous studies have demonstrated the efficacy of using three-option multiple-choice items. In those studies, three- and four-option items were constructed from preexisting item analysis data obtained from five-option items. In this study, a two-option test was constructed, and options were systematically added to this test by using a taxonomy of item-writing rules to guide the process of distractor development. Nonsignificant differences (p > .05) were found among the reliability coefficients, the reliability estimates for the three-, four-, and five-option formats all being of comparable magnitude. These findings continue to provide evidence for the efficacy of the three-option item.
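For the kind of reliability comparison reported here, one standard procedure is Feldt's (1969) test for the equality of two independent reliability coefficients. The sketch below is a hypothetical illustration of that procedure, not the authors' analysis, and the sample values are invented.

```python
from scipy.stats import f as f_dist

def feldt_w_test(alpha1: float, n1: int, alpha2: float, n2: int) -> float:
    """Two-sided p-value for H0: equal population reliabilities.
    Feldt's W = (1 - alpha1) / (1 - alpha2) is approximately F-distributed
    with (n1 - 1, n2 - 1) degrees of freedom for independent groups."""
    w = (1.0 - alpha1) / (1.0 - alpha2)
    p_upper = f_dist.sf(w, n1 - 1, n2 - 1)
    return 2.0 * min(p_upper, 1.0 - p_upper)

# Invented example: KR-20 of .82 (n = 150) for the three-option form
# versus .84 (n = 150) for the five-option form.
print(feldt_w_test(0.82, 150, 0.84, 150))  # a p-value > .05 matches the reported pattern
```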
Frontiers in Education Conference | 1997
Kenneth L. Gentili; J. Hannan; Richard W. Crain; Denny Davis; Michael S. Trevisan
This paper describes techniques used in competency-based, introductory engineering design courses. Material used in the course has been produced and developed by the TIDEE (Transferable Integrated Design in Engineering Education) coalition, an NSF-sponsored project with principal investigators at Washington State University, Tacoma Community College, and the University of Washington. The course develops creative problem-solving techniques, communication, and teamwork skills. It emphasizes improvement of the design process rather than evaluation of the product and results, which gives students an opportunity to take risks, try new approaches, and gain confidence.
Frontiers in Education Conference | 1997
Janet Hannan; Dale E. Calkins; Richard W. Crain; Denny Davis; Kenneth L. Gentili; Charlena Grimes; Michael S. Trevisan
Engineering design is fundamental to all areas of engineering education and takes shape as project-based learning; as facts-based instruction becomes integrated with a hands-on, learning-to-solve-problems approach, engineering design is a natural vehicle for that integration. TIDEE (Transferable Integrated Design in Engineering Education) is an NSF-funded coalition involving Washington State University, the University of Washington, Tacoma Community College, and the Washington Council for Engineering and Related Technical Education (WCERTE). TIDEE aims to establish a flexible engineering design structure, prepare faculty to use and develop new engineering design materials, and increase the diversity of engineering enrollment. The TIDEE summer science camp, which introduces high school students to engineering design, is funded for four annual summer sessions by the NSF (as part of the TIDEE coalition) and the Boeing Company. The camp works well: the team-based, cooperative, hands-on activities appeal to the campers and are planned for success in a safe, supportive environment. Teaching assistants, graduate students, instructors, and professors serve as role models in engineering; tours expand campers' career horizons; and campers acquire competencies in all aspects of engineering design. An awards banquet brings campers and their families together to understand and appreciate what is gained from the camp.
Technical Symposium on Computer Science Education | 2011
Christopher D. Hundhausen; Pawan Agarwal; Michael S. Trevisan
Given the increased importance of communication, teamwork, and critical thinking skills in the computing profession, we have been exploring studio-based instructional methods, in which students develop solutions and iteratively refine them through critical review by their peers and instructor. We have developed an adaptation of studio-based instruction for computing education called the pedagogical code review (PCR), which is modeled after the code inspection process used in the software industry. Unfortunately, PCRs are time-intensive, making them difficult to implement within a typical computing course. To address this issue, we have developed an online environment that allows PCRs to take place asynchronously outside of class. We conducted an empirical study that compared a CS 1 course with online PCRs against a CS 1 course with face-to-face PCRs. Our study had three key results: (a) in the course with face-to-face PCRs, student attitudes with respect to self-efficacy and peer learning were significantly higher; (b) in the course with face-to-face PCRs, students identified more substantive issues in their reviews; and (c) in the course with face-to-face PCRs, students were generally more positive about the value of PCRs. In light of our findings, we recommend specific ways online PCRs can be better designed.
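As a sketch of what an asynchronous PCR environment must represent, the hypothetical data model below captures a code submission, line-anchored peer comments tagged with inspection-style categories, and a tally of the issues raised. All names are invented; this is not the authors' system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ReviewComment:
    reviewer: str
    line: int                  # line of the submitted code under discussion
    category: str              # e.g. "correctness", "design", "style"
    text: str
    created: datetime = field(default_factory=datetime.now)
    replies: list["ReviewComment"] = field(default_factory=list)

@dataclass
class Submission:
    author: str
    code: str
    comments: list[ReviewComment] = field(default_factory=list)

    def issues_by_category(self) -> dict[str, int]:
        """Tally the issues reviewers raised, by category -- the kind of
        count the study compared across online and face-to-face PCRs."""
        counts: dict[str, int] = {}
        for c in self.comments:
            counts[c.category] = counts.get(c.category, 0) + 1
        return counts
```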