Hüseyin Selvi
Mersin University
Publications
Featured research published by Hüseyin Selvi.
Kuram Ve Uygulamada Egitim Bilimleri | 2016
Semra Erdoğan; Gülhan Orekici Temel; Hüseyin Selvi; Irem Ersöz Kaya
Criteria regarding whether a concept, theory, design, or even a whole discipline is actually scientific vary from one field to another. However, there are criteria invariant across all fields, such as the abilities to observe, measure, transmit, repeat, reproduce, verify, and falsify. These criteria allow different scientists to monitor or determine whether theories or designs related to a specific case or concept are valid and reliable. They also establish the conditions for measurability and reproducibility, which open the way to further research and protect scientists from being trapped in prejudices. One of the essential criteria for science is measurability; hence, advances in science can be claimed to develop in parallel with advances in measurement science (Erdogan, 2011; Karakac, 1988).

In this respect, one can argue that, compared to other disciplines, advances are quicker in scientific fields where the investigated qualities can be measured directly, and therefore quality measurements are comparatively easier to undertake. The situation is completely different in disciplines such as education and psychology, where the investigated qualities cannot be measured directly. In these disciplines, one attempts to infer the state of the related quality from responses given to specific stimuli; in other words, measurement is indirect (Gulliksen, 1950). However, although indirect measurement makes it possible to measure qualities that cannot be measured directly, it may also radically increase the potential error sources involved in the process. While the direction and amount of these error sources are sometimes apparent and can be identified (i.e., constant or systematic error), sometimes they cannot (random error). This makes quality measurement rather hard and complex in sciences where indirect measurement is a necessity, because error sources of unidentified direction and amount damage data reliability and impair the accuracy of the procedural comparisons that use these measurements.

Scientists have developed various methods and techniques for examining reliability with respect to different error sources. Although these methods and techniques appear under different classifications in different resources, Crocker and Algina (1986) simply classify them as methods based on multiple applications (such as equivalent forms and test-retest methods) or on a single application (the split-half method or item-covariance-based methods). As this classification shows, some methods and techniques estimate error sources from a single application to examine data reliability, whereas others rely on repeated measures or on scoring by multiple raters.

In scientific disciplines such as education and psychology, where the investigated qualities cannot be measured directly, written, oral, and kinetic exams that require scoring by more than one rater; procedural comparisons of new methods and techniques developed in line with scientific and technological advances; longitudinal studies; scale adaptation and development studies; and so on, are common. Reliability is the weakest link in studies where it is necessary to collect data on the same variable using different measurement tools, or to collect data on the same variable using the same tool at different intervals (Guler & Gelbal, 2010). As a matter of fact, taking more than one measurement of the same variable also carries the possibility of contamination from error sources, both singly and in interaction. Therefore, although the internal consistency of the scores obtained from measurement tools is examined in itself, it is also necessary to ensure inter-rater and intra-rater agreement in order to establish reliability (Guler & Gelbal, 2010; Lin, Hedayat, & Wu, 2012). In this context, agreement means similarity among measurements obtained by different raters or methods. …
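As a minimal sketch (not taken from the paper, using invented data), the following Python snippet illustrates two of the reliability indices alluded to in the abstract: Cronbach's alpha as a single-application, item-covariance-based estimate of internal consistency, and Cohen's kappa as a simple measure of agreement between two raters.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency from a single application; scores is a respondents x items matrix."""
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def cohen_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    """Chance-corrected agreement between two raters assigning categorical scores."""
    categories = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)                      # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: 5 respondents x 4 test items, and two raters' pass/fail judgments.
items = np.array([[3, 4, 3, 4],
                  [2, 2, 3, 2],
                  [4, 5, 4, 5],
                  [1, 2, 1, 2],
                  [3, 3, 4, 3]])
rater1 = np.array([1, 0, 1, 1, 0])
rater2 = np.array([1, 0, 1, 0, 0])

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")
print(f"Cohen's kappa:    {cohen_kappa(rater1, rater2):.3f}")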
Mersin Üniversitesi Eğitim Fakültesi Dergisi | 2013
Hüseyin Selvi; Bayram Biçak
In this study, the driver training program used in the driving courses of the Ministry of National Education (Milli Eğitim Bakanlığı) was evaluated using Stufflebeam's context, input, process, and product (CIPP) evaluation model. A group of 500 participants, selected through convenience sampling, took part in the study. The analyses showed that, according to the participants' views, the program was inadequate in many respects, and recommendations were made for addressing these inadequacies.
Iranian Red Crescent Medical Journal | 2014
Oya Ögenler; Hüseyin Selvi
International Journal of Assessment Tools in Education | 2017
Hüseyin Selvi; Devrim Alıcı
International Journal of Assessment Tools in Education | 2017
Hüseyin Selvi; Devrim Alıcı
Mersin Üniversitesi Sağlık Bilimleri Dergisi | 2016
Hüseyin Selvi
Kuram Ve Uygulamada Egitim Bilimleri | 2016
Gülhan Orekici Temel; Semra Erdoğan; Hüseyin Selvi; Irem Ersöz Kaya
Mersin Üniversitesi Sağlık Bilimleri Dergisi | 2015
Hüseyin Selvi
Archive | 2014
Oya Ögenler; Hüseyin Selvi
Archive | 2010
Seçil Ömür; Hüseyin Selvi