Miguel Ardid Ramírez
Polytechnic University of Valencia
Publications
Featured research published by Miguel Ardid Ramírez.
Educación XX1 | 2018
Jaime Riera Guasp; Miguel Ardid Ramírez; Ana Jesús Vidaurre Garayo; J.M. Meseguer-Dueñas; José A. Gómez-Tejedor
The development of information and communication technologies has led to an increase in the use of Computer-Based Assessment (CBA) in higher education. In the last decade, there has been an ongoing discussion of online versus traditional pen-and-paper exams. The aim of this study was to verify whether students have reservations about auto-scored online exams and, if so, to determine the reasons. The study was performed in the context of a blended assessment in which 1,200 students were enrolled in a first-year university physics course. Among them, 463 answered an anonymous survey, supplemented by information obtained from an open-ended question and from interviews with students. Three factors (labelled ‘F1-Learning’, ‘F2-Use of Tool’, and ‘F3-Assessment’) emerged from the quantitative analysis of the survey, and an additive scale was established. We found significant differences in the ‘F3-Assessment’ factor compared to the other two factors, indicating a lower acceptance of the tool for student assessment. It seems that even though students are used to computers, they lack confidence in online exams. We carried out an in-depth survey on this topic in the form of an open-ended question and by interviewing a small group of 11 students to add strength and nuance to the quantitative results of the survey. Although their comments were positive in general, especially on ease of use and on the tool’s usefulness in indicating the level achieved during the learning process, there was also some criticism of the clarity of questions and the strictness of the marking system. These two factors, among others, could have caused the worse perception of F3-Assessment and been the origin of the students’ reluctance towards online exams and automatic scoring.
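As a rough illustration of the kind of analysis the abstract describes (building additive scale scores per factor and comparing them), the minimal Python sketch below uses hypothetical Likert-type item responses and a paired Wilcoxon test; the item groupings, data, and choice of test are assumptions, not the authors' actual procedure.

# Illustrative sketch only: additive factor scales from hypothetical Likert-type
# survey items and a paired comparison of F3-Assessment against the other factors.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_students = 463  # number of survey respondents reported in the abstract

# Hypothetical 1-5 responses for items assumed to belong to each factor.
items = {
    "F1-Learning":    rng.integers(1, 6, size=(n_students, 4)),
    "F2-Use of Tool": rng.integers(1, 6, size=(n_students, 4)),
    "F3-Assessment":  rng.integers(1, 6, size=(n_students, 4)),
}

# Additive scale: mean of each respondent's item scores within a factor.
scales = {name: responses.mean(axis=1) for name, responses in items.items()}

# Paired comparison of F3-Assessment against the other two factors.
for other in ("F1-Learning", "F2-Use of Tool"):
    stat, p = wilcoxon(scales["F3-Assessment"], scales[other])
    print(f"F3-Assessment vs {other}: W={stat:.1f}, p={p:.3f}")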
In-Red 2015 - Congreso de Innovación Educativa y Docencia en Red de la Universitat Politècnica de València | 2015
Ana Jesús Vidaurre Garayo; Miguel Ardid Ramírez; Vanesa Paula Cuenca Gotor; Isabel Salinas Marín; José Molina Mateo; Jaime Riera Guasp; Marcos Herminio Gimenez Valentin; José Antonio Gómez Tejedor; Rosa Martínez Sala; José María Meseguer Dueñas
Peer assessment is a form of collaborative learning in which students evaluate the learning output of other students. In our case, they carry out two types of assessment related to effective communication: they qualitatively assess oral presentations of exercises, and, working in teams, they quantitatively assess documents containing the problem solutions of other teams. They were given guidelines on how to carry out the assessment. The result was compared with the assessment made by the instructors. In the qualitative assessment, the students’ quality ranking coincides with that of the instructors, and in the quantitative assessment, the differences fall within what would be reasonable between expert instructors. In addition, the reasoning students give for their marks shows rigorous assessment work and learning through the work done by their peers.
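A minimal sketch, under assumed data, of how peer and instructor assessments like those described above could be compared: rank agreement for the qualitative ratings and an average score difference for the quantitative marks. The marks, the Spearman correlation, and the mean absolute difference are illustrative assumptions, not the study's actual method.

# Illustrative sketch: comparing hypothetical peer marks with instructor marks.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical quality rankings of oral presentations (1 = best).
peer_rank       = np.array([1, 2, 3, 4, 5, 6])
instructor_rank = np.array([1, 3, 2, 4, 5, 6])

rho, p = spearmanr(peer_rank, instructor_rank)
print(f"Rank agreement (Spearman rho): {rho:.2f} (p={p:.3f})")

# Hypothetical quantitative marks (0-10) for problem-solution documents.
peer_marks       = np.array([7.5, 6.0, 8.0, 9.0, 5.5])
instructor_marks = np.array([7.0, 6.5, 8.5, 8.5, 6.0])

# Mean absolute difference as a simple measure of grading discrepancy.
mad = np.mean(np.abs(peer_marks - instructor_marks))
print(f"Mean absolute mark difference: {mad:.2f} points")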
Archive | 2017
Miguel Ardid Ramírez; Jaime Riera Guasp
Archive | 2016
José Antonio Gómez Tejedor; Jaime Riera Guasp; Miguel Ardid Ramírez
Archive | 2016
José Antonio Gómez Tejedor; Jaime Riera Guasp; Miguel Ardid Ramírez
Archive | 2016
Jaime Riera Guasp; José Antonio Gómez Tejedor; Miguel Ardid Ramírez
Archive | 2016
José Antonio Gómez Tejedor; Jaime Riera Guasp; Miguel Ardid Ramírez
Archive | 2016
José Antonio Gómez Tejedor; Jaime Riera Guasp; Miguel Ardid Ramírez
Archive | 2016
Jaime Riera Guasp; José Antonio Gómez Tejedor; Miguel Ardid Ramírez