Daryl J. D'Souza
RMIT University
Publications
Featured research published by Daryl J. D'Souza.
Information Processing and Management | 2004
Daryl J. D'Souza; James A. Thom; Justin Zobel
In a distributed document database system, a query is processed by passing it to a set of individual collections and collating the responses. For a system with many such collections, it is attractive to first identify a small subset of collections likely to hold documents of interest, and then interrogate only this subset in more detail. A widely investigated method for choosing collections is the use of a selection index, which captures broad information about each collection and its documents. In this paper, we re-evaluate several techniques for collection selection. We have constructed new sets of test data that reflect one way in which distributed collections would be used in practice, in contrast to the more artificial division into collections reported in much previous work. On these managed collections, collection ranking based on document surrogates is more effective than techniques such as CORI that are based on collection lexicons. Moreover, these experiments demonstrate that conclusions drawn from artificial collections are of questionable validity.
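As an illustration only, and not the paper's method, the sketch below shows how a lexicon-style selection index might rank collections for a query. The scoring formula, function names and data layout are assumptions made for the example.

# Illustrative sketch only: ranks collections for a query using a simple
# lexicon-style selection index (document frequencies per collection).
# The scoring formula is a simplified stand-in, not the CORI formula or
# the surrogate-based method evaluated in the paper.
import math
from collections import Counter
from typing import Dict, List

def build_selection_index(collections: Dict[str, List[str]]) -> Dict[str, Counter]:
    """Map each collection id to term -> number of documents containing the term."""
    index = {}
    for cid, documents in collections.items():
        df = Counter()
        for doc in documents:
            df.update(set(doc.lower().split()))
        index[cid] = df
    return index

def rank_collections(query: str, index: Dict[str, Counter], top_k: int = 3) -> List[str]:
    """Score each collection by a tf-idf-like sum over query terms and return the top_k."""
    n_collections = len(index)
    scores = {}
    for cid, df in index.items():
        score = 0.0
        for term in query.lower().split():
            # number of collections whose lexicon contains the term at all
            cf = sum(1 for other in index.values() if term in other)
            if df[term] > 0 and cf > 0:
                score += df[term] * math.log(1 + n_collections / cf)
        scores[cid] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Only the top-ranked collections would then be interrogated in full.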
International Computing Education Research Workshop | 2011
Judithe Sheard; Simon; Angela Carbone; Donald Chinn; Mikko-Jussi Laakso; Tony Clear; Michael de Raadt; Daryl J. D'Souza; James Harland; Raymond Lister; Anne Philpott; Geoff Warburton
This paper describes the development of a classification scheme that can be used to investigate the characteristics of introductory programming examinations. We describe the process of developing the scheme, explain its categories, and present a taste of the results of a pilot analysis of a set of CS1 exam papers. This study is part of a project that aims to investigate the nature and composition of formal examination instruments used in the summative assessment of introductory programming students, and the pedagogical intentions of the educators who construct these instruments.
Computer Science Education | 2010
Shuhaida Mohamed Shuhidan; Margaret Hamilton; Daryl J. D'Souza
Learning to program is known to be difficult for novices. High attrition and high failure rates are commonly reported in foundation-level programming courses undertaken at tertiary level in Computer Science programs. A common approach to evaluating novice programming ability is a combination of formative and summative assessment, with the latter typically represented by a final examination. Preparation of such assessment is driven by instructor perceptions of student learning of programming concepts, which may yield instructor perspectives of summative assessment that do not necessarily correlate with student expectations or abilities. In this article, we present the results of our study of instructor perspectives of summative assessment for novice programmers. Both quantitative and qualitative data were obtained via survey responses from programming instructors with varying teaching experience, and from novice student responses to targeted examination questions. Our findings highlight that most of the instructors believed that summative assessment is, and is meant to be, a valid measure of a student's ability to program. Most instructors further believed that multiple-choice questions (MCQs) provide a means of testing a low level of understanding; a few added qualitative comments suggesting that MCQs are easy questions, while others refused to use them at all. There was no agreement on the proposition that a question designed to test a low level of skill, or a low level in a hierarchy of a body of knowledge, should or would be found easy by students. To aid our analysis of assessment questions, we introduced four measures: Syntax Knowledge, Semantic Knowledge, Problem Solving Skill, and Level of Difficulty of the Problem. We applied these measures to selected examination questions and identified gaps between instructor perspectives of what constitutes an easy question, and of what must be assessed to determine whether students have achieved the goals of their course.
Integrating Technology into Computer Science Education | 2013
Judithe Sheard; Simon; Angela Carbone; Daryl J. D'Souza; Margaret Hamilton
Previous studies of assessment of programming via written examination have focused on analysis of the examination papers and the questions they contain. This paper reports the results of a study that investigated how these final exam papers are developed, how students are prepared for these exams, and what pedagogical foundations underlie the exams. The study involved interviews of 11 programming lecturers. From our analysis of the interviews, we find that most exams are based on existing formulas that are believed to work; that the lecturers tend to trust in the validity of their exams for summative assessment; and that while there is variation in the approaches taken to writing the exams, all of the exam writers take a fairly standard approach to preparing their students to sit the exam. We found little evidence of explicit references to learning theories or models, indicating that the process is based largely on intuition and experience.
2013 Learning and Teaching in Computing and Engineering | 2013
Mercy Maleko; Dip Nandi; Margaret Hamilton; Daryl J. D'Souza; James Harland
As students become more mobile they increasingly require access to their educational resources anytime and anywhere. University courses are typically managed through learning management systems, which were established to enable online access to educational resources at any time, but are these enough? We are interested in the impact that Facebook can have for online students in an introductory programming course; in particular, we want to know whether any learning can occur on Facebook. A programming group was set up on Facebook for our cohort of fully online students, who already have access to Blackboard, our University's learning management system, so that they could discuss, chat and brainstorm about programming. We compare student participation in the two environments, the Blackboard discussion forum and the Facebook programming group, over the semester of the course. In this paper we analyse the student postings, identify the similarities and differences between the two environments, and discuss the benefits and drawbacks of each. Our primary finding was that Facebook attracted more students than Blackboard, owing to its social and community learning benefits, which encouraged students to support one another. Blackboard was viewed as the authoritative and valid medium for official course material. Finally, further work is needed to determine how the two media might be better integrated for course delivery.
Koli Calling International Conference on Computing Education Research | 2012
Simon; Daryl J. D'Souza; Judy Sheard; James Harland; Angela Carbone; Mikko-Jussi Laakso
This paper reports the results of a study investigating how accurately programming teachers can judge the level of difficulty of questions in introductory programming examinations. Having compiled a set of 45 questions that had been used in introductory programming exams, we took three measures of difficulty for the questions: the expectations of the person who taught the course and set the exams; the consensus expectations of an independent group of programming teachers; and the actual performance of the students who sat the exams. Good correlations were found between all pairs of the three measures. The conclusion, which is not controversial but needed to be established, is that computing academics do have a fairly good idea of the difficulty of programming exam questions, even for a course that they did not teach. However, the discussion highlights some areas where the relationships show weaknesses.
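For illustration, the following sketch shows how agreement between such difficulty measures could be quantified with rank correlation. The ratings in it are invented placeholders, not data from the study.

# Hypothetical sketch: comparing three difficulty measures for a set of
# exam questions, as the study does conceptually. The numbers below are
# invented placeholders, not data from the paper.
from scipy.stats import spearmanr  # rank correlation suits ordinal difficulty ratings

setter_rating = [2, 4, 3, 5, 1]                    # difficulty expected by the exam setter
panel_rating  = [2, 5, 3, 4, 1]                    # consensus of independent teachers
student_error = [0.15, 0.60, 0.40, 0.55, 0.10]     # fraction of students answering incorrectly

for name, other in [("setter vs panel", panel_rating),
                    ("setter vs students", student_error),
                    ("panel vs students", student_error)]:
    base = setter_rating if "setter" in name else panel_rating
    rho, p = spearmanr(base, other)
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")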
Simulated Evolution and Learning | 2008
Shahrul Badariah Mat Sah; Victor Ciesielski; Daryl J. D'Souza; Marsha Berry
Photomosaics are a new form of art in which smaller digital images (known as tiles) are used to construct larger images. Photomosaic generation is of interest not only in the digital arts but also in evolutionary computing. For a given set of tiles and a suitable target image, the generation process may be viewed as an arrangement optimisation problem that can be solved using evolutionary computing. In this paper we assess two methods of representing photomosaics, genetic algorithms (GAs) and genetic programming (GP), in terms of their flexibility and efficiency. Our results show that although both approaches sometimes use the same computational effort, GP is capable of generating finer photomosaics in fewer generations. In conclusion, we found that the GP representation is richer than the GA representation and offers additional flexibility for future photomosaic generation.
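The toy sketch below illustrates the GA framing of the problem described above: a candidate solution assigns tiles to grid cells, and fitness measures how closely the assembled mosaic matches the target. The representation, operators and parameters are simplified assumptions, not the paper's actual setup.

# Toy sketch of the GA view of photomosaic generation: a candidate is an
# assignment of tile indices to grid cells, and fitness is how closely the
# assembled mosaic matches the target. All details here are simplified
# assumptions for illustration.
import random

def fitness(assignment, tile_colours, target_colours):
    """Lower is better: total colour difference between chosen tiles and target cells."""
    return sum(abs(tile_colours[t] - target_colours[i]) for i, t in enumerate(assignment))

def evolve(tile_colours, target_colours, pop_size=50, generations=200):
    n_cells = len(target_colours)
    pop = [[random.randrange(len(tile_colours)) for _ in range(n_cells)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, tile_colours, target_colours))
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_cells)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:            # mutation: reassign one cell
                child[random.randrange(n_cells)] = random.randrange(len(tile_colours))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda a: fitness(a, tile_colours, target_colours))

# Example with grey-level "colours": 8 tiles, a 4x4 target grid.
best = evolve(tile_colours=[10, 40, 80, 120, 160, 200, 230, 255],
              target_colours=[random.randrange(256) for _ in range(16)])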
2014 International Conference on Teaching and Learning in Computing and Engineering | 2014
Mercy Maleko; Margaret Hamilton; Daryl J. D'Souza; Falk Scholer
This paper presents an analysis of first-year programming student (novice programmer) messages posted on a Facebook programming group created to support the learning of programming by increasing novice-to-novice interaction anywhere, anytime. The captured messages are anonymised and analysed with a view to exploring the extent to which Facebook groups have supported the learning of programming, and to identifying the struggles and challenges faced by learners who use these environments. The results of our analysis show that interactions in Facebook groups can lead to social learning and can encourage engagement with learning. The Facebook groups are used effectively by novices to solve significant problems that would not have been discussed in their respective learning management systems. Our findings highlight some interesting problems faced by novice programmers and include useful learning analytics of novice programmer interactions within these groups.
ACM Multimedia | 2007
Daryl J. D'Souza; Victor Ciesielski; Marsha Berry; Karen Trist
We describe the implementation of an art installation that generates animated photomosaics of the viewer. Photomosaics are target images composed of smaller images known as tiles. When a photomosaic is viewed from afar the detail of the tiles is lost and the target image is evident; up close, the opposite occurs: the detail of the tiles is evident and the target image is lost. Our system uses a photo of the viewer as the target and miniatures of the viewer's face as tiles. Evolutionary search is used to find the best selection and arrangement of tiles, and each newly found best image is used as a frame of a movie. The resulting animations start from a random arrangement of tiles; gradually the viewer's face emerges and becomes clearly visible, and then gradually de-materialises into a random pattern.
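As a rough illustration of the animation mechanism described above, the sketch below records the best arrangement of each generation as a frame. The miniature "evolution" and "rendering" steps are stand-ins, not the installation's actual code.

# Sketch of the animation idea: each new best-of-generation arrangement
# becomes one movie frame, so playback starts from random tiles and
# gradually resolves into the target image. The tiny evolution and
# rendering steps below are placeholders for illustration.
import random

TARGET = [random.randrange(256) for _ in range(16)]   # stand-in target: 16 grey-level cells

def error(arrangement):
    return sum(abs(a - t) for a, t in zip(arrangement, TARGET))

def step(population):
    """One generation: mutate everyone, keep the best half of parents plus mutants."""
    mutants = [[(g + random.randint(-20, 20)) % 256 for g in ind] for ind in population]
    population = sorted(population + mutants, key=error)[: len(population)]
    return population, population[0]

def render(arrangement):
    """Stand-in renderer: the frame is just a copy of the arrangement."""
    return list(arrangement)

frames, best_seen = [], None
population = [[random.randrange(256) for _ in range(16)] for _ in range(20)]
for _ in range(100):
    population, best = step(population)
    if best != best_seen:                 # record only genuinely new best images
        best_seen = list(best)
        frames.append(render(best))
# 'frames' would be written out in generation order as the animation.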
International Computing Education Research Workshop | 2016
Simon; Judithe Sheard; Daryl J. D'Souza; Peter Klemperer; Leo Porter; Juha Sorva; Martijn Stegeman; Daniel Zingaro
The programming education literature includes many observations that pass rates are low in introductory programming courses, but few or no comparisons of student performance across courses. This paper addresses that shortcoming. Having included a small set of identical questions in the final examinations of a number of introductory programming courses, we illustrate the use of these questions to examine the relative performance of the students both across multiple institutions and within some institutions. We also use the questions to quantify the size and overall difficulty of each exam. We find substantial differences across the courses, and venture some possible explanations of the differences. We conclude by explaining the potential benefits to instructors of using the same questions in their own exams.
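As a hedged illustration of how shared questions could support such comparisons, the sketch below derives a simple relative-difficulty index per course. The figures and the scaling rule are assumptions made for the example, not results from the paper.

# Hedged sketch of using a small set of common questions to compare exams:
# performance on the shared questions gives a per-course baseline, against
# which performance on the rest of each exam is scaled. All numbers are
# illustrative placeholders.

courses = {
    # course: (mean score on common questions, mean score on whole exam), as fractions
    "course_A": (0.72, 0.65),
    "course_B": (0.55, 0.70),
    "course_C": (0.64, 0.60),
}

for course, (common_mean, exam_mean) in courses.items():
    # An index > 1 suggests the rest of the exam was easier, for that cohort,
    # than the shared benchmark questions; < 1 suggests it was harder.
    index = exam_mean / common_mean
    print(f"{course}: relative difficulty index = {index:.2f}")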